
Senior Software Engineer

  • Location:

    New York City

  • Sector:

    Software Engineering

  • Job type:

    Permanent

  • Salary:

    Circa $210,000

  • Contact:

    Jason Rumney

  • Contact email:

    jason@intelletec.com

  • Job ref:

    AAJR03838343

  • Consultant:

    Jason Rumney

As a senior software engineer on the Pipeline Engineering Team, you will be responsible for building the systems that support the ingestion and processing of data on the platform. Data is the lifeblood of the platform, and all software applications revolve around the scalable, performant handling of financial datasets. This is a great opportunity for an experienced engineer to take a significant role in the development of data-driven applications that form the foundation of a truly cutting-edge product.

Responsibilities:

  • Contribute to the design and development of our data workflow management platform
  • Build tools to wrangle raw data of both small and large volumes into cleaned, normalized, and enriched datasets
  • Develop new distributed systems from scratch
  • Mentor fellow teammates on design patterns and best practices

About you:

  • You love building elegant solutions that scale; you bring deep experience in the architecture and development of high-quality backend production systems, specifically in Java
  • You love working on high-performing teams, collaborating with team members, and improving our ability to deliver delightful experiences to our clients
  • You are excited by the opportunity to solve challenging technical problems, and you find learning about data fascinating
  • You possess a strong desire to work for a small, fast-paced startup that is growing rapidly
  • You’re excited by the challenge of extracting order out of the chaos of messy and highly inconsistent financial datasets

Skills:

  • 7+ years of professional experience in a production environment
  • Expertise in core Java; some Python experience is an advantage
  • Proficiency in multiple programming languages
  • Experience with data pipeline development, ETL and/or other big data processes
  • Experience working in a cloud-based environment, such as GCP or AWS
  • Experience working independently, or with minimal guidance
  • Strong problem-solving and troubleshooting skills
  • Ability to exercise judgment to make sound decisions
  • Excellent communication and interpersonal skills, as well as a sense of humor
  • BS degree or higher in a technical discipline

Additional experience:

  • Experience with popular big data / distributed computing frameworks, e.g. Spark, Hive, Kafka, MapReduce, Flink
  • Familiarity with container-orchestration technologies, e.g. Kubernetes, YARN
  • Experience with RDBMS/SQL and NoSQL databases, structured and unstructured data, and BigQuery