Intelletec has partnered with a leading provider of consumer credit ratings. Dedicated to delivering value beyond the rating through independent, prospective credit opinions, the company offers global perspectives shaped by strong local market experience and credit market expertise. The additional context, perspective, and insight it provides have helped fund a century of growth and enable consumers to make significant credit decisions with confidence.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources, using SQL, NoSQL, and Kafka alongside AWS big data technologies.
- Collaborate with Data Product Managers, Data Architects, and other Data Engineers to design, implement, and deliver successful data solutions.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Track data lineage, ensure data quality, and improve discoverability of data.
- Work in an Agile environment (Scrum) and collaborate with cross-functional teams (Product Owners, Scrum Masters, Developers, Designers, Data Analysts).
- Strong experience developing in Java.
- 5+ years of data engineering experience developing large data pipelines.
- Strong SQL and NoSQL skills and the ability to write queries to extract data and build performant datasets.
- Experience with relational and NoSQL databases: any RDBMS (e.g., Oracle, Postgres) and any NoSQL store (e.g., Cassandra, MongoDB, Redis).
- Hands-on experience querying and processing data with distributed systems such as Spark (including PySpark) and the Hadoop ecosystem (HDFS, Hive, Presto).
- Hands-on experience with message queuing and stream processing (e.g., Kafka Streams).
- Strong analytical skills for working with unstructured datasets.
- Hands-on experience in using AWS cloud services: EC2, Lambda, S3, Athena, Glue, and EMR.