
KAFKA LEAD

Roles & Responsibilities

  • Lead the development and optimization of data pipelines and streaming applications using Kafka and Spark.
  • Maintain and enhance Kafka architecture, ensuring adherence to design principles and deployment procedures.
  • Provide technical leadership and mentorship to junior engineers in data engineering best practices.
  • Collaborate in a fast-paced, agile environment to deliver scalable, real-time data processing solutions.

Requirements:

  • Strong experience in ETL optimization and designing big data processes using Apache Spark or similar technologies.
  • Proficiency in building scalable data pipelines with SQL, Python, Spark, or PySpark, with advanced knowledge of at least one programming language.
  • Experience maintaining and enhancing Confluent Kafka architecture, including design principles and CI/CD deployment procedures.
  • Knowledge of real-time streaming applications, Kafka producers, consumers, and streams, with a background in distributed systems and data architecture.

Job description

Expertise in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.

Experience building robust and scalable data integration (ETL) pipelines using SQL, Python, Spark, or PySpark. Advanced knowledge of at least one programming language is a must.

Maintain and enhance Confluent Kafka architecture, Confluent Kafka design principles, and CI/CD deployment procedures.

Experience building streaming applications with Kafka (Confluent Kafka preferred, but open-source Apache Kafka is acceptable).

Development experience using Kafka producers, consumers, and streams (Confluent Kafka preferred, but open-source Apache Kafka is acceptable).

Experience building data pipelines and applications that stream and process datasets at low latency.

Experience developing real-time, scalable systems using Apache Kafka, Confluent Kafka, or Kafka Streams.

Efficiency in tracking data lineage, ensuring data quality, and improving the discoverability of data.

Good understanding of AWS technologies (S3, AWS Glue, CDK, ECS, EMR, Redshift, Athena)

Sound knowledge of distributed systems and data architecture (e.g., the Lambda architecture); able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP processing of high-level data structures.

Knowledge of engineering and operational excellence using standard methodologies.

Experience with process improvement, workflow design, benchmarking, and/or evaluation of business processes.

Familiarity with CI/CD processes.

Ability to work in a fast-paced, agile environment.

Experience providing technical leadership and mentoring junior engineers on data engineering best practices.

Experience building REST APIs for data transfer.

A background in Java and the Spring Framework is a plus.

Proficiency in at least one of JIRA, Atlassian tools, or Git is a must.
