
Kafka Data

Remote: Full Remote
Contract:
Experience: Junior (1-2 years)
Work from:

Offer summary

Qualifications:

Proficiency in Apache Kafka, data storage systems, and big data frameworks; skilled with Java, Python, Scala, and SQL/NoSQL databases; degree in Computer Science or a related field; cloud platform experience is a plus.

Key responsibilities:

  • Build & maintain data pipelines using Apache Kafka
  • Collaborate with data architects and software engineers to ensure pipeline robustness
  • Incorporate scalable techniques for large data sets
  • Implement Kafka producers and consumers
CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description



Dear Candidate,

Greetings from Coders Brain Technology Pvt. Ltd.
Coders Brain is a global leader in IT services, digital, and business solutions, partnering with its clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers.
Position: Permanent with Coders Brain Technology Pvt. Ltd.


Role: Kafka Data Engineer
Total Experience: 7+ years
Location: Remote
Client: Ellow


Mandatory skills:
  • Experience with Azure/MS data platforms: Postgres Single and Flex, Azure SQL, MS SQL Server, Kubernetes
  • Terraform (preferred, not compulsory), Bicep, or other IaC
  • Helm (Helm deployments as well as writing charts)
  • Azure DevOps pipelines
  • Experience as a Data Engineer or in similar roles, with a strong focus on Kafka and data engineering within the Azure environment
  • Experience with data pipelines for deploying database code: Postgres, Azure SQL, and SQL Server


Role Description
This is a full-time, remote role for a Kafka Data Engineer. The Kafka Data Engineer will build and maintain high-performance distributed data pipelines using Apache Kafka, working closely with data architects and software engineers to ensure pipeline robustness, correctness, and recoverability. The role also involves incorporating scalable techniques to manage large volumes of data, accessing various data sources, and implementing Kafka producers and consumers.

Qualifications
  • Strong proficiency with Apache Kafka (including Kafka Connect, Kafka Streams, and Kafka Security)
  • Experience with various data storage systems and data formats (e.g. AWS S3, Hadoop, Avro, Parquet)
  • Experience with big data processing frameworks (e.g. Spark)
  • Experience with developing solutions utilizing Java, Python, and Scala
  • Strong experience with SQL and NoSQL databases
  • Experience working with message queuing, stream processing, and highly scalable systems
  • Bachelor's degree or higher in Computer Science, Software Engineering, or a related field
  • Excellent communication and collaboration skills with the ability to work effectively in a remote environment
  • Experience with cloud platforms such as AWS, Microsoft Azure, or Google Cloud Platform is a plus


Required profile

Experience

Level of experience: Junior (1-2 years)
Industry :
Management Consulting
Spoken language(s):
English

Other Skills

  • Verbal Communication Skills
  • Organizational Skills
