Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success comes from how deeply we integrate with our clients.
Position: Permanent with Coders Brain Technology Pvt. Ltd.
Role: Kafka Data Engineer
Total Experience: 7+ years
Location: Remote
Client: Ellow
Mandatory skills:
Experience with Azure/MS data platforms: Postgres (Single Server and Flexible Server), Azure SQL, MS SQL Server, Kubernetes
• Terraform (preferred, not compulsory), Bicep, or other IaC tools.
• Helm (deployments as well as writing charts).
• Azure DevOps pipelines.
• Experience as a Data Engineer or in similar roles with a strong focus on Kafka and data engineering within the Azure environment.
• Experience with data pipelines for deploying database code – Postgres, Azure SQL, and SQL Server
Role Description
This is a full-time role for a Kafka Data Engineer. The Kafka Data Engineer will be responsible for building and maintaining high-performance distributed data pipelines utilizing Apache Kafka. They will need to work closely with data architects and software engineers to ensure data pipeline robustness, correctness, and recoverability. They will need to incorporate scalable techniques to manage large amounts of data and access various data sources as well as implement Kafka producers and consumers. This is a remote role.
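To illustrate the kind of work the role involves, below is a minimal sketch of a Kafka producer/consumer pair in Python. It assumes the kafka-python client and a broker at localhost:9092 with an "events" topic; the topic name, broker address, and helper functions are illustrative, not part of this posting.

```python
# Illustrative sketch only: a minimal Kafka producer and consumer of the
# kind this role involves. Assumes the kafka-python client and a broker
# reachable at localhost:9092 (both assumptions, not part of the posting).
import json


def serialize(record: dict) -> bytes:
    """Encode a record as UTF-8 JSON bytes for the Kafka message value."""
    return json.dumps(record).encode("utf-8")


def deserialize(payload: bytes) -> dict:
    """Decode a Kafka message value back into a dict."""
    return json.loads(payload.decode("utf-8"))


def run_pipeline() -> None:
    """Produce one event and consume it back. Call only when a broker
    is reachable at localhost:9092."""
    from kafka import KafkaProducer, KafkaConsumer  # assumed client library

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=serialize,
        acks="all",  # wait for full replication, aiding recoverability
    )
    producer.send("events", {"id": 1, "status": "ok"})
    producer.flush()

    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        value_deserializer=deserialize,
        auto_offset_reset="earliest",
        enable_auto_commit=False,  # commit offsets manually after processing
    )
    for message in consumer:
        print(message.value)
        consumer.commit()  # mark the record processed only once handled
        break
```

Manual offset commits and `acks="all"` are one common way to address the robustness and recoverability concerns the description mentions; a production pipeline would add schema management (e.g. Avro), error handling, and monitoring.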
Qualifications
Strong proficiency with Apache Kafka (including Kafka Connect, Kafka Streams, and Kafka Security)
Experience with various data storage systems and data formats (e.g. AWS S3, Hadoop, Avro, Parquet)
Experience with big data processing frameworks (e.g. Spark)
Experience with developing solutions utilizing Java, Python, and Scala
Strong experience with SQL and NoSQL databases
Experience working with message queuing, stream processing, and highly scalable systems
Bachelor's degree or higher in Computer Science, Software Engineering, or a related field
Excellent communication and collaboration skills with the ability to work effectively in a remote environment
Experience with cloud platforms such as AWS, Microsoft Azure, or Google Cloud Platform is a plus
Required profile
Experience
Level of experience: Junior (1-2 years)
Industry:
Management Consulting
Spoken language(s):
English