
Senior Data Engineer (Kafka, Python, Elasticsearch) (CPT Remote)

Remote: Full Remote
Contract: 
Experience: Senior (5-10 years)
Work from: South Africa

Offer summary

Qualifications:

  • Proven experience in designing data pipelines
  • Expertise in managing Elasticsearch clusters
  • Strong proficiency with ETL processes
  • Good programming skills with Python, Scala, or Java
  • Knowledge of DevOps and automation practices

Key responsibilities:

  • Design, implement, and maintain data pipelines
  • Develop and maintain high-performance Elasticsearch clusters
  • Collaborate with cross-functional teams for ETL
  • Troubleshoot data pipeline and Elasticsearch issues
  • Continuously monitor and optimise data performance
Datafin Recruitment · Human Resources, Staffing & Recruiting · 11-50 Employees · https://www.datafin.com/

Job description

Senior Data Engineer (Kafka, Python, Elasticsearch) (CPT Remote) IT - Analyst, Data Management

Cape Town - Western Cape ~ Remote

ENVIRONMENT:

An award-winning leader in contact centre AI software seeks a passionate Data Engineer with expertise in Kafka pipelines and a thorough understanding of Elasticsearch, looking to contribute to cutting-edge technology and make a difference in the Financial Services industry. You will design, implement, and maintain robust data pipelines, and troubleshoot data pipeline and Elasticsearch issues while ensuring the data infrastructure aligns with business needs. The ideal candidate has proven experience in designing and implementing data pipelines, including end-to-end testing of analytics pipelines, and in managing and optimising Elasticsearch clusters, including performance tuning and scalability. You will also be proficient with Python, Scala, or Java, and with DevOps practices.

DUTIES:

  • Design, implement, and maintain robust data pipelines, ensuring the efficient and reliable flow of data across systems.
  • Develop and maintain Elasticsearch clusters, fine-tuning them for high performance and scalability.
  • Collaborate with cross-functional teams to extract, transform, and load (ETL) data into Elasticsearch for advanced analytics and search capabilities.
  • Troubleshoot data pipeline and Elasticsearch issues, ensuring the integrity and availability of data for analytics and reporting.
  • Participate in the design and development of data models and schemas to support business requirements.
  • Continuously monitor and optimise data pipeline and Elastic performance to meet growing data demands.
  • Collaborate with Data Scientists and Analysts to enable efficient data access and query performance.
  • Contribute to the evaluation and implementation of new technologies and tools that enhance Data Engineering capabilities.
  • Demonstrate strong analytical, problem-solving, and troubleshooting skills to address data-related challenges.
  • Collaborate effectively with team members and stakeholders to ensure data infrastructure aligns with business needs.
  • Embody the company values of playing to win, putting people over everything, driving results, pursuing knowledge, and working together.
  • Implement standards, conventions and best practices.
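
The ETL duties above (transforming records from a pipeline into documents for Elasticsearch) can be sketched minimally in Python. This is an illustration only: the index name `interactions`, the field names, and the record shape are hypothetical, and a real pipeline would replace the in-memory list with a Kafka consumer and POST the generated NDJSON to the Elasticsearch `_bulk` endpoint.

```python
import json
from datetime import datetime, timezone

def transform(record: dict) -> dict:
    """Normalise a raw event into an indexable document (hypothetical schema)."""
    return {
        "call_id": record["id"],
        "agent": record.get("agent", "unknown"),
        "duration_s": int(record.get("duration_ms", 0)) // 1000,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def to_bulk_actions(records, index="interactions"):
    """Yield Elasticsearch bulk-API action/source pairs for a batch of records."""
    for rec in records:
        yield {"index": {"_index": index, "_id": rec["id"]}}
        yield transform(rec)

# Stand-in for messages consumed from a Kafka topic:
raw = [{"id": "c-1", "agent": "thandi", "duration_ms": "93500"}]
lines = [json.dumps(a) for a in to_bulk_actions(raw)]
# `lines` is the newline-delimited body a client would send to /_bulk.
```

Keeping the transform a pure function makes it straightforward to unit-test independently of Kafka or Elasticsearch, which supports the end-to-end testing requirement below.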

REQUIREMENTS:

  • Proven experience in designing and implementing data pipelines.
  • Experience with End-to-End Testing of analytics pipelines.
  • Expertise in managing and optimising Elasticsearch clusters, including performance tuning and scalability.
  • Strong proficiency with data extraction, transformation, and loading (ETL) processes.
  • Familiarity with data modeling and schema design for efficient data storage and retrieval.
  • Good programming and scripting skills using languages like Python, Scala, or Java.
  • Knowledge of DevOps and automation practices related to Data Engineering.

As a Data Engineer with a focus on Kafka pipelines and Elastic, you will work with the following technologies:

Data Pipelines:

  • Kafka / ksqlDB
  • Python
  • Redis
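
One common way the stack above fits together is using Redis to make consumption idempotent under Kafka's at-least-once delivery. A minimal sketch, with a plain in-process set standing in for Redis (a real pipeline would use `SET key NX EX <ttl>` against a Redis server; the message ids are hypothetical):

```python
def process_once(message_id: str, seen: set, handler) -> bool:
    """Handle a message only if its id has not been seen before
    (deduplication for at-least-once delivery)."""
    if message_id in seen:
        return False          # duplicate delivery: skip
    seen.add(message_id)      # in Redis: SET message_id NX EX <ttl>
    handler(message_id)
    return True

seen: set = set()
handled = []
for mid in ["m-1", "m-2", "m-1"]:   # "m-1" is redelivered
    process_once(mid, seen, handled.append)
# handled == ["m-1", "m-2"]
```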

Data Storage and Analysis:

  • Elasticsearch (cluster management and optimisation)
  • AWS S3
  • PostgreSQL

DevOps:

  • AWS

Advantageous

  • Experience with Data Engineering in an Agile / Scrum environment.
  • Familiarity with ksqlDB / Kafka or other stream processing frameworks.
  • Familiarity with Data Lakes and how to query them.
  • Experience with integrating Machine Learning models into data pipelines.
  • Familiarity with other data-related technologies and tools.
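
Integrating a Machine Learning model into a pipeline, as mentioned above, typically means adding an enrichment stage that scores each record before indexing. A hedged sketch: `sentiment_score` below is a toy stand-in for a real model's predict call, and the `transcript`/`sentiment` field names are hypothetical.

```python
def sentiment_score(text: str) -> float:
    """Toy stand-in for a real model's predict() call."""
    positive = {"great", "good", "thanks"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def enrich(record: dict) -> dict:
    """Pipeline stage: attach a model score to each record before indexing."""
    return {**record, "sentiment": sentiment_score(record.get("transcript", ""))}

enriched = [enrich(r) for r in [{"id": "c-1", "transcript": "great good thanks"}]]
```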

ATTRIBUTES:

  • Strong analytical and problem-solving abilities, with a keen attention to detail.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • A commitment to staying up to date with the latest developments in Data Engineering and technology.
  • Alignment with company values and a dedication to driving positive change through data.


Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Human Resources, Staffing & Recruiting
Spoken language(s): English

Other Skills

  • Analytical Thinking
  • Problem Solving
  • Troubleshooting (Problem Solving)
  • Detail Oriented
  • Collaboration
  • Communication
