We are looking for a Middle+/Senior Data Engineer (ETL, Python, PySpark):
Technology Stack: Python, SQL, AWS, PySpark, Snowflake (must), GitHub Actions (must), Terraform (optional); Airflow and Datadog or Dynatrace are a plus
Customer Description:
Our Client is a leading global management consulting firm.
Numerous enterprise customers across industries rely on our Client's platform and services.
Project Description:
This project is part of a data initiative within the firm’s secure technology ecosystem.
The focus is on building and maintaining robust data pipelines that collect and process data from multiple enterprise systems, such as Jira, GitHub, AWS, ServiceNow, and other cloud infrastructure platforms.
The objective is to give leadership actionable insights aligned with strategic outcomes, and to help product and service teams target the right user groups and measure the effectiveness of various GenAI productivity initiatives.
Project Phase: ongoing
Project Team: 10+
Soft Skills:
Hard Skills / Need to Have:
Hard Skills / Nice to Have (Optional):
📩 Ready to Join?
We look forward to receiving your application and welcoming you to our team!