Offer summary
Qualifications:
- Proficiency in Python
- Experience with PySpark
- Advanced experience with NoSQL and relational databases
- Advanced SQL for Data Lake/Lakehouse scenarios
- Intermediate experience with cloud platforms (preferably AWS)
Key responsibilities:
- Develop and maintain data pipelines (ETL/ELT/EL)
- Support Data Engineering, Data Science, and BI teams
- Ensure code adheres to best practices and standards
- Autonomously deliver complex features and projects
- Implement improvements to engineering team workflows