Offer summary
Qualifications:
- Solid experience with Databricks and PySpark
- Proficiency in SQL for large data volumes
- Experience with distributed data architectures
- Familiarity with cloud architecture (AWS)
- Experience with version control tools (Git)
Key responsibilities:
- Develop, optimize, and maintain scalable data pipelines
- Integrate different data sources and implement ingestion strategies
- Ensure data quality and automate monitoring processes
- Document implemented solutions and architectures
- Adhere to data security and governance practices