Offer summary
Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field
- At least 5 years of data engineering experience on AWS
- Proficiency with AWS services such as Glue, Redshift, and Lambda
- Experience with big data technologies such as Hadoop and Spark
- Hands-on experience with SQL and Python
Key responsibilities:
- Design and implement scalable data pipelines using AWS services
- Collaborate with cross-functional teams to meet their data needs
- Optimize workflows for performance and cost-efficiency on AWS
- Develop and maintain ETL processes that ensure data quality (a sketch of such a step follows this list)
- Document architecture and design decisions for reference
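For context on the tooling named above, here is a minimal sketch, assuming hypothetical cluster, database, user, and table names, of the kind of ETL step this role would own: a Python Lambda handler that writes incoming records into Redshift through the Redshift Data API (boto3's `redshift-data` client).

```python
# Illustrative sketch only, not part of the offer: a minimal AWS Lambda
# handler that loads incoming records into a Redshift staging table via
# the Redshift Data API. Cluster, database, user, and table names are
# hypothetical placeholders.
import boto3

redshift = boto3.client("redshift-data")

def handler(event, context):
    # The event is assumed to carry a list of {"id", "value"} records,
    # e.g. handed off by an upstream Glue job or stream trigger.
    records = event.get("records", [])
    for rec in records:
        redshift.execute_statement(
            ClusterIdentifier="example-cluster",  # hypothetical
            Database="analytics",                 # hypothetical
            DbUser="etl_user",                    # hypothetical
            Sql="INSERT INTO staging.events (id, value) VALUES (:id, :value)",
            Parameters=[
                {"name": "id", "value": str(rec["id"])},
                {"name": "value", "value": str(rec["value"])},
            ],
        )
    return {"statusCode": 200, "loaded": len(records)}
```

In practice a handler like this would batch inserts (for example with the Data API's `batch_execute_statement`) rather than issue one statement per record; the row-by-row loop here only keeps the sketch short.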