Offer summary
Qualifications:
- Minimum of 3 years' experience as a Data Engineer
- Strong knowledge of the Hadoop ecosystem and Big Data technologies
- Experience with at least one cloud provider (AWS, Azure, GCP)
- Proficiency in programming languages such as SQL, Python, Java, or Scala
- Understanding of DevOps practices (Git, Jenkins, Ansible, Docker, Terraform, Kubernetes)
Key responsibilities:
- Design, implement, and optimize data models
- Verify and evaluate model quality
- Deploy and productionize data collection, ingestion, and storage pipelines
- Monitor pipelines and ensure their reliable operation in production
- Define development best practices and implement CI/CD tools