Offer summary
Qualifications:
- At least 2 years of experience with big data technologies (Spark, Hadoop, Hive, Kafka)
- Programming with Spark/Scala
- Knowledge of different database structures, including SQL
- Experience working in an agile CI/DevOps paradigm
- Expertise in high-volume data integration and management

Key responsibilities:
- Ingesting and processing data with Spark and Scala (see the illustrative sketch below)
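
As a rough illustration of this responsibility, the sketch below shows a minimal Spark/Scala ingestion-and-processing job. It is not taken from the offer: the input path, output path, and column names ("events.json", "status", "country") are hypothetical placeholders chosen for the example.

```scala
// Minimal illustrative sketch of ingesting and processing data with Spark/Scala.
// All paths and column names here are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("IngestExample")
      .master("local[*]") // local mode for illustration only
      .getOrCreate()

    // Ingest: read a hypothetical JSON source into a DataFrame
    val raw = spark.read.json("events.json")

    // Process: filter and aggregate on assumed columns
    val summary = raw
      .filter(col("status") === "active")
      .groupBy(col("country"))
      .agg(count("*").as("active_events"))

    // Persist the result as Parquet to a hypothetical output path
    summary.write.mode("overwrite").parquet("output/active_events_by_country")

    spark.stop()
  }
}
```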