Sr AWS Data Engineer (Scala Developer)

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • Minimum of 10+ years of experience in data engineering.
  • Strong proficiency in Scala and Python development.
  • Hands-on experience with Hadoop ecosystem components such as Hive, PySpark, HDFS, Spark, and Kafka.
  • Knowledge of AWS services such as Lambda, EMR, and Data Pipelines.

Key responsibilities:

  • Develop and maintain data pipelines using Spark, Kafka, and AWS services.
  • Write complex SQL and Hive queries for data analysis.
  • Collaborate with teams to design scalable data solutions.
  • Ensure data quality and optimize data workflows.

Cubetech Solutions
2 - 10 Employees

Job description

Sr Data Engineer with Scala development (must-have)
Hadoop, Spark, Python, Kafka (Remote)
Experience: 10+ years
Required Skills:
Experience with Hadoop ecosystem components: Hive, PySpark, HDFS, Spark, Scala, and streaming (Kafka)
Strong experience in Scala and Python development
Proficient in writing Hive and Impala queries
Ability to write complex SQL queries
Experience with AWS Lambda, EMR, clusters, partitions, and data pipelines

Please send resumes only if you have 10+ years of experience to jobs@cubetechus.com

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English
