Sr Hadoop Big Data Engineer (Remote)

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • Minimum 10 years of development and hands-on coding experience.
  • Strong expertise in Hadoop ecosystem components such as Hive, PySpark, HDFS, Spark, Scala, and streaming (Kinesis, Kafka).
  • Proficiency in PySpark and Python development.
  • Ability to write complex SQL and Hive/Impala queries.

Key responsibilities:

  • Develop and maintain big data solutions using Hadoop ecosystem tools.
  • Design and implement data pipelines and streaming solutions on AWS.
  • Write complex SQL and Hive queries for data analysis.
  • Collaborate with teams to optimize data processing workflows.

Cubetech Solutions
2 - 10 Employees

Job description

Sr Hadoop Big Data Engineer

Required Skills:

  • Experience with Hadoop ecosystem components: Hive, PySpark, HDFS, Spark, Scala, and streaming (Kinesis, Kafka)
  • Strong experience in PySpark and Python development
  • Proficient in writing Hive and Impala queries
  • Ability to write complex SQL queries
  • Experience with AWS Lambda, EMR, clusters, partitions, and data pipelines
  • Must have 10+ years of development and hands-on coding experience

Please send your resume to hr@cubetechus.com or apply at the link below.

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English
