Requirements:
Minimum of 10 years of hands-on development and coding experience.
Proficiency in Hadoop ecosystem components such as Hive, PySpark, HDFS, Spark, Scala, and streaming technologies (Kinesis, Kafka).
Strong experience in PySpark and Python development.
Ability to write complex SQL queries and Hive/Impala queries.
Key responsibilities:
Develop and maintain big data pipelines using Hadoop and Spark technologies.
Write and optimize complex SQL and Hive/Impala queries.
Work with AWS services such as Lambda and EMR, and manage data clusters.
Collaborate with teams to design scalable data solutions.
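
As a rough illustration of the kind of pipeline work described above, the sketch below reads a Hive table with PySpark, aggregates it, and writes the result out. The table name, filter date, and output path (sales_db.transactions, s3://example-bucket/...) are hypothetical and serve only as an example of the stack named in this posting.

# Minimal PySpark sketch; names and paths are illustrative, not from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-aggregation")
    .enableHiveSupport()  # enables querying Hive tables directly
    .getOrCreate()
)

# Read a Hive table, aggregate by region, and write the result as Parquet.
daily_totals = (
    spark.table("sales_db.transactions")            # hypothetical Hive table
    .where(F.col("txn_date") == "2024-01-01")       # hypothetical partition filter
    .groupBy("region")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/output/daily_totals/")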