Requirements:
Minimum 10 years of development and hands-on coding experience.
Strong expertise in Hadoop ecosystem components such as Hive, PySpark, HDFS, Spark, Scala, and streaming (Kinesis, Kafka).
Proficiency in PySpark and Python development.
Ability to write complex SQL queries and Hive/Impala queries.
Key responsibilities:
Develop and maintain big data solutions using Hadoop ecosystem tools.
Design and implement data pipelines and streaming solutions on AWS.
Write complex SQL and Hive queries for data analysis.
Collaborate with teams to optimize data processing workflows.
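As a rough illustration of this kind of work, a minimal PySpark sketch along these lines might read as follows (the table name, HDFS path, and job name are hypothetical placeholders, not part of this listing):

```python
# Minimal PySpark sketch: read a hypothetical Hive table, run an
# aggregation in Spark SQL, and write the result back to HDFS as Parquet.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-event-rollup")   # hypothetical job name
    .enableHiveSupport()             # allows Spark SQL to read Hive tables
    .getOrCreate()
)

# "analytics.events" is a placeholder Hive table used for illustration only.
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM analytics.events
    WHERE event_date >= date_sub(current_date(), 7)
    GROUP BY event_date, event_type
""")

# Write the weekly rollup to a hypothetical HDFS path as Parquet.
daily_counts.write.mode("overwrite").parquet(
    "hdfs:///warehouse/rollups/daily_event_counts"
)

spark.stop()
```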