Requirements:
Minimum 6 years of experience in creating Spark jobs using Java or Scala.
Strong knowledge of data loading, transformation, and enrichment techniques.
Experience with Big Data tools such as Hive and HBase.
Proficiency in Spark Streaming, SQL, and data warehouse concepts.
Responsibilities:
Develop and maintain Spark jobs for data processing.
Handle data loading, transformation, and enrichment tasks.
Work with Big Data tools such as Hive and HBase.
Analyze and troubleshoot data processing issues.
Job description
Location: Chennai / Hyderabad / Bangalore
Detailed JD:
1. Minimum 6 years of experience in creating Spark jobs using Java/Scala
2. Should have very good experience in developing data loading and transformation tasks: reading from external sources, merging data, performing data enrichment, and loading into target data destinations
3. Must have good knowledge of the Big Data tools Hive and HBase, including table design
4. Should have experience with Spark Streaming
5. Must have good knowledge of SQL
6. Must have good knowledge of data warehouse concepts
7. Must have good analytical skills to troubleshoot data processing issues
8. Should have hands-on Unix/Linux knowledge
9. Knowledge of AWS and PySpark will be an advantage.