1. Minimum 6 years of experience creating Spark jobs using Java/Scala
2. Should have strong experience developing data loading and transformation tasks: ingesting from external sources, merging data, performing data enrichment, and loading into target data destinations
3. Must have good knowledge of Big Data tools, including Hive and HBase tables
4. Should have experience with Spark Streaming
5. Must have good knowledge of SQL
6. Must have good knowledge of data warehouse concepts
7. Must have strong analytical skills to troubleshoot and diagnose issues
8. Should have hands-on Unix/Linux knowledge
9. Knowledge of AWS and PySpark is an advantage.