Job description

Responsibilities:
Design and develop data processing pipelines using Spark and Databricks.
Collaborate with cross-functional teams to improve data infrastructure.
Troubleshoot and resolve complex data engineering issues.
Implement data orchestration and transformation solutions.

Requirements:
5+ years of experience as a Big Data Engineer with a strong focus on Apache Spark and Databricks.
Proficiency in Scala for data processing and transformation.
Experience with version control systems, particularly Git and GitHub.
Strong knowledge of cloud computing platforms, with a preference for experience with Azure.
Hands-on experience with Azure Data Factory or similar data orchestration tools.
Familiarity with data modeling concepts and best practices.
Strong problem-solving skills and the ability to troubleshoot complex data engineering issues.
Excellent communication and collaboration skills to work effectively within a cross-functional team.
Certifications in relevant technologies (e.g., Azure Data Engineer, Databricks) are a plus.
If you are a seasoned Big Data Engineer with expertise in Spark, Scala, and Azure, and a passion for solving complex data engineering challenges, we encourage you to apply. Join our team and play a crucial role in shaping our data infrastructure and analytics capabilities.