Description: We are seeking a skilled Databricks Data Engineer to design, build, and optimize large-scale data pipeline solutions. In this role, you will leverage Databricks, Spark, and cloud data platforms to deliver reliable, high-performance data products that support business initiatives.
Key Responsibilities
1. Develop and optimize ETL/ELT pipelines using Databricks and Apache Spark.
2. Build scalable data solutions on Azure using Databricks.
3. Ensure data quality, reliability, and performance across all workflows.
4. Collaborate with IT and Business teams to deliver curated datasets.
5. Automate pipeline execution using Databricks Jobs, Databricks Workflows, and CI/CD.
6. Implement best practices in data governance, security, and monitoring.
Required Skills
1. Strong experience with Databricks, Spark (PySpark/Scala), and SQL.
2. Hands-on experience with cloud platforms (Azure/AWS/GCP).
3. Knowledge of Delta Lake, data modeling, change data capture (CDC), and distributed data processing.
4. Familiarity with Git, CI/CD pipelines, and workflow orchestration.
5. Solid understanding of data architecture and performance optimization.