Job description
Role: Databricks Engineer with Python and PySpark experience
Location: Canada / Remote
Type: Contract
The JD below can be used as a reference:
We are seeking a skilled Databricks Engineer with expertise in Python and PySpark to design, develop, and optimize data pipelines and workflows. Experience working with Apache Spark, ETL processes, and big data architectures in an Azure environment is required.
Key Responsibilities:
• Develop and maintain scalable data pipelines using Databricks, PySpark, and Python
• Work with structured and unstructured data in cloud-based environments
• Implement data transformation, cleansing, and integration solutions
• Collaborate with data engineers, analysts, and stakeholders to meet business requirements
Requirements:
• Strong experience with Databricks, Python, and PySpark
• Hands-on experience with Apache Spark and big data processing
• Proficiency in SQL and working with the Azure cloud platform
• Experience with ETL development and performance tuning
• Knowledge of data lake, data warehouse, and data modeling principles