Hands-on experience with Azure Databricks is essential. Proficiency in Python and a solid understanding of Spark are required, along with experience in automated unit testing and CI/CD pipelines. Familiarity with the SDLC and agile methodologies is also necessary.
Key responsibilities:
Design, develop, and maintain ETL/ELT data pipelines using Azure Databricks and Spark.
Optimize data transformation workflows for performance and cost-efficiency.
Collaborate with analysts and data scientists to deliver clean, reliable data.
Implement automated unit tests and contribute to team standards through code reviews.
emagine is a high-end business & IT consulting company challenging the way businesses leverage high-end expertise. Enabled by our vast network of expert consultants, we tailor our services to the unique needs of each business, powering progress, solving challenges, and delivering real results.

As the world evolves, scalability becomes increasingly important in modern, technology-driven organizations. It is around this critical need that emagine stands out, with a unique business model and delivery capacity. With emagine, companies gain the flexibility needed to navigate and succeed in a complex and ever-changing technological landscape. Through a seamless and tailored delivery model, we help you scale your business.

We help organizations across all industries and sectors; these are our services:
• Team extension: Scale with dedicated consultants on time & material
• Nearshoring as a Service: Leverage a fully scalable development department in Poland
• Teams: Power up with a dedicated team to help you build, develop, and execute your projects
• Managed Services: Let us tailor and manage your projects, delivering on your specific requirements and needs

emagine was founded in 1989 and has a long track record of delivering expertise and business impact for blue-chip companies across Europe. Today, we are 400+ permanent employees working from departments in 10 countries. Furthermore, we own three state-of-the-art nearshore centres in Poland and one offshore centre in India. With 40,000 experts in our network and 4,500+ partnered consultants on active contracts, we currently help our 500+ clients worldwide with high-end expertise.
Join a modern data engineering team focused on delivering scalable, cloud-native data solutions. You’ll work with Azure, Databricks, and Spark to build high-performance data pipelines that support business analytics, data science, and reporting. The environment is agile, collaborative, and quality-driven—with strong practices around CI/CD, testing, and performance optimization.
What You'll Do
Design, develop, and maintain robust ETL/ELT data pipelines using Azure Databricks and Spark
Optimize data transformation workflows for performance and cost-efficiency
Build and deploy data pipelines through CI/CD workflows using Azure DevOps (or similar)
Work closely with analysts, data scientists, and product teams to deliver clean, reliable data
Implement and monitor automated unit tests, ensuring code quality and maintainability
Contribute to team standards through code reviews and knowledge sharing
Follow SDLC principles in agile team setups
Tech Stack
Azure Databricks – core data processing platform
Azure Data Factory, Azure Data Lake Storage, Azure DevOps
Spark (including Spark SQL) – for distributed processing
Python – for scripting, transformation, and orchestration
Git, CI/CD pipelines – version control and automation
Requirements
Hands-on experience with Azure Databricks (must-have)
Experience with Azure services such as Data Factory
Proficient in Python programming
Solid understanding of Spark, including Spark SQL and performance optimization
Experience with automated unit testing and code quality best practices
Working knowledge of CI/CD pipelines (Azure DevOps or similar)
Familiarity with SDLC and agile methodologies
English proficiency at B2 level
Nice to Have
Experience with Snowflake
Knowledge of dbt, Airflow, or Data Mesh architecture
Background in regulated industries such as pharma or finance
What We Offer
Opportunity to work on high-impact, data-driven projects with modern architecture
Long-term collaboration with flexible B2B or Employment Contract options
Private healthcare and sports benefits (available for both contract types)
Learning & development budget with time allocated for upskilling
Friendly, quality-focused team and cutting-edge tech stack