
Data Engineer (f/m/d)


Job description

Purpose of position

Data sits at the heart of the company. This role will work closely with the Finance Data & Process squad, supporting the development of financial data pipelines and datasets that enable accounting processes, ERP integrations, and operational finance reporting. This includes translating finance and accounting requirements into robust, auditable, and scalable data solutions.

As a Data Engineer, your role focuses on designing, building, and optimizing data pipelines, curated datasets, and analytical data models within Azure, AWS, and Databricks environments. The position involves working with large-scale datasets, improving performance and reliability, and translating business logic into well-structured tables, metrics, and transformation rules.

Key Responsibilities

Data Engineering & Pipelines

  • Build and maintain ETL/ELT pipelines using Databricks, PySpark, Spark SQL, and Delta Lake.
  • Develop production-ready notebooks, workflows, and data lake integrations.
  • Apply best practices for Spark optimization (partitioning, caching, avoiding shuffle, file compaction).
  • Support the development of finance-specific data pipelines, including datasets used for financial bookings, reconciliations, and ERP integrations.
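One of the optimization practices named above, file compaction, can be illustrated with a small sketch. The function below is a hypothetical, simplified compaction planner (the names and target size are illustrative, not part of any Databricks API): it greedily groups many small files into batches close to a target size, which is the idea behind Delta Lake's OPTIMIZE command.

```python
def plan_compaction(file_sizes_mb, target_mb=128):
    """Greedy grouping of small files into compaction batches of roughly
    target_mb each. A toy illustration of why compacting many small files
    into fewer large ones reduces per-file overhead at read time."""
    groups, current, current_size = [], [], 0
    for size in sorted(file_sizes_mb, reverse=True):
        # Close the current batch once adding this file would overshoot the target
        if current and current_size + size > target_mb:
            groups.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    return groups

# Five small files become three compaction batches near the 128 MB target
print(plan_compaction([100, 60, 50, 20, 10]))
```

In a real Databricks pipeline this planning is handled by the platform itself; the sketch only conveys the trade-off the bullet refers to.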

Dataset Modeling & Business Logic

  • Design curated datasets, semantic layers, and data marts that power analytics and reporting.
  • Partner with business stakeholders to understand requirements, operational challenges and decision-making needs, ensuring alignment between business expectations and technical constraints.
  • Convert business requirements into data models, defining tables, metrics, KPIs, and transformation rules.
  • Work closely with product owners and analysts to align datasets with business processes.
  • Support the structuring of datasets used in financial processes (e.g. revenue, costs, working capital, reconciliations).
  • Document data models, lineage, logic, and dataset behavior clearly and consistently, with particular attention to traceability and auditability of financial data.

Performance, Monitoring & Reliability

  • Work with very large datasets, optimizing transformations and storage for scale and cost.
  • Tune and maintain MSSQL databases (indexes, running processes, performance diagnostics).
  • Implement robust data validation, schema enforcement, and quality checks across pipelines.
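The validation and schema-enforcement work described above can be sketched in miniature. The example below is a plain-Python illustration with a hypothetical schema (in practice this would be expressed with Spark schema enforcement, Delta constraints, or a data-quality framework): each row is checked for missing fields and wrong types, and violations are collected rather than silently dropped.

```python
# Hypothetical schema for a finance dataset; field names are illustrative only
EXPECTED_SCHEMA = {"booking_id": str, "amount": float, "currency": str}

def validate_row(row, schema=EXPECTED_SCHEMA):
    """Return a list of violations for one record: missing fields and
    type mismatches. An empty list means the row passes the check."""
    errors = []
    for field, ftype in schema.items():
        if field not in row:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], ftype):
            errors.append(f"wrong type for {field}: {type(row[field]).__name__}")
    return errors

print(validate_row({"booking_id": "b1", "amount": 10.0, "currency": "EUR"}))  # []
print(validate_row({"booking_id": "b1", "amount": "10"}))  # two violations
```

Collecting violations per row (instead of failing on the first) makes it easy to quarantine bad records and report quality metrics across a pipeline run.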

Teamwork & Delivery

  • Collaborate with engineers, analysts, and business stakeholders to deliver reliable data solutions.
  • Communicate effectively with both technical and non‑technical audiences.

Qualifications

Required

  • Hands‑on experience with Databricks (PySpark, Spark SQL, Delta Lake, Lakebase), PostgreSQL.
  • Background working with large, distributed datasets.
  • Proficiency in Python, PySpark and SQL.
  • Experience with data modeling, curated datasets, semantic layers, and medallion architecture.
  • Experience with AWS (in particular Lambda, CloudWatch, and Step Functions).
  • Competence in using Datadog or similar observability/monitoring platforms.
  • Strong debugging, problem solving, and communication skills.
  • Comfortable operating in Agile environments.
  • Strong commitment to thorough documentation.

Preferred

  • Experience with Power BI, Tableau, or Luzmo.
  • Understanding of CI/CD practices for data pipelines.
  • Bachelor’s degree in Computer Science, Data Engineering, or related field.

 

Our Offer

  • Flexi-Week and Work-Life Balance: We prioritise your mental health and wellbeing, offering you a flexible four-day Flexi-Week at full pay and with no reduction to your annual holiday allowance. We also offer a variety of different paid special leaves.
  • Remote Working Allowance: You will receive a monthly allowance to cover part of your running costs. In addition, we will support you in setting up your remote workspace appropriately.
  • Flexi-Office: We offer an international culture and flexibility through our Flexi-Office and hybrid/remote work possibilities to work across Awin regions.
  • Meal Vouchers: You will be supported with a set net sum to spend on a variety of lunches.
  • Health & Wellbeing: The insurance covers several types of health, vision and / or dental treatments for you and for up to one additional family member.
  • Remote Working Furniture Package: After 3 months of employment, you will be eligible for a furniture package, which should enable you to set up a proper workplace at your remote working location.
  • Appreciation: Thank and reward colleagues by sending them a voucher through our peer-to-peer program.

Established in 2000, Awin is proud of our dynamic, social and inclusive culture.

Like all businesses, we’ve had to adapt and nurture our culture in a virtual environment. Our virtual ‘Life @ Awin’ hub brings our colleagues from across the globe together for various social activities.

Diversity & Inclusion are paramount to us, and we proudly pursue and hire diverse team members. We champion uniqueness and authenticity; this is who we are at our core. Our network of affiliate partnerships are diverse and transparent, as are the employees powering our vision to build the world’s leading open partner ecosystem. We welcome all backgrounds, identities, and experiences. If you need support at any point in the application or interview process, please let us know.

Awin is part of the Axel Springer group. Learn more at axelspringer.com/en/, and explore the Axel Springer Essentials here: axelspringer.com/en/inside/the-essentials-what-we-have-adapted-and-why  

Apply now to begin the next stage of your career at a progressive company that supports both your professional and personal development.

#LI-RS1
