
Big Data Engineer | Digital & Data


Job description

Join Civitta - an international company with 750+ colleagues across 20+ countries. We focus on management consulting, funding, and digital solutions. Originating from Central and Eastern Europe, we also deliver projects across Central Asia, the Middle East, and the United States.

Help businesses turn strategy into digital solutions: from AI-powered marketing to data-driven products and custom software. We combine business thinking with technical execution.
Every day, you might build an e-commerce platform using predictive analytics, design a high-impact digital campaign, or develop a data-driven product that drives measurable results.

Take the next step in your journey and join us as a Big Data Engineer in one of our EU locations!


You will:
  • Design and implement data processing systems, including data warehouses, data lakes, and real-time processing platforms;
  • Configure and manage technologies such as Hadoop, Spark, and Kafka, as well as cloud environments across Azure, AWS, and GCP;
  • Build and maintain automated ETL/ELT processes for data collection, cleansing, and transformation;
  • Ensure seamless, reliable data flow between diverse systems and sources, with a strong focus on data quality and consistency;
  • Optimize data systems for high-volume, high-velocity workloads;
  • Design and implement distributed computing solutions that maintain performance at scale, proactively identifying and resolving bottlenecks.
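
To give a flavour of the cleanse-and-transform work these pipelines do, here is a toy sketch in plain Python with invented field names; a production pipeline at this scale would express the same logic as a PySpark job.

```python
# Toy sketch (plain Python, hypothetical field names) of the
# cleanse-and-transform stage of an ETL pipeline. In production this
# logic would typically run as a PySpark job over millions of rows.

RAW_EVENTS = [
    {"user_id": "42", "amount": "19.5", "country": "EE"},
    {"user_id": "",   "amount": "5.0",  "country": "LT"},   # missing id -> dropped
    {"user_id": "7",  "amount": "oops", "country": "LV"},   # bad amount -> dropped
    {"user_id": "11", "amount": "4.25", "country": "EE"},
]

def cleanse(event):
    """Return a typed, validated record, or None if the event is unusable."""
    if not event.get("user_id"):
        return None
    try:
        amount = float(event["amount"])
    except ValueError:
        return None
    return {"user_id": int(event["user_id"]),
            "amount": amount,
            "country": event["country"]}

def transform(records):
    """Aggregate cleansed records: total amount per country."""
    totals = {}
    for rec in records:
        totals[rec["country"]] = totals.get(rec["country"], 0.0) + rec["amount"]
    return totals

cleansed = [rec for ev in RAW_EVENTS if (rec := cleanse(ev)) is not None]
print(transform(cleansed))  # the two malformed events are filtered out
```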

Requirements:
  • Hands-on experience with PySpark for large-scale data processing;
  • Strong knowledge of Apache Kafka for real-time data streaming;
  • Cloud platform experience across Azure, AWS, and/or GCP;
  • Proven ability to design and optimize ETL/ELT pipelines;
  • Familiarity with Hadoop ecosystems and distributed computing principles;
  • Solid understanding of data warehouse and data lake architectures;
  • Nice to have: experience with infrastructure-as-code tools (Terraform, Bicep); knowledge of data governance and security best practices; exposure to orchestration tools such as Apache Airflow or Azure Data Factory.
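
On the real-time side, the Kafka work above typically feeds windowed aggregations. The sketch below simulates a tumbling-window count in plain Python; the in-memory "topic", event types, and window size are all invented for illustration, where a real job would use Kafka plus a stream processor such as Spark Structured Streaming.

```python
from collections import Counter

# Toy illustration of tumbling-window aggregation, the kind of logic a
# Kafka + Spark Structured Streaming job expresses declaratively. The
# in-memory "topic" and event shapes are invented for this sketch.

def stream_events():
    """Stand-in for consuming a Kafka topic: (timestamp_s, event_type)."""
    yield from [(0, "click"), (3, "view"), (7, "click"),
                (12, "click"), (14, "view")]

def tumbling_window_counts(events, window_s=10):
    """Count events per type within fixed, non-overlapping time windows."""
    windows = {}
    for ts, kind in events:
        start = (ts // window_s) * window_s   # window this event falls into
        windows.setdefault(start, Counter())[kind] += 1
    return windows

counts = tumbling_window_counts(stream_events())
# window [0, 10): {"click": 2, "view": 1}; window [10, 20): {"click": 1, "view": 1}
```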

Benefits:
  • Flexible Working Hours – Manage your workday with flexibility and the option to work from home when needed, while enjoying our city-centre office as a convenient, collaborative workspace;
  • Culture & Connection – From team bonding activities like Christmas parties and summer events to spontaneous celebrations, monthly breakfasts, or team lunches, we celebrate wins, big or small, together;
  • Competitive salary.