
Senior Data Engineer (with Spark, Airflow)

extra holidays - extra parental leave
Remote: Full Remote
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • 5+ years of experience in data engineering
  • Skilled in SQL and relational databases
  • Experience with Big Data pipelines and architectures
  • Knowledge of Apache Spark and Java
  • Experience in ETL processes with Apache Airflow

Key responsibilities:

  • Create and maintain optimal data transformation pipelines
  • Work with complex financial data sets
  • Identify and implement process improvements
  • Build scalable data infrastructure using open-source technologies
  • Collaborate with cross-functional teams for data-related issues
accesa.eu | Information Technology & Services | 1001-5000 Employees | https://careers.accesa.eu/

Job description

Company Description

Accesa is a leading technology company headquartered in Cluj-Napoca, with offices in Oradea, Bucharest, and Timisoara, and 20 years of experience in turning business challenges into opportunities and growth.

A value-driven organisation, it has established itself as a partner of choice for major brands in Retail, Manufacturing, Finance, and Banking. It covers the complete digital evolution journey of its customers, from ideation and requirements setup to software development and managed services solutions.

With more than 1,200 IT professionals, Accesa also has a fast-growing footprint, establishing itself as an employer of choice for IT professionals who are passionate about problem-solving through technology. Coming together in strong tech teams with a customer-centric approach, they enable businesses to grow, delivering value for clients, partners, the industry, and the community.

Job Description

One of our clients operates prominently in the financial sector, where we enhance operations across their extensive network of 150,000 workstations and support a workforce of 4,500 employees. As part of our commitment to optimizing data management strategies, we are migrating data warehouse (DWH) models into data products within the Data Integration Hub (DIH). 

Responsibilities:

  • Drive Data Efficiency: Create and maintain optimal data transformation pipelines (a minimal orchestration sketch follows this list).

  • Master Complex Data Handling: Work with large, complex financial data sets to generate outputs that meet functional and non-functional business requirements.  

  • Lead Innovation and Process Optimization: Identify, design, and implement process improvements such as automating manual processes, optimizing data delivery, and re-designing infrastructure for higher scalability. 

  • Architect Scalable Data Infrastructure: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source technologies. 

  • Unlock Actionable Insights: Build/use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.  

  • Collaborate with Cross-Functional Teams: Work with clients and internal stakeholders, including Senior Management, Department Heads, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
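
As an illustration of the kind of work described above (scheduled extraction, transformation, and loading orchestrated with Apache Airflow and executed on Apache Spark), here is a minimal sketch. The DAG id, schedule, file paths, connection id, and Java entry class are assumptions made for illustration, not details taken from the client's environment.

```python
# Minimal, hypothetical Airflow DAG that submits a Spark transformation job.
# All names and paths below are assumptions for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="dwh_to_dih_transform",        # assumed name for a DWH-to-data-product flow
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # assumed batch cadence (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    # Submit a Spark job (e.g. a Java jar) that transforms raw DWH extracts
    # into a curated data product for the Data Integration Hub.
    transform = SparkSubmitOperator(
        task_id="transform_financial_positions",
        application="/opt/jobs/dih-transformations.jar",  # assumed artifact path
        java_class="eu.example.dih.PositionsJob",         # assumed entry class
        conn_id="spark_default",
        application_args=["--run-date", "{{ ds }}"],
    )
```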

Qualifications

Must have:

  • 5+ years of experience in a similar role, preferably within Agile teams 

  • Skilled in SQL and relational databases for data manipulation 

  • Experience in building and optimizing Big Data pipelines and architectures 

  • Familiarity with innovative technologies in message queuing, stream processing, and scalable big data storage solutions 

  • Knowledge of the Apache Spark framework and object-oriented programming in Java; experience with Python is a plus (a minimal PySpark sketch follows this list)

  • Proven experience in performing data analysis and root cause analysis on diverse datasets to identify opportunities for improvement.  

  • Experience with ETL processes, including scheduling and orchestration using tools like Apache Airflow (or similar) 

  • Experience automating CI/CD pipelines using ArgoCD, Tekton, and Helm to streamline deployment and improve efficiency across the SDLC 

  • Experience managing Kubernetes deployments (e.g. OpenShift), focusing on scalability, security, and optimized container orchestration 

  • Strong analytical skills in working with both structured and unstructured data 
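
The stack above combines SQL-style data manipulation with the Spark framework. As a minimal sketch of what that looks like in practice, the example below aggregates a structured extract into a partitioned output. It is written in PySpark for brevity, even though the role emphasises Spark with Java; the table, column, and path names are assumptions for illustration only.

```python
# Hypothetical PySpark sketch: filter, aggregate, and write a partitioned data product.
# Input/output paths and column names are assumptions, not the client's data model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dih-example").getOrCreate()

# Read a structured extract of transactions.
transactions = spark.read.parquet("/data/raw/transactions")  # assumed input path

# SQL-style transformation: keep settled transactions and total them per account and day.
daily_totals = (
    transactions
    .filter(F.col("status") == "SETTLED")
    .groupBy("account_id", F.to_date("booked_at").alias("booking_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the result back as a partitioned output for downstream consumers.
daily_totals.write.mode("overwrite").partitionBy("booking_date").parquet(
    "/data/products/daily_totals"  # assumed output path
)
```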

 

Nice to have:  

  • Expertise in processing large, disconnected datasets to extract actionable insights 

  • Technical skills in the following areas are a plus: relational databases (e.g. PostgreSQL), Big Data tools (e.g. Databricks), workflow management (e.g. Airflow), and backend development using Spring Boot.

Additional Information

Enjoy our holistic benefits program built on the four pillars that we believe come together to support our wellbeing: physical, emotional, and social wellbeing, as well as work-life fusion.

  • Physical: premium medical package for both our colleagues and their children, dental coverage up to a yearly amount, eyeglasses reimbursement every two years, voucher for sport equipment expenses, in-house personal trainer
  • Emotional: individual therapy sessions with a certified psychotherapist, webinars on self-development topics
  • Social: virtual activities, sports challenges, special occasions get-togethers
  • Work-life fusion: yearly increase in days off, flexible working schedule, birthday, holiday and loyalty gifts for major milestones

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Analytical Skills
  • Problem Solving
  • Collaboration
