This is a remote position.
We are looking for a Senior Data Engineer to design, build and maintain scalable data systems that support analytics and machine learning initiatives. This role operates in a hybrid environment (SaaS and on-premises) and plays a key part in structuring and optimising data flow across the platform.
Responsibilities:
Design, develop and maintain scalable ETL/ELT data pipelines using Python
Process and integrate data from multiple formats and sources, including JSON, CSV and XML
Build and manage data transformations and orchestration flows using dbt and tools such as Airflow, Prefect or Dagster
Ensure data governance, quality and security standards are upheld across data systems
Extend, maintain and optimise the Elastic Hierarchy data framework
Work closely with analytics, machine learning and product teams to deliver reliable, business-ready datasets
Support data operations in hybrid environments (SaaS and on-premises)
Requirements / Skills:
Undergraduate or postgraduate degree in Computer Science, Data Engineering or a related field
Solid experience in Python development
Experience with the AWS data ecosystem, including services such as S3, Glue, Lambda, EMR, EC2, Redshift and RDS
Practical experience with Snowflake, MongoDB and PostgreSQL
Experience with dbt and at least one orchestration tool (Airflow, Prefect or Dagster)
Knowledge of data mapping, attribution and reconciliation processes
Ability to work in hybrid and on-premises environments
Fluent English, both written and spoken