Job description
This is a remote position.
You will contribute to the development of a large‑scale Data & Analytics platform designed to manage, integrate, and analyze high‑volume structured and unstructured data from diverse source systems.
The platform supports data discovery, data exploration, and enterprise‑wide analytics, enabling data teams and business stakeholders to work efficiently with a wide range of datasets.
The current environment has recently transitioned from a Cloudera‑based architecture to an ecosystem built on Snowflake, DBT, Airflow, and Streamsets, with ongoing expansion to support additional data sources and broader user groups.
What You Will Do
Build and maintain data ingestion and curation pipelines using DBT, Streamsets, and related tools.
Develop scalable and reliable Snowflake data models and objects.
Work as part of an agile development team, following established development standards and engineering best practices.
Collaborate with technical product managers and senior engineers to translate requirements into technical solutions.
Contribute to documentation, technical specifications, and code reviews.
Support quality assurance processes by validating pipelines, transformations, and data outputs.
Take on increasing responsibility over time, including supporting new team members and owning small development streams.
Requirements
1–4 years of experience as a data engineer or in a similar data‑focused technical role.
Practical experience building DBT pipelines and working with Snowflake.
Familiarity with Airflow or other orchestration tools.
Experience with development and collaboration tools such as JIRA, Confluence, or similar.
Ability to learn quickly and work independently when needed.
Comfortable working with complex datasets from diverse systems, even without always knowing the full business context.
Strong professional proficiency in English.
Benefits
Remote work
Food ticket
Health insurance
Training and access to certification vouchers
Work-life balance initiatives
Referral program
Birthday off
Give back day – a dedicated day to give back to society!