
Senior Data Engineer


Job description

At ioet, a leading software company with a talented team across LATAM, we provide Software Engineering as a service to clients worldwide. Join us for exciting professional challenges, working on projects ranging from innovative startups to globally recognized brands. Our positions are full-time, remote, and offer competitive compensation in USD.

We are looking for a highly experienced Senior Data Engineer to design and build a new data product from the ground up. This role requires deep, hands-on experience beyond SQL, including Python, PySpark, or comparable languages, and a proven track record of working with distributed data processing and transformation systems. Strong experience with modern open data stack technologies such as DBT and Airflow is essential.


You will be responsible for designing and implementing scalable, production-ready ETL/ELT pipelines, with a strong emphasis on SQL for complex transformations combined with Python (and ideally PySpark) for data processing, automation, and orchestration. These pipelines support fast-growing, evolving business needs that demand high-quality, reliable data.
Given that this is an early-stage initiative, you will play an active role in translating business and functional requirements into concrete technical solutions, as well as building prototypes and proofs of concept that can evolve into long-term platform components.

This role has a strong emphasis on designing and owning production-grade data pipelines and ETL/ELT systems. Candidates must have at least 5 years of professional experience as a Data Engineer. Data analysis or analytics-focused roles do not qualify unless accompanied by substantial hands-on data engineering experience building, maintaining, and scaling pipelines.

Responsibilities

  • Design and implement scalable, production-grade ETL/ELT pipelines.

  • Take full ownership of the technical architecture, design, and implementation of the data platform and related tooling.

  • Apply strong non-SQL programming skills (Python, PySpark, or similar) to data processing, automation, and pipeline development.

  • Collaborate closely with stakeholders and engineering leaders to define and execute the data platform roadmap.

  • Act as a subject matter expert, providing technical guidance and best practices to leadership and the wider engineering organization.


Requirements

  • 5+ years of experience working specifically as a Data Engineer (analytics-only roles do not count).

  • 4+ years of hands-on experience with complex SQL transformations, including query optimization and understanding logical and physical execution plans.

  • Strong experience with Python for data engineering use cases (scripting, automation, transformations); PySpark or other distributed processing frameworks are highly valued.

  • Hands-on experience with DBT and Airflow in production environments.

  • Solid experience working with AWS.

  • Strong knowledge of modern data warehouse architectures and big data modeling (relational, dimensional, large-scale logs/events).

  • Experience with DuckDB, pandas, and Jupyter Notebooks is a plus.

  • Knowledge of Spark (batch and/or streaming), Spark SQL, and PySpark is a strong plus.

  • Proven experience working with multiple stakeholders to deeply understand data domains and deliver solutions aligned with business needs.

  • Strong English communication skills (minimum B2 level).

  • CV and application must be submitted in English.

  • Based in Latin America.

Benefits

  • Remote work

  • Flexible schedule

  • Collaboration with international clients

  • USD compensation

  • Paid Holidays and Vacations

  • Paid family and sick leaves

  • English classes

  • Educational and wellness bonus

  • Structured career plan with regular salary reviews

  • Emphasis on personal growth and mentorship

Are you ready to be part of the ioet journey?

Get your CV ready in English and apply now.

If you are curious to know more about our culture, technologies, and blogs, visit www.ioet.com
