
Senior Data Engineer

Location: Remote | Type: Full-Time

Note: We will only consider candidates who are located in Latin America.

About Us

At tapouts, we believe in the boundless potential of every child. Our mission goes beyond teaching skills; we are dedicated to nurturing the emotional and psychological well-being of the next generation. Imagine being part of a team that transforms the lives of a million children and their families. By joining tapouts, you are not just taking a job; you are putting your talents toward a deeply rewarding cause.

About the Role

We are looking for a Senior Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and maintaining scalable data infrastructure that powers our analytics, AI initiatives, and business operations. 

This is a hands-on role for someone who thrives in fast-paced environments, thinks like a platform architect, and is passionate about building data systems that matter.

Key Responsibilities

  • Design, build, and maintain robust, scalable data pipelines (batch and real-time/streaming)
  • Design and develop dashboards that surface key business metrics and enable strategic, data-informed decision-making
  • Develop and optimize complex SQL queries, stored procedures, and data models
  • Write clean, production-grade Python code for data ingestion, transformation, and automation
  • Build and manage cloud-native data infrastructure on AWS, GCP, or Azure
  • Implement and maintain data lakehouse architectures (e.g., Delta Lake, Apache Iceberg)
  • Support ML workflows including feature engineering, model training pipelines, and MLOps integration
  • Ensure data quality, governance, and lineage tracking across all data assets
  • Collaborate with data scientists and analysts to deliver trusted, well-documented datasets
  • Monitor pipeline performance, troubleshoot issues, and optimize for cost and efficiency
  • Contribute to the development of internal data platform tools and frameworks
  • Apply data governance best practices and ensure compliance with data privacy regulations (GDPR, LGPD)

What We're Looking For

  • A platform-first mindset — you think beyond individual pipelines and consider ownership, reliability, and long-term maintainability
  • A data-driven approach — you use metrics to measure pipeline health and continuously improve
  • Strong communication skills — you can collaborate with technical and non-technical stakeholders
  • Comfort working in ambiguous, fast-moving environments and bringing structure to chaos
  • A passion for continuous learning — you stay current with the latest tools and trends in data engineering

Requirements

Must-Have:

  • 5+ years of experience in data engineering or a related field
  • Advanced English proficiency
  • Strong proficiency in SQL — writing complex queries, optimizing performance, and data modeling
  • Strong proficiency in Python — building ETL/ELT pipelines, scripting, and automation
  • Experience with cloud platforms: AWS, GCP, or Azure
  • Hands-on experience with data orchestration tools (Apache Airflow, Prefect, or similar)
  • Experience with big data frameworks (Apache Spark, Kafka, Flink, or similar)
  • Familiarity with data warehousing solutions (Snowflake, BigQuery, Redshift, or similar)
  • Strong understanding of data modeling, schema design, and data architecture principles

Nice to Have:

  • Experience with dbt (data build tool) and the modern data stack
  • Familiarity with streaming and event-driven architectures
  • Knowledge of MLOps and AI pipeline support
  • Experience with data mesh or data platform engineering
  • Familiarity with data governance frameworks and tools (data lineage, data cataloging)

Equal Opportunity

tapouts is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will be considered regardless of race, color, religion, gender, sexual orientation, national origin, genetics, disability, or age.

Join us in our mission to empower children with the social and emotional skills they need to succeed!
