Senior Data Engineer (Redshift)

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • Minimum 5 years of experience in data engineering or backend development.
  • Strong proficiency with AWS services, especially Redshift.
  • Expertise in Python and SQL for data transformation and automation.
  • Experience with data modeling tools like dbt and building scalable data pipelines.

Key responsibilities:

  • Build and optimize reliable ETL/ELT data pipelines.
  • Design and implement effective data models to support analytics.
  • Collaborate with cross-functional teams to understand data requirements.
  • Ensure data quality, security, and compliance across data workflows.

Welltech · Information Technology & Services · Scaleup · https://welltech.com/
201 - 500 Employees

Job description

🚀 Who Are We?

Welcome to Welltech, where health meets innovation! 🌍 As a global leader in the Health & Fitness industry, we’ve crossed over 200 million installs with three life-changing apps, all designed to boost well-being for millions. Our mission? To transform lives through intuitive nutrition trackers, powerful fitness solutions, and personalized wellness journeys, all powered by a diverse team of over 700 passionate professionals with a presence across five hubs.

Why Welltech? Imagine joining a team where your impact on global health and wellness is felt daily. At Welltech, we strive to be proactive wellness partners for our users, while continually evolving ourselves.

What We’re Looking For

As a Senior Data Engineer, you will play a crucial role in building and maintaining the foundation of our data ecosystem. You’ll work alongside data engineers, analysts, and product teams to create robust, scalable, and high-performance data pipelines and models. Your work will directly impact how we deliver insights, power product features, and enable data-driven decision-making across the company.

This role is perfect for someone who combines deep technical skills with a proactive mindset and thrives on solving complex data challenges in a collaborative environment.

Challenges You’ll Meet:

  • Pipeline Development and Optimization: Build and maintain reliable, scalable ETL/ELT pipelines using modern tools and best practices, ensuring efficient data flow for analytics and insights.

  • Data Modeling and Transformation: Design and implement effective data models that support business needs, enabling highquality reporting and downstream analytics.

  • Collaboration Across Teams: Work closely with data analysts, product managers, and other engineers to understand data requirements and deliver solutions that meet the needs of the business.

  • Ensuring Data Quality: Develop and apply data quality checks, validation frameworks, and monitoring to ensure the consistency, accuracy, and reliability of data.

  • Performance and Efficiency: Identify and address performance issues in pipelines, queries, and data storage. Suggest and implement optimizations that enhance speed and reliability.

  • Security and Compliance: Follow data security best practices and ensure pipelines are built to meet data privacy and compliance standards.

  • Innovation and Continuous Improvement: Test new tools and approaches by building proofs of concept (PoCs) and conducting performance benchmarks to find the best solutions.

  • Automation and CI/CD Practices: Contribute to the development of robust CI/CD pipelines (GitLab CI or similar) for data workflows, supporting automated testing and deployment.

You Should Have:

  • 5+ years of experience in data engineering or backend development, with a strong focus on building production-grade data pipelines.

  • Solid experience working with AWS services (Redshift is a must; also Spectrum, S3, RDS, Glue, Lambda, Kinesis, SQS).

  • Proficiency in Python and SQL for data transformation and automation.

  • Experience with dbt for data modeling and transformation.

  • Good understanding of streaming architectures and micro-batching for real-time data needs.

  • Experience with CI/CD pipelines for data workflows (preferably GitLab CI).

  • Familiarity with event schema validation solutions (Snowplow, Schema Registry).

  • Excellent communication and collaboration skills.

  • Strong problem-solving skills: able to dig into data issues, propose solutions, and deliver clean, reliable outcomes.

  • A growth mindset: enthusiastic about learning new tools, sharing knowledge, and improving team practices.

Tech Stack You’ll Work With:

  • Cloud: AWS (Redshift, Spectrum, S3, RDS, Lambda, Kinesis, SQS, Glue, MWAA)

  • Languages: Python, SQL

  • Orchestration: Airflow (MWAA)

  • Modeling: dbt

  • CI/CD: GitLab CI (including GitLab administration)

  • Monitoring: Datadog, Grafana, Graylog

  • Event validation: Iglu schema registry

  • APIs & Integrations: REST, OAuth, webhook ingestion

  • Infrastructure as code (optional): Terraform

Bonus Points (Nice to Have):

  • Experience with additional AWS services: EMR, EKS, Athena, EC2.

  • Hands-on knowledge of alternative data warehouses like Snowflake or others.

  • Experience with PySpark for big data processing.

  • Familiarity with event data collection tools (Snowplow, Rudderstack, etc.).

  • Interest in or exposure to customer data platforms (CDPs) and real-time data workflows.

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Growth Mindedness
  • Collaboration
  • Communication
  • Problem Solving
