
Data Engineer


Job description

Job Type: Contract

Key Responsibilities:

  • Design and build data products in Snowflake that serve as the foundation for analytics, dashboards, and operational insights—treating data models as intentional products with clear interfaces, documentation, and performance SLAs that enable downstream teams to build with confidence
  • Architect scalable ETL/ELT pipelines extracting data from production SQL Server databases and transforming it into analytics-ready data products in Snowflake, ensuring reliability, accuracy, and usability across thousands of healthcare communities
  • Collaborate deeply with the Data UI/UX Developer and domain experts to understand how data will be consumed and design data models that anticipate analytics needs, reduce friction, and enable self-service exploration rather than just meeting immediate requirements
  • Translate business logic embedded in legacy application code and stored procedures into maintainable, well-documented data layer transformations using modern tools such as dbt, ensuring business rules are accurate, auditable, and positioned as reusable data products
  • Build dimensional data models in Snowflake including star and snowflake schemas that balance query performance, analytical flexibility, and maintainability—designing data structures that empower rather than constrain downstream analytics development
  • Champion data product thinking by establishing clear data contracts, semantic definitions, and quality guarantees that give BI developers, analysts, and business users confidence in the data they're building on
  • Implement DevOps best practices for data pipelines including version control (Git), CI/CD automation, infrastructure as code, monitoring, and alerting to ensure data products are deployed reliably and evolve safely as requirements change
  • Establish and maintain data quality frameworks including validation rules, reconciliation processes, and automated testing to ensure analytics products meet healthcare industry accuracy standards and data consumers can trust the foundation they're building on
  • Document data lineage, transformation logic, and business rules using tools like Dataedo or equivalent data catalog platforms, creating living documentation that helps downstream teams understand what data means, where it comes from, and how to use it effectively
  • Work closely with the Data Architect to implement warehouse architecture decisions including schema design, indexing strategies, partitioning, and query optimization that support sub-second dashboard response times and enable scalable self-service analytics
  • Optimize pipeline performance and cost efficiency in Snowflake through query tuning, materialized views, clustering, and efficient data loading patterns while maintaining the usability and accessibility of data products
  • Engage with data consumers (Data UI/UX Developer, analysts, domain experts) to gather feedback on data product usability, identify pain points in data access or structure, and continuously evolve data models to better serve their workflows
  • Support data governance initiatives including implementing access controls, audit logging, and HIPAA-compliant data handling practices for protected health information (PHI) while ensuring appropriate data discoverability and access for authorized users
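The dbt work described above — translating legacy stored-procedure logic into maintainable, documented transformations — might look like the following sketch. The model, source, and column names here are hypothetical, not taken from the actual Residex codebase:

```sql
-- models/marts/fct_community_occupancy.sql
-- Hypothetical dbt model: a legacy stored procedure's occupancy
-- calculation rebuilt as a version-controlled, testable transformation.
with residents as (

    select * from {{ ref('stg_residents') }}

),

occupancy as (

    select
        community_id,
        census_date,
        count(*)                              as occupied_units,
        count_if(care_level = 'memory_care')  as memory_care_units
    from residents
    where move_out_date is null
       or move_out_date > census_date
    group by community_id, census_date

)

select * from occupancy
```

In practice each model like this would ship with a `schema.yml` entry declaring column descriptions and tests (`not_null`, `unique` on the grain keys), which is what makes the business rule auditable and the model a documented data product rather than opaque SQL.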
Requirements

What We're Looking For:

  • 5+ years of experience building production data pipelines and data products at scale, with demonstrated ability to design data models that serve downstream analytics and reporting needs effectively
  • Strong data product mindset—you understand that data engineering isn't just about moving data; it's about creating reliable, well-documented, usable data assets that enable others to build analytics products that improve how people work
  • Deep SQL development skills including complex queries, window functions, CTEs, stored procedures, and performance optimization for both SQL Server (source) and Snowflake (target)
  • Hands-on experience with Snowflake architecture including warehouses, databases, schemas, stages, streams, tasks, and understanding of Snowflake-specific optimization techniques
  • Strong proficiency with ETL/ELT tools, with dbt strongly preferred for transformation logic, version control, testing, and documentation as a data product development framework
  • Demonstrated DevOps mindset including experience with Git workflows, CI/CD pipelines (GitHub Actions, GitLab CI, or similar), infrastructure automation, and deployment best practices
  • Experience designing dimensional models (star schema, snowflake schema, slowly changing dimensions) that balance analytical flexibility with query performance and are intuitive for BI developers and analysts to consume
  • Track record of collaborating with BI developers, analysts, and business stakeholders to understand how data will be used and designing models that enable rather than constrain downstream analytics development
  • Strong data quality orientation with experience implementing validation frameworks, reconciliation processes, and automated testing to ensure data products meet accuracy standards and inspire user confidence
  • Experience with data documentation and lineage tools such as Dataedo, Atlan, Alation, or similar data catalog platforms for creating accessible, maintainable documentation that helps data consumers understand and use data effectively
  • Understanding of data contracts, semantic layers, and data product interfaces that establish clear expectations between data producers and consumers
  • Healthcare or senior living industry experience strongly preferred; familiarity with EHR data structures, clinical workflows, regulatory compliance requirements, and PHI data handling under HIPAA
  • Excellent collaboration and communication skills with ability to translate technical data concepts into terms that resonate with BI developers, analysts, and business users
  • Strong documentation skills for capturing not just what data models contain, but why they're structured the way they are and how downstream teams should use them
  • Experience extracting data from SQL Server including understanding of change data capture (CDC), incremental loading patterns, and handling large-scale data migrations
  • Comfortable working independently while actively seeking feedback from data consumers to continuously improve data product usability and effectiveness
  • Experience with real-time data pipelines, streaming architectures (Kafka, event-driven patterns), or message queuing systems is a plus
  • Familiarity with Python or other scripting languages for data processing, automation, and pipeline orchestration is a plus
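The incremental-loading pattern mentioned above can be sketched in Python as a simple high-watermark extraction. This is a minimal illustration, not Residex's actual pipeline; the table and column names are placeholders, and a production version would use SQL Server CDC tables or parameterized queries rather than string formatting:

```python
from datetime import datetime


def incremental_extract_sql(table: str, watermark_col: str,
                            last_loaded: datetime) -> str:
    """Build a watermark-based incremental extraction query.

    Hypothetical sketch: pulls only rows modified since the last
    successful load, so each run moves a bounded slice of data
    instead of re-extracting the full table.
    """
    return (
        f"SELECT * FROM {table} "
        f"WHERE {watermark_col} > '{last_loaded.isoformat()}' "
        f"ORDER BY {watermark_col}"
    )


# Placeholder table/column names for illustration only.
sql = incremental_extract_sql("dbo.Residents", "ModifiedAt",
                              datetime(2024, 1, 1))
```

After landing the slice in a Snowflake stage, the load side would typically apply it with a `MERGE` keyed on the source primary key, which keeps the target idempotent if a batch is replayed.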

Why Join Residex?

  • Purpose That Matters: Be part of a mission that directly impacts the quality of life for seniors and the caregivers who serve them.
  • Real Platform Ownership: Build the data foundation that powers analytics and insights for thousands of healthcare communities nationwide—creating data products that enable teams to deliver life-changing insights to healthcare professionals.
  • High-Trust Leadership: Work shoulder-to-shoulder with a CPO and technical lead who value autonomy, vision, and results.
  • Rapid Growth, Real Impact: Join at a high-growth inflection point with the resources, customers, and market tailwinds to go big.
  • A Culture of Craft and Care: We take pride in what we build and care deeply about the people we build it for—and with.


Ready to Build Data Products That Power Intelligent Care?

If you're ready to engineer data products that don't just move data but fundamentally enable better insights and better care, this is your calling. Let's transform intelligent care together.

