Senior Data Engineer

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • 5+ years of software engineering experience with a focus on data systems.
  • Strong proficiency in Python programming and SQL querying.
  • Experience with cloud infrastructure, particularly AWS and Terraform.
  • Background in designing and implementing big data architectures and ETL pipelines.

Key responsibilities:

  • Design and maintain scalable MLOps infrastructure for model deployment and monitoring.
  • Develop and manage data pipelines for extracting, transforming, and loading data.
  • Collaborate with data scientists to ensure data and model workflows are production-ready.
  • Debug and resolve issues in data pipelines and ML systems.

Rentable | Information Technology & Services | SME | https://www.rentable.co/

Job description

Your New Team

The ApartmentIQ team is revolutionizing how property managers optimize their operations. We provide a powerful platform that centralizes critical property data, offering actionable insights and intuitive tools to enhance efficiency and decision-making. Our product empowers users to streamline everything from leasing and maintenance to resident communication and financial reporting.

Behind the scenes, our data platform runs on a modern AWS-based tech stack designed to support big data architectures and machine learning models at scale. We believe in fostering an inclusive environment built on mutual trust, continuous learning, and a commitment to simplicity, where every engineer can contribute and grow.

The Role

As a Senior Data Engineer, you’ll be the technical backbone of the data layer that powers Daylight, ApartmentIQ’s revenue-management product that delivers real-time rent recommendations to property managers. You’ll design, build, and own the ingestion framework that pulls operational data from a variety of property-management systems, transforms it into analytics-ready models, and serves it to the machine-learning workflows that forecast demand and optimize pricing.

Working hand in hand with data scientists, you’ll ensure every byte flowing through Daylight is trustworthy, traceable, and available at the cadence our algorithms require. You’ll architect cloud-native, Terraform-managed infrastructure; implement scalable batch and streaming ETL/ELT jobs in Python; and layer in observability, testing, and data-quality guards so teams can iterate on models with confidence. You’ll also build and own core MLOps components that power model training, inference, and deployment, ensuring our ML systems are reliable, repeatable, and production-ready.

Beyond coding, you’ll collaborate with product managers, backend engineers, and customer-facing teams to translate business requirements (like a new rent rule or occupancy forecast) into performant data solutions. If you thrive on end-to-end ownership, relish tough debugging sessions, and want to see your work directly influence rent recommendations across thousands of units, we’d love to meet you.

Responsibilities

  • Design, build, and maintain scalable MLOps infrastructure to support model training, deployment, monitoring, and continuous integration/continuous delivery (CI/CD) of ML models.
  • Develop and manage robust data pipelines to extract, transform, and load (ETL/ELT) data from a variety of structured and unstructured sources.
  • Collaborate with data scientists and ML engineers to understand model requirements and ensure production readiness of data and model workflows.
  • Debug complex data issues and ML pipeline failures, collaborating closely with data scientists and ML engineers to diagnose root causes in data or algorithm behavior.
  • Debug data- and algorithm-related problems in production for user-facing applications.
  • Design and optimize data storage solutions using modern data warehousing and relational database systems.
  • Codify and manage cloud infrastructure using Infrastructure as Code tools, primarily Terraform, to ensure reproducibility, scalability, and auditability across environments.
  • Implement observability, alerting, and data quality frameworks to ensure pipeline health and uphold data integrity.
Qualifications

  • 5+ years of software engineering experience, including 3+ years working directly with data-intensive systems, pipelines, and infrastructure.
  • A strong sense of ownership, delivering end-to-end systems from architecture and implementation to CI/CD, observability, and infrastructure management.
  • Runs toward problems: has zero tolerance for bugs/issues, leans into complex issues, and proactively investigates and resolves failures.
  • Strong debugging capabilities; seeks root causes, not band-aids, whether it’s a data anomaly, algorithmic quirk, or system failure.
  • Strong Python experience: can write clear, idiomatic code and understands best practices.
  • Comfortable writing SQL queries to analyze relational data.
  • Experience with Terraform or other Infrastructure-as-Code tools for provisioning cloud-based infrastructure (e.g., AWS, GCP).
  • Hands-on experience designing and implementing big data architectures and streaming or batch ETL pipelines, with an understanding of the trade-offs between complexity, performance, and cost.
  • Experience with data lakes, data warehouses, relational databases, and document stores, and when to use each.
  • Math or CS background and/or experience working with algorithms preferred.
  • Uses LLMs and AI agents to enhance engineering productivity and explore solution spaces creatively.
  • Operates effectively in fast-paced startup environments; adapts quickly and communicates clearly.
  • Strong collaborator and communicator who is deeply integrated with the team and proactively shares context and decisions.
Bonus Skills and Experience

  • Ruby and/or the Ruby on Rails framework.
  • Writing performant code using methods like parallelism and concurrency.
  • AWS services: SageMaker, Lambda, Redshift, OpenSearch, Kinesis.
  • Experience with distributed systems.
Why Our Team

  • 100% remote across the U.S., with quarterly in-person gatherings for team offsites
  • Competitive compensation
  • Flexible vacation and parental leave policies
  • Medical, Dental, and Vision Insurance
  • 100% paid Short-Term Disability, Long-Term Disability, and Life Insurance programs
  • 401(k) program
  • A supportive, learning-first culture where you’ll help shape the next generation of AI-driven marketing tools for the apartment rental industry

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
