Machinify is the leading provider of AI-powered software products that transform healthcare claims and payment operations. Each year, the healthcare industry generates over $200B in claims mispayments, creating incredible waste, friction, and frustration for all participants: patients, providers, and especially payers. Machinify’s revolutionary AI platform has enabled the company to develop and deploy, at light speed, industry-specific products that increase the speed and accuracy of claims processing by orders of magnitude.
Why This Role Matters
As a Data Engineer, you’ll be at the heart of transforming raw external data into powerful, trusted datasets that drive payment, product, and operational decisions. You’ll work closely with product managers, data scientists, subject matter experts, engineers, and customer teams to build, scale, and refine production pipelines — ensuring data is accurate, observable, and actionable.
You’ll also play a critical role in onboarding new customers, integrating their raw data into our internal models. Your pipelines will directly power the company’s ML models, dashboards, and core product experiences. If you enjoy owning end-to-end workflows, shaping data standards, and driving impact in a fast-moving environment, this is your opportunity.
What You’ll Do
Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV, Parquet, JSON).
Lead efforts to canonicalize raw healthcare data (837 claims, EHR, partner data, flat files) into internal models.
Own the full lifecycle of core pipelines — from file ingestion to validated, queryable datasets — ensuring high reliability and performance.
Onboard new customers by integrating their raw data into internal pipelines and canonical models; collaborate with SMEs, Account Managers, and Product to ensure successful implementation and troubleshooting.
Build resilient, idempotent transformation logic with data quality checks, validation layers, and observability.
Refactor and scale existing pipelines to meet growing data and business needs.
Tune Spark jobs and optimize distributed processing performance.
Implement schema enforcement and versioning aligned with internal data standards.
Collaborate deeply with Data Analysts, Data Scientists, Product Managers, Engineering, Platform, SMEs, and AMs to ensure pipelines meet evolving business needs.
Monitor pipeline health, participate in oncall rotations, and proactively debug and resolve production data flow issues.
Contribute to the evolution of our data platform — driving toward mature patterns in observability, testing, and automation.
Build and enhance streaming pipelines (Kafka, SQS, or similar) where needed to support near-real-time data needs.
Help develop and champion internal best practices around pipeline development and data modeling.
What We’re Looking For
4+ years of experience as a Data Engineer (or equivalent), building production-grade pipelines.
Strong expertise in Python, Spark SQL, and Airflow.
Experience processing large-scale file-based datasets (CSV, Parquet, JSON, etc.) in production environments.
Experience mapping and standardizing raw external data into canonical models.
Familiarity with AWS (or any cloud), including file storage and distributed compute concepts.
Experience onboarding new customers and integrating external customer data with non-standard formats.
Ability to work across teams, manage priorities, and own complex data workflows with minimal supervision.
Strong written and verbal communication skills — able to explain technical concepts to non-engineering partners.
Comfortable designing pipelines from scratch and improving existing pipelines.
Experience working with large-scale or messy datasets (healthcare, financial, logs, etc.).
Experience building or willingness to learn streaming pipelines using tools such as Kafka or SQS.
Bonus: Familiarity with healthcare data (837, 835, EHR, UB-04, claims normalization).
Real impact — your pipelines will directly support decision-making and claims payment outcomes from day one.
High visibility — partner with ML, Product, Analytics, Platform, Operations, and Customer teams on critical data initiatives.
Total ownership — you’ll drive the lifecycle of core datasets powering our platform.
Customer-facing impact — you will directly contribute to successful customer onboarding and data integration.
We’re hiring across multiple levels for this role. Final level and title will be determined based on experience and performance during the interview process.
Equal Employment Opportunity at Machinify
Machinify is committed to hiring talented and qualified individuals with diverse backgrounds for all of its positions. Machinify believes that the gathering and celebration of unique backgrounds, qualities, and cultures enriches the workplace.
See our Candidate Privacy Notice at: https://www.machinify.com/candidate-privacy-notice