Senior Data Engineer at Dr. Berg Nutritionals

Requirements

  • 5–8+ years of professional data engineering experience, with 2–3+ years in Azure (Data Factory, Synapse, Fabric)
  • Strong SQL skills including query tuning, execution plans, and indexing strategy
  • Production-level proficiency in C# and/or Python for building custom data connectors and pipelines; experience with messy real-world APIs (e.g., Amazon SP-API, NetSuite)
  • Infrastructure-as-code experience using Bicep, Terraform, or ARM templates; real on-call experience with runbooks and alerting

Roles & Responsibilities

  • Build and maintain end-to-end ingestion pipelines from sources such as Amazon SP-API, Shopify, NetSuite, Klaviyo, Recharge, YouTube, GA4, and more into the data warehouse with proper error handling and idempotency
  • Own orchestration and scheduling to ensure data freshness for finance reconciliations and to manage API quotas and streaming needs
  • Implement monitoring, alerts, on-call readiness, and post-incident reviews; participate in a weekly on-call rotation
  • Enforce data contracts and governance; optimize performance and cost (partitioning, incremental loads); maintain infrastructure-as-code and CI/CD for pipelines

Job description

About Dr. Berg Nutritionals

Dr. Berg Nutritionals is one of the largest health education and supplement companies in the world, built around Dr. Eric Berg's YouTube channel (approximately 15 million subscribers, 7,000+ videos, and 111 million weekly views). The business generates $173M+ in annual revenue across Amazon, Shopify, Walmart, and TikTok Shop, with a rapidly growing subscription base.

We are a founder-led, Clearwater-based company with a lean engineering culture. Most of our custom infrastructure runs on Azure (App Services, PostgreSQL, Blob Storage) and is written in C# .NET, including our production AI agents built on Microsoft Agent Framework (Semantic Kernel and AutoGen). We use Microsoft 365 (Teams, SharePoint, Planner, Forms, Power Automate) for collaboration and Trello for some workflows.

Role Summary

We are building a unified data warehouse and AI intelligence system that will give our CEO and leadership team a single, trustworthy view of the business — financial performance by SKU and channel, customer lifetime value by acquisition source, video-to-commerce attribution across 7,000+ pieces of content, and the operational nervous system of the company.

The Senior Data Engineer is the foundational hire for this system. You will own the pipelines that move data from every source (Amazon SP-API, Shopify, NetSuite, Klaviyo, Recharge, YouTube, GA4, and others) into our warehouse reliably, accurately, and on time. Downstream of your work sit our analytics models, our AI strategic analyzer, and ultimately the weekly CEO Strategic Brief that drives decision-making across the company.

This is not a greenfield research role. We have a live business with real revenue that depends on data being right. If a pipeline breaks at 2am on Thursday, the CEO's Friday morning brief is wrong, and the decisions made from it are wrong. We are looking for someone who takes that responsibility seriously and has the craft to do it well.

What You'll Do

In Your First 90 Days

  • Partner with the Head of Data (or the CIO directly) to complete a technical audit of every existing data source — what's flowing, what's broken, what's missing
  • Replace our current manual CSV-based Klaviyo ingestion with a direct API pipeline
  • Stand up the first production pipelines for Amazon SP-API, Shopify, and NetSuite, with proper monitoring and alerting
  • Establish our infrastructure-as-code practice (Bicep or Terraform) and CI/CD pipeline for data engineering changes
  • Document everything — pipeline architecture, runbooks, on-call procedures

Ongoing Core Responsibilities

Build and maintain ingestion pipelines. You will own the end-to-end pipelines from source systems into our warehouse. This includes Amazon Selling Partner API, Shopify Admin API, NetSuite (SuiteAnalytics Connect), Klaviyo, Recharge, YouTube Data and Analytics APIs, GA4 (via BigQuery export), Google Ads, Meta Ads, Triple Whale, and approximately 15 additional sources across our Layer 1–5 data model. For each pipeline you will design the ingestion approach, build it with proper error handling and idempotency, establish incremental-load patterns where appropriate, and monitor it in production.
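
For illustration, a minimal sketch of the watermark-plus-upsert pattern that keeps a load re-runnable. The fetch_orders helper, connection handling, and stg.orders table below are hypothetical stand-ins, not our actual schema:

    import pyodbc  # assumed driver for a Synapse/Fabric SQL endpoint

    def load_orders_since(conn: pyodbc.Connection, fetch_orders, watermark):
        """Re-runnable load: rows are keyed on order_id, so a replay never duplicates data."""
        cur = conn.cursor()
        try:
            for order in fetch_orders(updated_after=watermark):  # hypothetical source-API helper
                cur.execute(
                    """
                    MERGE stg.orders AS t
                    USING (SELECT ? AS order_id, ? AS updated_at, ? AS total) AS s
                      ON t.order_id = s.order_id
                    WHEN MATCHED THEN UPDATE SET updated_at = s.updated_at, total = s.total
                    WHEN NOT MATCHED THEN INSERT (order_id, updated_at, total)
                         VALUES (s.order_id, s.updated_at, s.total);
                    """,
                    order["id"], order["updated_at"], order["total"],
                )
            conn.commit()    # commit only a fully successful batch
        except Exception:
            conn.rollback()  # leave the warehouse unchanged on a partial failure
            raise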

Own orchestration and scheduling. You decide what runs when, in what order, and with what dependencies. Financial data needs to be fresh before finance’s morning reconciliation. YouTube analytics need to respect daily API quotas across 7,000+ videos. Klaviyo events need to stream continuously. This is your call to make — and your responsibility to get right.
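
As a rough sketch of what that ordering decision looks like when written down (the pipeline names here are hypothetical, and in practice the dependencies live in Data Factory or Fabric triggers rather than a standalone script):

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # pipeline -> the pipelines that must finish before it can start
    dependencies = {
        "netsuite_gl_ingest": set(),
        "shopify_orders_ingest": set(),
        "amazon_settlements_ingest": set(),
        "finance_reconciliation_refresh": {
            "netsuite_gl_ingest",
            "shopify_orders_ingest",
            "amazon_settlements_ingest",
        },
    }

    # Upstream ingestion runs first, so finance data is fresh before the morning reconciliation.
    for pipeline in TopologicalSorter(dependencies).static_order():
        print("run:", pipeline)  # in practice: trigger the corresponding ADF/Fabric pipeline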

Monitoring, alerting, and on-call. Every pipeline you build needs health checks: row counts within expected ranges, schema validation, freshness SLAs, and data quality gates. You will configure Azure Monitor alerts, decide what pages someone overnight versus what can wait, and lead post-incident reviews. You will take part in a one-in-four weekly on-call rotation once the team is fully staffed.
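
A minimal sketch of two such gates, assuming a hypothetical run_scalar(sql) helper that returns a single value from the warehouse; the table name and thresholds are placeholders. An exception raised here shows up as a failed run that Azure Monitor alerts can page on:

    from datetime import datetime, timedelta, timezone

    def check_shopify_orders(run_scalar):
        # Freshness SLA: the newest order must be less than six hours old (timestamps assumed UTC).
        latest = run_scalar("SELECT MAX(updated_at) FROM stg.shopify_orders")
        if datetime.now(timezone.utc) - latest > timedelta(hours=6):
            raise RuntimeError("Freshness SLA breached: stg.shopify_orders is stale")

        # Volume sanity check: yesterday's row count must land inside an expected range.
        rows = run_scalar(
            "SELECT COUNT(*) FROM stg.shopify_orders "
            "WHERE CAST(updated_at AS date) = CAST(DATEADD(day, -1, GETUTCDATE()) AS date)"
        )
        if not 500 <= rows <= 50_000:  # placeholder bounds, tuned per source
            raise RuntimeError(f"Row count {rows} is outside the expected range")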

Performance and cost optimization. Our data volumes are substantial — YouTube analytics alone is 7,000+ videos × daily metrics × multiple channels. You will own partitioning strategy, query tuning, incremental processing patterns, and monthly cost reviews. At our scale, this work directly saves tens of thousands of dollars per year in warehouse compute.
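
One common shape for that incremental work, sketched with a hypothetical run_sql helper and illustrative table names: rebuild only the date partitions added since the last run instead of rescanning the full history.

    from datetime import date, timedelta

    def refresh_youtube_daily(run_sql, last_processed: date):
        """Delete-and-reload one partition at a time, so reruns stay idempotent."""
        day = last_processed + timedelta(days=1)
        while day < date.today():
            run_sql("DELETE FROM fact.youtube_video_day WHERE metric_date = ?;", day)
            run_sql(
                "INSERT INTO fact.youtube_video_day (video_id, metric_date, views, watch_hours) "
                "SELECT video_id, metric_date, views, watch_hours "
                "FROM stg.youtube_analytics WHERE metric_date = ?;",  # partition-pruning filter
                day,
            )
            day += timedelta(days=1)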

Source system and vendor API management. When Shopify deprecates an endpoint, when Amazon changes reporting structure, when NetSuite releases a new ODBC driver — you're the person who reads the release notes, tests the change, and adapts the pipelines. You will own API keys, service accounts, rate-limit tracking, and vendor support escalations for data-source APIs.
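
A rough sketch of the quota-aware retry behavior those connectors need; the endpoint, headers, and back-off policy below are generic placeholders, not any specific vendor's contract:

    import time
    import requests

    def get_with_backoff(url, headers, max_attempts=5):
        for attempt in range(max_attempts):
            resp = requests.get(url, headers=headers, timeout=30)
            if resp.status_code == 429:  # throttled: honor Retry-After, else back off exponentially
                time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
                continue
            resp.raise_for_status()      # fail loudly on any other error
            return resp.json()
        raise RuntimeError(f"Rate limit never cleared after {max_attempts} attempts: {url}")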

Enforce data contracts. You define and enforce the contracts between source systems and downstream consumers — what fields exist, what's never null, what ranges are valid. When a source system violates its contract, your pipelines stop and alert rather than passing bad data downstream to our AI analyzer. This is what structurally prevents hallucinations.
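
For illustration, a minimal contract gate over an incoming batch; the field names and ranges are made-up examples, not our actual contract definitions:

    REQUIRED_FIELDS = {"order_id", "ordered_at", "total_amount", "currency"}

    def enforce_order_contract(batch):
        """Stop the pipeline instead of passing bad rows downstream."""
        violations = []
        for i, row in enumerate(batch):
            missing = REQUIRED_FIELDS - row.keys()
            if missing:
                violations.append(f"row {i}: missing {sorted(missing)}")
            elif row["total_amount"] is None or row["total_amount"] < 0:
                violations.append(f"row {i}: total_amount out of range")
        if violations:
            # Surfaces as a failed run that alerting picks up; nothing is written downstream.
            raise ValueError("Contract violated: " + "; ".join(violations[:10]))
        return batch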

Infrastructure-as-code and CI/CD. Pipelines are defined as code (Bicep, Terraform, or ARM templates) and deployed through peer review. You will own this practice, along with the dev/staging/production environment separation that lets us move fast without breaking the weekly brief.

What You Won't Do

It's worth being explicit about role boundaries so you know where your work ends and where teammates take over:

  • Define business metrics. The Analytics Engineer owns what 'contribution margin' means in SQL; you make sure the inputs to that calculation are clean and trustworthy.
  • Build the AI analyzer itself. The AI / ML Engineer owns that layer; you provide them the reliable warehouse tables they reason from.
  • Approve vendor contracts or budgets. The Head of Data owns procurement; you recommend tooling based on technical fit.
  • Build dashboards or answer ad-hoc business questions. The Analytics Engineer and business-intelligence layer handle that.


Qualifications, Experience & Skills

Required

  • 5–8+ years of professional data engineering experience, with at least 2–3 years working primarily in Azure (Data Factory, Synapse, Fabric, or comparable).
  • Strong SQL — not just query-writing, but query tuning, execution plan analysis, and indexing strategy.
  • Production-level proficiency in C# and/or Python for custom connector work.
  • Demonstrated experience building pipelines against messy real-world APIs — ideally Amazon SP-API, NetSuite, or similarly difficult commerce/ERP sources. This is non-negotiable. Experience with only 'clean' SaaS APIs like Stripe or Salesforce is not a substitute.
  • Infrastructure-as-code experience using Bicep, Terraform, or ARM templates.
  • Real on-call experience — you know what good runbooks and alerting look like because you've been paged at 2am and you know what made the difference between a five-minute fix and a five-hour fire.
  • Strong written communication — because much of your work is documenting decisions and runbooks that others will rely on.

Preferred

  • Direct experience with dbt or a comparable transformation framework
  • Experience with Microsoft Fabric specifically, or a strong point of view on Fabric vs. Synapse vs. Snowflake
  • Familiarity with Microsoft Agent Framework (Semantic Kernel, AutoGen) or comparable agent orchestration systems
  • E-commerce or direct-to-consumer industry experience, particularly at multi-channel scale
  • Experience with vector databases (Azure AI Search, pgvector, Pinecone) for AI-retrieval use cases
  • Prior experience as the first or founding data engineer at a growing company

How We'll Know You're the Right Fit

Traits we're specifically looking for, beyond technical ability:

  • You take reliability personally.
  • You optimize for the reader, not the writer.
  • You push back thoughtfully.
  • You're comfortable being the only one (for now).

Work from Home Requirements

  • Up-to-date Mac or Windows computer with anti-virus protection
  • Reliable high-speed internet connection
  • Quiet, distraction-free workspace

Compensation & Benefits

We are intentional about creating a work environment that supports both high-quality work and the people doing it.

Pay Range: Competitive, based on experience.

For employee (W-2) engagements: We offer a competitive benefits package and performance-based bonuses tied to outcomes.

Hours of Work: Must be available for communication during business hours, Monday – Friday, 9am – 6pm EST

Location: Fully remote; no travel to the Clearwater office is expected.

Engagement Type: Employee (W-2) full-time salaried, exempt OR Independent Contractor.

Dr. Berg Nutritionals is an equal-opportunity employer. We welcome applicants from all backgrounds and do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic.
