📈 Who We Are:
We are rebuilding energy transactions, making them transparent and fair.
Our goal is to put power back where it belongs - in the hands of customers - and to take on one of the most critical problems of our century: access to low-cost electricity.
tem exists to fix a broken global energy market that has long favoured legacy operators, intermediaries, and opaque pricing. Today's electricity system was not designed for rapid decarbonisation, AI-driven efficiency, or fair access for the actual users - businesses and generators.
We’ve built the first AI-native transaction infrastructure to reinvent how electricity is bought, sold and priced. Our technology is designed to cut out inefficient fees, automate complex market flows, and bring transparency and fairness to energy transactions at scale.
In late 2025, after extraordinary growth, we closed a $75 million Series B - led by Lightspeed Venture Partners with participation from Albion, Atomico, Allianz, Hitachi Ventures, Schroders Capital and others - positioning us for global expansion, deeper product innovation and category leadership.
We’re scaling internationally and building toward a future where AI-driven infrastructure is foundational to electricity markets worldwide.
Since launch, our modern utility product, RED, has already served thousands of business customers and facilitated billions in energy transaction value, proving that modern software and AI can transform an industry built on legacy systems.
At tem, we’re not just building another energy company; we’re rearchitecting market infrastructure so that transparency, efficiency and sustainability become the default, not the exception.
🏅 The Role:
Rosso is tem's core IP: the transaction infrastructure that prices electricity for thousands of businesses, balances portfolios in real time, and sits on the critical path for every deal tem closes. The machine learning models inside Rosso - forecasting, pricing, and optimisation - are what make those decisions possible. Every inference shapes the prices our customers see.
Today, tem's ML platform has solid foundations: Metaflow for orchestration, AWS Batch for compute, and automated CI/CD pipelines already in place. That's got Rosso to where it is. But as the number of model types grows and Rosso scales, the platform needs the next layer: structured experiment tracking, a model registry, production monitoring, and self-service tooling that lets ML engineers move at pace without being blocked on infrastructure.
This role exists to build that layer and define what the platform looks like at scale. You will join the Rosso service alongside a Senior MLOps Engineer in a cross-functional team of ML engineers and software engineers. The destination is a platform that is genuinely self-service: ML engineers can run experiments, compare models, and ship to production without external intervention. It needs to scale across long-horizon forecasting tasks, real-time pricing engines, and large-scale optimisation workloads - not just the models that exist today.
The concrete work ahead is specific: experiment tracking and a model registry are not yet in place. Backtesting infrastructure critical to the forecasting mission needs to be built. Shadow deployments will allow new models to be validated in production before they go live. And the platform needs to be designed for heterogeneous workloads from day one. This is a technical leadership role: you'll define the platform strategy and set the direction for MLOps, while remaining hands-on in the most critical architectural decisions. The right person has seen ML platforms scale well and has learned from the times they haven't. You'll bring that judgment to a platform that can't afford expensive detours.
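To make the shadow-deployment idea concrete, here is a minimal sketch in plain Python. All names and model formulas are illustrative assumptions, not tem's actual code: the candidate model runs on every request and its output is logged for offline comparison, but only the live model's price is ever returned.

```python
import logging

# Hypothetical model interfaces -- illustrative only.
def live_model(features):
    """Current production pricing model (toy linear formula)."""
    return 100.0 + 0.5 * features["demand_mw"]

def candidate_model(features):
    """New model being validated in shadow mode (toy linear formula)."""
    return 97.0 + 0.5 * features["demand_mw"]

def price_request(features, shadow_log):
    """Serve the live model's price; run the candidate in shadow.

    The candidate's output is recorded for later comparison but never
    returned to the caller, so a bad model cannot affect real prices.
    """
    live_price = live_model(features)
    try:
        shadow_price = candidate_model(features)
        shadow_log.append({
            "features": features,
            "live": live_price,
            "shadow": shadow_price,
            "delta": shadow_price - live_price,
        })
    except Exception:
        # A failing candidate must never break the live path.
        logging.exception("shadow model failed")
    return live_price

shadow_log = []
price = price_request({"demand_mw": 10.0}, shadow_log)
```

The key design choice is that the shadow path is wrapped so its failures are logged, not propagated; validation happens by analysing the accumulated deltas offline.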
🚀 Responsibilities:
Own the ML platform strategy: Define the roadmap from Level 1 to Level 2, making architectural decisions ahead of when they'd otherwise become blockers. Keep the platform aligned to Rosso's commercial trajectory.
Build the foundations: Lead the design and build of experiment tracking, model registry, automated pipeline infrastructure, and production monitoring across all model types.
Deliver backtesting and shadow deployments: Build the infrastructure the forecasting and pricing teams need to validate models reliably against historical data and in production before they go live.
Set technical direction: Provide the architectural vision and standards the Senior MLOps Engineer executes against. This is a force-multiplier relationship, not a management one.
Partner across the team: Work closely with ML engineers and software engineers to understand what the platform needs to unlock the next wave of Rosso capabilities. Translate those needs into principled platform decisions.
Choose the right tools: Evaluate the MLOps tooling ecosystem with clear eyes. Make choices that fit tem's scale and workload mix, not what's fashionable.
Drive deployment reliability: Push toward more frequent, reliable model deployment cycles as Rosso moves from batch-heavy workflows toward live, near-real-time processes.
Define best practices: Establish standards for how models are trained, versioned, deployed, and monitored across the team. Create a platform ML engineers trust.
What success looks like:
MLOps is no longer a bottleneck: ML engineers are free to focus on model quality
The time to deploy new machine learning models goes from days to minutes
The core features required from the machine learning platform (e.g. backtesting and experiment tracking) are delivered before they block progress
🎯 Requirements:
Must-Haves:
Scaled an ML platform from early-stage: Demonstrable experience taking an ML platform from early stages to best-in-class infrastructure at a fast-moving company. You've been there, done it, and you're comfortable with the messiness and ambiguity that comes with scale-up life.
ML pipeline expertise: Deep experience across the whole MLOps lifecycle with ML pipeline orchestration (Metaflow, Prefect, Airflow or equivalent) and ML infrastructure (SageMaker, Vertex AI, Chalk, or equivalent).
Model lifecycle tooling: Hands-on experience building or operating experiment tracking systems (MLflow, W&B, or similar), model registries, and governance tooling for model fleets at scale. Knows what good looks like and what to avoid.
Broad MLOps tooling knowledge: Breadth across the ecosystem - monitoring, drift detection, CI/CD for ML, containerisation, and IaC (Terraform, AWS CDK). Able to evaluate trade-offs and make principled choices for a specific context, not just default to what they know.
Technical leadership track record: Evidence of setting platform direction, influencing cross-functional teams, and defining standards at Staff+ level. Raises the quality bar through design reviews, code reviews, and mentoring. Knows when to drive strategy and when to get into the weeds.
Heterogeneous workload experience: Experience designing and operating platforms serving heterogeneous workloads (e.g. forecasting, classification, operations research) across batch and real-time applications, not just one model type.
Python, AWS + IaC: Strong Python; hands-on experience with AWS and infrastructure-as-code (Terraform, AWS CDK).
Bonus points:
Worked in a role where ML is at the core of the product
Familiarity with Metaflow specifically
Experience with operations research, large-scale optimisation in a production context
Experience working with business critical time series forecasting models
Exposure to reinforcement learning in a production setting
Exposure to production LLM workloads (e.g. fine-tuning)
🗣️ Interview Process:
Our processes normally take around 2-3 weeks from first call to offer - please let us know about any adjustments to timelines that may be required.
First call with our Talent Team (30 mins). This is to understand your experience, motivations, and discuss the role in more detail.
Behavioural Interview with Tim, Head of Data (60 mins). This is your chance to really understand the role, the expectations, and ensure alignment on ways of working.
Technical Interview with the Team (90 mins). You'll meet with potential peers in this session and work through a live technical exercise.
Culture-Add Interview with Stakeholders (45 mins). The final session will be with two cross-functional stakeholders and will explore how your values align with ours. It's designed to be a genuine two-way conversation - your chance to understand what it's really like to work at tem.
We welcome applications from people of all backgrounds, experiences, and identities, including those that are traditionally underrepresented in the tech and energy sectors. If you’re excited about this role but not sure you meet every requirement, we’d still love to hear from you. Your unique perspective could be exactly what we’re looking for.
