
Staff / Principal Machine Learning Engineer, Serving - Switzerland

Requirements

  • Inference optimization: deep understanding of modern serving frameworks and techniques (e.g., vLLM, TRT-LLM).
  • Model acceleration: hands-on experience with quantization, distillation, caching strategies, continuous batching, paged attention, and speculative decoding.
  • High-performance systems: proficiency in C++, CUDA, Rust, or highly optimized Python; ability to profile and optimize NVIDIA GPU performance.
  • Distributed systems scaling: experience with Kubernetes, Ray, custom load balancing, multi-GPU/multi-node inference, and handling thousands of concurrent connections.

Roles & Responsibilities

  • Take models from the research team, containerize them, optimize their serving, and ensure reliable production operation.
  • Design and implement low-latency, high-throughput inference pipelines across multi-GPU/multi-node deployments.
  • Profile and optimize code paths, apply quantization, distillation, caching, continuous batching, and speculative decoding to maximize performance.
  • Collaborate across research, backend, and platform teams to own end-to-end delivery from research prototype to production.

Job description

About Inworld

Inworld is a product-oriented research lab of top AI researchers and engineers, developing best-in-class realtime multimodal models and the only realtime orchestration platform optimized for thousands of queries per second.

We’ve raised more than $125M from Lightspeed, Section 32, Kleiner Perkins, Microsoft’s M12 venture fund, Founders Fund, Meta and Stanford, among others. Our technology has powered experiences from companies such as NVIDIA, Microsoft Xbox, Niantic, Logitech Streamlabs, Wishroll, Little Umbrella and Bible Chat. We’ve also been recognized by CB Insights as one of the 100 most promising AI companies globally and have been named one of LinkedIn's Top 10 Startups in the USA.

Who We're Looking For

A year ago, reliable agentic systems and sub-second multimodal inference at scale barely existed. Nobody has a decade of experience here. So we're not screening for a resume template — we're looking for strong people from varied backgrounds who learn fast, thrive in ambiguity, and can show us what they've built, broken, and understood.

Experience We Find Useful

You don't need all of this. But you need enough to make a case.

  • Inference Optimization. Deep understanding of modern serving frameworks and techniques like vLLM or TRT-LLM.

  • Model Acceleration. Hands-on experience with quantization, distillation, caching strategies, continuous batching, paged attention, and speculative decoding.

  • High-Performance Systems. Proficiency in C++, CUDA, Rust, or highly optimized Python. You know how to profile code and squeeze every ounce of performance out of NVIDIA GPUs.

  • Distributed Systems & Scaling. Experience with Kubernetes, Ray, custom load balancing, multi-GPU/multi-node inference, and reliably handling thousands of concurrent connections.

  • Public work. Non-trivial systems programming projects, open-source contributions to major inference engines, or deep-dive technical write-ups.

  • Full-cycle ownership. You can take a model from the research team, containerize it, optimize its serving, and ensure it runs reliably in production.

  • Background. PhD in CS, Physics, Math, or equivalent practical experience building backend or ML systems.

  • Language. Professional fluency in English (written and spoken) is required, as you will be collaborating daily with our US-based leadership and engineering teams.

Who Thrives Here

  • You don’t need a roadmap to start walking; you’re comfortable picking a direction and building the map as you go.

  • You believe engineering isn't finished until it’s shipped and stable. You have a bias for impact over purely theoretical optimizations.

  • You don't just ship code; you obsess over the why. You’re the first to question an architecture if you think there’s a better way to solve the core latency or throughput problem.

  • You aren't satisfied with "the PM said so." You thrive on deep context and want to understand the fundamental logic behind every decision we make.

What Working Here Is Like

We hand you unclear problems and expect you to make them clear. We value engineers who say "I don't know yet" and then design the benchmark or prototype that finds out. We treat performance, latency, and reliability as first-class product features, not a box to check before launch. Impact comes before everything else, though we support sharing work and open-source contributions that move the field forward. Your work should be visible. Flat structure, fast iterations, minimal process theater.

Location & Employment

  • Location: remote within Switzerland

  • Employment type: Full-time, permanent employment

  • Hiring model: Employment via Employer of Record (EOR)

Candidates must already have the legal right to work in Switzerland, as visa sponsorship is not available for this role. For candidates interested in relocating to the San Francisco Bay Area in the future, full U.S. visa and relocation support may be available, subject to business needs and applicable legal and work authorization requirements.
