
Senior ML Engineer (Token Factory)

Requirements

  • Profound understanding of the theoretical foundations of machine learning and reinforcement learning
  • Deep expertise in modern deep learning for language processing and generation
  • Experience training large models across multiple compute nodes
  • Strong software engineering skills (Python) and experience with modern DL frameworks (JAX)

Roles & Responsibilities

  • Contribute to building an inference & fine-tuning platform for foundation models (text, vision, audio, multimodal) at scale
  • Drive Advanced Fine-Tuning initiatives (LoRA-based and full-parameter) to improve model quality and training efficiency
  • Develop and optimize inference pipelines and training workflows in JAX, including exploration of architectures and scaling laws
  • Investigate low-precision training and inference techniques (FP8, NVFP4/MXFP4) optimized for modern hardware

Job description

Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

Token Factory is part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference & fine-tuning platform that makes every kind of foundation model (text, vision, audio, and emerging multimodal architectures) fast, reliable, and effortless to train and deploy at massive scale.

Some directions we are currently working on, and which you could be a part of:
  • Advanced Fine-Tuning: Enhancing fine-tuning methodologies, both LoRA-based and full-parameter, for cutting-edge LLMs (e.g., GPT-OSS, Kimi K2.5, DeepSeek V3.1/V3.2, GLM-4.7), focusing on both model quality and training efficiency.

  • Inference Optimization: Identifying LLM inference bottlenecks to drive production speedups. This involves building model training and evaluation pipelines in JAX for speculative decoding, experimenting with architectures (dense/MoE, auto-regressive/parallel), and deriving scaling laws to guide resource allocation.

  • Low-Precision Training & Inference: Investigating low-precision (FP8, NVFP4/MXFP4) methodologies for supervised fine-tuning and reinforcement learning, spanning both inference and training, optimized for modern hardware.
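To make the LoRA direction above concrete: LoRA factors a weight update into two trainable low-rank matrices while the base weight stays frozen. A minimal, framework-agnostic sketch follows, using NumPy rather than JAX for brevity; the names (`d_model`, `rank`, `alpha`) are illustrative, not from this posting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank, alpha = 16, 4, 8.0

W = rng.normal(size=(d_model, d_model))      # frozen base weight
A = rng.normal(size=(rank, d_model)) * 0.01  # trainable low-rank factor
B = np.zeros((d_model, rank))                # zero-initialized, so the
                                             # adapter starts as a no-op

def lora_forward(x, W, A, B, alpha, rank):
    """Base projection plus the scaled low-rank update."""
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_model))
# With B = 0, the adapted model reproduces the frozen base exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, rank), x @ W.T)

# For serving, the adapter can be merged back into W as a rank-`rank` update,
# so inference runs at the same cost as the base model.
W_merged = W + (alpha / rank) * (B @ A)
assert np.allclose(lora_forward(x, W, A, B, alpha, rank), x @ W_merged.T)
```

The merge step is what makes LoRA attractive for an inference platform: adapters train cheaply but add no per-token overhead once folded into the weights.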

We expect you to have:

  • A profound understanding of the theoretical foundations of machine learning and reinforcement learning.

  • Deep expertise in modern deep learning for language processing and generation.

  • Experience training large models across multiple compute nodes.

  • A solid understanding of the performance aspects of large neural network training (sharding strategies, custom kernels, hardware features, etc.).

  • Strong software engineering skills (we mostly use Python).

  • Deep experience with modern deep learning frameworks (we use JAX).

  • Proficiency in contemporary software engineering practices, including CI/CD, version control, and unit testing.

  • Strong communication and leadership abilities.

Nice to have:

  • Previous experience working with language models or similar NLP technologies.

  • Familiarity with key ideas in the LLM space, such as MHA, RoPE, ZeRO/FSDP, Flash Attention, and quantization.

  • A track record of building and delivering products (not necessarily ML-related) in a dynamic, startup-like environment.

  • Strong engineering skills, including experience developing large distributed systems or high-load web services.

  • Open-source projects that showcase your engineering prowess.

  • An excellent command of English, alongside strong writing, articulation, and communication skills.
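The quantization item above, and the FP8/NVFP4 work described earlier, both rest on the same core idea: map a floating-point tensor to a narrow format via a scale factor. A hedged sketch of symmetric per-tensor quantization follows, simulated with int8 for clarity; real FP8/NVFP4 paths use hardware-specific formats and typically per-block scales.

```python
import numpy as np

def quantize_symmetric(x, n_bits=8):
    """Map x to signed integers using one scale for the whole tensor."""
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax          # largest value maps to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.1, -1.5, 0.75, 2.0], dtype=np.float32)
q, scale = quantize_symmetric(x)
x_hat = dequantize(q, scale)

# The round trip is lossy, but the error is bounded by half a quantization
# step, which is what makes low-precision training and inference viable.
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

The choice of scaling granularity (per tensor, per channel, or per block, as in MXFP4) trades metadata overhead against accuracy; outlier-heavy activations usually need the finer granularities.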

What we offer 

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
