System Engineer (AI Studio)

Work set-up: Full Remote
Contract:
Work from: Netherlands

Offer summary

Qualifications:

  • Proficiency in C++ or expertise in GPU programming, with a focus on high-performance coding and memory management.
  • Experience in GPU programming or systems-level software development, including operating system internals or device drivers.
  • Hands-on experience with profiling and debugging tools to optimize performance on CPUs and GPUs.
  • Solid understanding of CPU/GPU architecture and memory hierarchy.

Key responsibilities:

  • Develop and optimize low-level kernels and runtime components for AI inference.
  • Improve performance of inference engines on GPU platforms.
  • Profile and debug system-level and hardware-level performance issues.
  • Collaborate with ML and backend teams to optimize end-to-end execution.

Nebius (Scaleup): https://nebius.com/
201–500 employees

Job description

Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

About the role:

AI Studio is part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference platform that makes every kind of foundation model (text, vision, audio, and emerging multimodal architectures) fast, reliable, and effortless to deploy at massive scale.

 

Responsibilities:

  • Develop and optimize low-level kernels and runtime components for AI inference 
  • Improve performance of inference engines on GPU platforms
  • Profile and debug system-level and hardware-level performance issues 
  • Integrate support for new hardware architectures (Hopper, Blackwell, Rubin) 
  • Collaborate with ML and backend teams to optimize end-to-end execution 

 

Required Qualifications:

  • Strong proficiency in C++ or expertise in GPU programming, with a focus on low-level, high-performance coding and memory management
  • Experience in GPU programming or systems-level software development, e.g. operating system internals, kernel modules, or device drivers 
  • Hands-on experience with profiling and debugging tools to identify performance issues on both CPUs and GPUs, and the ability to optimize code based on those findings
  • Solid understanding of CPU/GPU architecture and memory hierarchy 

 

Preferred Qualifications: 

  • Experience with GPU programming: CUDA, ROCm, CUTLASS, CuTe, ThunderKittens, Triton, Pallas, Mosaic GPU
  • Familiarity with ML inference runtimes (e.g. TensorRT, TVM) 
  • Knowledge of Linux internals, drivers, or compiler toolchains 
  • Experience with tools like perf, VTune, Nsight, or ROCm profiler 
  • Familiarity with popular inference engines (e.g. vLLM, SGLang, TGI)

What we offer 

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Hybrid working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We’re growing and expanding our products every day. If you’re up to the challenge and as excited about AI and ML as we are, join us!

Required profile

Experience

Industry:
Spoken language(s): English

Other Skills

  • Collaboration
  • Problem Solving
