Our client operates one of the largest GPU infrastructures in the world — 100,000+ GPUs — and it doubles in size every year. We’re looking for engineers who love getting deep into Linux systems, pushing hardware and software to their limits, and making the world’s fastest AI and HPC workloads run even faster.
You’ll join a small, senior team that works between the hardware and Linux OS layers, solving performance problems that affect tens of thousands of GPUs. This is hands-on, high-impact engineering where microsecond gains matter and every optimization is felt at global scale.
Trace, profile, tune, and optimize the Linux kernel and its subsystems (CPU scheduling, memory management, networking stack) for GPU clusters and InfiniBand fabrics
Troubleshoot and resolve complex performance bottlenecks
Integrate and validate new GPU hardware and infrastructure (KVM/QEMU, PCIe devices, Kubernetes)
Improve monitoring, alerting, and automation for large-scale, distributed systems
Occasionally assist customers in optimizing workloads
Key requirements (non-negotiable):
Solid knowledge of Linux internals, with hands-on kernel tracing, profiling, and tuning experience (e.g., perf, ftrace, eBPF, sysctl, kgdb)
Excellent programming skills in system-level C or C++, with a good grasp of data structures and algorithms
Experience in performance optimization (e.g., high-load/high-throughput, low-latency, low-jitter, kernel-bypass, zero-copy, lock-free techniques, synchronization across large-scale clusters)
Scripting or development skills in Go, Python, or similar
Nice-to-haves (not required):
Large-scale clusters (GPU or CPU)
Virtualization stacks (KVM/QEMU), Slurm, Kubernetes
Deep learning frameworks (e.g., PyTorch, TensorFlow)
GPU-specific stack (e.g., CUDA, NCCL)
You love solving deep technical challenges, care about performance down to the microsecond, and want to work on infrastructure that pushes the limits of what’s possible.
Salary: up to 160k + 25% bonus.
Flexible working arrangements.
A dynamic and collaborative work environment that values initiative and innovation.
Location: Amsterdam, or fully remote from anywhere within the EU/EEA

The Next Chapter