
Senior Systems Software Engineer - Deep Learning Solutions

Qualifications

  • Master’s degree or equivalent in Computer Science, Electrical Engineering, or a related field; 12+ years of industry experience, with 8+ years in deep learning model optimization, inference engineering, or neural network compilation; ability to reason about model architectures at the operator/kernel level.
  • Deep knowledge of current DL architectures (transformers, attention variants, ViT, multi-modal/VLMs) and experience with diffusion/state-space models.
  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization; experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.
  • 5+ years of embedded/edge software experience delivering production inference solutions in power-limited, latency-sensitive environments; strong C/C++, embedded OS (QNX/Linux), memory management; parallel programming (CUDA/OpenMP).

Responsibilities:

  • Address customer and partner optimization challenges: engage with automotive OEMs and robotics partners to analyze, debug, and improve their deep learning models on NVIDIA platforms; deliver solutions rather than mere recommendations.
  • Own performance benchmarking: drive MLPerf Edge and industry benchmarks; define methodology, ensure reproducibility, and translate results into actionable optimization priorities.
  • Deliver TensorRT and compiler-stack solutions for edge: create and deploy inference solutions on Jetson, DRIVE, and GPU+ARM platforms; develop Proofs of Readiness and collaborate with the compiler team on Torch-TRT and MLIR-TRT to bridge performance gaps.
  • Collaborate across teams and represent NVIDIA externally: partner with compiler, runtime, and hardware teams; contribute to build reviews and internal roadmaps; present at conferences and partner events.

Job description

NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission, helping innovators build products that save lives, improve working conditions, and raise living standards worldwide. We are hiring a Senior Engineer to join our team as a technical authority in deep learning inference optimization for autonomous vehicles and robotics on edge hardware. This is a hands-on role for an expert who can inspect model architectures down to the operator level, uncover performance bottlenecks through kernel traces, and evaluate how modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) perform on GPUs and SoCs. The work directly shapes how autonomous vehicles and robots sense and respond in the real world!

This team tackles some of the toughest optimization problems in the industry, working at the intersection of novel model architectures, compiler technology, and embedded hardware. We partner closely with automotive OEMs, robotics collaborators, and internal hardware teams to push the limits of what edge devices can achieve.

What you'll be doing:

  • Address customer and partner optimization challenges: Engage directly with prominent automotive OEMs and robotics partners to analyze, debug, and improve their deep learning models on NVIDIA platforms. We emphasize delivering solutions rather than just recommendations.

  • Own performance benchmarking: Drive efforts to achieve leading results on MLPerf Edge and industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.

  • Evaluate emerging model architectures: Analyze new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, for compilation feasibility, memory footprint, and latency on target SoCs.

  • Collaborate across teams: Partner with our compiler, runtime, and hardware teams to connect model-level insight with platform capabilities.

  • Contribute to build reviews and help develop internal roadmap priorities based on real customer workload patterns.

  • Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Help elevate the broader team by bringing back insights and establishing guidelines.

  • Deliver TensorRT and compiler-stack solutions for edge: Create and deploy inference solutions on Jetson, DRIVE, and GPU + ARM platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and work closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to bridge performance gaps.

What we need to see:

  • Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 12+ years of industry experience, with over 8 years in deep learning model optimization, inference engineering, or neural network compilation. You need to be adept at interpreting and reasoning about model architectures at the operator/kernel level, not merely running them.

  • Over 5 years of validated expertise in embedded/edge software, with experience delivering production inference solutions within power-limited, latency-sensitive deployment environments.

  • Deep knowledge of current DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, and experience with diffusion models and/or state space models.

  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization on heterogeneous computing platforms. Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.

  • Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.

  • Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.

  • Demonstrated ability to collaborate directly with external partners and customers in a deeply technical role: debugging their workloads, identifying performance problems, and delivering solutions within production constraints.

Ways to Stand Out from the Crowd:

  • Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.

  • Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.

  • Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.

  • Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.

  • Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.

  • Experience leading technical initiatives across globally distributed engineering teams.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 225,000 CAD - 275,000 CAD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 2, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.
