Member of Technical Staff - AI Inference Engineer

75% Flex
Remote: Full Remote
Experience: Senior (5-10 years)
Work from: California (USA), Massachusetts (USA), United States

Offer summary

Qualifications:

Extensive experience in CUDA, C++, and Triton; proficiency in building inference stacks using ggml, vllm, and DeepSpeed.

Key responsibilities:

  • Collaborate with ML Teams effectively
  • Optimize low-level primitives for efficient model execution
  • Stay up-to-date with advancements in ML inference
Liquid AI
Information Technology & Services Startup
http://liquid.ai/
11 - 50 Employees

Job description

Your missions

As we prepare to deploy our models across various device types, including GPUs, CPUs, and NPUs, we're seeking an expert who can optimize inference stacks tailored to each platform. We're looking for someone who can take our models, dive deep into the task, and return with a highly optimized inference stack—leveraging existing frameworks like ggml, vllm, and DeepSpeed to deliver exceptional throughput and low latency.
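
As a loose illustration of that kind of deliverable, the sketch below uses vllm's offline Python API to batch a few requests for throughput. The model name, prompts, and sampling settings are placeholder assumptions for illustration, not details from this role.

    # Minimal throughput-oriented sketch with vllm's offline API.
    # The model name, prompts, and sampling settings are illustrative
    # assumptions only.
    from vllm import LLM, SamplingParams

    prompts = [
        "Explain KV-cache reuse in one sentence.",
        "What is speculative decoding?",
    ]
    sampling = SamplingParams(temperature=0.0, max_tokens=64)

    # vllm applies continuous batching internally, one of the main
    # levers for raising GPU throughput at serving time.
    llm = LLM(model="facebook/opt-125m")
    for output in llm.generate(prompts, sampling):
        print(output.outputs[0].text)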

The ideal candidate is a highly skilled engineer with extensive experience in CUDA, C++, and Triton, as well as a deep understanding of GPU, CPU, and NPU architectures. They should be self-motivated, capable of working independently, and driven by a passion for optimizing performance across diverse hardware platforms. Proficiency in building and enhancing inference stacks using frameworks like ggml, vllm, and DeepSpeed is essential. Additionally, experience with mobile development and expertise in cache-aware algorithms will be highly valued.
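
To make "optimizing low-level primitives" concrete, here is a minimal sketch of a fused multiply-add kernel in Triton. The kernel, block size, and host wrapper are assumptions chosen for illustration, not code from Liquid AI.

    # Toy Triton kernel: a fused elementwise multiply-add, the kind
    # of low-level primitive this role would tune. BLOCK is an
    # assumed tuning knob; production kernels would be autotuned
    # per target architecture.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def fma_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n_elements
        # Consecutive offsets give coalesced loads -- the kind of
        # memory-access-pattern awareness the posting calls for.
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x * y + 1.0, mask=mask)

    def fused_multiply_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Expects CUDA tensors of equal shape.
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)
        fma_kernel[grid](x, y, out, n, BLOCK=1024)
        return out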

Responsibilities
  • Collaborate with ML Teams: Requires proficiency in Python and PyTorch to effectively interface with machine learning staff at a technical level.

  • Hardware Awareness: Must understand modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance.

  • Proficient in Coding: Expertise in Python, PyTorch, and either CUDA, Triton, or C++ is essential for this role.

  • Optimization of Low-Level Primitives: Responsible for optimizing core primitives to ensure efficient model execution.

  • Self-Guided and Ownership: Ability to independently take a PyTorch model and inference requirements (e.g., maximize GPU throughput or minimize CPU latency) and deliver a fully optimized stack with minimal guidance.

  • Research-Driven: Should stay up-to-date with advancements in ML inference, such as new quantization techniques or speculative decoding, while maintaining focus on delivering practical solutions (a minimal quantization sketch follows this list).
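
As one small, hedged example of a technique named above, the sketch below applies PyTorch's built-in post-training dynamic quantization to a toy model. The model and shapes are assumptions for illustration only.

    # Post-training dynamic quantization with PyTorch's built-in API.
    # The toy model below is an illustrative assumption, not one of
    # Liquid AI's models.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    model.eval()

    # Linear weights are stored as int8 and activations are quantized
    # on the fly, which typically shrinks memory and can cut CPU latency.
    qmodel = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(qmodel(x).shape)  # torch.Size([1, 10])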

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English
