We’re here to help the smartest minds on the planet build Superintelligence. The labs pushing the edge? They run on Lambda. Our gear trains and serves their models, our infrastructure scales with them, and we move fast to keep up. If you want to work on massive, world-changing AI deployments with people who love action and hard problems, we’re the place to be.
If you’d like to build the world’s best deep learning cloud, join us.
*Note: This position requires presence in our San Francisco or San Jose office location 4 days per week; Lambda’s designated work-from-home day is currently Tuesday.
In the world of distributed AI training and inference, raw GPU and CPU horsepower is just a part of the story. High-performance networking and storage are the critical components that enable and unite these systems, making groundbreaking AI training and inference possible.
The Lambda Infrastructure Engineering organization forges the foundation of high-performance AI clusters by welding together the latest in AI storage, networking, GPU, and CPU hardware.
Our expertise lies at the intersection of:
High-Performance Distributed Storage Solutions and Protocols: We engineer the protocols and systems that serve massive datasets at the speeds demanded by modern clustered GPUs.
Dynamic Networking: We design advanced networks that provide multi-tenant security and intelligent routing without compromising performance, using the latest in AI networking hardware.
Compute Clustering and Virtualization: We enable cutting-edge virtualization and clustering that allow AI researchers and engineers to focus on AI workloads, not AI infrastructure, unleashing the full compute bandwidth of clustered GPUs.
AI training and inference rely on petabytes of data hosted on large, high-performance storage arrays. At Lambda, the Infrastructure Storage Team’s job is to ensure that the data powering AI is fast, reliable, and available through a variety of fit-for-purpose access protocols.
We’re looking for an experienced Senior Software Engineer to join our storage team, which is responsible for developing and implementing storage software for our next-generation on-premises storage solutions. This role requires expertise in distributed systems and an in-depth understanding of file, block, and object storage protocols. You’ll work on building scalable and resilient storage services that power our AI and machine learning infrastructure.
What You’ll Do:
Design, develop, and maintain software for storage systems, focusing on performance, scalability, and reliability.
Implement and optimize storage protocol APIs for file (e.g., NFS, SMB), block (e.g., iSCSI, Fibre Channel), and object (e.g., S3) access.
Develop distributed systems for managing and orchestrating storage resources across multiple storage solutions and redundant arrays.
Collaborate with hardware and system architects to integrate software with various storage solutions, including NVMe and GPU-accelerated storage.
Troubleshoot and debug complex issues in a production data center environment.
Contribute to the full software development lifecycle, from requirements gathering and design to deployment and maintenance.
You Have:
Bachelor’s or Master’s degree in Computer Science or a related field.
5+ years of experience in software development for storage systems.
Proven experience with distributed systems programming and concepts such as load balancing, data durability, consensus algorithms, fault tolerance, and data consistency.
Strong programming skills in languages such as C, C++, Go, or Python.
Deep understanding of storage protocols, including:
File: NFS, SMB, Lustre
Block: iSCSI, Fibre Channel
Object: S3, Swift
Experience with Linux kernel internals and system-level programming.
Familiarity with containerization technologies like Docker and Kubernetes, and with running production workloads in these environments.
Familiarity with CI/CD and QA practices for distributed systems development environments.
Nice to Have:
Experience with AI/ML workloads and the unique storage challenges they present.
Knowledge of data center networking and highspeed interconnects (e.g., InfiniBand, RoCE).
Experience with performance tuning and optimization of storage systems.
Familiarity with hardware acceleration technologies, specifically GPUs and DPUs.
Salary Range Information
The annual salary range for this position has been set based on market data and other factors. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.
About Lambda
Founded in 2012, ~400 employees (2025) and growing fast
We offer generous cash & equity compensation
Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, InQTel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, Crescent Cove.
We are experiencing extremely high demand for our systems, with quarter-over-quarter, year-over-year profitability
Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
Health, dental, and vision coverage for you and your dependents