You want to build at the cutting edge of AI, pushing the limits of scalable AI security. At Lakera, we are not just another research lab: we are engineering the next generation of security foundation models with immediate, large-scale impact. As a foundational member, you will shape our approach, influence key decisions, and build systems that secure AI applications and agentic systems at scale.
We are building the next generation of security foundation models, and to do that we need world-class research engineers who can design and scale the infrastructure that makes it possible. This isn’t just about supporting research; it’s about driving it. You’ll work on the cutting edge of distributed training, optimizing large-scale LLMs from the ground up and engineering systems that push the frontier of AI security. From scaling training across GPUs to optimizing inference, you’ll be at the heart of building the foundation that enables AI to be deployed securely at scale.
About Lakera
Lakera is on a mission to ensure AI does what we want it to do. We are heading towards a future where AI agents run our businesses and personal lives. Here at Lakera, we're not just dreaming about the future; we're building the security foundation for it. We empower security teams and builders so that their businesses can adopt AI technologies and unleash the next phase of intelligent computing.
We work with Fortune 500 companies, startups, and foundation model providers to protect them and their users from adversarial misalignment. We are also the company behind Gandalf, the world’s most popular AI security game.
Lakera has offices in San Francisco and Zurich.
We move fast and work with intensity. We act as one team but expect everyone to take substantial ownership and accountability. We prioritize transparency at every level and are committed to always raising the bar in everything we do. We promote diversity of thought as we believe that creates the best outcomes.
Example Projects
Scale training of security foundation models to very large parameter spaces.
Design and optimize distributed training pipelines for efficient LLM post-training.
Implement reinforcement learning-based post-training methods for LLMs at scale.
Develop adversarial training pipelines to harden AI systems against real-world threats.
Engineer robust ML infrastructure to support high-performance AI security research.
About You
You love to build fast. You thrive in designing and scaling AI systems and making models actually work in production. You are excited by real-world AI security problems and see engineering as a way to solve them at scale. You enjoy working in a fast-paced, impact-driven environment where you own the full ML stack and can push boundaries.
We are looking for at least one of the following:
Proven track record of advancing research through engineering, whether by implementing novel ML techniques, scaling complex experiments, or optimizing training and inference for cutting-edge models.
Experience architecting and optimizing large-scale distributed systems, whether in AI, cloud infrastructure, networking, or other demanding environments.
A track record of scaling impactful research efforts, ensuring that new ideas can move from concept to large-scale deployment.
Experience scaling LLM training across multiple GPUs/TPUs, optimizing for efficiency and performance.
Experience building robust ML pipelines that support large-scale training and inference in production.
An advanced degree in ML, AI, or a related field is a plus but not a requirement; we care about real-world impact over credentials.
If you’re ready to push the limits of AI security and build at scale, let’s talk.