We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
We're looking for full stack engineers who are passionate about AI tooling, have a strong product sense, and enjoy working at the intersection of research and usability. You'll be part of Luma's applied research team and build interfaces, pipelines, and services that bring the latest model capabilities directly into the hands of creators.
In the morning, you tweak a prototype interface used to test a new spatial-control feature for video generation, hooking it into an inference service you helped build last week.
After a quick sync with the Research team, you jump into a session with Technical Art to refine the UI for LoRA training on a stylized dataset.
By afternoon, you're wiring up a ComfyUI extension to support a new model architecture. You close the day by reviewing feedback from a design partner who used your tool in a client demo.
Build web-based tools and UIs that make advanced model behavior feel intuitive and usable
Design and implement pipelines for model training, fine-tuning (e.g., LoRA), and real-time inference
Collaborate with Research and Technical Art teams to prototype new workflows grounded in evolving model capabilities
Extend and integrate open-source frameworks like ComfyUI to fit internal and client-specific use cases
Contribute to both internal experimentation tooling and external-facing production features
Support productization efforts for enterprise and B2B applications involving multimodal AI
5+ years of experience in software engineering, ideally across both backend and frontend systems
Strong Python and JavaScript/TypeScript skills; fluency in modern web frameworks (e.g., React, Next.js)
Familiarity with ML pipelines and GPU-based compute (e.g., PyTorch, ffmpeg, SLURM/Kubernetes)
Experience with training and deploying models (LoRA, ControlNet, diffusion models a plus)
Passion for building usable, elegant tools for technical artists, researchers, or creative professionals
Bonus: experience shipping tools that sit between R&D and production, or serve external creative clients
Experience with LoRA, ControlNet, or real-time AI media generation
Tools that shipped between R&D prototypes and production-quality software
Interfaces or systems that helped non-engineers leverage AI models creatively
Work on frontier multimodal models, and build the tools that make them real
Collaborate with an exceptional team of engineers, researchers, and technical artists
Shape the future of how humans interact with generative AI, not just inside Luma but out in the world
Contribute at the boundary of research, product, and creativity, with real user impact