📐 About this role
As a Platform Engineer, MLOps, you will deploy and manage the cutting-edge infrastructure behind our AI/ML operations, and you will collaborate with AI/ML engineers and researchers to build a robust CI/CD pipeline that supports safe and reproducible experiments. You will also set up and maintain the monitoring, logging, and alerting systems that oversee extensive training runs and client-facing APIs. You will ensure that training environments are consistently available and efficiently managed across multiple clusters, and you will enhance our containerization and orchestration systems with tools such as Docker and Kubernetes.
This role demands a proactive approach to maintaining large Kubernetes clusters, optimizing system performance, and providing operational support for our suite of software solutions. If you are driven by challenges and motivated by the continuous pursuit of innovation, this role offers the opportunity to make a significant impact in a dynamic, fast-paced environment.
🦸🏻‍♀️ Your responsibilities:
Work closely with AI/ML engineers and researchers to design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.
Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.
Ensure training environments are consistently available and prepared across multiple clusters.
Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.
Operate and oversee large Kubernetes clusters with GPU workloads.
Improve reliability, quality, and time-to-market of our suite of software solutions.
Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
Provide primary operational support and engineering for multiple large-scale distributed software applications.
⭐️ Is this you?
You have professional experience with:
Model training
Hugging Face Transformers
PyTorch
LLMs
TensorRT
Infrastructure as code tools like Terraform
Scripting languages such as Python or Bash
Cloud platforms such as Google Cloud, AWS or Azure
Git and GitHub workflows
Tracing and monitoring
You are familiar with high-performance, large-scale ML systems
You have a knack for troubleshooting complex systems and enjoy solving challenging problems
You are proactive in identifying problems, performance bottlenecks, and areas for improvement
You take pride in building and operating scalable, reliable, secure systems
You are familiar with monitoring tools such as Prometheus, Grafana, or similar
You are comfortable with ambiguity and rapid change
Preferred skills and experience:
5+ years building core infrastructure
Experience running inference clusters at scale
Experience operating orchestration systems such as Kubernetes at scale
#LI-Hybrid
🍩 Benefits & perks (US full-time employees)
Generous PTO, plus company holidays
Medical, dental, and vision coverage for you and your family
Paid parental leave for all parents (12 weeks)
Fertility and family planning support
Early-detection cancer testing through Galleri
Health savings account for eligible plans with company contribution