Calix provides the cloud, software platforms, systems and services required for communications service providers to simplify their businesses, excite their subscribers and grow their value.
Calix is seeking a highly skilled AI Ops Engineer to join our cutting-edge AI/ML team. In this role, you will be responsible for building, scaling, and maintaining the infrastructure that powers our machine learning and generative AI applications. You will work closely with data scientists, ML engineers, and software developers to ensure our ML/AI systems are robust, efficient, and production-ready.
This is a remote-based position that can be located anywhere in the United States or Canada.
Key Responsibilities:
Design, implement, and maintain scalable infrastructure for ML and GenAI applications
Deploy, operate, and troubleshoot production ML/GenAI pipelines/services
Build and optimize CI/CD pipelines for ML model deployment and serving
Scale compute resources across CPU/GPU architectures to meet performance requirements
Implement container orchestration with Kubernetes
Architect and optimize cloud resources on GCP for ML training and inference
Set up and maintain runtime frameworks and job management systems (Airflow, Kubeflow, MLflow, etc.)
Establish monitoring, logging, and alerting for system observability
Optimize system performance and resource utilization for cost efficiency
Develop and enforce AIOps best practices across the organization
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience)
5+ years of overall software engineering experience
3+ years of focused experience in DevOps/AIOps or similar ML infrastructure roles
Strong experience with containerization and orchestration using Docker and Kubernetes
Demonstrated expertise in cloud infrastructure management, preferably on GCP (AWS or Azure experience also valued)
Proficiency with workflow management tools such as Airflow and Kubeflow
Strong CI/CD expertise with experience implementing automated testing and deployment pipelines
Experience scaling distributed compute architectures utilizing various accelerators (CPU/GPU)
Solid understanding of system performance optimization techniques
Experience implementing comprehensive observability solutions for complex systems
Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack)
Strong proficiency in Python
Proficient in at least one of the following performance-oriented programming languages: C, C++, Go, Rust
Familiarity with ML frameworks such as PyTorch and ML platforms like SageMaker or Vertex AI
Excellent problem-solving skills and ability to work independently
Strong communication skills and ability to work effectively in cross-functional teams
#LI-Remote
Compensation will vary based on geographical location (see below) within the United States. Individual pay is determined by the candidate's location of residence and multiple factors, including job-related skills, experience, and education.
There are different ranges applied to specific locations. The average base pay range (or OTE range for sales) in the U.S. for the position is listed below.
San Francisco Bay Area Only:
133,400.00 - 226,600.00 USD Annual
All Other Locations:
116,000.00 - 197,000.00 USD Annual