Machine Learning Engineer, Senior Staff - Model Factory

extra holidays - extra parental leave
Work set-up: 
Full Remote
Contract: 
Experience: 
Senior (5-10 years)
Work from: 
United States

Offer summary

Qualifications:

  • Bachelor's degree in Computer Science or a related field with 7+ years of experience, or a Master's with 5+ years.
  • Strong programming skills in Python and experience with ML frameworks like PyTorch, TensorFlow, or JAX.
  • Hands-on experience with model optimization, quantization, and inference acceleration.
  • Deep understanding of transformer architectures, attention mechanisms, and distributed inference techniques.

Key responsibilities:

  • Design, build, and optimize machine learning deployment pipelines for large-scale models.
  • Implement and enhance model inference frameworks.
  • Develop automated workflows for model development, experimentation, and deployment.
  • Collaborate with research, architecture, and engineering teams to improve model performance and efficiency.

d-Matrix Scaleup https://www.d-matrix.ai
51 - 200 Employees

Job description

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.

We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location:

Hybrid, working onsite at our Santa Clara, CA headquarters 3-5 days per week.

Job Title: Machine Learning Engineer, Senior Staff—Model Factory

What You Will Do:

d-Matrix is a pioneering company specializing in data center AI inferencing solutions. Utilizing innovative in-memory computing techniques, d-Matrix develops cutting-edge hardware and software platforms designed to enhance the efficiency and scalability of generative AI applications.

The Model Factory team at d-Matrix is at the heart of cutting-edge AI and ML model development and deployment. We focus on building, optimizing, and deploying large-scale machine learning models with a deep emphasis on efficiency, automation, and scalability for the d-Matrix hardware. If you’re excited about working on state-of-the-art AI architectures, model deployment, and optimization, this is the perfect opportunity for you!

  • Design, build, and optimize machine learning deployment pipelines for large-scale models.

  • Implement and enhance model inference frameworks.

  • Develop automated workflows for model development, experimentation, and deployment.

  • Collaborate with research, architecture, and engineering teams to improve model performance and efficiency.

  • Work with distributed computing frameworks (e.g., PyTorch/XLA, JAX, TensorFlow, Ray) to optimize model parallelism and deployment.

  • Implement scalable KV caching and memory-efficient inference techniques for transformer-based models (a minimal sketch follows this list).

  • Monitor and optimize infrastructure performance across the levels of the custom hardware hierarchy (cards, servers, and racks) powered by d-Matrix custom AI chips.

  • Ensure best practices in ML model versioning, evaluation, and monitoring.
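
For context on the KV-caching responsibility above: during autoregressive decoding, the key/value projections of past tokens are cached so each step only projects and attends with the newest token. Below is a minimal, single-head PyTorch sketch of that idea; the names (SimpleKVCache, decode_step) are illustrative only and do not refer to any d-Matrix API.

```python
# Illustrative KV-cache sketch for single-head attention; not a d-Matrix API.
import torch
import torch.nn.functional as F


class SimpleKVCache:
    """Accumulates key/value tensors so each decode step only projects the new token."""

    def __init__(self):
        self.k = None  # (batch, seq, head_dim)
        self.v = None

    def update(self, k_new, v_new):
        # Append the newest key/value along the sequence dimension.
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=1)
            self.v = torch.cat([self.v, v_new], dim=1)
        return self.k, self.v


def decode_step(x_new, w_q, w_k, w_v, cache):
    """One decoding step: project only the new token, attend over the cached history."""
    q = x_new @ w_q                                        # (batch, 1, head_dim)
    k, v = cache.update(x_new @ w_k, x_new @ w_v)          # cached keys/values grow by one
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v                   # (batch, 1, head_dim)


if __name__ == "__main__":
    torch.manual_seed(0)
    head_dim, cache = 64, SimpleKVCache()
    w_q, w_k, w_v = (torch.randn(head_dim, head_dim) for _ in range(3))
    for _ in range(4):                                     # four decode steps, one token each
        out = decode_step(torch.randn(1, 1, head_dim), w_q, w_k, w_v, cache)
    print(out.shape, cache.k.shape)                        # (1, 1, 64) and (1, 4, 64)
```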

What You Will Bring:

Required Qualifications:

  • BS in Computer Science with 7+ years of experience, or MS in Computer Science (preferred) with 5+ years.

  • Strong programming skills in Python and experience with ML frameworks like PyTorch, TensorFlow, or JAX.

  • Hands-on experience with model optimization, quantization, and inference acceleration.

  • Deep understanding of transformer architectures, attention mechanisms, and distributed inference (tensor parallel, pipeline parallel, sequence parallel).

  • Knowledge of quantization (INT8, BF16, FP16) and memory-efficient inference techniques (a small quantization example follows this list).

  • Solid grasp of software engineering best practices, including CI/CD, containerization (Docker, Kubernetes), and MLOps.

  • Strong problem-solving skills and the ability to work in a fast-paced, iterative development environment.
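
As a small illustration of the quantization formats mentioned above, the sketch below shows symmetric per-tensor INT8 weight quantization in PyTorch. It is a generic example under simplifying assumptions (a single scale, no calibration data, no per-channel handling) and is not d-Matrix's quantization flow.

```python
# Illustrative symmetric per-tensor INT8 quantization; not d-Matrix's flow.
import torch


def quantize_int8(w: torch.Tensor):
    """Map a float tensor to int8 with a single symmetric scale."""
    scale = w.abs().max() / 127.0                        # largest magnitude maps to +/-127
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale


def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale


if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(256, 256)
    q, scale = quantize_int8(w)
    err = (dequantize(q, scale) - w).abs().max()
    print(f"max abs rounding error: {err.item():.4f}")   # bounded by roughly scale / 2
```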

Preferred Qualifications:

  • Experience working with cloud-based ML pipelines (AWS, GCP, or Azure).

  • Experience with LLM fine-tuning, LoRA, PEFT, and KV cache optimizations (see the LoRA sketch after this list).

  • Contributions to open-source ML projects or research publications.

  • Experience with low-level optimizations using CUDA, Triton, or XLA.
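
For the LoRA/PEFT item above, the sketch below captures the core low-rank-adaptation idea: a frozen base linear layer plus a trainable update B·A scaled by alpha/rank. The class name LoRALinear and the chosen hyperparameters are illustrative assumptions, not a d-Matrix or PEFT-library API.

```python
# Illustrative LoRA-style linear layer; only the low-rank factors are trained.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)             # frozen pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))  # B starts at zero
        self.scaling = alpha / rank

    def forward(self, x):
        # y = x W^T + scaling * x A^T B^T  (only A and B receive gradients)
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(512, 512, rank=8)
    y = layer(torch.randn(4, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(y.shape, trainable)                              # (4, 512) and 8192 (= 2 * 8 * 512)
```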

Why Join Model Factory?

  • Work at the intersection of AI software and custom AI hardware, enabling cutting-edge model acceleration.

  • Collaborate with world-class engineers and researchers in a fast-moving, AI-driven environment.

  • Freedom to experiment, innovate, and build scalable solutions.

  • Competitive compensation, benefits, and opportunities for career growth.

This role is ideal for a self-motivated engineer interested in applying advanced memory-management techniques to large-scale machine learning inference. If you’re passionate about implementing and optimizing machine learning models for custom silicon and excited to explore cutting-edge solutions in model inference, we encourage you to apply.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English

Other Skills

  • Problem Solving
