Research Engineer - Decentralized AI Systems

Remote: Full Remote

Offer summary

Qualifications:

  • Proficiency in AI programming languages such as Python.
  • Experience with distributed systems, cloud computing, or blockchain technologies.
  • Familiarity with AI frameworks and tools like PyTorch and Ray.
  • A background in computer science, engineering, or a related field is preferred.

Key responsibilities:

  • Design and implement components of the DeOS for efficient AI workload orchestration.
  • Develop and optimize software for managing geo-distributed, heterogeneous GPU resources.
  • Collaborate with cross-functional teams to integrate support for various LLMs and AI models.
  • Ensure high availability and fault tolerance in decentralized computing environments.

Yotta Labs (https://yottalabs.ai)
2 - 10 Employees

Job description

Location: Remote (Global)

Type: Full-time

Company: Yotta Labs

Apply: careers@yottalabs.ai

🧠 About Yotta Labs

Yotta Labs is pioneering the development of a Decentralized Operating System (DeOS) for AI workload orchestration at a planetary scale. Our mission is to democratize access to AI resources by aggregating geo-distributed GPUs, enabling high-performance computing for AI training and inference on a wide spectrum of hardware—from commodity to high-end GPUs. Our platform supports major large language models (LLMs) and offers customizable solutions for new models, facilitating elastic and efficient AI development.

🛠️ Role Overview

We are seeking a Research Engineer with a passion for decentralized systems and AI infrastructure. In this role, you will contribute to the development of our DeOS framework, focusing on optimizing AI workloads across a heterogeneous network of GPUs. Your work will directly impact the scalability and performance of AI applications deployed on our platform.

🎯 Responsibilities

  • Design and implement components of the DeOS for efficient AI workload orchestration.

  • Develop and optimize software for managing geo-distributed, heterogeneous GPU resources.

  • Collaborate with cross-functional teams to integrate support for various LLMs and AI models.

  • Ensure high availability and fault tolerance in decentralized computing environments.

  • Contribute to open-source projects and engage with the developer community.

Qualifications

  • Proficiency in programming languages commonly used in AI, such as Python.

  • Experience with distributed systems, cloud computing, or blockchain technologies.

  • Familiarity with AI frameworks and tools (e.g., PyTorch, Ray).

  • Strong problem-solving skills and the ability to work in a collaborative, remote environment.

  • A background in computer science, engineering, or a related field is preferred.

🌟 Preferred Experience

  • Contributions to open-source projects in AI or decentralized systems.

  • Experience with GPU programming and optimization.

  • Familiarity with frameworks and libraries like vLLM, SGLang, and Verl.

🌐 Why Join Yotta Labs?

  • Be part of a visionary team aiming to redefine AI infrastructure.

  • Work on cutting-edge technologies that bridge AI and decentralized computing.

  • Collaborate with experts from leading institutions and tech companies.

  • Enjoy a flexible, remote work environment that values innovation and autonomy.

📩 How to Apply

Interested candidates should apply directly or send their resume and a brief cover letter to careers@yottalabs.ai. Please include links to any relevant projects or contributions.

Required profile


Spoken language(s): English

Other Skills

  • Collaboration
  • Problem Solving
