
Member of Technical Staff (Open Role)

Remote: Full Remote
Contract:
Experience: Senior (5-10 years)
Work from: France, New York (USA), United States

Offer summary

Qualifications:

M.Sc./Ph.D. in computer science or a related field; strong programming skills with a focus on machine learning.

Key responsibilities:

  • Build foundational technology for Adaptive
  • Contribute to product roadmap and team collaboration
Adaptive ML · Information Technology & Services · Startup · https://www.adaptive-ml.com/
11-50 Employees

Job description

About the team

Adaptive is helping companies build singular generative AI experiences by democratizing the use of reinforcement learning. We are building the foundational technologies, tools, and products required for models to learn directly from users' interactions, and to self-critique and self-improve from simple written guidelines. Our tightly-knit team was previously involved in creating state-of-the-art open-access large language models such as Falcon-180B. We have closed a $20M seed round with Index & ICONIQ and look forward to shipping the first version of our platform, Adaptive Engine, in early 2024.

Our Technical Staff is responsible for building the foundational technology powering Adaptive, in line with the requests and requirements identified by our Product and Commercial Staff. We strive to build excellent, robust, and efficient technology, and to conduct honest, at-scale research with high impact for our roadmap and customers.

About the role

This is an open role describing a generic position in our Technical Staff. If any of the below seems like a fit, please apply!

As a Member of Technical Staff, you will help build the foundational technology powering Adaptive, typically by contributing to our internal LLM stack, Adaptive Harmony. We fundamentally believe that generative AI is best approached as a so-called "big science", combining large-scale engineering with stringent empirical research. Accordingly, we have a strong bias for doing things at scale and for systematic empirical demonstrations.

Some examples of tasks members of our Technical Staff pursue on a daily basis:

  • Develop robust software in Rust, interfacing between easy-to-use Python recipes and high-performance distributed training code running on hundreds of GPUs;

  • Profile and iterate on GPU inference kernels in Triton or CUDA, identifying memory bottlenecks, optimizing latency, and deciding how to adequately benchmark an inference service;

  • Develop and execute an experiment plan to better understand the nuanced differences between DPO and PPO in a fair and systematic way;

  • Build data pipelines to support reinforcement learning from noisy and diverse user interactions across varied tasks;

  • Experiment with novel ways to combine adapters to steer the behavior of language models;

  • Build hardware correctness tests to identify and isolate faulty GPUs at scale.

We are looking for self-driven, intense individuals who value technical excellence, honesty, and growth.

Your responsibilities

Generally,

  • Build the foundational technology powering Adaptive, with a focus on high-performance software engineering and large-scale RL research;

  • Contribute to our product roadmap, by identifying promising trends and high-impact findings;

  • Report clearly on your work to a distributed collaborative team, with a bias for asynchronous written communication.

On the engineering side,

  • Write high-quality software in Rust, with a focus on performance and robustness;

  • Profile dedicated GPU kernels in CUDA or Triton, optimizing across latency/compute-bound regimes for complex workloads;

  • Identify and resolve bugs in large distributed systems, at the intersection of software and hardware correctness.

On the research side,

  • Conduct research on large language models or diffusion models, systematically exploring how reinforcement learning can be used to personalize models;

  • Reproduce results from the RL, LLM, and diffusion literature, separating the fluff from the groundbreaking;

  • Own a research agenda, with a bias for at-scale, systematic empirical research.

Nearly all members of our Technical Staff hold a position that is a blend of engineering and research.

Your (ideal) background

The background below is only suggestive of a few pointers we believe could be relevant. We welcome applications from candidates with diverse backgrounds; do not hesitate to get in touch if you think you could be a great fit, even if the below doesn't fully describe you.

  • An M.Sc./Ph.D. in computer science, or demonstrated experience in software engineering, preferably with a focus on machine learning;

  • Strong programming skills, especially for distributed problems where performance is key;

  • Contributions to relevant open-source projects, such as efficient implementations of models and RL algorithms;

  • A track record of publications at top-tier machine learning venues (e.g., NeurIPS, JMLR);

  • A passion for the future of generative AI, and an eagerness to build foundational technology to help machines deliver more singular experiences.

Benefits
  • Comprehensive medical (health, dental, and vision) insurance;

  • 401(k) plan with 4% matching (or equivalent);

  • Unlimited PTO — we strongly encourage at least 5 weeks each year;

  • Mental health, wellness, and personal development stipends;

  • Visa sponsorship if you wish to relocate to New York or Paris.

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Collaboration
  • Communication
