JOB SUMMARY
In this role, you’ll play a pivotal part in building and optimizing data pipelines that transform large, multi-modal datasets into high-quality training inputs for cutting-edge AI models for drug discovery. You’ll help evolve our data pipeline and storage infrastructure to support faster, more reliable turnarounds for research and development of new models.
You’ll join a multidisciplinary team, collaborating closely with ML scientists, software developers and DevOps engineers to improve the performance and reliability of Python-based workflows. As a key contributor, you’ll participate in the design, testing, and maintenance of core software systems, conduct thoughtful code reviews, and champion engineering best practices—including version control, testing, and documentation.
This role is remote, with a preference for candidates on the East Coast or in the UK.
KEY RESPONSIBILITIES
Design and improve data pipelines that process large, multi-modal datasets from a variety of internal and external sources into training datasets for AI models.
Evolve our data storage layer to support analytics, schema evolution, reproducibility, and efficient data access.
Collaborate with ML engineers to improve the performance and reliability of Python-based data processing workflows.
Collaborate on the creation, testing, and maintenance of software systems.
Review pull requests in adjoining areas of the codebase.
Maintain and mentor others in software best practices, including version control, testing, and documentation.
Communicate work clearly in meetings and company demos, at a level suited to the audience.
QUALIFICATIONS
Minimum of 8 years of related experience with a Bachelor’s degree; or 6 years with a Master’s degree; or 3 years with a PhD; or equivalent experience.
Proven ability to design flexible, maintainable ETL systems.
Experience with data pipeline orchestration tools such as Prefect, Airflow, Argo, Databricks, or Spark.
Understanding of the ML model lifecycle; prior work with scientific or ML workflows is a plus.
Hands-on experience with multi-terabyte scale data processing.
Familiarity with AWS; Kubernetes experience is a bonus.
Knowledge of data lake technologies such as Parquet, Iceberg, and AWS Glue.
Strong Python software engineering skills.
Pragmatic mindset: able to evaluate tradeoffs and find solutions that empower ML researchers to move quickly.
Background in bioinformatics or chemistry is a plus.
ABOUT IAMBIC THERAPEUTICS
Founded in 2019 and headquartered in San Diego, California, Iambic Therapeutics is disrupting the therapeutics landscape with its unique AI-driven drug-discovery platform. Iambic has assembled a world-class team that unites pioneering AI experts and experienced drug hunters with strong track records of success in delivering clinically validated therapeutics. The Iambic platform has been demonstrated to deliver high-quality, differentiated therapeutics to clinical stage with unprecedented speed and across multiple target classes and mechanisms of action. The Iambic team is advancing an internal pipeline of clinical assets to address urgent unmet patient needs. Learn more about the Iambic team, platform, and pipeline at iambic.ai.
MISSION & CORE VALUES
The culture and work at Iambic Therapeutics are profoundly strengthened by the diversity of our people and our differences in background, culture, national origin, religion, sexual orientation, and life experiences. We are committed to building an inclusive environment where a diverse group of talented humans work together to discover therapeutics and create technologies.
PAY AND BENEFITS
We offer industry-leading competitive pay, company-paid healthcare, flexible spending accounts, voluntary life insurance, 401(k) matching, and uncapped vacation. Our brand-new, state-of-the-art facility in beautiful San Diego features an onsite gym, dining, and easy access to great places to live and play.