Generative AI Engineer - Model Optimization & Evaluation

Remote: Full Remote
Contract:

Offer summary

Qualifications:

  • PhD or Master's degree with 3+ years of experience in AI or machine learning.
  • Strong understanding of transformer-based architectures and model optimization techniques.
  • Proficiency in PyTorch and familiarity with CUDA for performance analysis.
  • Excellent communication skills for documenting experiments and collaborating with stakeholders.

Key responsibilities:

  • Design, fine-tune, and optimize transformer-based models focusing on quantization and compression techniques.
  • Develop and maintain model evaluation pipelines and monitor performance trade-offs.
  • Collaborate with domain experts to source and structure high-quality datasets.
  • Document experiments and share findings with both technical and non-technical stakeholders.

RegScale (Computer Hardware & Networking Startup)
https://regscale.com/
11 - 50 Employees

Job description

RegScale is a continuous controls monitoring (CCM) platform purpose-built to deliver fast and efficient GRC outcomes. We help organizations break out of the slow and expensive realities that plague legacy GRC tools by bridging security, risk, and compliance through controls lifecycle management. By leveraging CCM, organizations see massive process improvements, such as 90% faster certification times and 60% less audit prep time. Today’s expansive security and compliance requirements can only be met with a modern, CCM-based approach, and RegScale is the leader in that space.

Position:
At RegScale, we’re building next-generation automation capabilities that rely on cutting-edge artificial intelligence to accelerate and streamline data workflows across compliance, security, and governance. We’re looking for an AI Engineer with a specialized focus on model quantization, fine-tuning, and evaluation, particularly for resource-constrained environments. You will help us push the limits of what’s possible in achieving parity between on-prem and cloud environments, delivering low-latency, cost-efficient AI deployments by shaping and optimizing our transformer-based model workflows.
 
This role requires both a strong understanding of the ML lifecycle (from data preparation to model evaluation) and the ability to reason deeply about computational trade-offs. You should be comfortable working closely with engineers, product leaders, and other stakeholders to translate the latest advancements in AI into highly efficient, production-grade systems.
 
Key Responsibilities
  • Model Training & Optimization:
    • Design, fine-tune, and optimize transformer-based models with a focus on quantization, distillation, pruning, and other compression techniques. Select and justify approaches based on deployment goals, model constraints, and resource availability (a minimal quantization sketch follows this list).
    • Advise on architectural tradeoffs and deploy models across varied environments (cloud, on-prem, edge).
    • Profile models and optimize performance across different hardware (e.g., consumer-grade GPUs, low-end data center cards). Use and interpret CUDA-level metrics to inform optimizations (a profiling sketch follows this list).
  • Evaluation Frameworks:
    • Develop and maintain rigorous model evaluation pipelines, including both standardized benchmarks (e.g., MMLU, SuperGLUE) and custom task-specific tests. Define and monitor performance trade-offs such as accuracy vs. latency and cost vs. throughput. Design input evaluation strategies (e.g., few-shot vs. zero-shot prompting, prompt engineering, sequence-length variations). An evaluation sketch follows this list.
  • Collaborative Dataset Engineering:
    • Work with domain experts, data engineers, and curators to source, label, clean, and structure high-quality datasets.
    • Evaluate data quality issues and create tooling for dataset diagnostics.
  • Research and Prototyping:
    • Stay current with advancements in model compression, efficient inference, and deployment strategies.
    • Rapidly prototype and test new ideas, bringing practical innovations into the team’s workflow.
  • Documentation & Communication:
    • Clearly document experiments, design decisions, and trade-off analyses. Share findings with both technical and non-technical stakeholders, contributing to engineering design and product planning.
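
As a concrete illustration of the quantization work referenced above, here is a minimal sketch of post-training dynamic quantization in PyTorch. The two-layer feed-forward stack stands in for a transformer sublayer; the layer sizes and the choice of dynamic (weight-only int8) quantization are illustrative assumptions, not a prescription for RegScale's actual stack.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# The model below is a stand-in for a transformer feed-forward sublayer;
# dimensions are illustrative. Dynamic quantization targets CPU inference.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.GELU(),
    nn.Linear(3072, 768),
).eval()

# Convert Linear weights to int8; activations remain fp32 and are
# quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.inference_mode():
    out = quantized(torch.randn(1, 768))
print(out.shape)  # torch.Size([1, 768])
```

Dynamic quantization roughly quarters the weight memory of the affected layers; whether the accompanying accuracy loss is acceptable is exactly the kind of trade-off an evaluation pipeline is meant to surface.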
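
Hardware profiling of the kind described above often starts with CUDA event timers. This sketch assumes a CUDA-capable GPU and uses a single Linear layer as a stand-in workload; the warm-up and iteration counts are illustrative.

```python
# Minimal sketch: measuring mean forward-pass latency with CUDA events.
# Assumes a CUDA-capable GPU; the Linear layer is a stand-in workload.
import torch

device = "cuda"
model = torch.nn.Linear(4096, 4096).to(device).eval()
x = torch.randn(64, 4096, device=device)

with torch.inference_mode():
    for _ in range(10):          # warm-up: stabilize kernels and allocator
        model(x)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
with torch.inference_mode():
    for _ in range(100):
        model(x)
end.record()
torch.cuda.synchronize()         # wait for all kernels before reading timers

print(f"mean latency: {start.elapsed_time(end) / 100:.3f} ms")
```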
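
Finally, an evaluation pipeline that reports accuracy against latency might look like the sketch below. Here `model` and `dataset` are hypothetical placeholders for whatever inference callable and labeled evaluation set are in play; the exact metrics would depend on the benchmark (MMLU, SuperGLUE, or a custom task).

```python
# Minimal sketch: an evaluation loop that reports the
# accuracy-vs-latency trade-off for one model variant.
# `model` and `dataset` are hypothetical placeholders.
import time

def evaluate(model, dataset):
    """Return (accuracy, mean latency in seconds) over (inputs, label) pairs."""
    correct, latencies = 0, []
    for inputs, label in dataset:
        t0 = time.perf_counter()
        prediction = model(inputs)
        latencies.append(time.perf_counter() - t0)
        correct += int(prediction == label)
    return correct / len(dataset), sum(latencies) / len(latencies)

# Usage idea: compare variants (e.g., an fp32 baseline vs. an int8 build).
# for name, m in {"fp32": base_model, "int8": quantized_model}.items():
#     acc, lat = evaluate(m, eval_set)
#     print(f"{name}: accuracy={acc:.3f}, mean latency={lat * 1e3:.1f} ms")
```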

Knowledge, Skills, and Experience:
  • PhD, or a Master's degree plus 3+ years of progressive experience
  • Strong understanding of transformer-based architectures
  • Experience with model optimization: quantization, pruning, distillation, or low-rank adaptation (a minimal LoRA sketch follows this list)
  • Familiarity with deployment trade-offs: latency, memory, throughput, model size vs accuracy
  • Ability to reason about and debug performance issues across compute environments (cloud vs on-prem, various GPU types)
  • Familiarity with CUDA basics: enough to analyze compute requirements, understand bottlenecks, and suggest improvements
  • Hands-on experience with fine-tuning language models on real-world datasets
  • Proficiency with PyTorch
  • Experience with Linux, SSH, scripting, and working on remote machines
  • Strong written and verbal communication skills, including documentation of experiments and design rationale
  • Experience designing evaluation protocols beyond standard metrics (e.g., human-in-the-loop evaluation, complexity-based slicing)
  • Experience with automated benchmarking and robustness testing
  • Nice to have: experience building APIs with Python web frameworks (e.g., Django, Flask, FastAPI)
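
Since the list above calls out low-rank adaptation, here is a minimal sketch of the core LoRA idea: freeze a pretrained linear layer and learn a small low-rank update alongside it. The rank, scaling, and dimensions are illustrative assumptions.

```python
# Minimal sketch of low-rank adaptation (LoRA): the pretrained weight is
# frozen and only a small low-rank correction is trained. Dimensions,
# rank, and scaling are illustrative.
import torch
from torch import nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pretrained weights
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)      # torch.Size([2, 768])
```

Initializing `lora_b` to zeros means the adapted layer starts out identical to the frozen baseline, so training begins from the pretrained model's behavior.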

Required profile

Experience

Industry: Computer Hardware & Networking
Spoken language(s): English

Other Skills

  • Communication
