AI Systems Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Experience with LLMs and understanding of their functionality.
  • Proficiency in Python and backend frameworks like FastAPI or Flask.
  • Strong knowledge of Natural Language Processing (NLP) techniques.
  • Familiarity with CI/CD processes and cloud services such as AWS, GCP, or Azure.

Key responsibilities:

  • Build and improve AI pipelines by writing and deploying production Python code.
  • Experiment with LLM tuning strategies to identify project blockers and risks.
  • Own the development of features from prototype to production-ready deployment.
  • Research and implement new tools and techniques to enhance AI capabilities.

Axiom Zen https://www.axiomzen.co/
51 - 200 Employees

Job description

We’re looking for a curious and inventive AI Systems Engineer to help us push the limits of what AI can do for engineering teams. You’ll be joining a small, fast-moving team where ideas turn into shipped features quickly, and where your work will have an immediate impact. You’re someone who loves solving cutting-edge AI challenges, but also knows how to get things into production. You’ll be hands-on with everything from building core application services to tinkering with LLM internals.

About us:

We’re building AI-first tools to make engineering teams faster and smarter. Our mission is to save developers from busywork and give them time to focus on what they’re actually passionate about — building great products & services. From AI-powered summaries and insight generation to spotting blockers before they derail projects, we’re using LLMs and automation to take the friction out of dev workflows.

We’re still early-stage with our AI-powered products and features, so there’s a lot of room to shape where this goes—and a lot of interesting problems left to solve.

What You'll Do
  • Build and improve dynamic AI pipelines by designing, writing & deploying production Python code
  • Experiment with various LLM tuning strategies to answer complex qualitative questions (e.g., can AI help identify which tasks are blockers or risks?)
  • Boost the quality of AI-generated outputs—whether it’s improving summaries, surfacing insights, or generating new categories from scratch
  • Own end-to-end features: from scrappy prototype to stable, production-ready deployment
  • Configure, maintain and deploy distributed application services to cloud environments
  • Get your hands dirty across backend, infrastructure, and AI/ML workflows
  • Iterate fast: tweak prompts, tune models, test outputs, and constantly improve
  • Research new tools, techniques and frameworks to keep us ahead of the curve

About You:
  • You’re AI-savvy: you’ve worked with LLMs and understand how they function under the hood
  • You have a builder mindset. You’ve shipped real Python code to production in a team environment and are comfortable with backend frameworks (FastAPI, Flask, etc.)
  • You know how to engineer LLM prompts, validate outputs, and iterate quickly to get high quality results
  • You have experience using Natural Language Processing (NLP) techniques to extract, transform, and parse textual data into meaningful representations suitable for downstream LLM-based operations.
  • You’re used to experimenting and prototyping in notebooks
  • You have strong DevOps fundamentals, and experience with CI/CD & cloud services (AWS, GCP, Azure)
  • You have experience with monitoring tooling and troubleshooting production issues
  • You’re self-directed, adaptable, and love wearing multiple hats—R&D one day, demoing & debugging your latest pipelines with customers the next
  • You can communicate complex technical concepts to non-technical folks (we may ask you to explain how an LLM works under the hood)
  • You care about how your work impacts users and drives business value
  • You’re always testing new tools or reading up on the latest AI trends

Bonus Points
  • Experience with TensorFlow, PyTorch, or deploying open-source LLMs (Llama, Mistral, etc.) on your own infra
  • Knowledge of graph databases or vector databases
  • Hands-on with serverless (AWS Lambda) or cloud-native tooling (Kubernetes, Docker)
  • An academic or practical background in ML and/or Natural Language Processing (NLP) or computer science
  • Ideally based in Vancouver (but we’re open to remote across the Americas)
Perks:
  • Flexible remote work + unlimited vacation (we actually take it)
  • Annual learning & development budget (conferences, books, courses)
  • Top-tier gear—whatever you need to do your best work
  • Health & wellness perks
  • A no-ego, collaborative team that’s serious about building something great

How to apply:
Send us your resume and/or LinkedIn, plus a link to a project, repo, or anything you’re proud of!

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Adaptability
  • Communication
  • Problem Solving
