
Senior Staff Engineer, Python & LLM

Requirements

  • 7.5+ years of experience in LLM engineering and Python backend development
  • Experience with LLM frameworks, prompt engineering, and deploying cutting-edge LLMs (GPT-4/5, Claude, Gemini, LLaMA, etc.)
  • Strong backend and cloud skills (FastAPI expert; Django/Flask; microservices; REST/GraphQL; AWS/GCP/Azure; Kubernetes)
  • Experience with MLOps/LLMOps and tools for RAG pipelines, embeddings, and model fine-tuning (LoRA/QLoRA/PEFT)

Roles & Responsibilities:

  • Design, implement, and optimize LLM-powered applications using leading and open-source models
  • Develop advanced prompt engineering, system prompts, and structured output pipelines; build RAG pipelines with embeddings and hybrid search
  • Develop multi-agent systems and autonomous AI workflows; fine-tune and serve foundation models using LoRA/QLoRA and modern inference engines
  • Deploy and scale LLM workloads on cloud and GPU/TPU infrastructure; build scalable backend systems with FastAPI, microservices, and CI/CD for AI workflows

Job description

Company Description

👋🏼 We're Nagarro.

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,700 experts across 39 countries, to be exact). Our work culture is dynamic and non‑hierarchical. We are looking for great new colleagues. That is where you come in!

Job Description

REQUIREMENTS:

  • Total experience: 7.5+ years
  • Strong hands-on expertise in LLM engineering and Python backend development.
  • Expertise in LLM application frameworks, prompt engineering with LLMs, Python, and FastAPI.
  • Proven experience building and deploying applications using cutting‑edge LLMs (GPT‑4/5, Claude, Gemini, Mistral, LLaMA, Mixtral, DeepSeek, etc.).
  • Strong experience with RAG pipelines, embeddings, prompt engineering, and multi‑agent systems.
  • Hands-on expertise with LLM frameworks such as LangChain, LlamaIndex, Haystack, DSPy, AutoGen, CrewAI.
  • Deep knowledge of model fine‑tuning techniques such as LoRA, QLoRA, PEFT, adapters.
  • Experience deploying open‑source LLMs using vLLM, TGI, Ollama, LM Studio, Triton, etc.
  • Strong backend engineering experience with FastAPI (expert), Django or Flask, microservices, and distributed systems.
  • Experience implementing REST, GraphQL, and streaming APIs.
  • Hands-on experience with vector databases such as Pinecone, Weaviate, Milvus, Qdrant, FAISS, Chroma.
  • Knowledge of semantic search, hybrid search, embedding pipelines, and enterprise knowledge systems.
  • Strong understanding of cloud platforms (AWS, GCP, Azure), containers, and Kubernetes.
  • Experience with MLOps/LLMOps practices: CI/CD for ML workflows, monitoring, logging, tracing, and model lifecycle management.
  • Bachelor’s/Master’s in CS, AI, Data Science, or equivalent experience.
  • Excellent communication, collaboration, and problem-solving skills.
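Several of the requirements above (RAG pipelines, embeddings, vector databases, semantic search) share one core primitive: ranking document embeddings by similarity to a query embedding. As a rough, illustrative sketch only (not part of the posting), the ranking step can be expressed in a few lines of NumPy; real systems delegate it to a vector database such as FAISS or Qdrant, and the 4-dimensional vectors below are toy stand-ins for real embedding-model output:

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k document vectors most similar to the query.

    This is the ranking step inside a semantic-search / RAG retriever.
    Illustrative only: production retrievers use a vector database.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # highest similarity first

# Toy 4-dimensional "embeddings" -- real ones come from an embedding model.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
print(cosine_top_k(query, docs, k=2))  # nearest documents first
```

Hybrid search, also named in the requirements, typically combines this dense ranking with a lexical score such as BM25 before re-ranking.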

RESPONSIBILITIES:

  • Design, implement, and optimize LLM-powered applications using leading and open‑source models.
  • Develop advanced prompt engineering, system prompts, and structured output pipelines.
  • Build RAG pipelines with hybrid search, embeddings, and custom retrieval strategies.
  • Develop multi-agent systems and autonomous AI workflows.
  • Fine‑tune, adapt, and serve foundation models using LoRA/QLoRA and modern inference engines.
  • Deploy and scale LLM workloads using vLLM, TGI, Ollama, or GPU/TPU-based systems.
  • Integrate multimodal models across text, image, audio, and video.
  • Build evaluation pipelines for hallucination detection, factual accuracy, quality scoring, and alignment.
  • Implement guardrails, moderation, and safety policies for AI systems.
  • Build scalable backend systems using FastAPI, microservices, event-driven architectures, and secure API frameworks.
  • Optimize backend performance, observability, and reliability.
  • Build ingestion pipelines for document processing, chunking, preprocessing, and semantic indexing.
  • Implement semantic, vector, and hybrid search at scale.
  • Deploy AI systems on cloud platforms, manage Kubernetes inference clusters, and optimize GPU utilization.
  • Set up CI/CD, automated testing, model versioning, and production monitoring for AI workflows.
  • Develop enterprise-grade search, knowledge systems, and document intelligence platforms.
  • Ensure robustness, security, and scalability in all AI and backend systems.
  • Stay updated with the latest GenAI, LLMOps, and backend engineering innovations and share knowledge within the technical community.
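The ingestion responsibilities above (document processing, chunking, preprocessing, semantic indexing) start from a simple primitive: splitting documents into overlapping windows before embedding. A minimal sketch, with arbitrary illustrative parameters; real pipelines usually chunk on sentence or token boundaries rather than raw characters:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows for embedding/indexing.

    Fixed-size chunking with overlap is the simplest RAG-ingestion strategy;
    chunk_size and overlap values here are illustrative, not recommended.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk_text(doc, chunk_size=200, overlap=50)
print(len(chunks), [len(c) for c in chunks])
```

The overlap keeps sentences that straddle a boundary retrievable from at least one chunk, at the cost of some index redundancy.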

Qualifications

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
