Join a fast-growing global insurtech company as a Senior GenAI ML Engineer and drive the development of next-generation generative AI solutions that transform how financial institutions deliver protection and insurance services worldwide.
You'll be working for an insurtech ecosystem serving customers across 35+ markets in Asia, Europe, North America, and Africa, building and operating AI-driven products that make it easier for customers to get the right protection at the point of need. The company is moving beyond traditional software toward delivering automation, intelligent agents, and operational outcomes that significantly modernize business operations and reduce costs.
This role offers the opportunity to innovate in the insurtech environment, working on LLMs, prompt engineering, system integration, and agent orchestration using frameworks like LangChain, AWS Bedrock, and similar technologies. You'll collaborate closely with the Head of AI, data scientists, engineers, and product managers to deliver GenAI-powered contact centers, document processing automation, claims decisioning systems, and other AI-driven solutions across languages and modalities.
Critical Requirements: This is a senior position requiring 6+ years of AI/ML experience, including at least three years specifically in ML engineering, NLP, Generative AI, and LLM technologies. You must have hands-on experience with agentic LLM workflows, RAG systems, prompt engineering, and modern GenAI frameworks. Voice conversational AI experience is a significant advantage for this role.
You'll be driving the global GenAI strategy and implementation, bringing expertise to leverage Large Language Models across multiple languages and modalities for AI-driven insurance and protection products. The work spans customer-facing applications like intelligent contact centers and chatbots, operational automation including document processing and claims decisioning, and internal AI agents that support business processes and decision-making.
Your core technical responsibilities center on building and integrating generative AI applications for customer interactions using LLMs and orchestration frameworks like LangChain, LangGraph, and LlamaIndex. You'll design, develop, and scale internal AI agents and customer-facing agentic solutions, including GenAI-powered contact centers that handle customer inquiries, provide policy information, and support insurance operations across multiple languages and time zones.
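The observe-decide-act loop these orchestration frameworks formalize can be sketched in plain Python. The sketch below is illustrative only: `fake_llm`, `lookup_policy`, and the `TOOLS` registry are hypothetical stand-ins, not part of any framework's API.

```python
# Minimal sketch of an agentic tool-calling loop with a stubbed LLM.
# Frameworks like LangChain/LangGraph implement this pattern with real models.

def lookup_policy(policy_id: str) -> str:
    """Hypothetical tool: return policy details from a stub datastore."""
    policies = {"P-100": "Travel protection, coverage up to $50,000."}
    return policies.get(policy_id, "Policy not found.")

TOOLS = {"lookup_policy": lookup_policy}

def fake_llm(prompt: str) -> dict:
    """Stand-in for an LLM call: decide whether to call a tool or answer."""
    if "Observation:" not in prompt:
        return {"action": "lookup_policy", "input": "P-100"}
    return {"action": "final", "input": "Your policy covers travel up to $50,000."}

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        decision = fake_llm(prompt)
        if decision["action"] == "final":
            return decision["input"]
        # Execute the chosen tool and feed the observation back to the model.
        observation = TOOLS[decision["action"]](decision["input"])
        prompt += f"\nObservation: {observation}"
    return "Step limit reached."
```

In a real deployment the stubbed decision function would be a model call, and the step cap guards against runaway tool loops.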
Working extensively with AWS Bedrock, you'll design and deploy custom solutions leveraging foundation models from leading providers (Anthropic Claude, Amazon Titan, Cohere, etc.), selecting appropriate models for different use cases, optimizing for performance and cost, and integrating these models into production applications. You'll also handle fine-tuning and evaluating large language models using both proprietary insurance/claims data and external datasets to improve model performance for domain-specific tasks.
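Model selection per use case often reduces to a routing table. The following is a hedged sketch; the routing keys and rationales are assumptions for illustration, and the model IDs are examples of the Bedrock naming scheme rather than a fixed catalogue.

```python
# Illustrative routing of use cases to Bedrock model IDs by latency/cost needs.
# The table contents are assumptions, not a recommendation.

MODEL_ROUTES = {
    "chat": ("anthropic.claude-3-haiku-20240307-v1:0", "low latency, low cost"),
    "claims_analysis": ("anthropic.claude-3-sonnet-20240229-v1:0", "stronger reasoning"),
    "embedding": ("amazon.titan-embed-text-v2:0", "vector search"),
}

def select_model(use_case: str) -> str:
    """Return the model ID configured for a use case, defaulting to chat."""
    model_id, _rationale = MODEL_ROUTES.get(use_case, MODEL_ROUTES["chat"])
    return model_id

# In production the selected ID would be passed to boto3's "bedrock-runtime"
# client, e.g. client.invoke_model(modelId=select_model("chat"), body=payload).
```

Keeping the routing in one place makes cost/performance trade-offs auditable and lets model upgrades happen without touching call sites.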
Production engineering is critical - you'll build scalable APIs and backend services to support real-time AI inference, ensuring systems can handle high-volume customer interactions with low latency and high reliability. This includes implementing RAG (Retrieval-Augmented Generation) systems to ground LLM outputs in accurate insurance knowledge bases, policy documents, and regulatory information, ensuring responses are factually correct and compliant.
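The grounding step of a RAG system can be sketched as retrieve-then-prompt. This minimal sketch uses naive keyword overlap for ranking (a production system would use embeddings and a vector store), and the knowledge-base snippets are invented for illustration.

```python
# Minimal RAG sketch: ground an LLM prompt in retrieved policy snippets.
# Retrieval here is naive keyword overlap; all document text is made up.

KNOWLEDGE_BASE = [
    "Claims must be filed within 30 days of the incident.",
    "Travel delay coverage begins after a 6-hour delay.",
    "Premiums are payable monthly or annually.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by the number of lowercase words shared with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to answer from context only."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
```

The "answer using only the context" instruction is what ties model outputs back to the knowledge base, which is the core of keeping responses factual and compliant.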
Quality, safety, and governance are paramount in regulated insurance environments - you'll ensure reliability, privacy, and accuracy of GenAI responses by applying rigorous testing and monitoring tools, implement guardrails to prevent harmful or incorrect outputs, and contribute to governance efforts ensuring solutions follow responsible AI principles including transparency, data privacy, and compliance with insurance industry standards and regulations.
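An output guardrail of the kind described above can be sketched as a post-processing filter. The patterns and blocked terms below are simplified assumptions; a production guardrail stack would combine many checks (PII, toxicity, grounding, regulatory language).

```python
import re

# Illustrative output guardrail: block prohibited claims and redact common
# PII patterns before a GenAI response reaches a customer. Patterns are
# deliberately simplified examples.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII pattern with a labeled redaction marker."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def guard_response(response: str,
                   blocked_terms: tuple[str, ...] = ("guarantee",)) -> str:
    """Refuse responses containing prohibited claims, then redact PII."""
    if any(term in response.lower() for term in blocked_terms):
        return "I can't provide that information. Please contact an agent."
    return redact_pii(response)
```

Running such checks on every response, and logging what was blocked or redacted, is also what makes the governance story auditable.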
Your work will support the global GenAI roadmap, bringing expert insights on technologies to adopt, tracking industry trends, and recommending tools or approaches to improve system performance and capability. You'll collaborate extensively with data science teams to apply generative AI to various business areas including document processing (policy documents, claims forms), claims decisioning (automated claims assessment), and reporting and analytics.
MLOps and LLMOps practices are essential - you'll implement and manage distributed training pipelines for LLMs to ensure scalability and efficiency, establish versioning and deployment practices for LLM applications, monitor model performance and drift in production, and automate retraining workflows. Understanding transformer architectures, prompt engineering techniques, LLM evaluation methodologies, and the latest advances in generative AI is fundamental to the role.
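Drift monitoring in production can start very simply: compare a summary statistic of a live metric window against a baseline. The metric choice and tolerance below are assumptions for illustration; real LLMOps pipelines track many signals (latency, refusal rate, eval scores) with statistical tests.

```python
# Illustrative drift check: flag when the mean of a production metric
# (e.g. response length in tokens) shifts beyond a relative tolerance
# from a baseline window. Thresholds are assumptions.

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def drifted(baseline: list[float], current: list[float],
            tolerance: float = 0.25) -> bool:
    """Return True when the current mean moves more than `tolerance`
    (as a fraction of the baseline mean) away from the baseline."""
    base = mean(baseline)
    return abs(mean(current) - base) / base > tolerance
```

A drift flag like this would typically page an owner or trigger the automated retraining/evaluation workflow rather than act on its own.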
Working in a global, distributed team requires excellent communication - documenting solutions clearly, collaborating with engineering, product, and customer teams to align requirements and outputs, and articulating complex technical ideas to cross-disciplinary internal and external stakeholders. The role demands someone who is proactive, curious, collaborative, adaptable, and an excellent communicator.
Core Tech Stack: Python (primary), LLM frameworks (LangChain, LangGraph, LlamaIndex), AWS Bedrock, foundation models
ML Frameworks: PyTorch, scikit-learn, Hugging Face Transformers
Cloud Platform: AWS (preferred) with ML services (Bedrock, SageMaker, Lambda)
Focus Areas: Agentic AI, RAG systems, prompt engineering, LLM fine-tuning, conversational AI
Domain: Insurtech - insurance, protection products, claims processing, customer service
Scale: Global deployment across 35+ markets, multiple languages, high-volume customer interactions
Location: Anywhere Globally (100% Remote)