Logo for Keep Simple

Senior AI Security Engineer (4625)

Job description

JOB DESCRIPTION


Come work for a large global financial and insurance products company! This is your chance !!


Start a successful career in a renowned company in the international market! Great opportunity!


Global insurance and asset management company seeks a responsible, organized, dynamic and team-oriented person.


RESPONSIBILITIES AND ASSIGNMENTS


Role Summary

We are seeking a Senior AI Security Engineer to own and advance the security posture of our AI-powered products, platforms, and infrastructure. As organizations rapidly adopt LLMs, agentic AI systems, and AI-augmented workflows, the attack surface has fundamentally shifted — and so must our security approach. You will operate at the intersection of cybersecurity and artificial intelligence, defending against novel AI-specific threats while enabling engineering teams to ship AI features quickly and safely.


This role spans the full AI security lifecycle: from threat modeling LLM integrations and designing guardrails against prompt injection, to securing model supply chains, hardening RAG pipelines, and building automated security tooling that scales with our AI platform. You will be the team's go-to authority on AI security risks, responsible AI safeguards, and emerging compliance requirements including the EU AI Act.


Key Responsibilities


AI/LLM Security Architecture & Engineering


  • Design and implement security architectures for LLM-powered applications, AI agents, and copilot experiences;
  • Build guardrails and defensive layers against AI-specific attack vectors: prompt injection (direct and indirect), jailbreaking, data poisoning, model inversion, membership inference, and training data extraction;
  • Architect secure RAG (Retrieval-Augmented Generation) pipelines, ensuring data isolation, access control enforcement, and context boundary integrity;
  • Implement output filtering, content safety classifiers, and toxicity detection for all AI-generated content;
  • Design and enforce authentication, authorization, and rate-limiting for AI/LLM API endpoints and agentic tool-use interfaces;
  • Secure model serving infrastructure including input/output logging, audit trails, and anomaly detection.


AI Threat Modeling & Red Teaming


  • Conduct AI-specific threat modeling for all new AI features and integrations using frameworks such as STRIDE, MITRE ATLAS, and OWASP LLM Top 10;
  • Lead and participate in AI red team exercises: adversarial prompt testing, model robustness evaluation, and data exfiltration simulations;
  • Develop and maintain an AI threat intelligence capability — tracking emerging attack techniques, CVEs in AI/ML frameworks, and adversarial research publications;
  • Build and maintain automated adversarial testing suites for continuous security validation of LLM integrations;
  • Evaluate third-party AI models, APIs, and SaaS tools for security risks before organizational adoption.


AI Governance, Compliance & Responsible AI


  • Drive compliance with AI-specific regulations: EU AI Act, NIST AI RMF, ISO 42001, and industry-specific guidance;
  • Define and enforce AI security policies covering model access, data handling, prompt logging, and output monitoring;
  • Collaborate with legal, compliance, and privacy teams on AI data governance, consent management, and bias auditing;
  • Implement AI model cards, risk assessments, and documentation standards for all deployed models;
  • Monitor and enforce responsible AI principles: fairness, transparency, explainability, and human oversight.


Security Platform & Tooling


  • Build and maintain AI security tooling: automated prompt injection scanners, model vulnerability scanners, and AI-specific SAST/DAST rules;
  • Implement AI-aware security monitoring and alerting in SIEM/SOAR platforms;
  • Develop security guardrails as reusable libraries and middleware that engineering teams can adopt with minimal friction;
  • Create security-as-code patterns for AI deployments: policy-as-code (Open Policy Agent, Cedar), infrastructure security scanning, and secrets management;
  • Instrument AI systems with security telemetry for real-time threat detection and forensic analysis.


Cross-Functional Security Partnership


  • Embed with AI engineering teams during design and development to ensure security is built-in, not bolted-on;
  • Provide security architecture guidance to product managers, AI engineers, and data engineers on secure AI system design;
  • Lead security training and awareness programs focused on AI-specific threats for engineering teams;
  • Participate in incident response for AI/ML-related security events including model compromise, data leaks, and adversarial attacks;
  • Serve as the internal subject matter expert on AI security, presenting at team tech talks and contributing to security documentation.

REQUIREMENTS AND QUALIFICATIONS


Required Qualifications / Skills


  • 7+ years of experience in cybersecurity, application security, or security engineering, with at least 2+ years focused on AI/ML security;
  • Deep understanding of LLM security risks: prompt injection, jailbreaking, data leakage, insecure output handling, and supply chain vulnerabilities (OWASP LLM Top 10);
  • Hands-on experience securing AI/ML systems in production — including model serving, RAG pipelines, agentic AI, and API orchestration layers;
  • Strong software engineering background in Python and at least one of: Go, TypeScript, Rust, or Java;
  • Experience with cloud-native security across AWS, Azure, or GCP — including IAM, network security, encryption, and secrets management;
  • Proficiency with security tooling: SAST, DAST, SCA, SIEM (Splunk, Sentinel, Datadog Security), and vulnerability management platforms;
  • Expertise in authentication/authorization systems: OAuth 2.0, OIDC, SAML, RBAC, ABAC, and zero-trust architecture principles;
  • Strong understanding of Secure SDLC, DevSecOps practices, and shift-left security culture;
  • Excellent communication skills — ability to articulate complex AI security risks to both technical and non-technical stakeholders;
  • Fluent English, both written and spoken;
  • Proven experience in international projects, including collaboration with global and multicultural teams;
  • Strong communication, stakeholder management, and problem-solving skills.


Preferred Qualifications

  • Experience in insurance, financial services, or healthcare — industries with high regulatory and data privacy requirements;
  • Hands-on experience with AI/ML frameworks: LangChain, LangGraph, Hugging Face Transformers, vLLM, Ollama, and AI agent frameworks (CrewAI, AutoGen);
  • Familiarity with AI security tools: Garak, Rebuff, NeMo Guardrails (NVIDIA), Prompt Guard, LLM Guard, Lakera Guard;
  • Experience with vector database security: Pinecone, Weaviate, ChromaDB, pgvector access control and data isolation;
  • Knowledge of emerging AI standards: MCP (Model Context Protocol), Agent-to-Agent (A2A) Protocol, and AI gateway patterns;
  • Security certifications: CISSP, CISM, OSCP, GIAC (GPEN/GWAPT), or cloud-specific security certs (AWS Security Specialty, AZ-500);
  • Experience with AI governance platforms and model risk management frameworks;
  • Published research, blog posts, or conference talks on AI security topics;
  • Experience building AI-powered security tools (using AI to enhance security operations, not just securing AI).


Base Requirements


DevOps Experience

  • All team members must demonstrate hands-on experience with CI/CD pipelines, containerization (Docker/Kubernetes), cloud platforms, and deployment automation.


Infrastructure as Code

  • Proficiency with at least one IaC toolchain (Terraform, Pulumi, CloudFormation/Bicep) is required across all roles — not just DevOps.


Cloud Platforms

  • Working knowledge of at least one major cloud provider (AWS, Azure, or GCP).


Version Control & Collaboration

  • Git-based workflows, code review practices, and collaborative development are expected of every team member.


Experience Requirements

  • Proven delivery experience in international or multi-region projects is required;
  • Previous experience mentoring engineers or acting as a technical lead is strongly preferred.


Education

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field is preferred.

ADDITIONAL INFORMATION


Modelo de contratação:

  • PJ


Forma de atuação:

  • 100% Remota

SEJAM BEM VINDOS A KEEP SIMPLE 👇🏽


Somos uma empresa de consultoria em TI com mais de 10 anos no mercado e contamos com um time de especialistas em recrutamento tech. Nosso processo é 100% focado na experiência de quem tanto importa, o candidato.


Optamos por fazer a diferença e temos orgulho em dizer que todos que passam pela Keep Simple se sentem especiais. Possuímos um ambiente descontraído, colaborativo, e adotamos o ágil de verdade.


Faça parte da nossa história, #vemprakeep 💙🚀


Security Engineer Related jobs

Other jobs at Keep Simple

We help you get seen. Not ignored.

We help you get seen faster — by the right people.

🚀

Auto-Apply

We apply for you — automatically and instantly.

Save time, skip forms, and stay on top of every opportunity. Because you can't get seen if you're not in the race.

AI Match Feedback

Know your real match before you apply.

Get a detailed AI assessment of your profile against each job posting. Because getting seen starts with passing the filters.

Upgrade to Premium. Apply smarter and get noticed.

Upgrade to Premium

Join thousands of professionals who got noticed and hired faster.