
DataOps Engineer


Job description

Company Description

👋🏼 We're Nagarro. We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! By this point in your career, it is not just about the tech you know or how well you can code. It is about what more you want to do with that knowledge. Can you help your teammates proceed in the right direction? Can you tackle the challenges our clients face while always looking to take our solutions one step further to succeed at an even higher level? Yes? You may be ready to join us.

Job Description

We are seeking a DataOps Engineer to support our Tech Delivery and Infrastructure Operations teams by ensuring the reliability, automation, and performance of our analytics and data platforms. The role focuses on DataOps, blending DevOps and SRE practices to sustain and optimize data environments across global business units. You will oversee end-to-end data operations, from SQL diagnostics and pipeline reliability to automation, monitoring, and deployment of analytics workloads on cloud platforms, working closely with Data Engineering, Product, and Infrastructure teams to maintain scalable, secure, high-performing systems.

Key Responsibilities 

  • Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility 
  • Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting 
  • Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance 
  • Collaborate with DevOps and Infra teams to integrate data deployments within CI/CD pipelines (Jenkins, Azure DevOps, Git) 
  • Apply infrastructure-as-code principles (Terraform, Ansible) for provisioning and automation of data environments 
  • Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis 
  • Work closely with security and compliance teams to maintain data governance and protection standards 
  • Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads 
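As a purely hypothetical illustration of the "data validation and quality checks" responsibility above, the sketch below shows a minimal batch-level quality gate in Python. The column names, record shape, and null-rate threshold are invented for the example and are not part of the job description.

```python
# Hypothetical sketch of a batch data-quality check; field names and
# the 1% null-rate threshold are invented for illustration.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def validate_batch(rows, required_columns, max_null_rate=0.01):
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    if not rows:
        failures.append("batch is empty")
        return failures
    for col in required_columns:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            failures.append(
                f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.0%}"
            )
    return failures

# Example batch with one missing amount (50% null rate, so the check fails).
batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
]
print(validate_batch(batch, ["order_id", "amount"]))
```

In practice, a check like this would typically run as a pipeline step after each load, with failures routed to the monitoring stack rather than printed.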

Qualifications

  • 8+ years in DataOps, Data Engineering Operations, or Analytics Platform Support, with good exposure to DevOps/SRE practices 
  • Proficiency in SQL and Python/Shell scripting for automation and data diagnostics 
  • Experience with cloud platforms (AWS mandatory; exposure to Azure/GCP a plus) 
  • Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and IaC frameworks (Terraform, Ansible) 
  • Working knowledge of monitoring tools (Datadog, Grafana, Prometheus) 
  • Understanding of containerization (Docker, Kubernetes) concepts 
  • Strong grasp of data governance, observability, and quality frameworks 
  • Experience in incident management and operational metrics tracking (MTTR, uptime, latency) 
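The last qualification mentions operational metrics such as MTTR. As a hedged sketch of what tracking that metric can involve, the snippet below computes mean time to restore from a list of incident records; the record shape and timestamps are invented for the example.

```python
# Hypothetical sketch of computing MTTR (mean time to restore) from
# incident records; the dict shape is invented for illustration.
from datetime import datetime

def mttr_minutes(incidents):
    """Mean time to restore, in minutes, over resolved incidents only."""
    durations = [
        (inc["resolved_at"] - inc["opened_at"]).total_seconds() / 60
        for inc in incidents
        if inc.get("resolved_at") is not None
    ]
    return sum(durations) / len(durations) if durations else 0.0

incidents = [
    {"opened_at": datetime(2024, 5, 1, 9, 0),
     "resolved_at": datetime(2024, 5, 1, 9, 30)},   # 30 minutes
    {"opened_at": datetime(2024, 5, 2, 14, 0),
     "resolved_at": datetime(2024, 5, 2, 15, 30)},  # 90 minutes
]
print(mttr_minutes(incidents))  # 60.0
```

In a real setup these records would come from an ITSM tool such as ServiceNow rather than hard-coded dicts, and the result would feed a dashboard alongside uptime and latency.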

Additional Information

This position involves a working schedule that partly overlaps with US time zones. We are looking for candidates who can accommodate a schedule between 12:00 pm and 9:00 pm Portugal time.
