
Data Platform DevOps Engineer

Requirements

  • Bachelor’s or Master’s degree in Computer Science, IT, Data Science, or related field
  • 5+ years in DevOps, Platform Engineering, or SRE; at least 3 years with Microsoft Azure and Azure DevOps; strong experience with Microsoft Fabric (Power BI, Synapse, Data Factory)
  • Certifications (preferred): Microsoft Azure (Administrator / Data Engineer / Solutions Architect); Microsoft Fabric and Power BI certifications
  • Strong scripting and IaC skills (Python, PowerShell, Bash; Terraform/ARM/Ansible); experience with CI/CD, Docker/Kubernetes; knowledge of security, governance, and monitoring

Roles & Responsibilities

  • Platform engineering: design, build, and maintain Microsoft Fabric components (OneLake, Lakehouse, Data Warehouse, Data Factory, Power BI, Real-Time Intelligence); architect scalable, multi-region data platform solutions; develop Infrastructure as Code (Terraform/ARM templates)
  • CI/CD and deployment automation: build and manage pipelines in Azure DevOps; implement Fabric REST API deployments and Git-based workflows; define environment strategies (Dev/Test/Prod) and automate provisioning
  • Platform operations and maintenance: monitor performance and resource utilization; implement observability; manage capacity planning, upgrades, DR/backups, and lifecycle management
  • Security, governance, and compliance: implement Microsoft Purview governance; configure RBAC and data protection; manage Entra ID and managed identities; ensure GDPR/HIPAA/ISO alignment and secure networking

Job description

This is a remote position.

Role Overview


Job Title: Data Platform DevOps Engineer
Department: Digital Transformation Department (DTD)
Reporting To: Manager – AI & Data
Technical Reporting: Senior Data Architect
Organizational Context

The International Federation of Red Cross and Red Crescent Societies (IFRC) is the world’s largest humanitarian organization, operating through a network of 191 National Societies. IFRC delivers humanitarian assistance across disasters, health emergencies, and crises globally.

The Digital Transformation Department (DTD) leads the organization’s digital strategy, enabling innovation, digital services, and data-driven decision-making across the IFRC network.

The AI & Data Unit is responsible for:

  • Data platform management

  • Data governance and strategy

  • AI enablement and analytics

  • Data product lifecycle management


Role Purpose

The Data Platform DevOps Engineer is responsible for designing, implementing, and managing IFRC’s enterprise data platform using Microsoft Fabric and the Azure ecosystem.

This role combines DevOps, platform engineering, and cloud infrastructure expertise to ensure a secure, scalable, and high-performing data platform supporting global humanitarian operations.


Key Responsibilities

1. Platform Engineering & Architecture

  • Design, build, and maintain Microsoft Fabric platform components (OneLake, Lakehouse, Data Warehouse, Data Factory, Power BI, Real-Time Intelligence)

  • Architect scalable, multi-region data platform solutions

  • Develop Infrastructure as Code (IaC) using Terraform, ARM templates, or similar tools

  • Optimize OneLake storage structures, shortcuts, mirroring, and data lake performance

  • Support workloads across data engineering, analytics, and business intelligence
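As an illustration of the OneLake storage work above, lakehouse contents are addressed through ABFS-style URIs on the OneLake DFS endpoint. The helper below is a minimal sketch of composing such paths per environment; the workspace and lakehouse names are hypothetical placeholders, and the exact URI shape should be verified against current Microsoft Fabric documentation.

```python
# Sketch: composing OneLake ABFS paths for a lakehouse's Files area.
# The endpoint onelake.dfs.fabric.microsoft.com is OneLake's DFS
# endpoint; workspace/lakehouse names below are hypothetical.

ONELAKE_ENDPOINT = "onelake.dfs.fabric.microsoft.com"

def onelake_path(workspace: str, lakehouse: str, relative: str = "") -> str:
    """Build an abfss:// URI pointing into a lakehouse's Files area."""
    base = f"abfss://{workspace}@{ONELAKE_ENDPOINT}/{lakehouse}.Lakehouse/Files"
    return f"{base}/{relative}" if relative else base

# Example: a medallion-style layout per environment (names illustrative)
for env in ("dev", "test", "prod"):
    print(onelake_path(f"ifrc-data-{env}", "core", "bronze/ingest"))
```

A helper like this keeps environment-specific paths out of individual pipeline scripts and in one reviewable place.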


2. CI/CD & Deployment Automation

  • Build and manage CI/CD pipelines using Azure DevOps and Fabric deployment pipelines

  • Implement automated deployment strategies using Fabric REST APIs and Git integration

  • Define branching strategies and environment workflows (Dev, Test, Prod)

  • Automate provisioning, configuration, and deployment of platform components

  • Manage environment-specific configurations and deployment rules
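To make the REST-API deployment work above concrete, the sketch below builds (but does not send) a "deploy all" request in the shape of the Power BI / Fabric deployment pipelines `deployAll` operation, promoting content from one stage to the next. The pipeline ID and option flags are illustrative; verify the endpoint and body against current Microsoft documentation before use.

```python
# Sketch: constructing a deployment-pipeline promotion request.
# Builds the URL and JSON body only; no credentials, no HTTP call.
import json

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_all_request(pipeline_id: str, source_stage_order: int):
    """Return (url, body) for promoting everything from one stage to
    the next (e.g. 0 = Dev -> Test, 1 = Test -> Prod)."""
    url = f"{API_BASE}/pipelines/{pipeline_id}/deployAll"
    body = json.dumps({
        "sourceStageOrder": source_stage_order,
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    })
    return url, body

url, body = build_deploy_all_request("00000000-0000-0000-0000-000000000000", 0)
print(url)
```

Keeping the request construction separate from the HTTP client makes the promotion step easy to unit-test inside an Azure DevOps pipeline.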


3. Platform Operations & Maintenance

  • Monitor platform performance, health, and resource utilization

  • Implement observability frameworks using tools like Azure Monitor, Prometheus, Grafana

  • Manage capacity planning, cost optimization, and resource allocation

  • Perform platform upgrades, patching, and lifecycle management

  • Ensure disaster recovery, backup, and business continuity readiness
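The capacity-planning work above often reduces to threshold logic over utilization metrics. The following is a minimal sketch of that kind of check, as might sit behind an Azure Monitor alert or a scheduled ops script; the capacity names, metric values, and 80% limit are illustrative only.

```python
# Sketch: flag capacities whose average utilization exceeds a limit.
# Names, samples, and the threshold are illustrative.

def over_threshold(samples: dict, limit: float = 80.0) -> list:
    """Return capacities whose average utilization (%) exceeds `limit`."""
    flagged = []
    for capacity, values in samples.items():
        if sum(values) / len(values) > limit:
            flagged.append(capacity)
    return sorted(flagged)

metrics = {
    "fabric-cap-dev": [35.0, 40.0, 42.0],
    "fabric-cap-prod": [85.0, 91.0, 88.0],  # sustained pressure
}
print(over_threshold(metrics))  # -> ['fabric-cap-prod']
```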


4. Security, Governance & Compliance

  • Implement security and governance using Microsoft Purview

  • Configure RBAC, row/column-level security, and access controls

  • Manage Microsoft Entra ID, service principals, and managed identities

  • Enforce data protection policies (DLP, sensitivity labels, encryption)

  • Ensure compliance with global standards (e.g., GDPR, HIPAA, ISO)

  • Implement network security (private endpoints, encryption keys, secure data sharing)

  • Monitor and respond to security incidents and vulnerabilities
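As a sketch of the RBAC configuration above: Fabric workspaces use the Admin, Member, Contributor, and Viewer roles, and access decisions reduce to checking a role's grants before an action. The permission matrix below is a simplified illustration, not the product's actual mapping.

```python
# Sketch: least-privilege role checks. Role names mirror Fabric's
# workspace roles; the permission sets are illustrative only.

ROLE_PERMISSIONS = {
    "Admin": {"read", "write", "deploy", "manage_access"},
    "Member": {"read", "write", "deploy"},
    "Contributor": {"read", "write"},
    "Viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """True if `role` grants `action` under the illustrative matrix."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("Viewer", "read"))    # -> True
print(is_allowed("Viewer", "deploy"))  # -> False
```

Centralizing checks like this makes access rules auditable, which feeds directly into the compliance documentation requirements below.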


5. Automation & Scripting

  • Develop scripts using Python, PowerShell, Bash, Azure CLI

  • Automate pipeline orchestration, monitoring, and incident response

  • Build internal tools to improve developer productivity and platform usability

  • Enable self-service capabilities while maintaining governance
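A recurring building block for the automation above is retrying transient platform-API failures (throttling, timeouts) with backoff. The sketch below uses only the standard library and a simulated flaky call; attempt counts and delays are illustrative.

```python
# Sketch: generic retry-with-exponential-backoff helper for calls to
# platform APIs that can throttle or fail transiently.
import time

def retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Simulated transient failure: succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # -> ok
```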


6. Collaboration & Support

  • Work with data engineers, analysts, and data scientists to optimize platform usage

  • Provide technical guidance on Fabric, pipelines, and best practices

  • Collaborate with security and compliance teams

  • Support incident management and root cause analysis

  • Promote DevOps culture and continuous improvement


7. Documentation & Knowledge Management

  • Maintain technical documentation, runbooks, and SOPs

  • Document architecture, deployment processes, and governance frameworks

  • Track platform inventory, dependencies, and integrations

  • Support audit and compliance documentation requirements


Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, IT, Data Science, or related field

Certifications (Preferred)

  • Microsoft Azure (Administrator / Data Engineer / Solutions Architect)

  • Microsoft Fabric and Power BI certifications


Experience

  • 5+ years in DevOps, Platform Engineering, or SRE roles

  • 3+ years of hands-on experience with Microsoft Azure and Azure DevOps

  • Strong experience with Microsoft Fabric or related platforms (Power BI, Synapse, Data Factory)

  • Expertise in Infrastructure as Code (Terraform, ARM, Ansible)

  • Strong scripting skills (Python, PowerShell, Bash, SQL)

  • Experience with CI/CD tools (Azure DevOps, GitHub Actions)

  • Experience with Docker, Kubernetes (AKS/EKS/GKE)

  • Knowledge of cloud storage (Azure Storage, AWS S3, GCP Cloud Storage) and databases

  • Experience in data lakes, data warehousing, and ETL/ELT pipelines

  • Hands-on experience with monitoring tools (Azure Monitor, Prometheus, Grafana, ELK)

  • Experience with Microsoft Fabric workspaces, Git integration, and deployment pipelines

  • Understanding of OneLake architecture and Fabric APIs

  • Experience with multi-region or multi-cloud environments (preferred)

  • Experience in humanitarian/non-profit sector (preferred)


Technical Skills

  • Data lakehouse architecture, medallion model

  • Delta Lake, Parquet, and modern data formats

  • Data pipeline orchestration and automation

  • Cloud security (IAM, encryption, network security)

  • Microsoft Entra ID and identity management

  • Monitoring, observability, and performance optimization

  • Real-time and event-driven data processing

  • MLOps and AI integration (preferred)


Core Competencies

  • Strong problem-solving and analytical skills

  • Effective communication with technical and non-technical stakeholders

  • Collaboration across cross-functional and global teams

  • Attention to detail and quality assurance

  • Continuous learning and adaptability

  • Ability to align technical solutions with business needs


Languages

  • Fluent English (mandatory)

  • Additional language (French, Spanish, or Arabic) – preferred


