Data Engineer

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, or a related field
  • 3+ years of hands-on data engineering experience
  • Proficiency in Python and SQL; strong data modeling and ETL/ELT concepts
  • Practical experience with Azure and Databricks; familiarity with Lakehouse architectures and Spark

Roles & Responsibilities

  • Design, build, and maintain robust data pipelines for ingestion, transformation, and loading into the Azure data lake, including orchestration and SQL performance tuning
  • Develop backend data APIs, support API management, and integrate with external systems and REST APIs; manage streaming data ingestion (e.g., Event Hubs)
  • Implement infrastructure as code in Azure (Terraform, YAML); contribute to CI/CD pipelines using Azure DevOps; enforce data quality checks and governance
  • Collaborate in Agile environments, troubleshoot pipelines and cloud components, and work independently to deliver across multiple projects

Job description

Job Title:

Data Engineer

Data Engineer – Remote in Mexico

We're Concentrix, the global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent.

We embrace our game-changers with open arms: people from diverse backgrounds who are curious and willing to learn. Your natural talent to help others and go beyond WOW for our customers will fit right in with what we do and who we are.

As part of the Core Engineering Services team, we are seeking a Data Engineer to support the design, development, and maintenance of scalable data pipelines and cloud-based data solutions. We are looking for a hands-on specialist focused on building, optimizing, and supporting data workflows within our Azure and Databricks environment.

Essential Functions/Core Responsibilities

1. Data Pipeline Engineering & Orchestration

  • Pipeline Development: Build and maintain robust data pipelines for ingesting, transforming, and loading data into the Azure data lake.
  • Workflow Management: Develop and support orchestration and workflow monitoring solutions to ensure reliable data delivery.
  • Performance Tuning: Write and optimize complex SQL queries; improve data performance via advanced query tuning and indexing.

2. Backend & API Integration

  • API Development: Develop backend data APIs and support API management configurations for seamless data exchange.
  • External Integration: Integrate with external systems and REST APIs to facilitate diverse data flows.
  • Streaming & Events: Manage the ingestion of streaming or event-based data (e.g., Event Hubs) into the ecosystem.

3. Infrastructure, DevOps & Quality

  • Infrastructure as Code: Implement and maintain Azure resources using Terraform and YAML-based configurations.
  • CI/CD & Versioning: Contribute to CI/CD pipelines using Azure DevOps and maintain strict version control, logging, and monitoring.
  • Data Governance: Support rigorous data quality checks and validation processes, and adhere to engineering best practices through code reviews.

Candidate Profile

  • Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • Experience: 3+ years of hands-on experience in data engineering roles.
  • Language: Fluent in English.

Technical Core

  • Language Mastery: Proficiency in Python (processing scripts/utilities) and SQL (transformation/analysis).
  • Cloud Ecosystem: Practical experience with Azure and Databricks; familiarity with Lakehouse architectures.
  • Data Modeling: Strong understanding of relational data models (Star/Snowflake, Kimball) and ETL/ELT concepts.
  • Big Data Tools: Experience working with Spark or similar big data technologies.

Professional Experience

  • Technical Problem-Solver: Strong troubleshooting skills related to pipelines, jobs, and cloud components.
  • Collaborative Engineer: Experience working in Agile environments (Jira/GitHub), partnering with technical teams to translate requirements into solutions.
  • Self-Directed: Ability to work independently with minimal oversight, meeting deadlines across multiple simultaneous projects.

Technical Stack Summary

  • Languages: Python, SQL
  • Platforms: Azure, Databricks, Azure Data Lake
  • Orchestration/DevOps: Azure DevOps, Terraform, GitHub, YAML
  • Data/Messaging: Spark, Event Hubs, RESTful APIs
  • Architecture: Lakehouse, Star/Snowflake Schema

Join us and be part of this journey towards greater opportunities and brighter futures.

Location:

MEX Work-at-Home

Language Requirements:

English

Time Type:

Full time
