
Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Proficiency in Python and PySpark
  • Expertise in Azure cloud services
  • Experience with data pipeline design
  • Strong skills in data quality management

Key responsibilities:

  • Develop and maintain data pipelines
  • Design and implement robust data models
Sequoia Global Services (Startup, 11-50 employees): http://www.sequoia-connect.com

Job description

Description

Our client represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates, and Society to Rise™.

They are a USD 6 billion company with 163,000+ professionals across 90 countries, serving 1,279 global customers, including Fortune 500 companies. They focus on leveraging next-generation technologies, including 5G, Blockchain, Metaverse, Quantum Computing, Cybersecurity, Artificial Intelligence, and more, to enable end-to-end digital transformation for global customers.

Our client is one of the fastest-growing brands and among the top 7 IT service providers globally. Our client has consistently emerged as a leader in sustainability and is recognized among the ‘2021 Global 100 Most Sustainable Corporations in the World’ by Corporate Knights.

We are currently searching for a Data Engineer:

Responsibilities:

  • Develop and maintain data pipelines for batch and streaming data ingestion and processing.
  • Design and implement robust data models and architectures that meet business requirements.
  • Utilize Azure cloud-based data platforms, including Databricks and Delta Live Tables, for data engineering tasks.
  • Ensure data quality through profiling, validation, and root cause analysis, maintaining accuracy, completeness, and consistency.
  • Apply workflow orchestration tools to automate data pipeline execution and manage dependencies.
  • Implement CI/CD practices for automated testing and deployment of data pipelines.
  • Integrate monitoring and alerting mechanisms to track pipeline health and proactively address performance issues.
  • Follow Agile development methodologies, actively participating in sprint activities and adapting to project requirements.

Requirements:

  • Proficiency in Python and PySpark with knowledge of software engineering best practices.
  • Expertise in Azure cloud services for data storage, compute, and security.
  • Experience with data pipeline design and automation.
  • Strong skills in data quality management and root cause analysis.
  • Familiarity with workflow orchestration tools.
  • Knowledge of CI/CD practices for data engineering workflows.
  • Ability to monitor, troubleshoot, and optimize data pipelines.
  • Strong understanding of Agile principles and collaboration in iterative environments.

Languages

  • Advanced Oral English.
  • Native Spanish.

Note:

  • Fully remote

If you meet these qualifications and are pursuing new challenges, start your application to join an award-winning employer. Explore all our job openings on the Sequoia Careers page: https://www.sequoia-connect.com/careers/.


Required profile


Other Skills

  • Collaboration
  • Troubleshooting (Problem Solving)
