
Data Engineer Sr. - TMD


Job Description

Our client represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates, and Society to Rise™.

They are a USD 6 billion company with 163,000+ professionals across 90 countries, serving 1,279 global customers, including Fortune 500 companies. They focus on leveraging next-generation technologies, including 5G, Blockchain, Metaverse, Quantum Computing, Cybersecurity, Artificial Intelligence, and more, to enable end-to-end digital transformation for global customers.

Our client is one of the fastest-growing brands and among the top 7 IT service providers globally. Our client has consistently emerged as a leader in sustainability and is recognized among the ‘2021 Global 100 Most Sustainable Corporations in the World’ by Corporate Knights.

We are currently searching for a Data Engineer Sr.:

Responsibilities

  • The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support data analytics, reporting, and business intelligence.
  • This role ensures data is accessible, reliable, and optimized for performance across various systems.
  • Design, develop, and maintain ETL/ELT pipelines for ingesting and transforming data from multiple sources.
  • Build and optimize data models for analytics and reporting.
  • Implement and manage data storage solutions (e.g., relational databases, data lakes, cloud storage).
  • Ensure data quality, integrity, and security across all systems.
  • Collaborate with data scientists, analysts, and business teams to understand requirements and deliver solutions.
  • Monitor and improve data pipeline performance and troubleshoot issues.
  • Stay updated with emerging technologies and best practices in data engineering and cloud platforms.

Requirements

  • Proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, SQL Server).
  • Strong programming skills in Python, PL/SQL, Java, or Scala.
  • Experience with big data technologies (e.g., Hadoop, Spark, Databricks) and cloud platforms (AWS, Azure, GCP).
  • Hands-on experience with OpenShift or other container orchestration platforms (e.g., Kubernetes).
  • Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
  • Familiarity with workflow orchestration tools (e.g., Airflow, Luigi).
  • Understanding of data governance, security, and compliance.

Preferred

  • Experience with streaming data technologies (e.g., Kafka, Kinesis).
  • Background in DevOps practices for data pipelines.
  • Knowledge of machine learning workflows and integration with data pipelines.

Languages

  • Advanced Oral English.
  • Native Spanish.

Note:

  • Fully remote

If you meet these qualifications and are pursuing new challenges, start your application to join an award-winning employer. Explore all our job openings on Sequoia’s Careers page: https://www.sequoia-connect.com/careers/.




