Data Engineer AWS

Work set-up: Full Remote
Experience: Mid-level (2-5 years)
Offer summary

Qualifications:

  • Bachelor's or higher degree in Computer Science, Data Engineering, or a related field.
  • At least 8 years of experience building large-scale data pipelines in production environments.
  • Strong proficiency with AWS services such as S3, Glue, Lambda, and Redshift.
  • Hands-on experience with Databricks, PySpark, and SQL for data processing and analytics.

Key responsibilities:

  • Design, develop, and deploy scalable data pipelines on AWS cloud infrastructure.
  • Implement data processing workflows using Databricks, Spark, and SQL to support analytics and reporting.
  • Build and maintain data orchestration workflows with Apache Airflow for automation and monitoring.
  • Collaborate with data scientists, analysts, and stakeholders to deliver data solutions.

Tiger Analytics http://www.tigeranalytics.com
1001 - 5000 Employees

Job description

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
Requirements:

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines.
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and PySpark for data processing and analytics.
  • Solid understanding of data modeling, database design principles, SQL, and Spark SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.
Benefits:

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Detail Oriented
  • Collaboration
  • Communication
  • Problem Solving
