GCP Data Engineer

Job description


GCP Data Engineer
Remote Position
Duration: 12+ months
Potential to convert to perm



Job Summary
Our client is seeking an experienced Data Engineer to design, build, and optimize scalable data pipelines and analytics platforms using Databricks and Snowflake. The ideal candidate will have strong expertise in cloud-based data engineering, distributed processing, and modern data lakehouse architectures, enabling data-driven decision-making across the organization.

Key Responsibilities
  • Design, develop, and maintain end-to-end data pipelines using Databricks, Snowflake, and cloud-native services
  • Build and optimize ETL/ELT workflows for structured and semi-structured data
  • Implement data lakehouse architectures leveraging Delta Lake and Snowflake
  • Develop and optimize Spark (PySpark/Scala) jobs for large-scale data processing
  • Ensure data quality, reliability, performance, and scalability
  • Implement data modeling techniques (star/snowflake schema, dimensional modeling) in Snowflake
  • Optimize query performance, clustering, partitioning, and cost management in Snowflake
  • Collaborate with data scientists, analysts, and business stakeholders to deliver analytics-ready datasets
  • Implement CI/CD pipelines and automation for data workflows
  • Monitor and troubleshoot data pipelines and production issues
  • Ensure data governance, security, and compliance standards are met
  • Design, develop, and maintain end-to-end data pipelines using GCP services
  • Build and optimize batch and real-time ETL/ELT pipelines
  • Develop scalable data architectures using BigQuery, Cloud Storage, and Dataflow

Required Skills & Qualifications
  • 5+ years of experience in Data Engineering or related roles
  • Strong hands-on experience with Databricks (Spark, Delta Lake, Notebooks, Jobs)
  • Strong hands-on experience with Snowflake (data modeling, performance tuning, SQL optimization)
  • Proficiency in Python (PySpark) and advanced SQL
  • Experience with cloud platforms: AWS, Azure, or GCP
  • Experience with data ingestion tools (ADF, Fivetran, Airflow, or similar)
  • Solid understanding of data warehousing concepts and lakehouse architectures
  • Experience with Git, CI/CD, and version control
  • Strong analytical and problem-solving skills

Preferred / Nice-to-Have Skills
  • Experience with Databricks Unity Catalog
  • Experience with Snowflake Streams & Tasks
  • Exposure to real-time or streaming data (Kafka, Event Hub, Kinesis)
  • Knowledge of dbt or other transformation frameworks
  • Experience with Terraform or Infrastructure as Code
  • Familiarity with data governance and lineage tools
