
Senior Data Engineer (Microsoft Fabric Engineer)


Job description

This role is for one of Weekday's clients.

Minimum Experience: 5 years

Location: Remote (India)

Job Type: Full-time

We are seeking an experienced Senior Data Engineer (Microsoft Fabric Engineer) to design, build, and scale modern cloud-native data platforms. This role focuses on developing robust ETL/ELT pipelines, data architectures, and high-performance data engineering solutions using Microsoft Fabric and Azure data technologies.

The ideal candidate will combine strong architectural thinking with hands-on engineering expertise to build scalable data pipelines, support advanced analytics, and collaborate with machine learning teams on AI-driven data workflows. The role requires deep experience with Databricks, Spark, Delta Lake, Python, and SQL, along with modern data orchestration and governance practices.

Key Responsibilities

Data Architecture & Platform Design

  • Design and implement scalable cloud-native data architectures using Microsoft Fabric and Azure data services.
  • Define best practices for data governance, architecture standards, and platform scalability.
  • Build robust data models and data warehouse architectures to support analytics and AI workloads.
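The data modeling work above typically centers on dimensional designs. As a minimal sketch of a star schema for an analytics workload (table and column names are hypothetical, and SQLite stands in for a real warehouse):

```python
import sqlite3

# Hypothetical star schema: one fact table with foreign keys into
# two dimension tables, a common shape for analytics warehouses.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    region        TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date TEXT NOT NULL,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL NOT NULL
);
""")

# Analytics queries then aggregate facts by dimension attributes.
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'EMEA')")
conn.execute("INSERT INTO dim_date VALUES (20240131, '2024-01-31', 1, 2024)")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 20240131, 99.5)")
row = conn.execute("""
    SELECT c.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer c USING (customer_key)
    GROUP BY c.region
""").fetchone()
print(row)  # ('EMEA', 99.5)
```

The same shape scales to Fabric or Synapse warehouses; only the dialect and storage layer change.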

ETL/ELT Pipeline Development

  • Design and develop high-performance ETL and ELT pipelines for large-scale data processing.
  • Build and maintain data pipelines using Python and SQL to process and transform complex datasets.
  • Ensure reliability, scalability, and performance optimization across data workflows.
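The ELT pattern these bullets describe can be sketched in a few lines (names and data are hypothetical, and SQLite stands in for the Fabric/Azure engines a production pipeline would target): raw records are loaded first, then transformed inside the engine with SQL.

```python
import sqlite3

# Toy ELT: land raw rows as-is, then transform inside the engine with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount TEXT)")

# Extract + Load: raw strings land untouched in a staging table.
raw = [("u1", "10.0"), ("u1", "5.5"), ("u2", "not-a-number")]
conn.executemany("INSERT INTO raw_events VALUES (?, ?)", raw)

# Transform: cast, filter out bad rows, and aggregate, all in SQL.
conn.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(CAST(amount AS REAL)) AS total
    FROM raw_events
    WHERE amount GLOB '[0-9]*'        -- crude validity filter for the demo
    GROUP BY user_id
""")
totals = dict(conn.execute("SELECT user_id, total FROM user_totals"))
print(totals)  # {'u1': 15.5}
```

Deferring transformation until after load is what distinguishes ELT from ETL: the warehouse's own compute does the heavy lifting.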

Data Engineering & Platform Development

  • Develop and manage data engineering workflows using Databricks, Spark, and Delta Lake.
  • Implement data ingestion frameworks and support large-scale data processing environments.
  • Optimize data pipelines for performance, reliability, and cost efficiency.

Orchestration & Automation

  • Design workflow orchestration using tools such as Airflow or Azure-native orchestration services.
  • Automate data processing pipelines and maintain operational reliability across systems.
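Orchestrators such as Airflow model a pipeline as a dependency DAG and run tasks in topological order. A toy pure-Python sketch of that idea (not Airflow's actual API; task names are illustrative):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, as in an Airflow DAG.
dag = {
    "extract": set(),
    "load": {"extract"},
    "transform": {"load"},
    "publish": {"transform"},
}

results = []

def run(task: str) -> None:
    results.append(task)  # a real orchestrator would execute the task here

# Run each task only after all of its upstream dependencies have finished.
for task in TopologicalSorter(dag).static_order():
    run(task)

print(results)  # ['extract', 'load', 'transform', 'publish']
```

Airflow and Azure Data Factory add scheduling, retries, and monitoring on top of this core ordering guarantee.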

AI & Advanced Data Workflows

  • Collaborate with machine learning teams to support LLM, NLP, and AI-driven data workflows.
  • Enable feature engineering and data pipelines that support advanced analytics and AI models.
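As a small, hypothetical example of the feature engineering such collaboration produces (pure Python for illustration; real workloads would run on Spark): raw event rows are aggregated into per-user features for a downstream model.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw events: (user_id, purchase_amount).
events = [("u1", 10.0), ("u2", 3.0), ("u1", 20.0), ("u1", 30.0)]

def build_features(rows):
    """Aggregate raw events into per-user features for a downstream model."""
    by_user = defaultdict(list)
    for user_id, amount in rows:
        by_user[user_id].append(amount)
    return {
        user: {
            "n_events": len(amts),
            "avg_amount": mean(amts),
            "max_amount": max(amts),
        }
        for user, amts in by_user.items()
    }

features = build_features(events)
print(features["u1"])  # {'n_events': 3, 'avg_amount': 20.0, 'max_amount': 30.0}
```

On Spark the same aggregation would be a `groupBy` with aggregate expressions, typically materialized into a feature store.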

Governance & Best Practices

  • Establish best practices for data architecture, pipeline management, documentation, and security.
  • Ensure compliance with enterprise data governance and quality standards.

Required Skills & Experience

  • 4+ years of experience in data engineering, data architecture, or ETL development.
  • Hands-on experience with Microsoft Fabric data engineering capabilities.
  • Strong expertise in ETL/ELT development and data pipeline design.
  • Experience working with Databricks, Apache Spark, and Delta Lake.
  • Strong programming skills in Python and SQL.
  • Experience building scalable data platforms on Azure cloud environments.
  • Knowledge of data warehousing, data modeling, and large-scale data processing.
  • Familiarity with LLM/NLP workflows or AI-driven data pipelines is an advantage.
  • Bachelor’s degree in Computer Science, Information Technology, or related field preferred.

Key Skills

  • Microsoft Fabric
  • ETL / ELT
  • Data Engineering
  • Data Warehousing
  • Data Pipelines
  • Azure Data Lake
  • Data Management
  • Data Architecture
  • Azure Data Factory
  • Databricks / Spark / Delta Lake
