
Senior Data Engineer (014-851)

Roles & Responsibilities

  • Design, build, and maintain scalable data pipelines in modern lakehouse architectures
  • Develop production-ready Python and SQL code; implement ETL/ELT processes and orchestration workflows
  • Model data using medallion architecture (Bronze/Silver/Gold), star schemas, and SCDs; integrate multiple data sources
  • Deploy pipelines with CI/CD tools, collaborate with clients on requirements and architecture, monitor performance, and ensure data quality and production readiness

Requirements:

  • 5+ years of professional experience in data engineering
  • Must have experience with Databricks or Fabric and advanced SQL (CTEs, window functions, performance tuning)
  • Hands-on experience with modern data platforms (lakehouse, pipelines, distributed processing) and dimensional modelling/analytics-ready data design
  • AI proficiency: practical experience using AI in development workflows, including building AI-powered tools or automation and prompt design

Job description

Looking for Philippines-based candidates

Job Role: Senior Data Engineer

Compensation range: AUD 2,500 – 3,500 / monthly

Engagement type: Employer of Record

Work Schedule: This role is expected to align with the AU business hours (approx. 9 AM - 5 PM, Monday to Friday).

Who We Are: At Hunt St, we help Australian companies hire top remote talent in the Philippines. For this role, you will be formally employed through an Employer of Record (EOR) arrangement. We are not an outsourcing agency. All of our roles are 100% remote, so you’ll be able to work from home.

Who The Client Is: A boutique Australia-based consulting team specialising in data engineering, analytics, and AI-driven solutions for a diverse client base across industries such as financial services, healthcare, retail, and technology.

Role Overview: We’re looking for a Senior Data Engineer to join a high-performing delivery team and work on end-to-end data solutions across modern lakehouse environments.

This is a hands-on role focused on building and shipping production pipelines, not people management. You’ll work directly with clients and senior engineers, owning delivery from ingestion through to transformation and consumption.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines in modern lakehouse architectures.
  • Develop clean, efficient, and production-ready Python and SQL code.
  • Implement ETL/ELT processes, transformations, and orchestration workflows.
  • Model data using medallion architecture (Bronze/Silver/Gold), star schemas, and SCDs.
  • Integrate multiple data sources (APIs, databases, SaaS platforms, flat files).
  • Deploy pipelines using CI/CD tools and version control best practices.
  • Leverage AI tools (e.g., agent-based workflows, automation scripts) to improve delivery speed and quality.
  • Collaborate directly with clients on requirements, architecture, and delivery updates.
  • Monitor, troubleshoot, and optimise pipelines for performance and reliability.
  • Ensure data quality, integrity, and production readiness.


Required Skills and Qualifications:

  • 5+ years of professional experience in data engineering.
  • Must have experience with Databricks or Fabric.
  • Strong Python skills for production environments.
  • Advanced SQL (CTEs, window functions, performance tuning, complex joins).
  • Hands-on experience with modern data platforms (lakehouse, pipelines, distributed processing).
  • Experience with dimensional modelling and analytics-ready data design.
  • Solid understanding of CI/CD, Git workflows, and deployment practices.
  • Strong communication skills with the ability to work directly with clients.
  • Ability to translate business requirements into technical solutions.

AI Proficiency (Required)

This role requires strong, practical experience using AI in development workflows:

  • Comfortable working in modern development environments with AI-assisted tooling.
  • Proven experience building AI-powered tools, automations, or agents with real business impact.
  • Ability to use AI across the development lifecycle (design, coding, debugging, testing, documentation).
  • Strong judgment in validating and refining AI-generated outputs.
  • Experience with prompt design, context handling, and tool integrations.

Nice to Have

  • Experience with integration platforms or middleware tools.
  • Exposure to dbt or modern data transformation frameworks.
  • Experience with cloud-based data ecosystems.
  • Dashboarding or semantic modelling experience.
  • Familiarity with AI/agent frameworks or orchestration tools.
  • Previous consulting or client-facing project delivery.

Work Arrangement & Expectations:

This is a remote role set up under an Employer of Record (EOR) arrangement.

To ensure alignment and transparency, successful candidates will be expected to:

  • Disclose any existing ongoing roles or client work
  • Reflect this engagement on their LinkedIn profile
