
Senior to Expert-level Data Engineer

Requirements

  • 8+ years of professional experience in data engineering with demonstrable expertise in building and deploying scalable data solutions.
  • Expert-level proficiency in Python and SQL for scalable data transformation and strategic analysis.
  • Deep expertise in the Google Cloud Platform stack (BigQuery, Pub/Sub, Datastream, Dataflow, Dataform, Cloud Composer), experience with transactional databases such as AlloyDB and Cloud SQL, and data governance with Dataplex.
  • Ability to lead data architecture across multiple projects, influence technical direction, and collaborate with engineering, product, and science teams.

Roles & Responsibilities:

  • Design and implement a scalable GCP-native data strategy that underpins ML initiatives and enables squad-owned data infrastructure.
  • Lead the architecture and delivery of multiple data engineering projects, building reusable, high-fidelity data products for rapid iteration.
  • Architect and govern the data storage strategy within the squad, leveraging BigQuery, AlloyDB, ODS, and analytical data warehouses.
  • Collaborate with engineers, scientists, and product teams to deliver squad-owned data products in a fast-paced, iterative environment.

Job description


Title:  Staff Data Engineer (one level above senior)
Location:  100% Remote
Time Zone:  CST
Hours:  Approximately 50 hours/week
Type:  Direct hire
Salary range:  $180K – $220K
 
Job Description:  We are looking for a senior to expert-level Data Engineer who will play a key role in accelerating innovation by delivering robust data solutions. You will be responsible for designing and implementing a scalable GCP-native data strategy that underpins our machine learning initiatives and enables decentralized, squad-owned data infrastructure. Working closely with industry-leading engineers and scientists, you will help achieve large-scale behavior change through intelligent, data-powered systems. Your work will involve crafting reusable, high-fidelity data products and building infrastructure that allows for rapid iteration, continual improvement, and measurable outcomes.

*They are looking for the type of person who can lead the team and the architecture of multiple projects at once from a strategic standpoint, not just single-project work.
 
Requirements:

  • 8+ years of professional experience in data engineering or a related field, with demonstrated expertise in building and deploying scalable data solutions
  • Strong skills in critical thinking, decision making, problem-solving, and attention to detail
  • Technically skilled and able to understand the technology tradeoffs your squad will face
  • Proficient at resolving ambiguity. When there is uncertainty, you can work with squad colleagues to define the path forward
  • Able to work independently, operating without significant input or guidance
  • Expert-level proficiency in Python and SQL for scalable data transformation and strategic analysis within the squad's domain
  • Expertise in designing, building, and operating data products (not just pipelines) that deliver compounding value and adhere to domain-driven data principles
  • Ability to architect and govern the data storage strategy within the squad, strategically utilizing transactional systems (e.g., AlloyDB), Operational Data Stores (ODS), and analytical data warehouses, with a primary focus on BigQuery
  • Mastery of Google BigQuery for strategic data product development and high-volume analytical processing, coupled with deep hands-on experience integrating with transactional databases such as AlloyDB (PostgreSQL) and Cloud SQL (PostgreSQL) (see the federated query sketch after this list)
  • Experience modernizing legacy data assets and optimizing high-performance SQL/procedural logic, including exposure to proprietary SQL dialects (e.g., T-SQL, PL/pgSQL), with a demonstrated ability to migrate that logic out of operational databases and into BigQuery/Dataform
  • Extensive experience architecting ingestion strategies using native services like Pub/Sub and Datastream (CDC) for high-throughput data delivery (see the Pub/Sub and Dataflow sketches after this list)
  • Mandatory deep expertise in the native GCP data ecosystem, including Dataform for transformations, Cloud Composer (Airflow) for orchestration, and Cloud Dataflow (Apache Beam) for processing (a Composer sketch also follows this list)
  • Experience applying Dataplex features for data governance, quality, and discovery across the domain's data products
  • Proven success collaborating across engineering, product, and science teams to deliver squad-owned data products in a fast-paced, iterative environment
  • Highly motivated and organized, demonstrating an advanced ability to influence technical direction and build effective partnerships with internal and external stakeholders
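
To ground the ingestion requirement above, here is a minimal Python sketch of publishing application-emitted change events to Pub/Sub for downstream delivery. The project ID, topic, attribute, and payload fields are hypothetical placeholders, not details from this role.

```python
# A minimal Pub/Sub publishing sketch for application-emitted change events.
# The project ID, topic, attribute, and payload fields are hypothetical.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "order-events")  # hypothetical names


def publish_event(event: dict) -> None:
    """Serialize one event and publish it, blocking until it is accepted."""
    data = json.dumps(event).encode("utf-8")
    future = publisher.publish(topic_path, data, source="orders-service")
    future.result(timeout=30)


if __name__ == "__main__":
    publish_event({"order_id": 123, "status": "CREATED", "created_at": "2024-01-01T12:00:00Z"})
```

For database sources, Datastream (CDC) would typically replace a hand-rolled publisher like this, streaming change records from operational databases into BigQuery or Cloud Storage directly.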
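Next, a minimal sketch of the kind of Cloud Dataflow (Apache Beam) streaming job that could consume those events and land them in BigQuery. The subscription path, table, and schema are again hypothetical.

```python
# A minimal streaming Dataflow (Apache Beam) sketch:
# Pub/Sub subscription -> parse JSON -> append to a BigQuery table.
# The subscription path, table, and schema are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-gcp-project/subscriptions/order-events-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-gcp-project:analytics.orders_raw",
                schema="order_id:INTEGER,status:STRING,created_at:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```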
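For the orchestration side, here is a minimal Cloud Composer (Airflow) DAG sketch that schedules a daily BigQuery transformation. It assumes a recent Airflow 2.x with the Google provider installed, as shipped with Cloud Composer 2; the DAG id, dataset, tables, and SQL are hypothetical.

```python
# A minimal Cloud Composer (Airflow) DAG sketch: one daily BigQuery rollup job.
# Assumes a recent Airflow 2.x with the Google provider installed; the DAG id,
# dataset, tables, and SQL below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_rollup",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    rollup_orders = BigQueryInsertJobOperator(
        task_id="rollup_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.orders_daily AS
                    SELECT DATE(created_at) AS day, COUNT(*) AS orders
                    FROM analytics.orders_raw
                    GROUP BY day
                """,
                "useLegacySql": False,
            }
        },
    )
```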
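Finally, a minimal sketch of a BigQuery federated query against Cloud SQL (PostgreSQL) via EXTERNAL_QUERY, joined to a warehouse table from Python. The project, connection ID, datasets, and tables are hypothetical placeholders.

```python
# A minimal sketch of a BigQuery federated query against Cloud SQL (PostgreSQL)
# using EXTERNAL_QUERY, joined with a warehouse table. The project, connection
# ID, datasets, and tables are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

sql = """
SELECT w.day, w.orders, ops.open_tickets
FROM analytics.orders_daily AS w
JOIN EXTERNAL_QUERY(
  'my-gcp-project.us.cloudsql-support',               -- hypothetical connection ID
  'SELECT day, open_tickets FROM support_daily'
) AS ops
ON w.day = ops.day
"""

for row in client.query(sql).result():
    print(row.day, row.orders, row.open_tickets)
```

A federated query like this keeps operational data in the transactional system while exposing it for analysis in BigQuery; AlloyDB offers an equivalent connection type.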
