
Senior Data Engineer

Key Facts

Remote From: 
Full time
Senior (5-10 years)
English

Other Skills

  • Collaboration
  • Communication
  • Mentorship

Job description

Bridgeway is seeking a Senior Data Engineer to design, develop, and maintain our data warehouse infrastructure. This role involves working closely with analysts, engineers, and other stakeholders to shape our data architecture, ensuring secure and efficient data pipelines, and enabling advanced analytics across the organization. The ideal candidate will have a strong background in data engineering, data warehousing, and ELT processes, along with a passion for optimizing data systems.

This is a remote position, with preference given to East Coast candidates.


Key Responsibilities:

  • Design, develop, and maintain a scalable lakehouse architecture, including a medallion (bronze/silver/gold) data model optimized for analytics and AI/ML consumption.
  • Design, implement, and operate ELT pipelines, including workflow orchestration, scheduling, and monitoring, to ensure reliable and scalable execution.
  • Establish data quality, testing, and observability practices, and proactively monitor and resolve data and automation issues to ensure platform reliability and trust.
  • Ensure data security and compliance, including role-based access controls, encryption, masking, and governance best practices for compliant handling of sensitive information.
  • Optimize performance of data workflows and storage for cost efficiency and speed.
  • Partner with engineers, analysts, and stakeholders to meet data needs; balance cost, performance, simplicity, and time-to-value while mentoring teams and documenting standards.
  • Provide technical leadership and mentorship to team members, guiding best practices, skill development, and cross-functional collaboration.
  • Enable AI/ML use cases through well-structured data models, feature availability, and platform integrations using tools such as Databricks Vector Search and Model Serving. 
  • Develop and maintain data pipelines using version control and CI/CD best practices in a collaborative engineering environment.
  • Collaborate within an Agile-Scrum framework and develop comprehensive technical design documentation to ensure efficient and successful delivery.  
  • Serve as a trusted expert on organizational data domains, processes, and best practices.  
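As context for candidates, the medallion (bronze/silver/gold) flow described above can be sketched in plain Python. This is a toy illustration only: a real lakehouse would implement these layers as Delta tables processed by Spark jobs, and the record shapes, field names, and function names here are hypothetical.

```python
# Toy sketch of a medallion (bronze -> silver -> gold) flow in plain Python.
# In production these layers would be Delta tables maintained by Spark jobs;
# all field and function names below are illustrative, not Bridgeway's schema.

def to_silver(bronze_rows):
    """Clean raw (bronze) records: drop malformed rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("member_id") is None or row.get("premium") is None:
            continue  # data-quality gate: reject incomplete records
        silver.append({
            "member_id": str(row["member_id"]).strip(),
            "plan": str(row.get("plan", "UNKNOWN")).upper(),
            "premium": float(row["premium"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned (silver) records into an analytics-ready summary."""
    totals = {}
    for row in silver_rows:
        totals[row["plan"]] = totals.get(row["plan"], 0.0) + row["premium"]
    return totals

bronze = [
    {"member_id": 101, "plan": "ppo", "premium": "250.00"},
    {"member_id": None, "plan": "hmo", "premium": "180.00"},  # rejected in silver
    {"member_id": 102, "plan": "ppo", "premium": "300.00"},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'PPO': 550.0}
```

The design point the sketch makes is the one the responsibilities describe: quality gates live at the bronze-to-silver boundary, so gold aggregates are built only from validated records.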


Requirements:

  • 5+ years of hands-on data engineering experience (required)
  • 3+ years of experience building and operating data pipelines on a modern lakehouse platform (e.g., Databricks with Unity Catalog, Delta Live Tables, and Asset Bundles), including data modeling, governance, and CI/CD deployment patterns (required)
  • 3+ years of experience with analytical SQL (ANSI SQL, T-SQL, Spark SQL) and Python for data engineering, including pipeline construction, transformation logic, and automation (required)
  • Strong communication skills, with the ability to collaborate with and influence engineering, analytics, and business stakeholders (required)
  • Experience with streaming and ingestion tools such as Kafka, Kinesis, Event Hubs, Debezium, or Fivetran (preferred)
  • Knowledge of DAX, LookML, or dbt; Airflow, Dagster, or Prefect; Terraform; Azure DevOps; Power BI, Looker, or Tableau; and GitHub Copilot (a plus)
  • Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred
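For reference, the "analytical SQL plus Python" combination in the requirements can be illustrated with a minimal, self-contained sketch using Python's built-in sqlite3 module. The same windowed-aggregation pattern carries over to Spark SQL on a lakehouse; the table and column names here are hypothetical.

```python
import sqlite3

# Minimal illustration of analytical SQL driven from Python, using an
# in-memory SQLite database. Table/column names are invented for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (claim_id INTEGER, plan TEXT, amount REAL);
    INSERT INTO claims VALUES
        (1, 'PPO', 120.0),
        (2, 'HMO',  80.0),
        (3, 'PPO', 200.0);
""")

# Window function: show each claim alongside its plan-level total,
# without collapsing the rows the way a GROUP BY would.
rows = conn.execute("""
    SELECT claim_id, plan, amount,
           SUM(amount) OVER (PARTITION BY plan) AS plan_total
    FROM claims
    ORDER BY claim_id
""").fetchall()

for row in rows:
    print(row)
```

Running this prints each claim with its plan total, e.g. `(1, 'PPO', 120.0, 320.0)`: the kind of transformation logic the role expects candidates to write fluently in both SQL dialects and Python.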
