
Senior Data Engineer

Requirements:

  • Strong SQL and Python skills for building and maintaining data pipelines and transformations
  • Experience designing and operating cloud-based data platforms
  • Strong experience with Snowflake and managing data warehouse performance
  • Experience integrating external systems and APIs into data pipelines

Roles & Responsibilities:

  • Design and maintain Snowflake data models and transformations to support reliable analytics workloads and optimize warehouse performance
  • Build scalable data ingestion pipelines from APIs, files, and external systems; transform data into warehouse-ready schemas
  • Implement monitoring, data quality checks, and reliability practices; expose telemetry for AI-assisted monitoring and operations
  • Collaborate with data analysts, engineers, DevOps, and business stakeholders to deliver trusted data systems and enable AI-assisted data exploration

Job description

Job Summary: 

We are looking for a highly versatile, hands-on Senior Data Engineer to help design, build, and operate a modern cloud data platform. This role sits at the intersection of data engineering, platform reliability, and systems integration. You will help evolve the data platform to support AI-assisted data operations, including building contextual data layers that allow AI agents to understand pipelines, schemas, and system health, and you will enable conversational access to data so business users and analysts can explore datasets, ask questions, and generate insights through AI-powered interfaces.

The ideal candidate enjoys solving complex data problems, building scalable pipelines, and improving the reliability and performance of data systems from source ingestion through analytics delivery. 

You will play a key role in shaping how data flows across the organization, ensuring that pipelines are reliable, data is trustworthy, and the platform can scale as new sources and use cases emerge. 

 

Problems You Will Help Solve: 

  • Integrate data from multiple external systems and ensure it flows reliably into the organization’s data platform. 

  • Build scalable pipelines that can accommodate evolving schemas, new data sources, and changing business needs. 

  • Ensure data accuracy and completeness across ingestion, transformation, and warehouse layers. 

  • Maintain a high-performing Snowflake environment that supports analytics and reporting workloads at scale. 

  • Improve the reliability, observability, and maintainability of data pipelines and data infrastructure. 

  • Continuously evolve the data platform so new integrations and data products can be delivered quickly and safely. 

  • Enable AI-assisted exploration of enterprise data by building structured metadata, lineage, and semantic layers that allow AI systems to reason about datasets and pipeline behavior. 

 

Duties/Responsibilities: 

Data Platform & Warehousing:

  • Implement modern data architecture patterns such as layered data models (e.g., raw, standardized, curated) to support scalable analytics. 

  • Build structured data models and metadata layers that provide clear context about datasets, schemas, and relationships so both users and AI systems can reliably understand and query data. 

  • Design and maintain Snowflake data models and transformations that support reliable reporting and analytics workloads. 

  • Optimize warehouse performance and manage compute usage to balance performance and cost. 

  • Support backup, recovery, and resiliency strategies across data warehouse and staging environments. 
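The layered pattern named above (raw → standardized → curated) can be sketched in miniature. The table, field names, and values below are hypothetical and purely illustrative, not Bamboo's actual schema:

```python
from datetime import date

# Raw layer: records land exactly as received from the source system.
raw_policies = [
    {"POLICY_ID": "P-001", "PREMIUM": "1200.50", "EFFECTIVE": "2024-01-15"},
    {"POLICY_ID": "P-002", "PREMIUM": "980.00",  "EFFECTIVE": "2024-02-01"},
]

def standardize(record: dict) -> dict:
    """Standardized layer: consistent column names and parsed types."""
    return {
        "policy_id": record["POLICY_ID"],
        "premium": float(record["PREMIUM"]),
        "effective_date": date.fromisoformat(record["EFFECTIVE"]),
    }

standardized_policies = [standardize(r) for r in raw_policies]

# Curated layer: a business-ready aggregate for analytics consumers.
total_premium = round(sum(p["premium"] for p in standardized_policies), 2)
```

In practice each layer would be a warehouse schema populated by transformation jobs; the in-memory version only shows the shape of the pattern.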

Data Ingestion & Transformation:

  • Design and maintain scalable pipelines that ingest data from APIs, files, and external systems into the data platform. 

  • Transform structured and semi-structured data into reliable warehouse-ready schemas. 

  • Build reusable ingestion and transformation patterns that simplify onboarding of new data sources. 
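As an illustration of the ingestion and transformation bullets above, here is a minimal sketch of flattening a semi-structured API payload into warehouse-ready rows; the payload shape, identifiers, and column names are assumptions, not a real source:

```python
import json

# Hypothetical semi-structured payload, as it might arrive from an API.
payload = json.loads("""
{
  "claims": [
    {"id": "C-100", "insured": {"name": "Acme LLC", "state": "UT"}, "amount": 2500},
    {"id": "C-101", "insured": {"name": "Beta Inc", "state": "AZ"}, "amount": 900}
  ]
}
""")

def to_row(claim: dict) -> dict:
    """Map one nested claim object onto a flat, warehouse-ready schema."""
    return {
        "claim_id": claim["id"],
        "insured_name": claim["insured"]["name"],
        "insured_state": claim["insured"]["state"],
        "claim_amount": claim["amount"],
    }

rows = [to_row(c) for c in payload["claims"]]
```

A reusable pattern would factor the mapping out per source so that onboarding a new feed means writing only a new `to_row`-style adapter.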

Platform Reliability & Infrastructure:

  • Manage and improve the cloud infrastructure that supports the data platform. 

  • Implement disaster recovery practices including automated backups and recovery validation. 

  • Maintain archival processes and support historical data recovery when needed. 

Monitoring & Data Quality:

  • Implement monitoring and alerting to ensure data pipelines operate reliably. 

  • Build automated checks to validate data accuracy, completeness, and consistency across systems. 

  • Enable AI-assisted monitoring and troubleshooting by exposing pipeline telemetry, metadata, and operational signals in a structured and accessible way. 

  • Maintain operational documentation, pipeline dependencies, and system runbooks. 
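Automated checks of the kind described above might look like the following minimal sketch; the rules, column names, and sample data are illustrative assumptions, not a prescribed framework:

```python
# Validate accuracy, completeness, and consistency for a batch of rows.
sample_rows = [
    {"policy_id": "P-001", "premium": 1200.5},
    {"policy_id": "P-002", "premium": None},
]

def run_checks(rows: list[dict]) -> list[str]:
    """Return a list of human-readable failures; empty means all checks pass."""
    failures = []
    # Completeness: the batch should not be empty.
    if len(rows) == 0:
        failures.append("completeness: zero rows ingested")
        return failures
    # Accuracy: premium must be populated on every row.
    null_premiums = sum(1 for r in rows if r["premium"] is None)
    if null_premiums:
        failures.append(f"accuracy: {null_premiums} null premium value(s)")
    # Consistency: policy_id must be unique within the batch.
    if len({r["policy_id"] for r in rows}) != len(rows):
        failures.append("consistency: duplicate policy_id values")
    return failures

failures = run_checks(sample_rows)
```

Emitting each failure as a structured record (check name, table, count, timestamp) is one way to expose the telemetry that AI-assisted monitoring would consume.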

Collaboration & Governance:

  • Partner with data analysts, engineers, DevOps, and business stakeholders to deliver trusted data systems. 

  • Support AI-assisted data exploration that allows business users and analysts to discover datasets, generate queries, and explore insights through conversational interfaces. 

  • Support security, access control, and governance practices across the data platform. 

  • Manage data movement between environments and support operational workflows for development, testing, and production systems. 

AI-Driven Data Operations:

  • Explore and implement emerging AI capabilities that assist with data discovery, pipeline monitoring, and operational troubleshooting using platform metadata and system telemetry. 

 

Required Skills/Abilities:  

  • Strong SQL and Python skills for building and maintaining data pipelines and transformations 

  • Experience designing and operating cloud-based data platforms 

  • Strong experience working with modern data warehouses such as Snowflake 

  • Experience integrating external systems and APIs into data pipelines 

  • Familiarity with modern data ingestion, transformation, and orchestration approaches 

  • Experience working with cloud infrastructure supporting data systems 

  • Understanding of database performance, query optimization, and system tuning 

  • Experience implementing monitoring, reliability, and operational practices for data pipelines 

  • Ability to design and maintain automated data validation and quality checks 

  • Familiarity with building contextual data layers (metadata, lineage, schemas, documentation, operational logs) that enable AI systems to reason about data platforms

  • Experience applying AI techniques to improve data discovery, pipeline monitoring, and analytics workflows

  • Experience enabling conversational or AI-assisted interfaces that help users explore datasets or generate analytical queries

  • Strong documentation, collaboration, and problem-solving skills 

 

Required Education and Experience: 

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field. 

  • 5+ years of experience in data engineering, data infrastructure, or similar roles. 

  • Experience delivering and maintaining data pipelines in cloud-based production environments. 

  • Track record of managing Snowflake performance and AWS-hosted systems at scale. 

  • Exposure to building AI-enabled data platforms, including semantic data layers, metadata services, or conversational analytics interfaces. 

  • Demonstrated ability to design and operate reliable data systems that support analytics and reporting. 

 

Preferred Requirements: 

  • Master’s degree in Computer Science, Information Systems, Engineering, or a related field. 

  • Experience with at least one domain-specific data model, such as P&C Insurance, CRM, or Call Center systems. 

  • Familiarity with real-time or event-driven data architectures. 

  • Experience with data governance, access control, and compliance frameworks. 

  • Exposure to infrastructure-as-code and CI/CD practices for data platforms. 

  • Relevant certifications related to cloud platforms or modern data systems. 

 

Physical Requirements:  

  • Prolonged periods of sitting or standing at a desk and working on a computer. 

 

Salary: Starting at $140,000 annually. The candidate's skills, experience, and abilities will be taken into consideration for the final offer.

 

Bamboo is committed to the principles of equal employment. We are committed to complying with all federal, state, and local laws providing equal employment opportunities, and all other employment laws and regulations.
