Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • 3-5+ years of experience in data engineering, with hands-on experience in data pipelines.
  • Strong SQL skills and experience optimizing queries for large datasets.
  • Proficiency in Python or Java for building data workflows and transformations.
  • Familiarity with AWS services and ETL/ELT systems like Airflow or Glue.

Key responsibilities:

  • Design and implement scalable data models and pipelines to support engineering and business needs.
  • Optimize and migrate data workflows to ensure seamless integration with minimal impact on revenue.
  • Collaborate with developers to create effective data schemas and storage strategies.
  • Support core business processes by collaborating with analytics and product teams for key reporting.

Protecht Insurtech: Insurance + Technology Startup https://protecht.com/
11 - 50 Employees

Job description

Protecht is reinventing refunds, striving to make every experience refundable. Our strength lies in our proprietary Software-as-a-Service (SaaS) embedded refund protection platform, which allows us to aggregate significant distribution while providing a top-tier digital purchasing experience for event organizers, booking platforms, ticketing systems, and end consumers. Our e-commerce solution is embedded in millions of carts each month and backed by A.M. Best-rated insurers.

Job Summary

Our data-driven solutions power mission-critical applications, and we need a Data Engineer to help us optimize and scale our data infrastructure. If you thrive on building robust data architectures, designing effective schemas, ensuring high-performance data pipelines, and empowering business teams with the right data solutions, we want you on our team.

As a Data Engineer at Protecht, you’ll work closely with both engineering and business teams. Your work will help:

  • Developers optimize data storage and retrieval
  • Migrate key applications to separate data warehouses to prevent revenue-impacting disruptions
  • Implement efficient data pipelines that transform and transfer data in near real time
  • Support reporting and analytics capabilities that promote data-driven insights across the business

As we scale, you’ll play a crucial role in ensuring our data systems remain performant, reliable, and accessible, both for our internal teams and external partners.

Responsibilities
  • Design & Implement Data Solutions – Architect scalable, efficient, and well-structured data models and pipelines to support both engineering and business needs.
  • Optimize & Migrate Data Workflows – Develop ETL/ELT pipelines to migrate critical applications to separate data warehouses, ensuring seamless integration and minimal impact on live revenue-generating applications.
  • Collaborate with Developers – Work closely with engineers to design and implement the most effective data schemas, storage strategies, and indexing techniques for performance optimization.
  • Develop & Maintain Pipelines – Build batch and near real-time data pipelines to support various applications, ensuring efficient data transformation and storage.
  • Support Core Business Processes – Collaborate with analytics and product teams to support key reporting across multiple business functions.
  • Ensure Data Quality & Governance – Design frameworks for data validation, integrity, and compliance (including SOC 2 and PCI) to ensure high-quality, accurate, and secure data.
  • Performance Optimization & Troubleshooting – Analyze and improve query performance, database efficiency, and system scalability while troubleshooting complex data issues.
  • Support & Scale Data Infrastructure – Help establish best practices, automation, and monitoring strategies to ensure our data architecture scales with the company’s growth.
Requirements
  • 3-5+ years of experience in data engineering, with hands-on experience designing and managing data pipelines.
  • Strong SQL skills and experience optimizing queries for performance across large datasets.
  • Proficiency in Python or Java for building data workflows and transformations.
  • Experience with databases such as AWS Redshift, Athena, Postgres, MySQL, DynamoDB, or other modern data storage solutions.
  • Knowledge of ETL/ELT systems like Airflow, Glue, Spark, dbt, or similar tools.
  • Familiarity with AWS services (Lambda, Step Functions, Fargate, S3, etc.) for scalable data infrastructure.
  • Experience working with message queues and event-driven architectures (Kafka, Kinesis, SQS, etc.).
  • Understanding of data modeling, indexing strategies, and schema design to optimize for various application use cases.
  • Experience with BI tools such as Domo, Tableau, Looker, or similar is a plus.
  • Strong problem-solving skills and a proactive mindset for improving data workflows.
  • Excellent communication skills to collaborate with engineers, product teams, and business stakeholders.
  • This is a remote position; occasional travel to Protecht Hubs in Phoenix, San Francisco, Denver, Los Angeles, Austin, or Chicago may be required.
Why Join Protecht?
  • Fully Remote Working Environment
  • Competitive Salary and Equity Opportunities
  • Unlimited Paid Time Off
  • Medical, Dental, and Vision Benefits
  • Annual Bonus Program
  • 401(k) Matching
  • $100/month for Event Ticket Purchase
  • Company-Sponsored Events

Required profile

Experience

Industry:
Insurtech: Insurance + Technology
Spoken language(s):
English

Other Skills

  • Communication
  • Problem Solving
