With over $3 billion in experience bookings, the Peek.com platform combines powerful business software with an award-winning consumer marketplace for booking fun things to do like wine tours, watersports, skydiving, art classes, and more.
PS: In 2021, Peek was not only recognized with a coveted position on Forbes' America's Best Startup Employers list but also celebrated an honor from Newsweek with their Future of Travel award 🚀. And the accolades don't stop there! We're ecstatic to announce our #14 ranking on the a16z Marketplace 100 for 2023! 🎉
We are looking for our next DevOps Engineer: someone who will contribute passionately to infrastructure-as-code, accelerate development and deployment processes, increase reliability, and scale our platform as our company grows.
Our team is 100% remote; however, we prefer candidates in the same time zones as the greater United States (UTC-10 to UTC-4).
This is an on-call position and will require you to be part of an on-call schedule. We will also occasionally require you to work outside of normal business hours on infrastructure upgrades and maintenance. We are committed to working with you to maintain a healthy, balanced schedule.
We’re a small DevOps team supporting 50+ engineers, building a service-oriented architecture on top of Kubernetes and GCP. We own all aspects of the SDLC but strive to automate self-service wherever possible. Being a small team, we also practice SRE, continuously improving our observability and building nearly everything with Infrastructure-as-Code. Security and compliance best practices are integral to our workflows, ensuring systems are secure by design and meet regulatory and organizational standards. Our team is remote but highly organized to meet the demands of a fast-paced environment. Our primary business language is English, and we emphasize strong communication skills.
You are an experienced cloud engineer with 3+ years managing Google Cloud Platform (GCP) and/or Amazon Web Services (AWS), including services such as Compute Engine, Kubernetes Engine, Cloud SQL (PostgreSQL), Memorystore (Redis), Cloud DNS, Route 53, S3, IAM security, VPCs, and Security Groups. You have a strong track record of operating large-scale, high-availability, asynchronous, distributed systems, deploying and managing service-oriented architectures, and improving application performance and solving scaling challenges.
You have hands-on experience running Kubernetes in production using Helm, and you are skilled with infrastructure-as-code technologies such as Terraform or Pulumi. You understand how to design and implement robust monitoring and reporting solutions using tools like Prometheus, Grafana, or New Relic. You have a solid understanding of networking (routers, switches, load balancing, DNS, VPN, TLS). You are experienced in working with source control and CI/CD systems such as Git/GitHub, Jenkins, Codefresh, or ArgoCD.
You can code in one or more programming languages such as Python, TypeScript, or Go. You have experience with data warehousing using BigQuery or Redshift. You are securityminded and strive to ensure security and compliance best practices throughout the SDLC to meet SOC2 and PCI requirements, especially when handling PII.
You are comfortable working with serverless platforms like GCP Cloud Run and Cloud Functions. You enjoy building playbooks and mentoring others, sharing knowledge to strengthen the team as a whole.
At least 3 years of experience as a DevOps Engineer or Platform Engineer
Hands-on experience with Kubernetes, including the ability to troubleshoot cluster-related issues.
Proficiency with Infrastructure as Code (IaC) tools such as Terraform or Pulumi.
Strong scripting skills in Bash and Python, with experience writing automation scripts for CI/CD pipelines.
Experience working with a major cloud provider (AWS, GCP, or Azure), and a solid understanding of networking concepts such as VPCs, DNS, TLS, load balancing, and VPNs.
Solid understanding of the software development lifecycle (SDLC) and modern CI/CD systems such as GitHub Actions, Jenkins, Codefresh, or ArgoCD.
Experience specifically with GCP (Google Cloud Platform), where we deploy 95% of our infrastructure.
Experience with high-level programming languages such as Python, Ruby, or TypeScript.
Experience working with databases such as PostgreSQL and MongoDB
Experience working with data warehouses such as Redshift and BigQuery
Experience with caching systems such as Redis and Fastly
Experience working with serverless platforms such as GCP Cloud Run, GCP Cloud Functions, and AWS Lambda.
Meet the Recruiter: Discuss the requirements of the role and learn more about Peek’s culture
Meet the Hiring Manager
Infrastructure Challenge, followed by meeting the team
Meet a Stakeholder
Meet an Executive
References and Offer
Perks & Benefits
Peek invests in our employees’ health and wellbeing. We’ve built our benefits package around our Peeksters’ needs, including full health care, dental, and vision plans, paid parental leave, a company recharge at the end of the year, and competitive compensation packages with significant equity upside that allows you to share in Peek’s long-term success.
This link leads to the machine-readable files that are made available in response to the federal Transparency in Coverage Rule and includes negotiated service rates and out-of-network allowed amounts between health plans and healthcare providers. The machine-readable files are formatted to allow researchers, regulators, and application developers to more easily access and analyze data. Beginning on July 1, you may locate and view the UnitedHealthcare MRFs on the UnitedHealthcare public site by going to transparencyincoverage.uhc.com.