Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real time, driving efficiency and agility.
Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today’s fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com.
Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company.
But, we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives.
If this sounds right up your alley, please submit an application. We look forward to getting to know you!
Also, feel free to check out why:
Business Insider named us an “enterprise startup to bet your career on”
Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world
Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America
Quartz ranked us the #1 best company for remote workers
As a Storage Infrastructure Engineer, you will be responsible for designing, building, operating, scaling, maintaining, and evolving Workato's enterprise-grade storage infrastructure. You will work closely with all engineering and infrastructure teams at Workato, leading our global storage story across different use cases and scenarios, enterprise-grade latency-critical real-time requirements, and global scale. You will also work closely with the engineering leadership team and have a direct, long-term strategic impact on the modernization and evolution of the Workato architecture.
You are an expert in a range of distributed storage systems: you know how to use them effectively and you understand their internals and the principles behind them. At the same time, this is a hands-on role, so we expect significant contributions to storage infrastructure as code, monitoring, continuous analysis of slow requests and trends, insights, reliability, and upgrades.
As with all Workato engineering roles, we expect deep knowledge and understanding of computer systems, a genuine desire to go down to the lowest level of detail, and troubleshooting with modern techniques and tools. Practical experience with modern infrastructure tooling and principles such as IaC is required, as is experience running distributed storage systems in the cloud (AWS preferred) and in container orchestration systems (Kubernetes is mandatory) in a highly reliable, highly scalable manner. We also expect a strong understanding of reliability and availability expressed in nines for different use cases, along with the architecture and design patterns and operational procedures appropriate to each level (for example, 99.99% availability leaves a budget of roughly 52 minutes of downtime per year). Knowledge and experience of storage security, auditability, and compliance requirements are highly desirable.
Workato's storage layer is our most mission-critical and most heavily loaded distributed infrastructure. It is built on mature, industry-leading technologies such as PostgreSQL, Redis, ClickHouse, and MySQL/MariaDB, deployed either self-hosted or vendor-managed (e.g., AWS Aurora).
We are currently scaling, upgrading, automating, securing, and modernizing the storage layer to meet the requirements of a rapidly growing business, including:
Automate storage maintenance to fully eliminate manual work
Support zero-downtime DB upgrades
Support massive compute scale: thousands of network clients (up to a hundred applications)
Bring in proper observability and monitoring
Scale storage 5-10x in the short term
Redesign the storage architecture for longer-term scalability of up to 100x
Keep costs under control in a predictable manner
Take ownership of the distributed storage story and become the center of distributed storage expertise
5+ years of verifiable work experience deploying and supporting highly scalable, distributed, enterprise-grade in-memory data stores.
Production experience operating, maintaining, troubleshooting, and scaling clustered Redis environments such as ElastiCache, Valkey, Redis Enterprise, or self-managed Redis clusters with high-availability configurations
Strong knowledge and understanding of the Valkey/Redis ecosystem, including Redis modules, Sentinel, Redis Cluster, client libraries, and complementary technologies such as Envoy for Redis proxy capabilities
Experience with Redis memory optimization, eviction policies, data expiration strategies, and overall cost optimization and capacity planning
Experience with zero-downtime major version upgrades and migrations of heavily utilized Redis instances supporting critical application workloads
Experience with vendor-specific implementations of Redis such as AWS ElastiCache, Azure Cache for Redis, and Redis Enterprise
Understanding of compliance certification, auditability, security, authentication methods, and access controls for Redis systems
Experience increasing reliability and availability for Redis deployments through architecture redesign, proper sharding strategies, persistence configuration, and implementation of Redis Cluster or Redis Sentinel
Experience building proper observability, monitoring, alerting, and logging for Redis health. Ability to troubleshoot performance bottlenecks in distributed Redis deployments, including memory fragmentation, network issues, and slow commands (a minimal health-check sketch follows this list).
Experience managing complex Redis infrastructure in the cloud (in Kubernetes clusters, AWS cloud) using Infrastructure as Code tools (Terraform is highly preferred).
Experience deploying stateful Redis instances into Kubernetes with modern tools like Kustomize, Helm, ArgoCD, etc.
Experience with AWS cloud computing (EC2, ElastiCache, EKS, Route53, VPCs, Subnets, Route Tables).
Basic knowledge of one or more high-level programming languages such as Python or Go; basic knowledge of Ruby and Redis client libraries, and readiness to read a Ruby monolith codebase
Experience implementing comprehensive performance testing methodologies for Redis deployments
Demonstrated ability to develop clear, actionable Redis best practice documentation for development teams, including usage guidelines, implementation patterns, and configuration standards.
Experience building cross-regional Disaster Recovery Redis solutions with low RTO/RPO targets (from hours to minutes). Experience designing and implementing geo-distributed Redis deployments with active-active replication.
Good communication and collaboration skills in international technology companies
Readiness to work remotely with teams distributed across the world and across time zones
Interest in modern, large-scale distributed storage technologies and architectures
Good spoken English to participate in product-related, architectural, and technical discussions
A sound balance between hands-on work and deeper analytical approaches
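To illustrate the day-to-day troubleshooting referenced above, here is a minimal Redis health-check sketch using the redis-py client; the endpoint, thresholds, and output are placeholder assumptions rather than Workato's actual setup:

# Minimal Redis health-check sketch. The endpoint and thresholds below are
# illustrative assumptions, not Workato configuration.
import redis

r = redis.Redis(host="redis.example.internal", port=6379, decode_responses=True)

# Memory pressure and fragmentation: INFO MEMORY exposes the key indicators.
mem = r.info("memory")
print(f"used_memory={mem['used_memory']} maxmemory={mem.get('maxmemory', 0)}")
frag = mem.get("mem_fragmentation_ratio", 0.0)
if frag > 1.5:  # illustrative threshold only
    print(f"warning: high memory fragmentation ratio {frag}")

# Eviction behaviour: the configured policy and the number of keys evicted so far.
policy = r.config_get("maxmemory-policy")["maxmemory-policy"]
evicted = r.info("stats").get("evicted_keys", 0)
print(f"maxmemory-policy={policy} evicted_keys={evicted}")

# Slow commands: the ten most recent SLOWLOG entries (duration is in microseconds).
for entry in r.slowlog_get(10):
    print(entry["id"], entry["duration"], entry["command"])

In practice, checks like this would feed the monitoring and alerting stack described above rather than be run by hand.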