Location: Remote
Employment Type: Full-Time
Experience Level: Senior (4+ years preferred)
Are you passionate about backend systems and large-scale data processing? Do you love crafting high-performance infrastructure that transforms massive volumes of raw data into insights?
We're looking for a Backend Data Engineer who thrives on solving deep infrastructure challenges, working with petabytes of data, and building efficient, modern data pipelines. Our stack includes cloud-native object storage; specialized databases such as ScyllaDB, RocksDB, and DuckDB; and systems programming in Python and Rust.
You'll play a key role in designing the infrastructure that powers our analytics and data delivery, balancing performance, scalability, and reliability.
Responsibilities:
Architect backend systems using the right combination of databases, storage layers, and processing frameworks
Build and maintain backend services and data pipelines with a focus on efficiency, scalability, and clean architecture
Design and optimize in-house databases and indexes, and integrate best-in-class open-source solutions
Enhance our Python-based stack through better architecture and high-performance Rust extensions
Work directly with PB-scale cloud storage, implementing efficient data access patterns to power product queries
Extract, transform, and migrate terabytes of hot data between storage systems for analytics and infrastructure upgrades
Requirements:
Strong background in backend engineering with expertise in data processing, infrastructure, and performance optimization
Deep understanding of modern cloud tooling (AWS S3, GCS, object stores, etc.)
Experience with Python (data/backend focus) and Rust (or willingness to learn and apply it)
Familiarity with distributed databases, queues, and streaming pipelines
Ability to think deeply about systems design and maintain clean, production-grade backend services
Hands-on experience working with large-scale data systems (TB to PB scale)
Tech Stack:
Languages: Python, Rust
Databases: RocksDB, ScyllaDB, DuckDB
Infrastructure: Cloud-native object storage (e.g., S3), custom accessors, high-throughput pipelines
Tooling: Job queues, indexing engines, in-house and open-source backends
Why Join Us:
Solve cutting-edge backend engineering problems at scale
Work with a team that values engineering craftsmanship and computing fundamentals
Shape the technical direction of a high-performance, data-intensive product
Contribute to a meaningful mission by transforming complex data into actionable intelligence