Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control.
Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.
Our client is a US company that specializes in online real estate marketplaces, making a difference in the way people buy and sell their homes.
For them, we’re looking for a Backend Data Engineer to build data pipelines and support our client's engineering team across their two real estate software platforms. You need to be based in Latin America.
In this role, you’ll be designing pipelines and data stores to manage terabytes of real estate data. This role will allow you to define and refine the overall data strategy for a growing tech company.
Design, build, and maintain our client's data pipelines and data warehouses for user-facing features, analytics, and business intelligence.
Develop the ETL ecosystem using tools like Python, external APIs, Airbyte, Snowflake, dbt, PostgreSQL, MySQL, and Amazon S3.
Define automated solutions to solve complex problems and better understand our clients' data, users, and market.
Assist the overall engineering team with database design and data flows.
Develop engineering best practices, documentation, and process flows that facilitate collaboration and knowledge transfer.
You will be able to lead the way in identifying, scoping, building, and deploying new data solutions (pipelines, warehousing, and other infrastructure).
It's an awesome match for those who are self-motivated and curious, with the desire to initiate and own projects from start to finish.
It's ideal for a team player who enjoys collaborating across teams to develop solutions that are the best fit for the products and their peers.
Managing high volumes of data makes this role both challenging and interesting!
2 to 5 years of experience working as a Data Engineer or Backend Engineer
Expert-level understanding of SQL-based database design and usage
Expert-level proficiency in at least one programming language (ideally Python, but PHP or JavaScript could work too)
Professional experience in high-volume ETL systems.
Effective communication with engineering peers, project managers, and business stakeholders.
An advanced degree in Computer Science, Analytics, or a related field
Hands-on experience building data solutions in at least one public cloud environment. We use AWS.
Experience using Python or dbt in a data role
Experience with a Business Intelligence tool like Tableau, Qlik, or Amazon QuickSight.
Successfully reading from external APIs and writing to internal databases
Gaining familiarity with existing data pipelines, schemas, and tooling
Improving and maturing pipelines to process event streams and dynamically update data products
Contributing to data architecture decisions across multiple teams
Owning multiple data streams that cross team boundaries
Playing a key role in shaping data strategy, reliability, and scalability across platforms
Recruiter interview (30 min)
Technical interview (30 min)
Interview with the client's Director of Engineering (60 min)
Take-home coding exercise (60 min maximum)
Interview with a small panel of leaders from the client, including the Director of Product
We strive to move efficiently from step to step so that the recruitment process can be as fast as possible.
Totally remote, full-time (40h/week)
Work hours: 8 hours per day, with US Central Time overlap (10 AM - 4 PM)
Independent contractor agreement (after the first 6-month trial, it's a long-term, no-end-date contract)
Payment in USD, biweekly or monthly - your choice