Senior Data Engineer

Remote: Full Remote
Offer summary

Qualifications:

  • Minimum 5 years of experience in data engineering roles.
  • Proficiency in Python and SQL for data pipeline development.
  • Experience with cloud platforms such as AWS, GCP, or Azure.
  • Strong understanding of data warehousing, ETL processes, and distributed systems.

Key responsibilities:

  • Design and build scalable data pipelines and real-time data streams.
  • Lead technical projects through architecture, design, and implementation.
  • Collaborate with cross-functional teams to troubleshoot data issues.
  • Monitor and analyze data flows, maintaining data quality and system health.

Loka SME http://www.loka.com/
51 - 200 Employees

Job description

Years from now, history will look back at this moment as a landmark for the human race—the moment when generative artificial intelligence began to change everything. Luckily, you happen to be living through it. Now what?

If you’re one of those rare people driven to add their intellectual energy to the relentless momentum of technological advancement, consider Loka your optimal leverage point.

Loka is a globally distributed tech consultancy based in Silicon Valley, and we’re seeking a Senior Data Engineer to join our growing team. In this role you will design, implement and scale data solutions from batch to streaming, handling everything from low-latency streams of a few megabytes to batch workloads of several petabytes. You will be an evangelist for good practices and work at the cutting edge of new technologies and architectural patterns. Ultimately we expect that you will not just fit in with our Data crew but elevate it to new heights.

We help our clients launch innovations that fix the climate, defeat cancer, improve agricultural efficiency, teach kids to read and more. We work with all kinds of companies, from Silicon Valley startups to major household brands, always guided by our fundamental belief that mission matters. We count more than 200 certified specialists, technical experts and PhDs among our colleagues: committed Lokals who contribute their brilliance not only to serving our clients but also to mentoring junior colleagues. We work entirely remotely, test new ideas in our in-house incubator and take every other Friday off (really).

To cap it off, at the end of 2024 Loka was recognized by AWS as Innovation Partner of the Year, beating more than 150,000 partners for the title.

We’re as proud of our culture as we are of our projects. You will be too. Join our team, feed your need to grow and ship projects you believe in with a team you’re proud to be a part of.

The Role

  • Build scalable production-grade data pipelines and real-time data streams.
  • Design and write scalable, cloud-ready applications.
  • Lead technical projects through architecture, design and implementation phases.
  • Collaborate with Machine Learning, Data Science, Design, Software Engineering and Business teams to triage data or ETL issues.
  • Build tests and health checks to maintain code and data quality.
  • Monitor and analyze data flowing through various systems by adding appropriate visualizations and dashboards.
  • Provide updates and offer guidance to clients.

Technical Requirements

  • Advanced Python and SQL
  • Experience in ETL design, implementation and maintenance
  • Experience delivering data-centric products on AWS, GCP or Azure
  • 5+ years of experience in a related role
  • Experience with in-memory and disk-based databases, relational and non-relational databases, and full-text search engines (e.g. MySQL, MongoDB, OpenSearch, DynamoDB; graph databases such as Neo4j a bonus), including database design, development and maintenance
  • Experience with data warehousing and multidimensional data models (columnar data modeling)
  • Working knowledge of data lakes, data warehouses and massively parallel processing
  • Strong problem-solving skills and the ability to work through ambiguity and incomplete specifications

Preferred but Not Required

  • Working knowledge of IAM, federated authentication, SSO, SAML, encryption, security, APIs, disaster recovery or backup
  • Strong background in distributed systems for large-scale data processing
  • Experience with Spark and Pandas
  • Experience with open table formats (Hudi, Delta Lake, Iceberg) in data lakehouse architectures such as Databricks
  • Experience with Infrastructure as Code and CI/CD pipelines
  • Experience with QuickSight and data visualization

Additional Requirements

  • Excellent English: as a global team, we work entirely in English for meetings, customer calls and business communications
  • CV written in English

Personality Profile

  • Curious: You strive to learn and grow, working across different industries with a modern tech stack.
  • Autonomous: You thrive in a fully remote, globally distributed team.
  • Collaborative: You enjoy working across departments.
  • Adaptable: You operate with a startup mindset and move at startup speed.

Benefits

  • Every other Friday off (26 extra days off a year)
  • Remote-first and flexible schedule
  • Explorer and Relocate programs (three months abroad or full relocation)
  • Paid sick days and local holidays
  • Business English classes
  • Access to LokaLabs™, our internal R&D incubator
  • Mental wellness and physical fitness programs
  • Continuous learning support

Your achievements matter to us! Make sure your CV, LinkedIn and GitHub profiles are up to date and accurately reflect your experience.

Required profile

Spoken language(s): English

Other Skills

  • Problem Solving
  • Adaptability
  • Collaboration
  • Curiosity
