
Senior Data Engineer

Roles & Responsibilities

  • Design, develop, and optimize ETL pipelines using Databricks, PySpark, and Python
  • Build and maintain PySpark jobs for large-scale data processing and transformation
  • Implement and manage Databricks workflows, Delta Lake operations, and notebook-based development
  • Develop Python scripts and reusable components to support data processing and automation

Requirements:

  • 5+ years of experience as a Data Engineer
  • Strong experience with Databricks (notebooks, jobs, workflows, Delta Lake, etc.)
  • Advanced hands-on experience with PySpark for batch processing and large-scale data transformations
  • Strong Python experience for automation, ETL logic, and data manipulation

Job description

At ioet, a leading software company with a talented team across LATAM, we provide Software Engineering as a service to clients worldwide. Join us for exciting professional challenges, working on projects ranging from innovative startups to globally recognized brands. Our positions are full-time, remote, and offer competitive compensation in USD.

We are looking for an experienced Senior Data Engineer with strong hands-on expertise in Databricks, PySpark, Python, and data quality initiatives. You will play a key role in designing and implementing scalable data pipelines, ensuring reliable data processing, and driving data quality across the platform.

This role also requires solid experience building services and data-driven applications using Python with Django, asynchronous task processing with Celery, caching and messaging tools, and modern databases used for both structured and unstructured data storage.

You should feel comfortable working with large datasets, optimizing distributed processing jobs, and partnering closely with stakeholders to deliver high-quality, well-structured data products that support both analytics and application-level needs.

Responsibilities:

  • Design, develop, and optimize ETL pipelines using Databricks, PySpark, and Python
  • Build and maintain PySpark jobs for large-scale data processing and transformation
  • Implement and manage Databricks workflows, Delta Lake operations, and notebook-based development
  • Develop Python scripts and reusable components to support data processing and automation
  • Build and maintain web services using Python and Django
  • Implement asynchronous task processing with Celery
  • Use Redis for caching strategies and improving application performance
  • Work with RabbitMQ for message queues and event handling
  • Ensure data quality, reliability, and documentation across the platform
  • Work with PostgreSQL as the primary relational database
  • Use MongoDB for metadata and indexing workflows
  • Operate Elasticsearch as the main search engine in the platform
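To give a flavor of the "reusable components to support data processing and automation" responsibility above, here is a minimal, hedged sketch in plain Python (all names and the record shape are hypothetical, not part of the role description):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical reusable transform step: each step is a named pure function
# applied record-by-record, so pipelines stay composable and testable.
@dataclass
class TransformStep:
    name: str
    fn: Callable[[dict], dict]

def run_pipeline(records: Iterable[dict], steps: list[TransformStep]) -> list[dict]:
    """Apply each transform step to every record, in order."""
    out = []
    for record in records:
        for step in steps:
            record = step.fn(record)
        out.append(record)
    return out

# Example steps: normalize key casing, then cast an amount field to float.
normalize = TransformStep("normalize", lambda r: {k.lower(): v for k, v in r.items()})
cast_amount = TransformStep("cast_amount", lambda r: {**r, "amount": float(r["amount"])})

cleaned = run_pipeline([{"ID": 1, "Amount": "3.50"}], [normalize, cast_amount])
# cleaned == [{"id": 1, "amount": 3.5}]
```

In a real Databricks engagement the same composable-step idea would typically be expressed over PySpark DataFrames rather than Python dicts.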

Requirements:

  • 5+ years working as a Data Engineer
  • Strong professional experience with Databricks (notebooks, jobs, workflows, Delta Lake, etc.)
  • Advanced hands-on experience with PySpark for batch processing and large-scale data transformations
  • Strong Python experience for automation, ETL logic, and data manipulation
  • Professional experience using Python with Django
  • Experience using Celery for asynchronous tasks
  • Experience with Redis for cache and performance optimization
  • Experience with RabbitMQ for queues and event-driven workflows
  • Proficiency with PostgreSQL as the main database
  • Experience working with MongoDB
  • Experience implementing Elasticsearch for search and indexing needs
  • Experience designing, implementing, and maintaining data quality frameworks, validation checks, and monitoring tools
  • Proficiency with SQL for complex transformations and optimization
  • Experience working with various stakeholders to understand the data domain and adapt it to business needs
  • Strong English communication skills – minimum B2 level
  • Send your application and CV in English (mandatory)
  • Based in Latin America
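The requirements above mention designing data quality frameworks and validation checks. A minimal sketch of that idea in plain Python (check names, record shape, and report format are all illustrative assumptions):

```python
from typing import Callable

# Hypothetical validation check: given one record, return a list of
# human-readable failure messages (empty list means the record passes).
Check = Callable[[dict], list[str]]

def not_null(field: str) -> Check:
    """Fail when the field is missing or None."""
    def check(record: dict) -> list[str]:
        return [f"{field} is null"] if record.get(field) is None else []
    return check

def in_range(field: str, lo: float, hi: float) -> Check:
    """Fail when a present value falls outside [lo, hi]."""
    def check(record: dict) -> list[str]:
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            return [f"{field}={value} outside [{lo}, {hi}]"]
        return []
    return check

def validate(records: list[dict], checks: list[Check]) -> dict:
    """Run every check on every record and collect failures into a report."""
    failures = []
    for i, record in enumerate(records):
        for check in checks:
            failures += [f"row {i}: {msg}" for msg in check(record)]
    return {"rows": len(records), "failures": failures}

report = validate(
    [{"amount": 5.0}, {"amount": None}, {"amount": 999.0}],
    [not_null("amount"), in_range("amount", 0, 100)],
)
# report["failures"] == ["row 1: amount is null",
#                        "row 2: amount=999.0 outside [0, 100]"]
```

In production, checks like these are usually run inside the pipeline (e.g. over PySpark DataFrames) and wired to monitoring, rather than over in-memory lists.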

Benefits:

  • Remote work
  • Flexible schedule
  • Collaboration with international clients
  • USD compensation
  • Paid Holidays and Vacations
  • Paid family and sick leaves
  • English classes
  • Educational and wellness bonus
  • Structured career plan with regular salary reviews
  • Emphasis on personal growth and mentorship

Are you ready to be part of the ioet journey?

Have your CV ready in English and apply now.

If you are curious to know more about our culture, technologies, and blogs, visit www.ioet.com
