
Senior Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

Strong Python and PySpark experience; expertise in Databricks and ADLS; experience with data modeling.

Key responsibilities:

  • Design, build, maintain scalable data pipelines
  • Optimize data processing for efficiency
  • Collaborate with data scientists and engineers
Globant Commerce Studio (https://www.globant.com/)
10001 Employees

Job description

Company Description

Globant’s Commerce Studio helps organizations create best-in-class commerce-enabled experiences, with engineering and design at its core. The goal is to meet the demands of tomorrow's customers, leveraging long-standing expertise with large and complex commerce transformations in B2B, B2C, and D2C domains.

As an award-winning partner of enterprise-class platforms such as Salesforce Commerce Cloud and Adobe Magento Commerce, as well as other API-first, headless-commerce surround solutions, we help clients create a competitive advantage with commerce at its core.

Our mission is to empower companies to succeed and thrive in the ever-changing digital landscape by building best-in-class future-ready digital commerce solutions globally.

Job Description
  • Design, build, and maintain scalable data pipelines using PySpark and Databricks
  • Optimize data processing and storage for maximum performance and efficiency
  • Troubleshoot and debug data-related issues, and implement solutions to prevent recurrence
  • Collaborate with data scientists, software engineers, and other stakeholders to ensure that data solutions are aligned with business goals

Qualifications
  • Strong experience with Python, PySpark, and Spark SQL
  • Clear understanding of Spark data structures: RDD, DataFrame, and Dataset
  • Expertise in Databricks and ADLS (Azure Data Lake Storage)
  • Expertise handling data types, including dictionaries, lists, tuples, sets, arrays, pandas DataFrames, and Spark DataFrames
  • Expertise working with complex data types such as structs and JSON strings
  • Clear understanding of Spark broadcast joins, repartitioning, and Bloom filter indexes
  • Experience with ADLS optimization, partitioning, shuffling, and shrinking
  • Experience with disk caching (nice to have)
  • Experience with the cost-based optimizer (nice to have)
  • Experience with data modeling, data warehousing, data lakes, Delta Lake, and ETL/ELT processes in ADF (Azure Data Factory)
  • Strong analytical and problem-solving skills
  • Excellent documentation, communication, and collaboration skills
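The complex-type handling named above can be illustrated with a small, self-contained Python sketch (standard library only; the record shape, field names, and values are invented for illustration). In PySpark, the nested dictionary below would typically map to a StructType column, and the JSON round-trip to the `to_json`/`from_json` functions:

```python
import json

# A nested record ("struct"-like), as it might arrive before being loaded
# into a Spark DataFrame. Field names and values here are hypothetical.
record = {
    "order_id": 42,
    "customer": {"name": "Ada", "tier": "gold"},  # nested struct
    "items": ["widget", "gadget"],                # array-like field
}

# Serialize the nested struct to a JSON string -- a common pattern when a
# column stores JSON text that later needs parsing on the Spark side.
payload = json.dumps(record, sort_keys=True)

# Parse it back and flatten one level, mirroring what selecting a nested
# field (e.g. customer.name) would do on a struct column.
parsed = json.loads(payload)
flat = {
    "order_id": parsed["order_id"],
    "customer_name": parsed["customer"]["name"],
    "n_items": len(parsed["items"]),
}
print(flat)
```

The same flatten step in PySpark would usually be a column selection rather than a Python loop, which keeps the work distributed across executors.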

Additional Information
  • Work with professionals who have created some of the most revolutionary solutions in their fields.
  • Make an impact. Work in large-scale projects globally.
  • Develop your career in our Studios. Each Studio represents deep pockets of expertise on the latest technologies and trends and delivers tailored solutions focused on specific challenges.
  • Develop your career within an industry or multiple industries.
  • Work in the city you want, and be nourished by cultural exchanges.
  • Be empowered to choose your career path: we have more than 600 simultaneous projects, so you can choose where and how to work.
  • Be part of an agile pod. Driven by a culture of self-regulated teamwork, each team (or POD) works directly with our customers, following a full maturity path that evolves as they increase speed, quality, and autonomy.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English

Other Skills

  • Verbal Communication Skills
  • Motivational Skills
  • Open Mindset
  • Collaboration
  • Analytical Skills
