
Data Engineer

Remote: Full Remote
Contract: 
Work from: 

Offer summary

Qualifications:

  • Bachelor’s degree in Computer Science, Information Systems, or a related field (Master’s preferred)
  • 3+ years of experience in data engineering or related fields
  • Strong programming skills in Python (PySpark) and SQL, with expertise in building and optimizing data pipelines
  • Familiarity with Azure Data Factory, Airflow, and modern data integration tools such as dbt

Key responsibilities:

  • Design, build, and maintain ETL/ELT pipelines in Databricks using PySpark, SQL, and Delta Lake.
  • Integrate structured, semi-structured, and unstructured data from multiple sources and develop data models for analytics.
  • Optimize Databricks notebooks and implement monitoring solutions for pipeline performance.
  • Collaborate with data scientists and analysts to support data-driven initiatives and ensure data governance compliance.

Yusen Logistics https://www.yusen-logistics.com
10001 Employees

Job description

The Company

Founded in 1955, Yusen Logistics is a global supply chain logistics company that provides ocean and air freight forwarding, warehousing, distribution services, and supply chain management: a seamlessly connected suite of supply chain solutions that delivers superior value, reliability, and expertise. Yusen Logistics is committed to developing employees who deliver consistent quality and service to our customers, and to providing them with the skills, training, support, and opportunities they need to be successful. As a company we’re dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world’s preferred choice.

About IT At Yusen

It’s an exciting time to join us as we’re transforming the way we deliver IT at Yusen Logistics (Europe). Digitalization, innovation and taking our IT to the next level will be core to our future success, and we’re on a journey to create one European IT organization to take on this challenge. We’re bringing together our talented IT professionals from 12 countries as one team. They’ll be working beyond geographic boundaries, with clear technical career paths and great development opportunities. Our people are the energy behind our IT and we’re committed to making IT a great place to work for everyone. Could you play a part in helping us to achieve our ambitions?

About the Job:

A key role within our Data Platform & Engineering team, responsible for designing, implementing, and maintaining robust data pipelines, data models and infrastructure to support efficient data processing, integration, and analysis on our Azure Databricks Data Platform (utilising tools such as Apache Airflow, Azure Data Factory and dbt).

ACCOUNTABILITIES:

RESPONSIBILITY AREA / KEY ACTIVITIES

Data Pipeline Development & Optimization

  • Design, build, and maintain ETL/ELT pipelines in Databricks using PySpark, SQL and Delta Lake.
  • Ensure data integrity and quality by implementing best practices and validation checks.
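
By way of illustration, a minimal, hypothetical sketch of the kind of validation check these bullets describe (plain Python; the field names and rules are illustrative, not the team's actual codebase; in a real Databricks pipeline such checks would typically run as PySpark column expressions or Delta Lake constraints):

```python
# Hypothetical data-quality gate: each rule returns True for a valid row.

def not_null(field):
    return lambda row: row.get(field) is not None

def positive(field):
    return lambda row: isinstance(row.get(field), (int, float)) and row[field] > 0

RULES = [not_null("shipment_id"), not_null("origin"), positive("weight_kg")]

def validate(rows):
    """Split rows into (valid, rejected) so bad records can be quarantined."""
    valid, rejected = [], []
    for row in rows:
        (valid if all(rule(row) for rule in RULES) else rejected).append(row)
    return valid, rejected

rows = [
    {"shipment_id": "S1", "origin": "NL", "weight_kg": 120.5},
    {"shipment_id": None, "origin": "DE", "weight_kg": 80.0},   # missing key
    {"shipment_id": "S3", "origin": "JP", "weight_kg": -4},      # bad weight
]
good, bad = validate(rows)
```

Quarantining rejected rows rather than failing the whole load is one common way to preserve pipeline throughput while keeping data integrity visible.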


Data Integration & Modelling

  • Integrate structured, semi-structured, and unstructured data from multiple sources.
  • Develop and implement data models for optimal querying and analytics.


Performance Tuning & Monitoring

  • Optimize Databricks notebooks, clusters, and jobs for performance and cost-effectiveness.
  • Implement monitoring solutions to track pipeline performance and address issues proactively.
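
As a hypothetical sketch of the proactive-monitoring idea (in practice run durations would come from the Databricks job-run API or orchestrator metadata; here they are hard-coded for illustration):

```python
import statistics

def flag_slow_runs(durations_sec, threshold=2.0):
    """Flag runs whose duration is more than `threshold` standard
    deviations above the historical mean (a simple z-score check)."""
    mean = statistics.mean(durations_sec)
    stdev = statistics.pstdev(durations_sec)
    if stdev == 0:
        return []
    return [d for d in durations_sec if (d - mean) / stdev > threshold]

history = [300, 310, 295, 305, 298, 302, 900]  # one anomalous run
slow = flag_slow_runs(history)
```

A check like this could feed an alerting channel so that pipeline regressions are addressed before downstream consumers notice.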


Collaboration & Stakeholder Engagement

  • Work closely with data scientists, analysts, and other engineers to support data-driven initiatives.
  • Communicate technical solutions effectively to both technical and non-technical audiences.


Security, Compliance & Governance

  • Implement best practices for data governance, security, and privacy within the Databricks environment.
  • Ensure alignment with organizational and industry compliance standards (e.g., GDPR).


Automation & Continuous Improvement

  • Utilize workflow orchestration tools (Airflow, ADF) and dbt for automation and efficiency.
  • Drive process improvements through CI/CD pipelines.
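
As a hypothetical illustration of a CI/CD pipeline for data workflows, a minimal Azure DevOps YAML sketch (all names, paths, and steps are assumptions, not the team's actual configuration):

```yaml
# Hypothetical Azure DevOps pipeline: install dependencies, run unit
# tests, and build/test dbt models before changes reach the workspace.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: "3.10"
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: python -m pytest tests/
    displayName: Run unit tests
  - script: dbt build --target ci
    displayName: Build and test dbt models
```

Gating merges on a pipeline like this is one common way to drive the process improvements the bullet above refers to.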


QUALIFICATIONS:

Required Level Of Education

  • Bachelor’s degree in Computer Science, Information Systems, or a related field (Master’s preferred)


Relevant Experience

  • Proven experience (3+ years) in data engineering or related fields
  • Demonstrated expertise in building and optimizing data pipelines using Databricks, PySpark, and SQL
  • Familiarity with Azure Data Factory, Airflow, or similar modern workflow orchestration tools


Required Skills

  • Strong programming skills in Python (PySpark) and SQL
  • In-depth knowledge of data lake architectures, Delta Lake, and big data frameworks (Apache Spark)
  • Strong data modelling skills in Data Vault 2.0 and Kimball methodology
  • Experience with CI/CD tools (Azure DevOps, Git) for data workflows
  • Familiarity with data warehousing, data lake architectures, and real-time data processing frameworks (preferably using Azure Databricks)
  • Familiarity with modern data integration and transformation tools (e.g., dbt)


COMPETENCES

  • Analytical Thinking: Ability to dissect complex data problems and develop scalable solutions.
  • Collaboration: Excellent team player, adept at working across multidisciplinary teams.
  • Attention to Detail: Meticulous in ensuring data quality, integrity, and security.
  • Continuous Learning: Stays updated with emerging technologies and best practices in data engineering.


What we offer.

A unique opportunity to engage, drive and develop yourself in a global company. This role comes with a great deal of freedom, responsibilities and challenges. Naturally we offer a competitive salary and benefits.

Diversity Statement.

At Yusen we are committed to fostering a working environment that embraces diversity, equity and inclusion (DE&I) for all our employees and stakeholders. We are an equal opportunity employer that recognizes the value of a diverse workforce and the creative solutions that embracing differences brings. All qualified individuals will receive consideration for employment.

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Analytical Thinking
  • Detail Oriented
  • Collaboration
