Big Data Engineer

Work set-up: Full Remote
Experience: Mid-level (2-5 years)
Offer summary

Qualifications:

  • At least 3 years of experience in Data Engineering.
  • Hands-on experience with Azure Databricks and Azure Data Factory.
  • Proficiency in SQL, PySpark, and Python for data engineering.
  • Knowledge of data modeling, source system analysis, and data visualization tools.

Key responsibilities:

  • Design and implement scalable data pipelines using Azure technologies.
  • Ensure data quality and integrity throughout the data migration process.
  • Collaborate with cross-functional teams to deliver data solutions in various domains.
  • Support the team during migration phases and stay updated with industry best practices.

Nagarro (https://www.nagarro.com)
10,001+ employees

Job description

Company Description

đŸ‘‹đŸŒ Were Nagarro.

We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 37 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

By this point in your career, it is not just about the tech you know or how well you can code. It is about what more you want to do with that knowledge. Can you help your teammates proceed in the right direction? Can you tackle the challenges our clients face while always looking to take our solutions one step further to succeed at an even higher level? Yes? You may be ready to join us.

Job Description

You will work closely with cross-functional teams to deliver high-quality solutions in domains such as Supply Chain, Finance, Operations, Customer Experience, HR, Risk Management, and Global IT.

Key Responsibilities:

  • Contribute to the technical plan for the migration, including data ingestion, transformation, storage, and access control in Azure Data Factory and the data lake.
  • Design and implement scalable and efficient data pipelines to ensure smooth data movement from multiple sources using Azure Databricks.
  • Develop scalable and reusable frameworks for ingesting data sets.
  • Ensure data quality and integrity throughout the entire data pipeline, implementing robust data validation and cleansing mechanisms.
  • Work with event-based streaming technologies to ingest and process data.
  • Provide support to the team, resolving any technical challenges or issues that may arise during the migration and post-migration phases.
  • Stay up to date with the latest advancements in cloud computing, data engineering, and analytics technologies, and recommend best practices and industry standards for implementing the data lake solution.
Qualifications

  • 3+ years of experience working in the Data Engineering field.
  • Hands-on working experience with Azure Databricks.
  • Experience in Data Modelling & Source System Analysis.
  • Familiarity with PySpark.
  • Mastery of SQL.
  • Knowledge of components: Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL.
  • Experience with the Python programming language for data engineering purposes.
  • Ability to conduct data profiling, cataloging, and mapping for technical design and construction of technical data flows.
  • Experience with data visualization/exploration tools.
  • Excellent communication skills, with the ability to effectively convey complex ideas to technical and non-technical stakeholders.
  • Strong team player with excellent interpersonal and collaboration skills.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Social Skills
  • Teamwork
  • Collaboration
  • Communication
