Data Engineer – Databricks at InOrg Global

Remote: Full Remote

Offer summary

Qualifications:

  • 3+ years of experience in data engineering with a focus on Databricks and big data tools.
  • Proficient in Python or Scala for ETL development.
  • Strong understanding of Spark, Delta Lake, and Databricks SQL.
  • Familiarity with cloud platforms like AWS, Azure, or GCP.

Key responsibilities:

  • Design, build, and maintain scalable data pipelines using Databricks and Apache Spark.
  • Integrate data from various sources into data lakes or data warehouses.
  • Collaborate with data analysts, data scientists, and business stakeholders to meet data needs.
  • Automate workflows and manage job scheduling within Databricks.

InOrg Global (Scaleup) · https://www.inorg.com/
11 - 50 Employees

Job description

Job Title: Data Engineer – Databricks

Company: InOrg Global

Type: Full-time

Experience Level: Mid to Senior

About Us

At InOrg Global, we deliver advanced data solutions that empower organizations to harness their data for strategic decision-making. We're seeking a skilled Data Engineer with hands-on Databricks experience to help build robust pipelines and support scalable analytics and machine learning initiatives.

Key Responsibilities

Design, build, and maintain scalable data pipelines using Databricks and Apache Spark (see the sketch after this list).

Integrate data from various sources into data lakes or data warehouses.

Implement and manage Delta Lake architecture for reliable, versioned data storage.

Ensure data quality, performance, and reliability through testing and monitoring.

Collaborate with data analysts, data scientists, and business stakeholders to meet data needs.

Automate workflows and manage job scheduling within Databricks.

Maintain clear and thorough documentation of data workflows and architecture.
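
To make the day-to-day concrete, here is a minimal PySpark sketch of the kind of Delta Lake pipeline described above. It is illustrative only: the source path, table path, and column names (order_id, order_ts, amount) are hypothetical, and on Databricks the spark session is supplied by the runtime.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks, `spark` already exists; this builder is only needed when
    # running locally with the delta-spark package installed.
    spark = (
        SparkSession.builder
        .appName("orders-etl")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Ingest raw JSON, apply basic quality rules, and append to a Delta table.
    raw = spark.read.json("/mnt/raw/orders/")  # hypothetical source path
    clean = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("amount") > 0)
    )
    # Delta Lake provides ACID writes and time travel, i.e. versioned storage.
    clean.write.format("delta").mode("append").save("/mnt/curated/orders")

In practice, a job like this would be scheduled through Databricks Workflows or an orchestrator such as Airflow rather than run by hand.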

Requirements

Experience: 3+ years in data engineering with strong exposure to Databricks and big data tools.

Technical Skills:

Proficient in Python or Scala for ETL development.

Strong understanding of Spark, Delta Lake, and Databricks SQL.

Familiar with REST APIs, including Databricks REST API usage (see the sketch after this requirements list).

Cloud Platform: Experience with AWS, Azure, or GCP.

Data Modeling: Familiarity with data lakehouse concepts and dimensional modeling.

Version Control & CI/CD: Comfortable using Git and pipeline automation tools.

Soft Skills: Strong problem-solving abilities, attention to detail, and teamwork.
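
As one illustration of the REST API skill noted above, the Databricks Jobs API (2.1) can trigger and monitor jobs over HTTPS. A minimal sketch follows; the workspace host, access token, and job ID are placeholders supplied via environment variables.

    import os
    import requests

    HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
    TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Trigger an existing job by ID, then poll the state of the run it started.
    resp = requests.post(f"{HOST}/api/2.1/jobs/run-now",
                         headers=HEADERS, json={"job_id": 123})  # placeholder ID
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    status = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                          headers=HEADERS, params={"run_id": run_id})
    status.raise_for_status()
    print(status.json()["state"])  # e.g. {'life_cycle_state': 'RUNNING', ...}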

Nice to Have

Certifications: Databricks Certified Data Engineer Associate/Professional.

Workflow Tools: Experience with Airflow or Databricks Workflows.

Monitoring: Familiarity with Datadog, Prometheus, or similar tools.

ML Pipelines: Exposure to MLflow or model integration in pipelines (a brief sketch follows this list).
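
For the MLflow item above, here is a minimal sketch of experiment tracking inside a pipeline step. The run name, parameter, and metric values are placeholders; on Databricks the tracking server is built in, while elsewhere MLFLOW_TRACKING_URI must point at one.

    import mlflow

    # Record a pipeline step's parameters and metrics so runs stay comparable.
    with mlflow.start_run(run_name="orders-model"):
        mlflow.log_param("max_depth", 6)   # hypothetical hyperparameter
        mlflow.log_metric("rmse", 0.42)    # hypothetical evaluation metric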

Required profile

Spoken language(s): English

Other Skills

  • Teamwork
  • Detail Oriented
  • Problem Solving
