
Data Engineer - Azure (Scala/Kafka)

Remote: Full Remote
Contract:
Experience: Senior (5-10 years)
Work from: India, Singapore, United Kingdom, United States

Offer summary

Qualifications:

  • Bachelor’s degree in Computer Science or a similar field
  • 8+ years of experience in Data Engineering, plus several years in the Analytics space
  • Proficiency in Scala, Apache Spark, Kafka, ADF, PySpark, SQL, and Python
  • Experience with Azure stack products, Delta Lake, ETL processing, building data pipelines, and harmonizing data

Key responsibilities:

  • Combine data from various sources to align data systems with business objectives
  • Build data pipelines for real-time streaming using Kafka, ADF, and API integration
  • Wrangle and transform raw data into user-friendly formats using Azure Databricks
  • Develop ingestion pipelines to handle structured and unstructured data at scale
Tiger Analytics (https://www.tigeranalytics.com/)
1001 - 5000 Employees

Job description

Description

Tiger Analytics is pioneering what AI and analytics can do to solve some of the toughest problems faced by organizations globally. We develop bespoke solutions powered by data and technology for several Fortune 100 companies. We have offices in multiple cities across the US, UK, India, and Singapore, and a substantial remote global workforce.

We are expanding our Data Engineering practice and looking for Sr. Azure Data Engineers to join our growing team of analytics experts. The right candidate will have strong analytical skills, the ability to combine data from different sources, and a drive for efficiency, aligning data systems with business goals.

This is a remote role for applicants based in the USA.

Requirements

  • Bachelor’s degree in Computer Science or a similar field
  • 8+ years of experience in Data Engineering, plus several years in the Analytics space
  • Strong proficiency in Scala; hands-on coding experience is a must
  • Strong proficiency in Kafka and ADF for data pipelines; migration experience (e.g., Azure Synapse) is a must
  • Experience with real-time streaming, Kafka, and API integration (see the sketch after this list)
  • Experience in PySpark
  • Strong proficiency in Python programming
  • Strong proficiency in writing SQL queries
  • Experience building data pipelines using the Azure stack
  • Experience using Apache Spark
  • Good working experience with Delta Lake and ETL processing
  • Prior experience working in a Unix environment
  • Experience harmonizing raw data into a consumer-friendly format using Azure Databricks
  • Experience extracting, querying, and joining large data sets at scale
  • Experience building data ingestion pipelines in Azure Data Factory to ingest structured and unstructured data
  • Experience in data wrangling; advanced analytic modeling preferred
  • Strong communication and organizational skills
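
For a concrete sense of the day-to-day work described above, below is a minimal sketch in Scala of a Spark Structured Streaming job that reads a Kafka topic and lands it in a Delta Lake table. The broker address, topic name, and storage paths are hypothetical placeholders, and the sketch assumes Spark 3.x with the Kafka and Delta Lake connectors available (as they are on Azure Databricks); it is illustrative only, not part of the role's requirements.

    // Minimal sketch: stream Kafka events into a Delta Lake table.
    // Assumes Spark 3.x with the spark-sql-kafka-0-10 and delta-spark
    // packages on the classpath; all names and paths below are placeholders.
    import org.apache.spark.sql.SparkSession

    object KafkaToDelta {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-delta")
          .getOrCreate()

        // Read the raw event stream from Kafka.
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka delivers key/value as binary; cast them to strings
        // and keep the event timestamp for downstream processing.
        val events = raw.selectExpr(
          "CAST(key AS STRING) AS key",
          "CAST(value AS STRING) AS value",
          "timestamp")

        // Write to a Delta table; the checkpoint directory gives the
        // sink fault tolerance and exactly-once semantics.
        events.writeStream
          .format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/events")
          .start("/mnt/delta/events")
          .awaitTermination()
      }
    }

On Databricks the same logic would typically run in a notebook with the SparkSession already provided, with the raw strings then parsed and harmonized into curated, consumer-friendly Delta tables.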

Benefits

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English

Other Skills

  • Analytical Skills
  • Goal-Oriented
