Databricks_Ritika_Elfonze

Work set-up: Full Remote
Contract:
Experience: Mid-level (2-5 years)
Work from:

Offer summary

Qualifications:

  • Minimum 4 years of experience in data engineering or related roles.
  • Proficiency in Python/PySpark, Scala, and SQL.
  • Experience with Spark, Spark Streaming, and Databricks platforms.
  • Knowledge of data analysis, code versioning (Bitbucket), and software development best practices.

Key responsibilities:

  • Maintain and support data applications and pipelines.
  • Develop and integrate software applications following architectural standards.
  • Collaborate with cross-functional teams including QA and Business Analysts.
  • Ensure thorough testing, documentation, and management of code releases.

CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description

Job Description:
* Maintain and support the application and develop data ingestion pipelines; a Databricks background is required.
* Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures.
* Evaluate new features and refactor existing code.
* Must be willing to flex work hours to support application launches and manage production outages when necessary.
* Understand the requirements thoroughly and in detail, and identify gaps in them.
* Perform detailed unit testing, cover negative scenarios, and document the results.
* Work with the QA and automation teams.
* Follow best practices and document processes.
* Manage code merges and releases (Bitbucket).
* Collaborate with Business Analysts, Architects, and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
* Strong data analysis skills.

Must have the following experience:
Python/PySpark
Scala
SQL
Spark
Spark Streaming
Databricks

Preferred to have the following experience:
Java
C#
Azure
Kafka
Azure Data Factory
Big Data Tool Set
Linux

Job Location – Remote
Years of experience – 4+ years


Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Collaboration
  • Problem Solving
