Minimum 4 years of experience in data engineering or related roles. Proficiency in Python/PySpark, Scala, and SQL. Experience with Spark, Spark Streaming, and Databricks platforms. Knowledge of data analysis, code versioning (Bitbucket), and software development best practices.
Key responsibilities:
Maintain and support data applications and pipelines.
Develop and integrate software applications following architectural standards.
Collaborate with cross-functional teams including QA and Business Analysts.
Ensure thorough testing, documentation, and management of code releases.
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success comes from how seamlessly we integrate with our clients.
* Maintain and support the application, including development of data ingestion pipelines; a Databricks background is required.
* Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns, taking into account critical performance characteristics and security measures.
* Evaluate new features and refactor existing code.
* Must be willing to flex work hours as needed to support application launches and manage production outages.
* Understand requirements thoroughly and in detail, and identify gaps in them.
* Ensure detailed unit testing is performed, including negative scenarios, and document the results.
* Work with the QA and automation teams.
* Follow best practices and document processes.
* Manage code merges and releases (Bitbucket).
* Collaborate with Business Analysts, Architects and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
* Apply strong data analysis skills.
Must have experience with the following:
Python/PySpark
Scala
SQL
Spark/Spark Streaming
Databricks
Preferred experience with the following:
Java, C#
Azure
Kafka
Azure Data Factory
Big Data Tool Set
Linux
Job Location – Remote
Years of experience – 4+ years
Required profile
Experience
Level of experience: Mid-level (2-5 years)
Spoken language(s):
English