Experience with large and complex data sets.
Proficiency in PySpark, SQL, and Azure Databricks.
Knowledge of database design using Microsoft SQL Server.
Ability to develop data pipelines and optimize data flow.
Key responsibilities:
Build and optimize data and data pipeline architecture.
Support cross-functional teams with data initiatives.
Ensure consistent and efficient data delivery across projects.
Work independently to support multiple data needs.
Kaarlo Training & HR Solutions Pvt. Ltd.
Hrtech: Human Resources + Technology Startup
https://www.kaarlo.in/
2 - 10 Employees
About Kaarlo Training & HR Solutions Pvt. Ltd.
A complete HR company providing solutions such as Recruitment, Training & Development, and Organization Development to companies, job seekers, colleges, students, and interns across India.
We value Learning & Innovation, which helps us serve our clients and candidates better.
The data engineer works in a variety of settings to build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret.
The ultimate goal is to make data accessible so that organizations can use it to evaluate and optimize their performance.
They will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
They will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the data delivery architecture remains optimal and consistent across ongoing projects.
They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Requirements
Demonstrated experience working with large, complex data sets and analyzing high volumes of data.
Experience using DataFrames with PySpark SQL, and working in Azure Databricks and Azure Data Factory to migrate data.
Experience creating Databricks notebooks using SQL and PySpark for data validation (a minimal sketch follows this list).
Experience in database design and development using Microsoft SQL Server, including stored procedures and functions.
Ability to develop code from scratch for given requirements.
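As a concrete illustration of the validation work described above, here is a minimal PySpark sketch in the style of a Databricks notebook cell. The table and column names (orders, order_id, amount) are hypothetical, and a local Spark session is created so the sketch is self-contained; in Databricks the spark session is already provided.

# Minimal sketch of the kind of PySpark/SQL data-validation notebook the
# role describes. Names here (orders, order_id, amount) are hypothetical.
from pyspark.sql import SparkSession

# In Databricks, `spark` already exists; built locally here for completeness.
spark = SparkSession.builder.appName("data-validation-sketch").getOrCreate()

# Stand-in for data migrated into the workspace (e.g. via Azure Data Factory).
orders = spark.createDataFrame(
    [(1, "A", 100.0), (2, "B", None), (3, "C", 250.0)],
    ["order_id", "customer", "amount"],
)

# Register the DataFrame as a temp view so it can be queried with Spark SQL.
orders.createOrReplaceTempView("orders")

# Validation checks expressed in SQL: row count, null amounts, duplicate keys.
checks = spark.sql("""
    SELECT
        COUNT(*) AS row_count,
        SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts,
        COUNT(*) - COUNT(DISTINCT order_id) AS duplicate_ids
    FROM orders
""")
checks.show()

In practice, checks like these would run against tables landed by an Azure Data Factory pipeline rather than an inline DataFrame; the temp-view pattern lets the same validation be written in SQL or PySpark interchangeably.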
Benefits
Career Growth
Remote Work
Salary: 40k - 50k per month
Required profile
Experience
Level of experience: Mid-level (2-5 years)
Industry:
Hrtech: Human Resources + Technology
Spoken language(s):
English