Requirements:
Proven experience with Azure Databricks, PySpark, and Python.
Knowledge of data lakes, data warehousing, and data modeling on Azure.
Hands-on experience with RDBMS platforms such as MySQL, MS SQL Server, or Oracle.
Familiarity with developing data pipelines and ETLs, and with extracting data from APIs and cloud services.
Key responsibilities:
Develop and maintain big data pipelines using Azure and open-source tools.
Manage data lake, Delta Lake, and data warehousing solutions on Azure.
Create and optimize data processing pipelines and ETLs with PySpark and Databricks.
Work with Azure Synapse and SQL Data Warehouse to present data securely and build data models.
Stratonik is a technology consulting company delivering innovative solutions that help our clients stay ahead of the curve. We look at problems from a human perspective, and our solutions reflect that in their ease of use.
We have a wide range of in-house capabilities, including E-commerce Solutions, Mobile Application Development, Web Application Development, Contract Staffing, and Product Development - Consulting and Execution.
We have worked extensively with large corporate entities as well as with startup visionaries, helping them develop their ideas.