Requirements:
2+ years of experience with Spark/Scala/Java
1+ year of experience with Kafka/Spark Streaming
Strong database fundamentals
Experience building ETL pipelines
Knowledge of Kubernetes and cloud deployments
Key responsibilities:
Drive insights from data for AI workflows
Build scalable and secure data infrastructure
Devise transformation systems for data stores
Streamline data access with tools and apps
Influence design with stability and scalability in mind
Niyati Tech (2-10 employees)
About Niyati Tech
Welcome to Niyati Tech, a young staffing company founded in 2022 with a mission to deliver the best tech talent to our partners. We are a highly energized, results-oriented, fast-paced organization with a culture of excellence.
For more information, please contact us at info@niyatitech.com.
Niyati Tech is:
-A platform that brings talent and recruiters together to form a wildly efficient digital talent supply chain
-A leading-edge curated talent provider ensuring expert-vetted, high-quality talent is delivered for every role
-A talent cloud solution that provides the most efficient, cost-effective way to manage all talent and power key talent initiatives such as direct sourcing and diversity hiring programs
Company Overview: Niyati Tech is a leading player in the Information Technology & Services industry, specializing in providing innovative solutions to enterprises.
Role and Responsibilities: The Data Platform Engineer at Niyati Tech will play a crucial role in driving insights from data, accelerating machine learning at scale, and building innovative AI workflows. Responsibilities include building highly scalable and secure data infrastructure, developing transformation systems for various data stores, and building tools and applications that streamline data management and access. The role also involves reviewing and influencing design and architecture with stability, maintainability, and scale in mind, identifying patterns, providing solutions to classes of problems, and handling dependencies with minimal oversight.
Candidate Qualifications: The ideal candidate should have a good understanding of distributed systems, scalability, and availability, along with at least 2 years of experience in Spark/Scala/Java, 1 year of experience with Kafka and Flink/Spark Streaming, 2 years of experience with Airflow, and experience building ETL pipelines at scale. Strong database and storage fundamentals, experience with cloud deployments, and basic working knowledge of Kubernetes are also required.
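As a minimal sketch of the kind of Kafka/Spark Streaming ETL work described above (an illustration only, with a hypothetical broker address, topic name, event schema, and output paths), a PySpark Structured Streaming job reading JSON events from Kafka and writing them to Parquet might look like this:

# Illustrative sketch: Kafka -> Spark Structured Streaming -> Parquet ETL step.
# Broker, topic, schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("example-kafka-etl").getOrCreate()

# Assumed schema of the incoming JSON events.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", LongType()),
])

# Read a stream of records from a Kafka topic and parse the JSON payload.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write parsed events to Parquet, with checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")                     # hypothetical output path
    .option("checkpointLocation", "/checkpoints/events")
    .start()
)
query.awaitTermination()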
Required Skills:
Airflow
PySpark
Spark/Scala/Java
Required profile
Experience
Level of experience: Mid-level (2-5 years)
Spoken language(s):
English