Required skills:
Proficiency in Python and PySpark.
Strong SQL skills and experience with data warehousing and data lakes.
Understanding of data models and Big Data ecosystems such as Hive and Hadoop.
Preferred: knowledge of GCP services, cloud data warehouses, distributed file systems, and DevOps practices.
Key responsibilities:
Develop and maintain data pipelines using Python/PySpark.
Manage data warehousing and data lake solutions.
Work with Big Data tools like Hive and Hadoop.
Implement CI/CD and DevOps practices for data projects.
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success stems from how seamlessly we integrate with our clients.