GCP Python_Saloni_EMIDS

Work set-up: Full Remote
Contract: 
Work from: 

Offer summary

Qualifications:

  • Proficiency in Python and PySpark.
  • Strong SQL skills and experience with data warehousing and data lakes.
  • Understanding of data models and Big Data ecosystems like Hive and Hadoop.
  • Preferred: knowledge of GCP services, cloud warehouses, distributed file systems, and DevOps practices.

Key responsibilities:

  • Develop and maintain data pipelines using Python/PySpark.
  • Manage data warehousing and data lake solutions.
  • Work with Big Data tools like Hive and Hadoop.
  • Implement CI/CD and DevOps practices for data projects.

CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description

Experience: 4+ years
Location: Remote (Bangalore, Noida, Hyderabad)
Notice period: Immediate to 15 days


JD Below:

• Mandatory skills:
o Python/PySpark
o Strong SQL skills
o Data warehousing & data lakes
o Understanding of data models
o Demonstrated knowledge of the Big Data ecosystem – Hive, Hadoop, etc.

• Preferred skills:
o Understanding of GCP services
o Cloud warehouses like BigQuery (preferred), Amazon Redshift, Snowflake, etc.
o Distributed file systems like GCS, S3, HDFS, etc.
o PySpark
o Airflow / Cloud Composer
o CI/CD and DevOps

Required profile

Experience

Spoken language(s):
English