PradeepIT Consulting Services Pvt Ltd

Databricks (Remote)

Requirements

  • Bachelor's and/or master's degree in computer science, or equivalent experience, with 6+ years of IT experience and 3+ years in data warehouse/ETL projects
  • Deep understanding of Star and Snowflake dimensional modeling and strong data management principles
  • Hands-on experience with SQL, Python, and Spark (PySpark); practical knowledge of Databricks on AWS/Azure and Delta Lake architecture
  • Experience with ETL/ELT development, including batch and streaming (e.g., Kinesis); familiarity with Kafka, the Hadoop ecosystem, and NoSQL databases

Roles & Responsibilities

  • Developing modern data warehouse solutions using Databricks and the AWS/Azure stack
  • Collaborating with DW/BI leads to understand new ETL pipeline development requirements
  • Triaging issues to identify gaps in existing pipelines and implementing fixes
  • Orchestrating data pipelines in the scheduler using Airflow
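
The batch ETL work described above can be sketched in miniature with plain Python and SQLite (all file, table, and column names here are hypothetical illustrations; a production pipeline for this role would typically run as PySpark on Databricks):

```python
# Minimal batch ETL sketch: extract from CSV, transform, load into SQLite.
# Table, column, and data values are hypothetical, for illustration only.
import csv
import io
import sqlite3

# Extract: a small in-memory CSV stands in for a source extract file.
source_csv = io.StringIO(
    "order_id,amount,currency\n"
    "1,100.5,usd\n"
    "2,250.0,eur\n"
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)"
)

# Transform + load: cast types, normalize currency codes, then bulk insert.
rows = [
    (int(r["order_id"]), float(r["amount"]), r["currency"].upper())
    for r in csv.DictReader(source_csv)
]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
# prints (2, 350.5)
```

The same extract-transform-load shape carries over to Spark: the CSV read becomes a DataFrame read, the list comprehension becomes column transformations, and the insert becomes a Delta table write.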

Job description

Roles & responsibilities

  • Developing modern data warehouse solutions using Databricks and the AWS/Azure stack
  • Providing forward-thinking solutions in the data engineering and analytics space
  • Collaborating with DW/BI leads to understand new ETL pipeline development requirements
  • Triaging issues to identify gaps in existing pipelines and fixing them
  • Working with the business to understand reporting-layer needs and developing a data model to fulfill them
  • Helping junior team members resolve issues and technical challenges
  • Driving technical discussions with client architects and team members
  • Orchestrating data pipelines in the scheduler via Airflow

Qualification & experience

  • Bachelor's and/or master's degree in computer science, or equivalent experience
  • A total of 6+ years of IT experience, with 3+ years in data warehouse/ETL projects
  • Deep understanding of Star and Snowflake dimensional modeling
  • Strong knowledge of data management principles
  • Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
  • Hands-on experience with SQL, Python, and Spark (PySpark)
  • Experience with the AWS/Azure stack
  • Desirable: ETL experience with batch and streaming (e.g., Kinesis)
  • Experience building ETL/data warehouse transformation processes
  • Experience with Apache Kafka for streaming/event-based data
  • Experience with other open-source big data products in the Hadoop ecosystem (incl. Hive, Pig, Impala)
  • Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
  • Experience working with structured and unstructured data, including imaging and geospatial data
  • Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
  • Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
  • Databricks Certified Data Engineer Associate/Professional certification (desirable)
  • Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent ongoing projects
  • Experience working in Agile methodology
  • Strong verbal and written communication skills
  • Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills:

  • Python/PySpark/Spark with Azure/AWS Databricks
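
As a concrete illustration of the Star dimensional modeling named in the qualifications, a minimal star schema can be sketched with SQLite: one fact table whose foreign keys point at denormalized dimension tables (all table and column names here are hypothetical):

```python
# Star-schema sketch: a fact table joined to two dimension tables.
# Schema and data are hypothetical, chosen only to illustrate the pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    revenue     REAL
);
INSERT INTO dim_date    VALUES (20240101, 2024, 1), (20240201, 2024, 2);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO fact_sales  VALUES (20240101, 1, 3, 30.0), (20240201, 2, 5, 75.0);
""")

# Reporting query: aggregate the fact table grouped by dimension attributes.
result = conn.execute("""
    SELECT d.year, d.month, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year, d.month
    ORDER BY d.month
""").fetchall()
print(result)
# prints [(2024, 1, 30.0), (2024, 2, 75.0)]
```

A Snowflake variant would further normalize the dimensions (e.g., splitting `category` out of `dim_product` into its own table) at the cost of extra joins in reporting queries.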

