
Python / PySpark

Roles & Responsibilities

  • Design and development on Hadoop applications
  • Hands-on PySpark job development in Python or Scala (preferred), or Java
  • Experience with Core Java, MapReduce, Hive programming, and Hive query performance concepts
  • Experience with Git repositories and source code management

Requirements:

  • Design and develop Hadoop-based applications and PySpark jobs
  • Implement MapReduce, Hive queries and optimize performance
  • Collaborate in AWS ecosystem (EC2, S3) and maintain CI/CD workflows using Git and Jenkins
  • Participate in Agile software development and deliver solutions iteratively

Job description

This is a remote position.

Primary Skills
● Design and develop Hadoop applications
● Hands-on experience developing PySpark jobs in Python or Scala (preferred), or Java
● Experience with Core Java, MapReduce programs, Hive programming, and Hive query performance concepts
● Experience with source code management using Git repositories

Secondary Skills
● Exposure to the AWS ecosystem with hands-on knowledge of EC2, S3, and related services
● Basic SQL programming
● Knowledge of Agile methodology for delivering software solutions
● Build scripting with Maven / Gradle; exposure to Jenkins


Salary: 2,000,000
