CodersBrain

Data Engineer (Tookitaki)

Roles & Responsibilities

  • Develop REST API services using Scala frameworks; troubleshoot and optimize complex Spark queries
  • Design, build, and optimize big data pipelines and architectures; model unstructured data into structured designs
  • Apply big data access and storage techniques; estimate costs based on design and development
  • Debug across server and application logs; stay organized, proactive, and collaborate effectively in a team

Experience Required

  • Scala and Spark: 2+ years of hands-on experience (including Hadoop ecosystem security, Spark on YARN, and architectural knowledge)
  • HBase and Hive: 2+ years of experience with Hadoop-based data processing
  • RDBMS (MySQL / PostgreSQL / MariaDB): 2+ years, plus familiarity with big data storage techniques
  • CI/CD: 1+ year of experience

Job description

Requirements

  • Experience developing REST API services using one of the Scala frameworks
  • Ability to troubleshoot and optimize complex queries on the Spark platform
  • Expertise in building and optimizing "big data" data/ML pipelines, architectures, and data sets
  • Knowledge of modelling unstructured data into structured designs
  • Experience with big data access and storage techniques
  • Experience estimating costs based on design and development
  • Excellent debugging skills across the technical stack above, including analyzing server logs and application logs
  • Highly organized, self-motivated, and proactive, with the ability to propose the best design solutions
  • Good time management and multitasking skills to meet deadlines, working both independently and as part of a team
Experience (Must have):
a) Scala: Minimum 2 years of experience
b) Spark: Minimum 2 years of experience
c) Hadoop: Minimum 2 years of experience (security, Spark on YARN, architectural knowledge)
d) HBase: Minimum 2 years of experience
e) Hive: Minimum 2 years of experience
f) RDBMS (MySQL / PostgreSQL / MariaDB): Minimum 2 years of experience
g) CI/CD: Minimum 1 year of experience
Experience (Good to have):
a) Kafka
b) Spark Streaming
c) Apache Phoenix
d) Caching layer (Memcache / Redis)
e) Spark ML
f) FP (Scala Cats / Scalaz)
Qualifications

Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent, with at least 2 years of experience in big data systems such as Hadoop, as well as cloud-based solutions.
