Requirements:
Bachelor's degree in Computer Science or Electronics & Telecommunication.
Minimum 2 years of experience in Big Data technologies.
Experience with distributed systems and cloud computing platforms like AWS and Azure.
Hands-on skills with Hadoop, Spark, and related big data frameworks.

Responsibilities:
Design and develop scalable big data infrastructure for real-time data processing.
Collaborate with cross-functional teams including partners and clients.
Implement big data solutions using technologies like Hadoop, Spark, and NoSQL databases.
Manage and optimize big data environments on-premises or in the cloud.
Job Description
We are making substantial investments to maintain our leadership position in the emerging digital business ecosystem as companies pivot to the “#NewAgeDELIVERY”.
A key part of this investment lies in designing systems based on Big Data Architecture to help our clients improve agility and speed to market by leveraging modern tools, techniques, and technology to deliver Cloud Native solutions.
The successful candidate will have deep experience with one or more Big Data frameworks, as well as with creating Cloud Native solutions through all phases of the software development lifecycle across several cloud providers, including AWS and Azure.
Qualifications
Role
· Develop scalable infrastructure and platforms to collect, analyze, and process large volumes of structured and unstructured data with real-time data interpretation (see the sketch after this list).
· Work closely with a wide range of teams and organizations across the company, including partners and customers.
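As an illustration of the kind of real-time pipeline this role describes, the following is a minimal sketch assuming PySpark with the Kafka connector on the classpath; the broker address, topic name, and event schema are hypothetical placeholders, not part of the actual role.

```python
# Minimal real-time processing sketch (PySpark Structured Streaming assumed).
# Broker, topic, and schema below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("realtime-events-sketch").getOrCreate()

# Hypothetical JSON event schema: device id, metric value, event time.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of raw events from Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Parse the JSON payload and compute per-device averages over 1-minute windows.
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))
windowed = (events
            .withWatermark("event_time", "5 minutes")
            .groupBy(F.window("event_time", "1 minute"), "device_id")
            .agg(F.avg("value").alias("avg_value")))

# Write running aggregates to the console; a real deployment would target
# a sink such as HDFS, a NoSQL store, or another Kafka topic.
query = (windowed.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```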
Basic Qualifications:
· Bachelor's degree in Engineering in Computer Science or Electronics & Telecommunication.
· Minimum of 2 years of Big Data experience.
Preferred Qualifications:
· 5+ years of software development experience using multiple programming languages. Experience building large-scale distributed data processing systems/applications or large-scale internet systems (cloud computing).
· Strong foundational knowledge of and experience with distributed systems and computing systems in general. Hands-on engineering skills.
· Should be able to design a big data solution and determine how it can be delivered using big data technologies such as Hadoop HDFS, MapReduce, Hive, AWS EMR, MongoDB, Airflow, Oozie, YARN, Ambari, ZooKeeper, Sqoop, BIRT, or other big data frameworks, covering the full life cycle of a Hadoop solution (an illustrative sketch follows this list).
· Hands-on experience with Hadoop applications (e.g. administration, configuration management, monitoring, debugging, and performance tuning).
· Firm understanding of major programming/scripting languages and related technologies such as Java, Linux, Ruby, Kafka, Camunda, Python and/or R, and shell scripting.
· Should be able to build a big data environment on-premises or in the cloud.
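As an illustration of the delivery side of such a solution, here is a minimal sketch of batch orchestration assuming Apache Airflow 2.x on a cluster where the HDFS and Hive CLIs are available; the DAG id, paths, table names, and schedule are hypothetical placeholders.

```python
# Minimal orchestration sketch (Apache Airflow 2.x assumed): a daily DAG that
# ingests files into HDFS and then runs a Hive aggregation over that partition.
# All names and paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_and_aggregate",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Copy the day's raw files from a landing directory into HDFS.
    ingest = BashOperator(
        task_id="ingest_to_hdfs",
        bash_command="hdfs dfs -put -f /data/landing/{{ ds }}/*.json /raw/events/{{ ds }}/",
    )

    # Run a Hive query that aggregates the newly ingested partition.
    aggregate = BashOperator(
        task_id="hive_aggregate",
        bash_command=(
            "hive -e \"INSERT OVERWRITE TABLE daily_summary PARTITION (ds='{{ ds }}') "
            "SELECT device_id, avg(value) FROM raw_events WHERE ds='{{ ds }}' GROUP BY device_id\""
        ),
    )

    # Ingestion must finish before the aggregation runs.
    ingest >> aggregate
```

The same flow could equally be expressed with Oozie or another scheduler from the list above; Airflow is used here only because it appears among the named technologies.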