Will be responsible for the design and implementation of Big Data storage, processing and analytics features and services over Huawei’s public cloud.
Capable of handling software functions over very large volumes of data stored in distributed storage systems.
Must have the ability to understand the system architecture of the product/platform in depth.
Create solutions for module architecture and performance optimization, and participate in the rapid iteration and development of products.
Write technical specs and internal documentation to share information with others.
Understand and enable better integration and provisioning.
Guide the team working on Big Data kernel projects.
Solve customer bugs/issues.
Experience & Skills required:
5+ years of relevant experience. Very good understanding of, and experience with, the Big Data ecosystem. Business development experience. System engineering and product/platform architecture experience.
Expert in Core Java. Hands-on with design and implementation.
Experienced in open-source-based commercial project delivery processes and engineering methods.
Excellent analytical and problem-solving skills.
Familiarity with the Big Data open-source ecosystem, including an understanding of Apache Hadoop, Spark, HBase, and Hive.
Working experience contributing to open-source Big Data components; recognized as a PMC member, committer, or contributor in the Apache community.
Should have a strong will and passion to learn the Big Data and cloud computing domains and to grow in a technical career.
Strong team spirit, excellent coordination and communication skills.
Desirable
Experience with large-scale, distributed systems architecture, design, and development.
Experience with the internals of at least one Big Data component: Spark, Presto, Hadoop, HBase, Hive, Kafka, or ES. Preferably has contributed to a component's design and code in the open-source community.
Required profile
Experience
Level of experience: Senior (5-10 years)
Industry :
Telecommunication Services
Spoken language(s):
English