Solid computer science fundamentals, excellent problem-solving skills, and a strong understanding of distributed computing principles.
At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.
Expert SQL knowledge and deep experience working with relational and NoSQL databases.
Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.
Experience with stream-processing systems (e.g., Storm, Spark Streaming), big data querying tools (e.g., Pig, Hive, Spark), and data serialization frameworks (e.g., Protobuf, Thrift, Avro).
Bachelor’s or Master’s degree in Computer Science or a related field from a top university.
Able to work within the GMT+8 time zone.
Required profile
Experience
Level of experience: Mid-level (2-5 years)
Industry: Management Consulting
Spoken language(s): English