Advanced Java proficiency with 4-5 years in Java and 2-3 years in Spark; hands-on experience in building complex data platforms
Experience with microservices architecture and Spring Boot
Strong SQL writing skills and hands-on experience with AWS services (Glue, EMR, Lambda, Kinesis, SQS, SNS)
Experience with big data technologies (Spark, EMR, Hadoop, Hive) and NoSQL databases (DynamoDB, DocumentDB, MongoDB)
Responsibilities:
Design, develop, and maintain Java Spark-based data processing pipelines and microservices in AWS
Build scalable, highly available data platforms and data ingestion/processing pipelines for large data sets
Collaborate with data engineers and architects to implement real-time data ingestion and processing using AWS services (Kinesis, Lambda, SQS/SNS)
Optimize performance, write efficient queries, and ensure alignment with architectural standards and CI/CD practices
Job description
This is a remote position.
Key Points:
Must Have:
● Advanced Java proficiency
● Microservices
● Spring Boot
● Writing SQL queries (proficient)
● AWS
● 4-5 years of experience in Java, 2-3 years in Spark
Good to Have:
● Unix shell scripting
JD:
● Minimum of 8 years of experience in building complex data platforms and data engineering solutions
● Minimum of 6 years of hands-on experience in the architecture and development of data solutions in an AWS environment using AWS services
● Experience with big data technologies such as Spark, EMR, Hadoop, Hive
● Experience programming with at least one modern language such as Scala, Java, Python
● Hands-on experience with NoSQL databases such as DynamoDB, DocumentDB, MongoDB
● Hands-on experience implementing AWS Glue, EMR, Lambda functions, SQS, SNS
● Experience with real-time data ingestion and processing in AWS, especially using services like AWS Kinesis
● AWS certification is preferred
● Experience building/operating highly available, distributed systems for the extraction, ingestion, and processing of large data sets
● Experience with data m