Billennium is a global IT services and solutions provider. Established in 2003, we have grown alongside the technology itself, delivering best-in-class solutions to our clients.
With 11 offices on 3 continents, our 1,800+ IT experts work in a follow-the-sun (24/7/365) model to deliver the highest-quality IT solutions and services to businesses around the globe, helping our clients build a strong competitive advantage with technology.
Billennium is driven by purpose, yet powered by technology partnerships with Microsoft, Google, AWS, Salesforce, Mulesoft, Tableau, and more. These official partnerships confirm our expertise in delivering tailor-made, multi-cloud, cutting-edge IT solutions and services.
The security and stability of our clients' businesses are crucial to us, so we have implemented several ISO standards to strengthen these vital values, holding ISO 9001, ISO 27001, and ISO 20000-1 certificates.
We serve over 117 satisfied corporate, public, and government clients around the globe, including clients in regulated industries and critical systems at the government level. You can trust us!
We are seeking a talented Big Data Developer experienced in Python, Java, or Scala to design and implement scalable big data solutions. You will build and optimize data processing pipelines and collaborate with cross-functional teams to deliver effective data architectures.
Key responsibilities:
Develop and maintain data pipelines using big data technologies (e.g., Apache Hadoop, Apache Spark).
Collaborate with data engineers and scientists to meet data requirements.
Optimize data workflows for performance and reliability.
Ensure data quality and implement data governance practices.
Document data architectures and workflows.
Qualifications:
Bachelor’s or Master’s degree in Computer Science or related field.
3+ years of experience in big data development with Python, Java, or Scala.
Strong knowledge of big data tools and frameworks.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
Excellent problem-solving and communication skills.
What we offer:
Involvement in dynamic and interesting projects.
A collaborative and supportive team culture.
A flexible work environment.
A comprehensive benefits package tailored to your preferences.
Sounds interesting? Click "Apply" for a chance to hear more!