Meta is an IT consulting company that has been in the market for 30 years, consistently helping customers overcome their IT challenges. The company provides consulting expertise to support IT strategy, software development, outsourced operations, staff augmentation, and SAP projects. With our experience, we can deploy complete agile projects with coaches and scrum masters. We will assist you in selecting the right technologies for your company's needs, aligned with your digital road map. We ensure quality with our commitment, attested by our CMMI Level 3 (version 2.0) certification.
Additionally, Gartner has recognized that Meta consistently provides quality and value to its customers. As a Silver SAP provider and an SAP partner for the last 20 years, Meta continually trains its staff to prepare them for the next big challenge. Meta implemented the first public S/4HANA system in the world and delivered the first S/4HANA implementation at a private company in Latin America. This experience gives us confidence that our teams deliver outstanding results. Meta can offer a real S/4HANA testbed with your data so you can test drive your future SAP system. Meta can support your existing SAP instances, contribute technical expertise to your transformation, or provide a complete transformation team. At Meta, we always work hard to see our clients succeed. Our purpose is to promote human growth through technology.
We are seeking a Senior Data Engineer with strong expertise in AWS and Big Data environments to lead the design and implementation of scalable Data Warehouse and Data Lake solutions. This role requires a hands-on technical leader who will serve as a reference in Data Analytics initiatives, ensuring performance, quality, and security throughout the entire data lifecycle.
Key Responsibilities
Design and implement robust Data Warehouse and Data Lake architectures;
Act as a technical leader in Data Analytics solutions;
Define and develop data repository models tailored to various business needs;
Build and maintain efficient data pipelines for ingestion, transformation, and storage;
Support and influence architectural decisions across the Big Data ecosystem;
Ensure secure, reliable, and performant data processes for multiple consumers;
Identify technical risks and provide recommendations based on support team insights;
Support performance testing and optimization initiatives;
Contribute to the migration of legacy systems to modern cloud-based solutions.
Must-Haves
Required Skills & Experience
AWS and Big Data environments: Strong expertise
Python for Data Engineering and SQL: Advanced
Hadoop, Spark (PySpark, Scala): Intermediate to advanced
DevOps/CI-CD tools and Git workflows: Familiarity