Requirements:
Minimum 3 years of experience in data integration roles.
Proficiency in PySpark, Python, and SQL/PL-SQL.
Experience working in Unix/Linux environments and with tools such as Elasticsearch, GPDB, and Oracle.
Strong understanding of Data Lake and Data Warehouse concepts, and data modeling.
Key responsibilities:
Develop and maintain data integration pipelines using PySpark and SQL.
Create source-to-target mappings and system test cases.
Support code versioning, change management, and production releases.
Collaborate effectively within a complex matrix environment.
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
We owe our success to how seamlessly we integrate with our clients.