Experience: 5+ years
- Build and optimize "big data" pipeline architectures and data sets
- Experience developing both batch and real-time workloads using Informatica Data Engineering Integration
- Extensive knowledge of designing and developing ETL packages to deliver data solutions using Native Mode and the Databricks Runtime
- Experience working with Big Data tools such as Databricks and Cloudera to build data solutions for advanced analytics
- Ability to work with multiple data sources and databases
- Advanced SQL knowledge and experience with relational databases, including query authoring (T-SQL, PL/SQL, etc.) and working familiarity with a variety of databases (SQL Server, Oracle, etc.)
- Advanced experience administering and optimizing Informatica
- Good knowledge of data lake and dimensional data modeling implementation
- Capable of collaborating with team leads to understand and contribute to the technical solution from design through implementation
- Informatica Data Engineering, DIS and MAS, Databricks, Hadoop
- Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle
- Core cloud services from at least one of the major providers (Azure, AWS, Google)
- Agile methodologies, such as Scrum
- Task tracking tools, such as TFS and JIRA