Hatigen is one of the most renowned IT consulting organizations, focused on strategy, data, transformation, and technology.
We believe Hatigen can become the most crucial part of your business journey. Our top talent, strategic minds, and diverse experience deliver cutting-edge solutions that help you achieve your business goals. We are recognized as a 360° IT innovations firm, providing integrated IT consulting services, training on the most advanced technologies by top industry experts, and IT staffing solutions. Hatigen aims to empower individuals and businesses with world-class domain knowledge and cutting-edge technologies and drive them towards success.
What will your job look like?
Design, develop, test, and maintain efficient, scalable data pipelines and ETL processes.
Translate customer requirements into effective, robust data solutions in line with the customer's Data Engineering standards.
Work with other Data Engineers and Architects to tune and optimize data pipelines/ETL processes.
Engage stakeholders effectively at all stages of the Data Engineering project lifecycle, from pre-feasibility through to delivery.
Implement processes to monitor performance and data quality.
Requirements
Must have experience developing, building, and deploying solutions based on Microsoft Azure SQL DW, T-SQL, Databricks, Data Lake, Azure Data Factory, Blob Storage, Spark/Spark SQL, Azure Logic Apps, and Azure Log Analytics.
Demonstrable experience and understanding of end-to-end data delivery techniques, including data modelling, ETL building, quality assurance, and performance tuning.
Experience designing and building cloud-based ETL/ELT pipelines and data migrations, and working with high-volume systems.
Good understanding of monitoring and tuning data pipeline workloads, preferably in ADF / Azure Synapse.
Experience building pipelines and loading data into Azure SQL Data Warehouse / Azure Synapse, or with the SQL Server BI toolset (SSRS, SSIS, SSAS).
An in-depth understanding of data management (e.g., permissions, recovery, security, and monitoring), and the ability to choose the right data stores and data processing for the job (relational and non-relational stores, batch and real-time processing).
Experience working with different data schemas (including Microsoft SQL Server, i.e., MSSQL; NoSQL; and API-based sources) is required.
Experience working with MI/BI Analysts and Visualisers to understand their requirements.
Experience with agile delivery methodologies and CI/CD practices.
Knowledge of any of the following is desirable: Teradata, Azure Pipelines, Azure DevOps, and Linux.