Design and implement end-to-end data architectures using Azure-native services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Azure Synapse Analytics, and Azure SQL.
Develop scalable data pipelines and ingestion frameworks leveraging medallion architecture principles.
Perform data discovery, profiling, and mapping across multiple source systems.
Collaborate with stakeholders to gather, analyze, and refine data requirements from diverse applications.
Translate business requirements into data definitions, source-to-target mappings, and transformation logic.
Conduct gap analysis between current and target data models and architectures.
Define and implement data quality rules, validation frameworks, and governance standards.
Design and build insurance-specific data models (policy, claims, premiums, exposure, financials).
Identify appropriate source systems for data ingestion in collaboration with business stakeholders.
Develop data integration and ingestion patterns aligned with enterprise architecture.
Design and implement data reconciliation and validation procedures.
Support and execute data migration activities.
Analyze and optimize existing data pipelines and Azure SQL databases for performance and scalability.
Contribute to operating model design, governance forums, and cross-functional alignment.
Deliver key artifacts such as data dictionaries, validation catalogs, and onboarding documentation.
Requirements
Required Skills & Experience
8–10+ years of experience in data analysis within data-centric or transformation programs.
Strong expertise in data profiling, data mapping, and source-to-target transformations.
Extensive experience working with high-volume insurance datasets (policy, premium, claims, exposure, financials).
Proven experience in designing and implementing insurance-specific data models.
Hands-on experience with Azure data services (ADLS, ADF, Azure SQL, Synapse).
Solid understanding of distributed data processing concepts.
Experience in defining and implementing data quality frameworks and governance standards.
Strong expertise in data ingestion, transformation, and orchestration frameworks.
Proven ability to design scalable, high-performance, and cost-efficient data pipelines.
Experience establishing data governance and architectural best practices.
Strong stakeholder management and communication skills.
Ability to collaborate effectively with business and engineering teams to deliver scalable data solutions.
Preferred Skills
Experience with enterprise data models and data standardization initiatives.
Prior experience working on P&C insurance cloud-based data and analytics projects.
Hands-on experience with PySpark for large-scale data processing.
Experience working with Azure Databricks and Spark-based pipelines.
Expertise in defining and enforcing best practices for Databricks workloads.
Strong analytical and problem-solving skills with a focus on scalable architecture design.
Excellent communication and cross-functional collaboration skills.
Benefits
Exavalu also promotes flexibility, adapting to the needs of employees, customers, and the business. This might mean part-time work, working outside normal 9–5 business hours, or working remotely. We also offer a welcome-back program to help individuals return to the workforce after a prolonged absence due to health or family reasons.
At Exavalu, we are committed to building a diverse and inclusive workforce. We welcome applications for employment from all qualified candidates, regardless of race, colour, gender, national or ethnic origin, age, disability, religion, sexual orientation, gender identity or any other status protected by applicable law. We nurture a culture that embraces all individuals and promotes diverse perspectives, where you can make an impact and grow your career.