Data Architect

Requirements:

  • 15+ years of overall experience
  • 10+ years of data architecture, data platform, or data warehouse experience
  • Hands-on experience with Snowflake (4+ years) and Databricks (4+ years); 5+ years combined experience
  • Proficiency with Delta Lake, MLflow, and Databricks SQL; experience managing Spark clusters and ML workflows

Roles & Responsibilities:

  • Own design and maintenance of data solutions including modeling, development, technical documentation, data diagrams and data dictionaries
  • Provide expertise in standards, architectural governance, design patterns, and evaluation of best solutions for different use cases
  • Lead the data strategy and own the vision and roadmap of data products, coordinating with stakeholders on sensitive data requirements and governance
  • Develop, maintain, and optimize data infrastructure using Delta Lake, MLflow, and Databricks SQL to enhance data management and analytics

Job description

The Role:

We are looking for a hands-on Principal Data Architect for the challenging and rewarding work of building a future-proof data architecture for one of our customers in the financial domain.

Responsibilities:

  • Own design and maintenance of all aspects of data solutions, including modeling, development, technical documentation, data diagrams, and data dictionaries.

  • Provide expertise in the development of standards, architectural governance, design patterns, and practices; evaluate the best applicable solutions for different use cases

  • Determine and develop architectural approaches and solutions, conduct business reviews, document current systems, and develop recommendations

  • Lead the data strategy and own the vision and roadmap of data products

  • Work with stakeholders to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning

  • Develop, maintain, and optimize data infrastructure using Delta Lake, MLflow, and Databricks SQL to enhance data management, processing, and analytics (see the first sketch after this list).

  • Utilize Snowflake’s features such as data sharing, zero-copy cloning, and automatic scaling to optimize data storage, accessibility, and performance. Ensure effective management of both semi-structured and structured data within Snowflake’s architecture.

  • Implement and manage data storage solutions using Amazon S3, and perform data warehousing with Amazon Redshift.

  • Design and implement data integration workflows using AWS Glue to orchestrate and automate data movement and transformation.

  • Design and implement scalable data pipelines using tools like Apache Kafka or Apache Airflow to facilitate real-time data processing and batch data workflows (see the second sketch after this list).

  • Apply advanced analytics techniques, including predictive modeling and data mining, to uncover insights and drive data-driven decision-making.
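
To make the Delta Lake and MLflow responsibilities concrete, here is a minimal sketch of writing a Delta table and tracking a run with MLflow. It assumes a Spark session with the Delta Lake package available (as on Databricks); the path, names, and values are hypothetical placeholders, not details of this role's actual stack.

    # Sketch: write a Delta Lake table and track a run with MLflow.
    # Assumes Spark with the Delta package available (e.g. on Databricks).
    # All paths and names below are hypothetical placeholders.
    import mlflow
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta_demo").getOrCreate()

    # Delta provides ACID transactions, schema enforcement, and time travel
    # on top of files in object storage.
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
    df.write.format("delta").mode("overwrite").save("/tmp/demo_delta")

    # Log parameters and metrics for this run to the MLflow tracking server.
    with mlflow.start_run(run_name="demo_run"):
        mlflow.log_param("table_path", "/tmp/demo_delta")
        mlflow.log_metric("row_count", df.count())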

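Similarly, for the pipeline bullet, here is a minimal sketch of a daily batch DAG in Apache Airflow (2.4+ syntax); the DAG id, task functions, and schedule are hypothetical placeholders.

    # Sketch: a daily batch ingestion DAG in Apache Airflow (2.4+).
    # The DAG id, task functions, and schedule are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders():
        # Placeholder: pull yesterday's orders from a source system.
        print("extracting orders...")

    def load_to_warehouse():
        # Placeholder: load the extracted batch into the warehouse.
        print("loading into warehouse...")

    with DAG(
        dag_id="daily_orders_ingestion",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_orders)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

        extract >> load  # run extract first, then load
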
Requirements:

  • Overall experience of 15+ years

  • Data architecture, data platform, or data warehouse experience of 10+ years

  • Hands-on experience with Snowflake (4+ years) and Databricks (4+ years)

  • 5+ years of combined experience across Snowflake and Databricks

  • Proficiency in features such as Delta Lake, MLflow, and Databricks SQL. Experience in managing Spark clusters and implementing machine learning workflows.

  • Solid experience with emerging and traditional data stack components, such as batch and real-time data ingestion, ETL, ELT, orchestration tools, on-prem and cloud data warehouses, Python, and structured, semi-structured, and unstructured databases

  • Knowledge of Snowflake features like data sharing, zero-copy cloning, and automatic scaling; experience working with Snowflake’s architecture for semi-structured and structured data (see the sketch after this list).

  • Experience with services like Amazon S3, Amazon Redshift, and AWS Glue.

  • Proficiency in tools such as Apache NiFi, Talend, Informatica, or Microsoft SQL Server Integration Services (SSIS).

  • Experience in designing and implementing data pipelines using tools like Apache Kafka or Apache Airflow.

  • Ability to perform data profiling, data quality assessments, and performance tuning.

  • Experience in comparing and evaluating different data technologies based on criteria like performance, scalability, and cost.

  • Skills in applying advanced analytics techniques, including predictive modeling and data mining.

  • Expertise in industry-standard data practices, data strategies, and data concepts

  • Demonstrated experience in architecting/re-architecting complex data systems and data models.

  • Demonstrated experience in overall system design, including database selection and solutioning.
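
To illustrate the Snowflake zero-copy cloning requirement, here is a minimal sketch using the Snowflake Python connector; the account, credentials, and table names are hypothetical placeholders.

    # Sketch: create a zero-copy clone with the Snowflake Python connector.
    # A clone shares the source table's micro-partitions, so it is created
    # instantly and uses no extra storage until the copies diverge.
    # All identifiers and credentials below are hypothetical placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="ANALYTICS_WH",
        database="SALES_DB",
        schema="PUBLIC",
    )
    try:
        conn.cursor().execute("CREATE TABLE orders_dev CLONE orders")
    finally:
        conn.close()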

Nice-to-Have Skills:

  • Experience with data governance tools such as Collibra or Alation.

  • Knowledge of data quality frameworks and standards, such as Data Quality Dimensions (completeness, consistency, etc.).

  • Familiarity with tools like Apache Beam or Luigi for managing complex data workflows (see the sketch after this list).

  • Awareness of emerging data technologies such as data mesh, data fabric, and real-time data processing frameworks.
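
For the workflow-tools item, here is a minimal Apache Beam sketch that runs locally on the DirectRunner; the sample data and transforms are hypothetical.

    # Sketch: a small batch pipeline in Apache Beam (local DirectRunner).
    # The sample data and transforms are hypothetical.
    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "CreateEvents" >> beam.Create([1, 2, 3, 4, 5])   # toy input
            | "KeepEven" >> beam.Filter(lambda x: x % 2 == 0)  # 2, 4
            | "Square" >> beam.Map(lambda x: x * x)            # 4, 16
            | "Print" >> beam.Map(print)
        )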
