
Snowflake Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

2+ years of hands-on Snowflake experience; 4+ years of intensive data modeling and ETL work; experience in cloud data management.

Key responsibilities:

  • Design and implement Data Pipelines in Snowflake
  • Develop ingestion pipelines from various sources
  • Create data transformations with SQL, Python
  • Optimize existing pipelines and transformations
  • Automate data ingestion and transformation
Hiflylabs SME https://hiflylabs.com/
51 - 200 Employees

Job description

Company Description

Hiflylabs is a leading data consultancy and competency center in Europe with a staff of over 180 experts. With two decades of experience in the Data & Analytics field, Hiflylabs serves clients in both Europe and North America. We build cloud data platforms and data warehouses, perform data analysis and data visualization, apply data science to business problems, and consult on data management.

We strive to create a work environment that is both challenging and supportive, allowing our employees to grow and excel with our company. We believe that our people are our most valuable asset, and we are committed to investing in their personal and professional development through our mentoring system.

Job Description

At Hiflylabs, we are looking to extend our Analytics Engineering team. This team typically works with international clients in small squads, almost exclusively in an agile setup.

We expect you to be a data-oriented person with data/software engineering chops and at least a pinch of business understanding.

You will be building data pipelines in Snowflake, sometimes as a Hiflylabs team member and sometimes more deeply embedded in a client's organization. Depending on the assignment and its phase, you will design and implement data ingestion and transformation logic, and sometimes optimize complex ETL processes. You will work with both technical and business-oriented colleagues, so you will get to do deeply technical work as well as converse with product managers and end users and see how your work drives the organization forward.

Main Tasks

  • Design and implement Data Pipelines in Snowflake
  • Develop ingestion pipelines from various sources (via Snowpipe and other means)
  • Create data transformations with SQL, Python, PaaS and 3rd party tools (e.g. dbt)
  • Optimize existing pipelines and transformations for cost, load time, and maintainability
  • Automate data ingestion and transformation (often with Airflow or similar tools)
  • Help business users formulate their needs and make the most of the data platform.

Qualifications

Requirements

  • Working experience with and in-depth knowledge of Snowflake and its family of tools, preferably across multiple projects. As a rough guide, you are likely to have 2+ years of hands-on Snowflake experience.
  • Data transformation background: a sound conceptual understanding and practical experience of data modeling and ETL processing, with extensive knowledge of SQL, including Snowflake's SQL dialect. As a rough guide, you are likely to have 4+ years of intensive experience in the field.
  • At least some experience with cloud data management in the AWS or GCP data stack (primarily batch data management; storage, loading, and transformation of structured data; data lake storage and management tools).
  • Team player with the ability and explicit willingness to cooperate with other team members, including non-technical business stakeholders, to move the project forward. This requires you to be proactive, to have some level of business understanding, and to take a genuine interest in the business impact of the project.
  • Strong technical and business English (spoken, reading, and writing).

Advantages

  • SnowPro certification, preferably SnowPro Advanced Data Engineer.
  • Development background in Python, preferably focused on data ingestion / transformation.
  • PySpark / Spark experience.

Additional Information

Why us?

  • Diverse projects - Each assignment brings something new, on either the technical or the business side, that helps you grow.
  • Empowerment - Trust is a cornerstone of our culture. We'll hold your hand if you need it, but give you space if you'd like to push your limits. Don't lose sight of the goal; the rest is up to you.
  • Flexible ways of working - We love our location on Bartók Béla Road, which is not only an office but also a community space. At the same time, we trust our people to do their work when and how they work best.
  • Balanced life - We love what we do and aim to work together with others who do their work with love. At the same time, we highly value fresh minds, for which we think a healthy work/life balance is essential! Forget about pointless meetings and unnecessary administration.
  • Mentoring from your first day – Continuous support is not just a set of fancy words we throw around here; your mentor follows you throughout your career path.
  • Learning & Development opportunities - If you want to keep learning and improving, you're in the right place! We look forward to helping you unlock your potential.
  • Supportive corporate culture - In addition to our professional success, we are proud of our social cohesion, which is based on camaraderie, mutual support, and respect, and is constantly nurtured in the company.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Open Mindset
  • Verbal Communication Skills
  • Collaboration
