Data Engineer | Mid/Senior | Apps | NordVPN

Remote: Full Remote

Offer summary

Qualifications:

3+ years of experience in data acquisition tasks; deep hands-on experience in Python; strong knowledge of Apache Spark; familiarity with Git and Bash. Airflow experience is a plus.

Key responsibilities:

  • Acquire data from various sources by developing scripts, workflows, and ETL pipelines.
  • Maintain the integrity and structure of existing data models in the data warehouse.
  • Identify and implement internal process improvements to optimize data delivery.
  • Collaborate with other teams to understand their data needs and ensure data availability.

Nord Security | https://nordsecurity.com | 1001-5000 employees

Job description

Nord Security was born as a passion project, and our drive is reflected in our work, which has earned high praise from major tech outlets and cybersec experts. We want one thing only — to give true online privacy and security to as many people as we can. And for that purpose we create top-notch cybersecurity products and services that grant a safer cyber future to millions of users.

NordVPN is the fastest VPN and the most trusted online security solution on the planet. NordVPN protects your internet traffic with next-generation encryption, being the preferred tool of activists and privacy-conscious individuals around the globe.

The NordVPN Apps department believes in constant improvement and innovation, so it takes the initiative to refine all products at every stage. We’re actively involved in all phases of development with other teams to obtain the best outcomes – from the simplest UI elements to innovative features. 
Our apps team is all about hard work, a modern technology stack, speed, a constant desire to learn, and, above all, vigilance in keeping every last asset safe and sound. That's how we build top-notch cybersecurity solutions that people can trust.

What will you do?
- Acquire data from various data sources (APIs, relational and non-relational databases, queues, …) by developing scripts, workflows, and ETL pipelines using our stack of both “small” and big data (see the sketch after this list);
- Participate in modeling business processes with data models;
- Maintain the integrity and structure of existing data models in the data warehouse;
- Identify, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery;
- Assess the effectiveness and accuracy of data-gathering techniques;
- Develop and deploy processes and tools to monitor and analyze pipeline performance and data accuracy;
- Discover opportunities for data acquisition, diagnostics, mapping, and correction;
- Employ a variety of development languages and tools to blend data systems together;
- Recommend and validate ways to improve data reliability, efficiency, and quality;
- Troubleshoot the data pipeline;
- Create ad-hoc datasets;
- Work with other teams to understand their individual needs and objectives, and enable them through data availability.
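For context, here is a minimal sketch of the kind of acquisition pipeline described above, assuming PySpark and the requests library; the API endpoint, field names, and warehouse path are hypothetical placeholders, not details of NordVPN's actual stack:

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("acquisition-sketch").getOrCreate()

# Extract: fetch a batch of records from a hypothetical JSON API.
records = requests.get("https://api.example.com/v1/events", timeout=30).json()

# Transform: infer a DataFrame, then drop malformed rows and duplicates.
df = spark.createDataFrame(records)
clean = df.dropna(subset=["event_id", "created_at"]).dropDuplicates(["event_id"])

# Load: append the batch to a warehouse table (path is a placeholder).
clean.write.mode("append").parquet("/warehouse/raw/events")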

Core Requirements
- 3+ years of experience performing data acquisition tasks; 
- Deep, hands-on experience in Python;
- Strong knowledge of Apache Spark;
- Knowledge of Git;
- Knowledge of Bash;
- Experience with Airflow is a plus.
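Airflow, listed above as a plus, is a common scheduler for pipelines like these. A minimal sketch, assuming Airflow 2.4+; the DAG id and task bodies are hypothetical placeholders:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for real acquisition logic (API call, DB query, queue read).
    print("extracting")

def load():
    # Placeholder for real warehouse-load logic.
    print("loading")

with DAG(
    dag_id="acquisition_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # run once per day, no backfill of missed runs
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load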

Why should you pick this team?

- Big data (~6 TB of compressed data inflow per day)
- Broad scope of work with an opportunity to contribute to different projects: developing ETL pipelines, internal tools, and internally shared libraries; evolving the data model; working on monitoring, stability, and optimization of processes; automating processes; etc.

Salary Range
17,600-29,900 zł/month

Required profile

Experience

Spoken language(s): English

Other Skills

  • Teamwork
  • Problem Solving
