Senior Data Engineer at Vigil

Remote: Full Remote

Offer summary

Qualifications:

  • Approximately 4-6 years of experience in data engineering, with a focus on building data products using platforms like Databricks and Spark.
  • Proficient in Python (especially PySpark), Scala, and SQL, with a solid understanding of ELT/ETL practices.
  • Experience with cloud services such as AWS, Azure, or Google Cloud, and tools like S3 and Redshift.
  • Strong analytical skills and effective communication abilities, with experience in collaborative, cross-functional teams.

Key responsibilities:

  • Design and maintain robust data workflows and components using Databricks, ensuring efficient data ingestion and storage.
  • Collaborate with commercial teams to gather specifications for analytical products and support dashboard development.
  • Continuously assess and optimize data workflows for performance and reliability, addressing any system inefficiencies.
  • Implement data governance standards and ensure data quality through validation checks and monitoring tools.

Vigil Scaleup https://www.vigil.global/
51 - 200 Employees

Job description

SUMMARY:

As a Data Engineer, you will be responsible for designing, developing, maintaining, and optimising the data pipeline infrastructure of a proprietary data platform built on Databricks. You will collaborate with cross-functional teams to design and implement scalable data solutions, ensuring efficient data ingestion, transformation, storage, and analysis.

WHAT WILL YOU BE DOING:
  • Data Engineering Ownership: End-to-end creation and upkeep of robust data workflows and components, leveraging tools such as Databricks. This includes designing early-stage prototypes and deploying large-scale data acquisition, handling, and storage strategies.

  • ETL/ELT Workflow Management: Construct and refine data ingestion pipelines to seamlessly integrate varied data sources into the central platform. Ensure data consistency through validation, cleaning, and enrichment routines.

  • Stakeholder Collaboration: Partner with commercial teams and data stakeholders to gather and refine specifications for analytical products and visual reporting needs.

  • Architectural Design & Data Modelling: Engage with analysts and data experts to define efficient data models and architectural plans. Support dashboard and report development, while ensuring optimal data structuring for performance.

  • Pipeline Performance & Reliability: Continuously assess and fine-tune data workflows, addressing system inefficiencies, integration problems, and data fidelity concerns.

  • Quality Assurance in Data: Define expectations for data integrity and collaborate with QA to automate validation checks. Utilise monitoring tools to surface and track data quality metrics.

  • Data Controls & Compliance: Apply internal governance standards and safeguard sensitive data through access rules, encryption protocols, and retention strategies, aligning with organisational security frameworks.

  • Technical Enablement & Knowledge Sharing: Work closely with interdisciplinary teams to ensure data needs are met, while thoroughly documenting processes and solutions. Convey technical ideas in a way that’s accessible across departments.

  • Innovation & Process Evolution: Keep informed on developments within the data ecosystem. Advocate for and implement improvements in efficiency, tooling, and automation. Participate in internal knowledge communities.

  • Agile Delivery Engagement: Contribute to delivery efforts by actively participating in sprint planning, stand-ups, and backlog refinement. Help define and deliver technical tasks, aligning work with CI/CD best practices.

WHAT WE ARE LOOKING FOR:

  • Approximately 4 years of experience in data engineering for mid-level roles
  • 6 or more years of experience for senior-level data engineering positions
  • 1 to 4 years of experience building data products using platforms such as Databricks, Spark, Cloudera, or Hortonworks
  • Skilled in Python (especially PySpark), Scala, and SQL
  • Experienced in designing and implementing scalable data pipelines for high-volume environments
  • Solid understanding of ELT/ETL practices and data integration strategies
  • Capable of writing robust production code with automated testing
  • Familiar with CI/CD tools such as GitHub Actions and Jenkins for deploying code
  • Hands-on experience with distributed data processing using Apache Spark
  • Proficient with cloud services including AWS, Azure, or Google Cloud and tools like S3, Glue, Lambda, Redshift, and BigQuery
  • Good knowledge of data modelling, relational databases, and SQL performance optimisation
  • Strong analytical and problem-solving abilities, with attention to troubleshooting details
  • Effective communicator with experience working in collaborative, cross-functional teams
  • Basic understanding of machine learning concepts such as classification, regression, A/B testing, and experimental design

AWESOME BUT NOT REQUIRED:
  • Understanding of the UK media landscape, including over-the-top (OTT) and traditional broadcast advertising
  • Familiarity with concepts and trends in digital advertising and marketing analytics
  • Experience applying statistical approaches such as regression and classification, as well as designing and analysing A/B and other controlled experiments
  • Awareness of modern data architecture methodologies including Data Mesh, enterprise-level data frameworks, and business intelligence architecture
  • Solid grasp of data protection practices, regulatory compliance, and governance principles
  • Exposure to a variety of data management domains such as metadata handling, data quality enforcement, master data systems, and governance tooling
  • Hands-on experience with data visualisation platforms like Tableau, Looker, Amazon QuickSight, and ThoughtSpot for building interactive reporting solutions

WHAT’S IN IT FOR YOU?
  • Be part of our collegial environment, where responsibility and authority are shared equally amongst colleagues, and help shape our company culture
  • A culture in which we don’t criticise failure but ensure we learn from our mistakes
  • An Agile environment where your ideas are welcome
  • The possibility to grow and experience different projects
  • Ongoing Training & Mentoring
  • The possibility to travel

ATTENTION: THIS POSITION IS FOR CANDIDATES BASED IN PORTUGAL OR BRAZIL ONLY.

Required profile

Spoken language(s):
English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
