
Senior Data Engineer (GCP)

Remote: Full Remote
Work from: Fully flexible

Offer summary

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • Proficiency in GCP, specifically BigQuery, and experience with data pipeline design.
  • Strong programming skills in SQL and Python, with knowledge of API integration.
  • Experience in data transformation, cleaning, and modeling for high-quality data management.

Key responsibilities:

  • Design and optimize scalable data pipelines on GCP for batch and real-time processing.
  • Develop and maintain dashboards in Looker for executive-level reporting.
  • Integrate external CRM systems and data sources through API connectors.
  • Collaborate with cross-functional teams to create effective data solutions aligned with business goals.

Addepto · Startup · http://www.addepto.com
51 - 200 Employees

Job description

Addepto is a leading consulting and technology company specializing in AI and Big Data, helping clients deliver innovative data projects. We partner with top-tier global enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. Our exclusive focus on AI and Big Data has earned us recognition by Forbes as one of the top 10 AI companies.


As a Senior Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects for which we are seeking talented individuals:

  • Centralized reporting platform for a growing US telecommunications company. This project involves implementing BigQuery and Looker as the central platform for data reporting. It focuses on centralizing data, integrating various CRMs, and building executive reporting solutions to support decision-making and business growth.

  • Design and development of a universal data platform for global aerospace companies. This Azure- and Databricks-powered initiative combines diverse enterprise and public data sources. The platform is in the early stages of development, covering the design of its architecture and processes while leaving freedom for technology selection.

  • Design of data transformation and downstream DataOps pipelines for a global car manufacturer. This project aims to build a data processing system for both real-time streaming and batch data. We’ll handle data for business uses such as process monitoring, analysis, and reporting, while also exploring LLMs for chatbots and data analysis. Key tasks include data cleaning, normalization, and optimizing the data model for performance and accuracy (see the sketch after this list).
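
To make the cleaning and normalization work concrete, here is a minimal sketch of a batch-cleaning step run in BigQuery from Python. Every project, dataset, table, and column name (`my_project.analytics.raw_events`, `event_id`, `customer_email`, and so on) is an illustrative assumption, not a detail of the actual project.

```python
from google.cloud import bigquery

# Hypothetical batch-cleaning step: all names below are illustrative
# placeholders, not real project tables.
client = bigquery.Client()

CLEANING_QUERY = """
CREATE OR REPLACE TABLE `my_project.analytics.clean_events` AS
SELECT * EXCEPT(rn)
FROM (
  SELECT
    event_id,
    LOWER(TRIM(customer_email)) AS customer_email,  -- normalize casing/whitespace
    SAFE_CAST(amount AS NUMERIC) AS amount,         -- coerce bad values to NULL
    event_ts,
    ROW_NUMBER() OVER (
      PARTITION BY event_id ORDER BY event_ts DESC
    ) AS rn                                         -- rank duplicates, newest first
  FROM `my_project.analytics.raw_events`
)
WHERE rn = 1  -- keep only the latest record per event_id
"""

job = client.query(CLEANING_QUERY)  # starts an asynchronous query job
job.result()                        # block until the job completes
print(f"Cleaning job {job.job_id} finished with state {job.state}")
```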


🚀 Your main responsibilities:

  • Design, implement, and optimize scalable data pipelines on GCP using BigQuery for both batch and real-time data processing.
  • Develop, enhance, and maintain dashboards in Looker to provide insightful and executive-level reporting.
  • Integrate external CRM systems and other data sources through API connectors to centralize and streamline data access.
  • Leverage SQL, Python, and API connectors for efficient ETL processes, data transformation, and automation (a minimal sketch follows this list).
  • Conduct data cleaning, transformation, and modeling to ensure high-quality, consistent data across the platform.
  • Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions that align with reporting and strategic goals.
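
As a hedged illustration of the API-to-BigQuery integration described above, the sketch below pulls records from a hypothetical CRM REST endpoint and appends them to a BigQuery table. The endpoint URL, bearer-token handling, and destination table ID are assumptions for illustration only, not values from the project.

```python
import requests
from google.cloud import bigquery

# Hypothetical CRM endpoint and destination table -- placeholders only.
CRM_API_URL = "https://crm.example.com/api/v1/contacts"
TABLE_ID = "my_project.analytics.crm_contacts"

def fetch_contacts(api_token: str) -> list[dict]:
    """Pull contact records from the (assumed) CRM REST API."""
    resp = requests.get(
        CRM_API_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumes the endpoint returns a JSON array of objects

def load_to_bigquery(rows: list[dict]) -> None:
    """Append rows to BigQuery, letting the load job infer the schema."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition="WRITE_APPEND",
        autodetect=True,  # infer schema from the JSON rows
    )
    job = client.load_table_from_json(rows, TABLE_ID, job_config=job_config)
    job.result()  # wait for the load job to complete

if __name__ == "__main__":
    contacts = fetch_contacts(api_token="...")  # token deliberately elided
    load_to_bigquery(contacts)
```

In practice such a connector would add pagination, incremental extraction, and retries; the sketch only shows the core extract-and-load shape.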

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Collaboration
  • Problem Solving
