
Azure Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

Advanced understanding and experience in SQL, relational databases, building data pipelines, and working with large datasets. Proficiency in data warehouse technologies such as Azure Synapse Analytics and Azure Data Factory, and knowledge of scripting languages such as Python.

Key responsibilities:

  • Manage and optimize data processing and storage performance to support the organization's operational and analytical needs.
  • Create and maintain data pipelines, continuously implementing improvements for data extraction, transformation, and loading mechanisms.
  • Design and deploy data infrastructure tools while developing and deploying big data platforms and advanced analytics solutions.
  • Stay informed about industry trends and CI/CD practices, and actively seek process and system enhancement opportunities.
The Remote Group (Scaleup) https://theremotegroup.com/
201 - 500 Employees

Job description


Your missions

Position: Azure Data Engineer

Schedule: 1pm to 10pm (Sunday to Thursday)

Work set-up: On-site (Clark, Pampanga)


Duties & Responsibilities:

The Azure Data Engineer is responsible for the maintenance, improvement, cleaning, and manipulation of data in the organization's operational and analytics databases. The candidate works with software engineers, data analytics teams, data scientists, and data warehouse engineers to understand and aid in the implementation of database requirements and ELT/ETL pipelines, analyze data processing and storage performance, and troubleshoot any existing issues. The candidate will implement methods to improve data reliability and quality, combine raw information from different sources into consistent, machine-readable formats, define and build the data pipelines, and develop and test architectures that enable data extraction and transformation for all kinds of data modeling.

The candidate will create and manage data infrastructure and tools, including collecting, storing, processing, and analyzing organizational data and data systems, and apply the most suitable solutions for analyzing large data sets. The candidate also plays a key role in the development and deployment of innovative big data platforms, advanced analytics, and data processing.
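
For illustration only, the sketch below shows the kind of extract-transform-load step described above, written in Python. It assumes pandas, SQLAlchemy, and the pyodbc driver are installed; the file name, column names, table name, and Azure SQL connection string are placeholders rather than details from this posting, and a production pipeline would typically be orchestrated through Azure Data Factory or Databricks rather than a standalone script.

  # Illustrative ETL sketch only; all names and the connection string are placeholders.
  import pandas as pd
  from sqlalchemy import create_engine

  # Extract: read raw data from a hypothetical CSV export.
  raw = pd.read_csv("sales_export.csv")

  # Transform: normalize column names, drop duplicate rows, and remove
  # records missing the (hypothetical) key column.
  raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
  clean = raw.drop_duplicates().dropna(subset=["order_id"])
  clean["order_date"] = pd.to_datetime(clean["order_date"])

  # Load: append the cleaned rows into an Azure SQL Database table.
  engine = create_engine(
      "mssql+pyodbc://<user>:<password>@<server>.database.windows.net/<db>"
      "?driver=ODBC+Driver+18+for+SQL+Server"
  )
  clean.to_sql("sales_clean", engine, if_exists="append", index=False)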

  • Create and maintain optimal and scalable data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Drive the design, modeling, implementation, and operation of large, evolving, structured and unstructured datasets
  • Evaluate and implement efficient distributed storage and query techniques
  • Design and implement monitoring of the data services platform (see the sketch after this list)
  • Understand, design and implement Azure data storage and data processing services.
  • Keep track of industry best practices and trends, and use that knowledge to take advantage of process and system improvement opportunities
  • Develop and maintain technical documentation and operational procedures for areas of responsibility
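
As a loose illustration of the monitoring item above, the sketch below polls the status of an Azure Data Factory pipeline run using the Azure SDK for Python. It assumes the azure-identity package and a track-2 version of azure-mgmt-datafactory; the subscription, resource group, factory, and pipeline names are placeholders, not details from this posting.

  # Illustrative only: trigger an ADF pipeline run and poll until it reaches a terminal state.
  import time
  from azure.identity import DefaultAzureCredential
  from azure.mgmt.datafactory import DataFactoryManagementClient

  SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
  RESOURCE_GROUP = "<resource-group>"     # placeholder
  FACTORY_NAME = "<data-factory-name>"    # placeholder
  PIPELINE_NAME = "<pipeline-name>"       # placeholder

  adf_client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

  # Start a run, then poll its status every 30 seconds.
  run = adf_client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME, parameters={})
  while True:
      pipeline_run = adf_client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
      print(f"Pipeline run {run.run_id}: {pipeline_run.status}")
      if pipeline_run.status not in ("Queued", "InProgress"):
          break
      time.sleep(30)

In practice, alerting on failed runs would more commonly be wired through Azure Monitor than a client-side polling loop; the snippet is only meant to make the responsibility concrete.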


Qualifications:

  • Advanced working knowledge of SQL and experience with relational databases, including query authoring and working familiarity with a variety of databases.
  • Experience building and optimizing big data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Experience with relational SQL, NoSQL databases, and data lakes: ADLS Gen2, Azure SQL Database, Azure Synapse Analytics, Cosmos DB, Analysis Services etc.
  • Experience with data pipeline and workflow management tools, specifically Azure Data Factory development, monitoring and debugging
  • Experience with Azure cloud services: IoT Hub, Event Hubs, Stream Analytics, Azure Data Explorer, Databricks, Azure ML Services, and Cognitive Services
  • Experience with scripting languages: Python, Scala, etc., and Azure CLI & PowerShell
  • Strong understanding of CI/CD practices and Azure DevOps
  • Bachelor's degree in Computer Science or Engineering
  • Azure Certified Data Engineer
  • Strong communication skills with the ability to interact with architects
  • Strong project management and organizational skills
  • Ability to work under pressure and prioritize with minimal supervision
  • Multi-tasking skills and attention to detail
  • Team player with the ability to work with cross-functional teams

Required profile


Spoken language(s): English
