Data Engineer

Work set-up: Full Remote

Offer summary

Qualifications:

  • Experience in data engineering and ETL jobs.
  • Proficiency in Snowflake and Azure Cloud services.
  • Familiarity with healthcare data standards and data masking techniques.
  • Ability to preprocess large datasets using Azure tools and Python.

Key responsibilities:

  • Design and maintain scalable data pipelines for structured and unstructured data.
  • Implement data processing workflows for training large language models.
  • Address data quality issues and ensure minimal disruption to systems.
  • Communicate with stakeholders and collaborate with cross-functional teams.

Keylent Inc Information Technology & Services SME https://www.keylent.com/
201 - 500 Employees

Job description

Mandatory Skills:
Gen AI
Data Engineering, ETL Jobs
Snowflake
Azure Cloud

Role & responsibilities

Data Engineer – Essential Job Functions:
Design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
Implement efficient data processing workflows to support the training and evaluation of solutions using large language models, ensuring reliability, scalability, and performance.
Proactively address issues related to data quality, pipeline failures, or resource contention, ensuring minimal disruption to systems.
Integrate large language models into data pipelines for natural language processing tasks.
Work within the Snowflake ecosystem.
Deploy, scale, and monitor AI solutions on cloud platforms such as Snowflake, Azure, AWS, and GCP.
Communicate with technical and non-technical stakeholders and collaborate with cross-functional teams.
Apply cloud cost management best practices to optimize cloud resource usage and minimize costs.
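The pipeline responsibilities above can be sketched as a minimal extract–transform–enrich flow. All function names below are illustrative assumptions, and the LLM step is stubbed; a real pipeline would use Snowflake or Azure connectors and an actual model call rather than in-memory lists.

```python
# Minimal ETL pipeline sketch: ingest records, drop rows that fail basic
# quality checks, normalize fields, then hand off to a (stubbed) LLM step.
# Names and shapes here are illustrative only.

def extract(raw_rows):
    """Ingest structured records; drop rows missing an id or text (quality check)."""
    return [r for r in raw_rows if r.get("id") is not None and r.get("text")]

def transform(rows):
    """Normalize fields before downstream processing."""
    return [{"id": r["id"], "text": r["text"].strip().lower()} for r in rows]

def enrich_with_llm(rows, summarize=lambda text: text[:20]):
    """Stand-in for an LLM call (e.g. summarization); stubbed as truncation."""
    return [dict(r, summary=summarize(r["text"])) for r in rows]

def run_pipeline(raw_rows):
    return enrich_with_llm(transform(extract(raw_rows)))

if __name__ == "__main__":
    raw = [{"id": 1, "text": "  Patient VISIT note  "},
           {"id": None, "text": "bad row"}]
    print(run_pipeline(raw))
```

Keeping each stage a pure function over plain records makes the quality checks and the LLM step independently testable and swappable.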

Data Engineer – Preferred Qualifications:
Experience working within the Azure ecosystem, including Azure AI Search, Azure Blob Storage, and Azure Database for PostgreSQL, and understanding how to leverage them for data processing, storage, and analytics tasks.
Experience with techniques such as data normalization, feature engineering, and data augmentation.
Ability to preprocess and clean large datasets efficiently using Azure tools, Python, and other data manipulation tools.
Expertise in working with healthcare data standards (e.g., HIPAA and FHIR), handling sensitive data, and applying data masking techniques to protect personally identifiable information (PII) and protected health information (PHI) is essential.
In-depth knowledge of search algorithms, indexing techniques, and retrieval models for effective information retrieval tasks. Familiarity with search platforms like Elasticsearch or Azure AI Search is a must.
Familiarity with chunking techniques and working with vectors and vector databases like Pinecone.
Experience working within the Snowflake ecosystem.
Ability to design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
Experience with implementing best practices for data storage, retrieval, and access control to ensure data integrity, security, and compliance with regulatory requirements.
Ability to implement efficient data processing workflows to support the training and evaluation of solutions using large language models, ensuring reliability, scalability, and performance.
Ability to proactively identify and address issues related to data quality, pipeline failures, or resource contention, ensuring minimal disruption to systems.
Experience with large language model frameworks, such as LangChain, and the ability to integrate them into data pipelines for natural language processing tasks.
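As one concrete illustration of the data masking qualification above, the following is a minimal sketch of redacting PII fields before records enter a pipeline. The field names and the SSN pattern are assumptions made for the example; this is not a HIPAA-complete de-identification implementation.

```python
import re

# Minimal PII-masking sketch: redact direct identifiers in healthcare
# records before they reach downstream storage or model training.
# Field names and patterns are illustrative; real de-identification
# must follow HIPAA Safe Harbor or expert-determination requirements.

PII_FIELDS = {"name", "ssn", "phone"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_record(record):
    """Replace direct-identifier fields and embedded SSNs with placeholders."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            masked[key] = "***MASKED***"
        elif isinstance(value, str):
            masked[key] = SSN_PATTERN.sub("***MASKED***", value)
        else:
            masked[key] = value
    return masked

if __name__ == "__main__":
    rec = {"name": "Jane Doe", "ssn": "123-45-6789",
           "note": "SSN on file: 123-45-6789", "age": 42}
    print(mask_record(rec))
```

Scanning free-text fields (like `note`) in addition to structured identifier fields matters, because PHI routinely leaks into narrative clinical text.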

Required profile

Experience

Industry :
Information Technology & Services
Spoken language(s):
English

Other Skills

  • Collaboration
  • Communication
