Data Engineer | Turning Raw Data into Gold (B2B or CIM)

Remote: Full Remote
Contract: Employment or collaboration contract
Work from: Romania

Offer summary

Qualifications:

  • 3+ years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL and Python.
  • Solid understanding of data modeling, ETL/ELT processes, and pipeline orchestration.
  • Experience with cloud platforms like AWS and Azure, including their data services.

Key responsibilities:

  • Design, develop, and maintain robust and scalable data pipelines.
  • Optimize data systems for performance, scalability, and reliability.
  • Collaborate with data analysts and scientists to ensure high data quality.
  • Implement data governance, security, and privacy standards.

Tecknoworks | Information Technology & Services (SME) | http://www.tecknoworks.com/
51-200 employees

Job description

Tecknoworks is a global technology consulting company. At our core, we embody values that define who we are and how we operate. We are curious, continuously seeking to expand our understanding and question conventional wisdom. Fearlessness drives us to take daring steps toward significant outcomes. Our aspiration to inspire motivates us to reach for our personal and collective best, setting an example for ourselves and those we work with. Collaboration is our strength: we capitalize on the diverse brilliance within our team. We aim to deliver consistent, lasting positive outcomes for our clients.


We are seeking highly skilled and motivated Data Engineers to join our growing data team. The ideal candidates will be responsible for building and maintaining scalable data pipelines, managing data architecture, and enabling data-driven decision-making across the organization. The roles require hands-on experience with cloud platforms, specifically AWS and/or Azure, including proficiency in their respective data and analytics services as follows (a short illustrative sketch follows each list):

Amazon Web Services (AWS):

  • Experience with AWS Glue for ETL/ELT processes.
  • Familiarity with Amazon Redshift, Athena, S3, and Lake Formation.
  • Use of AWS Lambda, Step Functions, and CloudWatch for data pipeline orchestration and monitoring.
  • Exposure to Amazon Kinesis or Kafka on AWS for real-time data streaming.
  • Knowledge of IAM, VPC, and security practices in AWS data environments.
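
For a sense of the hands-on work involved, here is a minimal Python sketch of triggering and polling a Glue job with boto3. The job name is hypothetical, and credentials are assumed to come from a configured IAM role or environment:

    import boto3

    # Hypothetical job name for illustration; any deployed Glue job would do.
    JOB_NAME = "orders_etl"

    glue = boto3.client("glue")

    # Kick off an ETL run; Glue returns a run identifier we can poll with.
    run = glue.start_job_run(JobName=JOB_NAME)

    # Check the run state; in production, polling like this is typically
    # handled by Step Functions or surfaced through CloudWatch alarms.
    status = glue.get_job_run(JobName=JOB_NAME, RunId=run["JobRunId"])
    print(status["JobRun"]["JobRunState"])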

Microsoft Azure:

  • Experience with Azure Data Factory (ADF)/Synapse for data integration and orchestration.
  • Familiarity with Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Azure SQL Database.
  • Hands-on experience with Databricks on Azure and Apache Spark for data processing and analytics.
  • Exposure to Azure Event Hubs, Azure Functions, and Logic Apps.
  • Understanding of Azure Monitor, Log Analytics, and role-based access control.
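
On the Azure side, an equally minimal sketch of reading a raw file out of ADLS with the Azure SDK for Python; the storage account, container, and file path are all hypothetical:

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    # Hypothetical storage account URL, for illustration only.
    ACCOUNT_URL = "https://examplelake.dfs.core.windows.net"

    # DefaultAzureCredential resolves managed identity, CLI login, or env vars.
    service = DataLakeServiceClient(account_url=ACCOUNT_URL,
                                    credential=DefaultAzureCredential())

    filesystem = service.get_file_system_client("raw")
    file_client = filesystem.get_file_client("events/2024/01/events.json")

    # Pull the raw bytes down; a real pipeline would stream or stage instead.
    payload = file_client.download_file().readall()
    print(len(payload), "bytes read from ADLS")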


Location: Romania (Remote)

Contract type: Employment or collaboration contract

Responsibilities

  • Design, develop, and maintain robust and scalable data pipelines to ingest, transform, and store data from diverse sources.
  • Optimize data systems for performance, scalability, and reliability in a cloud-native environment.
  • Work closely with data analysts, data scientists, and other stakeholders to ensure high data quality and availability.
  • Develop and manage data models using DBT, ensuring modular, testable, and well-documented transformation layers.
  • Implement and enforce data governance, security, and privacy standards.
  • Manage and optimize cloud data warehouses, especially Snowflake, for performance, cost-efficiency, and scalability (see the sketch after this list).
  • Monitor, troubleshoot, and improve data workflows and ETL/ELT processes.
  • Collaborate in the design and deployment of data lakes, warehouses, and lakehouse architectures.
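
To illustrate the Snowflake side of these responsibilities, a minimal sketch using the snowflake-connector-python package; the warehouse name and environment variables are assumptions, not part of any real setup:

    import os

    import snowflake.connector

    # Connection details are assumptions; supply your own account and role.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",  # hypothetical warehouse name
    )

    cur = conn.cursor()
    # Right-sizing a warehouse is one common cost-efficiency lever.
    cur.execute("ALTER WAREHOUSE TRANSFORM_WH SET WAREHOUSE_SIZE = 'XSMALL'")
    cur.execute("SELECT CURRENT_WAREHOUSE(), CURRENT_REGION()")
    print(cur.fetchone())
    conn.close()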

Required Qualifications:

  • 3+ years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL and Python.
  • Solid understanding of data modeling, ETL/ELT processes, and pipeline orchestration.
  • Experience working in DevOps environments using CI/CD tools (e.g., GitHub Actions, Azure DevOps).
  • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes, Airflow); a minimal Airflow DAG sketch follows this list.
  • Familiarity with data cataloging tools like AWS Glue Data Catalog or Azure Purview.
  • Strong interpersonal and communication skills—able to collaborate with cross-functional teams and external clients.
  • Adaptability in fast-paced environments with shifting client needs and priorities.
  • Analytical mindset with attention to detail and a commitment to delivering quality results.
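
As noted above, here is a minimal Airflow DAG sketch of the kind of pipeline orchestration this role involves (Airflow 2.4+ syntax; the DAG and task names are hypothetical):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder body; a real task would pull from a source system.
        print("extracting")

    def load():
        # Placeholder body; a real task would write to the warehouse.
        print("loading")

    # `schedule` is the Airflow 2.4+ spelling; older versions use
    # `schedule_interval` instead.
    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # run extract before load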

If you have the skills and experience we're looking for, we would love to hear from you. Please submit your resume showcasing your relevant expertise for this role. We are eager to see what you can bring to our team!

Required profile

Experience

Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Adaptability
  • Analytical Thinking
  • Collaboration
  • Communication
