Senior Data Engineer

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • 4+ years of experience as a Data Engineer.
  • Strong expertise in cloud technologies, especially Azure.
  • Hands-on experience with Apache Spark and Databricks for large-scale data processing.
  • Proficiency in SQL and Python, with experience in relational and NoSQL databases.

Key responsibilities:

  • Design, develop, and optimize data ingestion pipelines.
  • Build and optimize ETL/ELT pipelines using cloud-based technologies.
  • Collaborate with stakeholders to create scalable data systems for analytics.
  • Ensure data solutions meet operational, security, and compliance standards.

Tkxel SME https://www.tkxel.com
501 - 1000 Employees

Job description

This is a remote position.

We are seeking an experienced Data Engineer to lead the design, development, and optimization of our clients' data infrastructure. This role requires deep expertise in cloud technologies (primarily Azure, with AWS as a plus) and data engineering best practices, along with hands-on experience in Apache Spark and Databricks for large-scale data processing. The Data Engineer will work closely with data scientists, analysts, and other stakeholders to create scalable and efficient data systems that support advanced analytics and business intelligence. Additionally, this role involves mentoring junior engineers and driving technical innovation within the data engineering team.

Key Responsibilities:

  • Collaborate with Solution Architects: Work with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines, ensuring effective data sharing across business systems.
  • ETL/ELT Pipeline Development: Build and optimize ETL/ELT pipelines and analytics solutions using a combination of cloud-based technologies, with an emphasis on Apache Spark and Databricks for large-scale data processing.
  • Data Processing with Spark: Leverage Apache Spark for distributed data processing, data transformation, and analytics at scale. Experience with Databricks for optimized Spark execution is highly desirable.
  • Production-Ready Solutions: Ensure data architecture, code, and processes meet operational, security, and compliance standards, making solutions production-ready in cloud environments.
  • Project Support & Delivery: Actively participate in project and production delivery meetings, providing technical expertise to resolve issues quickly and ensure successful project execution.
  • Database Management: Manage both SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB, MongoDB) databases, ensuring data is efficiently stored, retrieved, and queried.
  • Real-Time Data Processing: Implement and maintain real-time data streaming solutions using tools such as Apache Kafka, AWS Kinesis, or other technologies for low-latency data processing.
  • Cloud Monitoring & Automation: Use monitoring and automation tools (e.g., AWS CloudWatch, Azure Monitor) to ensure efficient use of cloud resources and optimize data pipelines.
  • Data Governance & Security: Implement best practices for data governance, security, and compliance, including data encryption, access controls, and audit trails to meet regulatory standards.
  • Collaboration with Stakeholders: Work closely with data scientists, analysts, and business teams to align data infrastructure with strategic business objectives and goals.
  • Documentation: Maintain clear and detailed documentation of data models, pipeline processes, and system architectures to support collaboration and troubleshooting.


Requirements

Required Skills & Qualifications:

  • 4+ years of experience as a Data Engineer, with strong expertise in cloud-based data warehousing, ETL pipelines, and large-scale data processing.
  • Proficiency with cloud technologies, with experience in platforms like Azure.
  • Hands-on experience with Apache Spark for distributed data processing and transformation. Experience with Databricks is highly desirable.
  • Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL) as well as NoSQL databases (e.g., MongoDB, DynamoDB).
  • Proficient in Python for data processing, automation tasks, and building data workflows.
  • Experience with PySpark for large-scale data engineering, particularly in Spark clusters or Databricks.
  • Experience in designing and optimizing data warehouse architectures, ensuring optimal query performance in large-scale environments.
  • A strong understanding of data governance, security, and compliance best practices, including encryption, access control, and data privacy.

Preferred Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Certifications in Data Engineering from cloud providers (e.g., AWS Certified Big Data Specialty, Microsoft Certified: Azure Data Engineer Associate) are a plus.
  • Experience with advanced data engineering tools and platforms such as Databricks, Apache Spark, or similar distributed computing technologies.

Salary:

Market Competitive

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s): English