
Senior Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

Proficiency in SQL and Python; expertise in PySpark and Airflow; a deep understanding of DBMS and data modeling; familiarity with cloud platforms such as AWS or GCP.

Key responsibilities:

  • Design and maintain scalable data pipelines
  • Build large-scale data processing frameworks
Intellectsoft (Computer Software / SaaS SME, 51-200 employees)
https://www.intellectsoft.net/

Job description

Intellectsoft is a software development company delivering innovative solutions since 2007. We operate across North America, Latin America, the Nordic region, the UK, and Europe. We specialize in industries like Fintech, Healthcare, EdTech, Construction, Hospitality, and more, partnering with startups, mid-sized businesses, and Fortune 500 companies to drive growth and scalability. Our clients include Jaguar Motors, Universal Pictures, Harley-Davidson, Qualcomm, and London Stock Exchange. Together, our team delivers solutions that make a difference. Learn more at www.intellectsoft.net

Our customer's product is an AI-powered platform that helps businesses make better decisions and work more efficiently. It uses advanced analytics and machine learning to analyze large amounts of data and provide useful insights and predictions. The platform is widely used in various industries, including healthcare, to optimize processes, improve customer experiences, and support innovation. It integrates easily with existing systems, making it easier for teams to make quick, data-driven decisions.

Requirements

  • Proficiency in SQL for data manipulation and querying large datasets.
  • Strong experience with Python for data processing and scripting.
  • Expertise in PySpark for distributed data processing and big data workflows.
  • Hands-on experience with Airflow for workflow orchestration and automation.
  • Deep understanding of Database Management Systems (DBMS), including design, optimization, and maintenance.
  • Solid knowledge of data modeling, ETL pipelines, and data integration.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
Nice-to-have skills
  • Experience with other big data tools (e.g., Hadoop, Kafka, or Snowflake).
  • Knowledge of DevOps practices, including CI/CD for data pipelines.
  • Familiarity with containerization tools like Docker or Kubernetes.
  • Previous experience working in agile development teams.
  • Understanding of Machine Learning pipelines or frameworks.
Responsibilities
  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Build and optimize large-scale data processing frameworks using PySpark.
  • Create workflows and automate processes using Apache Airflow.
  • Manage, monitor, and enhance database performance and integrity.
  • Collaborate with cross-functional teams, including data analysts, scientists, and stakeholders, to understand data needs.
  • Ensure data quality, reliability, and compliance with industry standards.
  • Troubleshoot, debug, and optimize data pipelines and workflows.
  • Continuously evaluate and integrate new tools and technologies to enhance data infrastructure.

Benefits

  • 35 paid absence days per year for each specialist's work-life balance, plus 1 additional day for each subsequent year of cooperation with the company
  • Up to 15 unused absence days can be added to income after 12 months of cooperation
  • Health insurance for you
  • Depreciation coverage for using a personal laptop for project needs
  • Udemy courses of your choice
  • Regular soft-skills training sessions
  • Excellence Centers meetups

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Industry: Computer Software / SaaS
Spoken language(s): English
Check the description to see which languages are mandatory.

Other Skills

  • Collaboration
  • Problem Solving
