Sky Systems, Inc. (SkySys)

Data Engineer

Requirements

  • Bachelor's degree in Computer Science, Engineering, or related field, or equivalent practical experience; 3-5 years of data engineering experience
  • Hands-on experience with Azure Databricks and Apache Spark
  • Strong Python programming and SQL skills; experience with Delta Lake and data modeling (dimensional modeling and Data Vault)
  • Experience building configuration-driven ETL/ELT pipelines using Azure Data Factory, Databricks Workflows, or Apache Airflow; familiarity with Azure services (ADLS, Azure SQL, REST API, SFTP, Key Vault) and Git/CI/CD

Roles & Responsibilities:

  • Architect and implement end-to-end data solutions on the Databricks Unity Catalog platform, designing ETL/ELT pipelines from diverse sources to downstream consumers
  • Develop and optimize Apache Spark jobs for large-scale data processing, implement data quality frameworks, and build reusable data engineering libraries
  • Collaborate with product owners to translate requirements into technical specifications, advise on data architecture, and support analytics teams in data access
  • Optimize query performance and resource utilization, implement monitoring and alerting, maintain pipeline documentation, ensure security/compliance, and participate in code reviews

Job description

Role: Data Engineer
Position Type: Full-Time Contract (40hrs/week)
Contract Duration: Long Term (Through Dec 2026)
Work Schedule: 8 hours/day (Mon-Fri)
Work Hours: 8am - 5pm CST
Location: 100% Remote (Candidates can work from anywhere in LATAM Countries)

Position Overview
We are seeking an experienced Data Engineer to design, develop, and maintain scalable data pipelines and analytics solutions using Azure Databricks. The ideal candidate will have strong expertise in cloud-based data engineering, distributed computing, and modern data architecture patterns. Our team operates in the US Central time zone.

Must have:
  • Azure Data Factory
  • Azure Databricks
  • Apache Spark
  • Python
  • SQL

Key Responsibilities
The Data Engineer will architect and implement end-to-end data solutions on the Databricks Unity Catalog platform. This includes designing and building ETL/ELT pipelines that ingest data from diverse sources, transform it according to business requirements, and deliver it to downstream consumers. The role requires developing and optimizing Apache Spark jobs for processing large-scale datasets efficiently, implementing data quality frameworks to ensure accuracy and reliability, and building reusable frameworks and libraries to accelerate development across the team.
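The "data quality frameworks" mentioned above usually amount to rule-based checks applied to each dataset before it is published. A minimal sketch in plain Python (using lists of dicts in place of Spark DataFrame rows, so it carries no Spark dependency; function and column names here are illustrative, not from any specific framework — in a Databricks pipeline the same rules would typically be expressed as PySpark column expressions or Delta Live Tables expectations):

```python
# Hypothetical rule-based data quality checks over rows-as-dicts.

def check_not_null(rows, column):
    """Return rows that violate a NOT NULL rule on `column`."""
    return [r for r in rows if r.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return rows whose `column` value falls outside [lo, hi]."""
    return [r for r in rows
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

def run_quality_checks(rows, rules):
    """Apply each named rule and collect its violating rows."""
    return {name: fn(rows) for name, fn in rules.items()}

rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},   # violates the not-null rule
    {"order_id": 3, "amount": -5.0},   # violates the range rule
]
rules = {
    "amount_not_null": lambda rs: check_not_null(rs, "amount"),
    "amount_positive": lambda rs: check_in_range(rs, "amount", 0.0, float("inf")),
}
violations = run_quality_checks(rows, rules)
```

The key design choice is separating the rule definitions from the runner, so new checks can be added per dataset without touching pipeline code.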
Collaboration is central to this position. The Data Engineer will work closely with product owners to understand requirements and deliver solutions that meet organizational needs. This includes translating business requirements into technical specifications, providing guidance on data architecture and best practices, and supporting analytics teams in accessing and utilizing data effectively.
Technical excellence and operational sustainability are expected. The Data Engineer will optimize query performance and resource utilization to control costs, implement comprehensive monitoring and alerting systems, maintain thorough documentation of data pipelines and processes, and ensure adherence to security policies and compliance requirements. The role also involves participating in code reviews and promoting engineering best practices throughout the data organization.

Required Qualifications
Candidates must possess a Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. A minimum of 3-5 years of experience in data engineering roles is required, with at least 2 years of hands-on experience with Azure Databricks and Apache Spark. Strong programming skills in Python are essential, along with proficiency in SQL and experience with relational and non-relational databases.
The position requires demonstrated experience building configuration-driven ETL pipelines and orchestrating them using tools such as Azure Data Factory, Databricks Workflows, or Apache Airflow. Candidates should have a solid understanding of data modeling concepts including dimensional modeling and Data Vault methodologies, experience with Delta Lake and medallion architecture patterns, and familiarity with Azure services including Azure Data Lake Storage, Azure Data Factory, Azure SQL Database, REST API, SFTP, and Azure Key Vault. Proficiency in Git for version control, including branching strategies, pull requests, and collaborative development workflows, along with CI/CD practices for data pipelines, is expected.
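"Configuration-driven" ETL means pipeline steps are declared as data (typically JSON or YAML) and dispatched to registered transforms, rather than hard-coded. A minimal sketch under that assumption, in plain Python (the step names `"filter"` and `"rename"` and the config shape are invented for illustration, not taken from any of the tools named above):

```python
# Hypothetical config-driven pipeline: steps declared as data,
# dispatched to transforms looked up in a registry.

TRANSFORMS = {}

def transform(name):
    """Register a transform function under a config-visible name."""
    def wrap(fn):
        TRANSFORMS[name] = fn
        return fn
    return wrap

@transform("filter")
def filter_rows(rows, column, keep_value):
    return [r for r in rows if r.get(column) == keep_value]

@transform("rename")
def rename_column(rows, old, new):
    return [{**{k: v for k, v in r.items() if k != old}, new: r[old]}
            for r in rows]

def run_pipeline(rows, config):
    """Apply each configured step in order."""
    for step in config["steps"]:
        rows = TRANSFORMS[step["op"]](rows, **step["args"])
    return rows

config = {
    "steps": [
        {"op": "filter", "args": {"column": "status", "keep_value": "active"}},
        {"op": "rename", "args": {"old": "status", "new": "account_status"}},
    ]
}
rows = [{"id": 1, "status": "active"}, {"id": 2, "status": "closed"}]
result = run_pipeline(rows, config)
```

Because the config is plain data, the same runner can execute many pipelines, and new pipelines become review-friendly config changes rather than code changes.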

Preferred Qualifications
Preference will be given to candidates holding Azure certifications such as Azure Data Engineer Associate or Databricks certifications. Experience with streaming data processing using Structured Streaming or Event Hubs, knowledge of infrastructure as code using Terraform/Terragrunt or ARM templates, and familiarity with data governance tools and practices are valued. Experience with Unity Catalog for data governance and understanding of data security and compliance frameworks round out the ideal candidate profile.
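The core idea behind the windowed aggregations used in Structured Streaming can be sketched without Spark: bucket each event by a fixed-size (tumbling) window of its event time and aggregate per bucket. A simplified simulation on a static list in plain Python (timestamps are in seconds; real streaming adds incremental state, triggers, and watermarks for late data, none of which is modeled here):

```python
# Hypothetical tumbling-window count over timestamped events.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed-size window, keyed by window start time."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

events = [(3, "a"), (7, "b"), (12, "c"), (14, "d")]
counts = tumbling_window_counts(events, window_seconds=10)
# events at t=3 and t=7 fall in window [0, 10); t=12 and t=14 in [10, 20)
```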

Technical Skills
- Azure Databricks and Apache Spark
- Python and PySpark
- SQL, particularly Spark SQL
- Azure Data Lake Storage (ADLS)
- Delta Lake and Lakehouse architecture
- Git and GitHub
- Data orchestration and workflow management
- Airflow and Databricks workflows
- Performance tuning and optimization
- Data quality and testing frameworks
- Familiarity with Azure Data Factory and Azure SQL DB/DW
