Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Proficiency in PySpark and Spark SQL for data processing.
  • Experience with Databricks and knowledge of Delta Live Tables for automated ETL.
  • Familiarity with Azure Data Lake Storage and orchestration tools like Apache Airflow.
  • At least one year of experience with Terraform and good practices of GitOps.

Key responsibilities:

  • Analyze user problems and maintain communication with Data Architect and Project Manager.
  • Design and implement data pipelines and infrastructure while following best practices.
  • Define, execute, and document functional and technical tests in collaboration with the Project Manager.
  • Participate in Deployment Reviews and monitor post-deployment behavior to ensure proper strategies are used.

Syffer
2 - 10 Employees

Job description

Syffer is an all-inclusive consulting company focused on talent, tech and innovation. We exist to elevate companies and humans around the world, making change from the inside out.

We believe that technology + human kindness positively impacts every community around the world. Our approach is simple: we see a world without borders and believe in equal opportunities. We are guided by our core principles of spreading positivity and good energy, promoting equality, and caring for others.

Our hiring process is unique! People are selected for their value, education, talent and personality. We don't consider ethnicity, religion, national origin, age, gender, sexual orientation or identity.

It's time to burst the bubble, and we will do it together!

What You'll do:

- Analyze user problems, ensure clear understanding of architecture, and maintain open communication with Data Architect, peers, and Project Manager;

- Design and implement data pipelines and infrastructure (e.g., with Terraform), follow data best practices, and manage interface contracts with version control and code reviews (a minimal pipeline sketch follows this list);

- Apply strong knowledge of data warehousing, ETL/ELT processes, data lakes, and modeling throughout development;

- Define, execute, and document functional and technical tests in collaboration with the Project Manager, sharing regular updates on results;

- Participate in Deployment Reviews, monitor post-deployment behavior, log errors, and ensure proper use of deployment and monitoring strategies.
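
For illustration only, a minimal sketch of the kind of pipeline described above, assuming a hypothetical Azure Data Lake Storage account, container layout, and schema (none of which come from this posting):

```python
# Minimal PySpark sketch: read raw files from Azure Data Lake Storage,
# apply basic cleaning, and write a partitioned Delta table.
# The storage account, containers, and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")  # partitioning supports lifecycle management on cloud storage
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```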


What You Are:

Proficiency with PySpark and Spark SQL for data processing.

Experience with Databricks using Unity Catalog.

Knowledge of Delta Live Tables (DLT) for automated ETL and workflow orchestration in Databricks.

Familiarity with Azure Data Lake Storage.

Experience with orchestration tools (e.g., Apache Airflow or similar) for building and scheduling ETL/ELT pipelines (a minimal Airflow sketch follows this list).

Knowledge of data partitioning and data lifecycle management on cloud-based storage.

Familiarity with implementing data security and data privacy practices in a cloud environment.

Terraform: At least one year of experience with Terraform and knowledge of GitOps good practices.

Additional knowledge and experience that are a plus: Databricks Asset Bundles, Kubernetes, Apache Kafka, Vault.
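
As a point of reference for the orchestration requirement, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+; the DAG id, schedule, and task body are hypothetical, not part of this posting):

```python
# Minimal Airflow sketch: one daily task standing in for an ETL/ELT step.
# The DAG id and the callable are hypothetical placeholders; in practice the
# task might trigger a Databricks job or a PySpark step instead.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    print("running daily ETL step")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```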

What you'll get:

- Wage according to candidate's professional experience;

- Remote Work whenever possible;

- Health insurance from the start of employment;

- Work equipment suited to the role;

- And others.

Work with expert teams on large-scale, long-term, high-intensity projects alongside our clients, all leaders in their industries.

Are you ready to step into a diverse and inclusive world with us?

Together we will promote uniqueness!

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Teamwork
  • Communication
