Databricks

Work set-up: Full Remote

Offer summary

Qualifications:

  • Proficiency in Python and PySpark for large-scale data processing.
  • Experience with the Databricks platform and its tools.
  • Strong scripting skills and understanding of Apache Spark architecture.
  • Educational background in computer science, data science, or a related field.

Key responsibilities:

  • Develop and maintain data processing workflows using PySpark and Databricks.
  • Collaborate with data scientists and engineers to implement AI functionalities.
  • Optimize data pipelines for performance and scalability.
  • Support data analysis and reporting tasks within the team.

Elfonze Technologies (Scaleup): https://www.elfonze.com/
201 - 500 Employees

Job description

This is a remote position.

Python & PySpark
  • PySpark is a Python API for Apache Spark, enabling large-scale data processing.

  • Python is essential for scripting and for integrating with Spark, especially when using libraries such as pyspark-ai to add AI functionality to Spark workflows.

Databricks
  • Databricks provides a unified analytics platform that integrates with Apache Spark.

  • It supports Python and PySpark for data processing and offers tools for deploying machine learning models.

  • The English SDK for Apache Spark allows natural-language interaction with Spark DataFrames, facilitating tasks like data transformation and analysis.
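As a hedged illustration of the English SDK workflow: `SparkAI`, `activate()`, and the `df.ai` namespace come from the pyspark-ai project, but this sketch is not runnable as-is because the SDK needs an LLM backend (e.g. an OpenAI API key) to generate queries.

```python
# Illustrative sketch only: requires pyspark-ai plus LLM credentials.
from pyspark_ai import SparkAI

spark_ai = SparkAI()
spark_ai.activate()  # attaches the .ai namespace to Spark DataFrames

# df is assumed to be an existing Spark DataFrame of order records.
result_df = df.ai.transform("total quantity sold per product")
result_df.ai.explain()  # ask the SDK to explain the generated query
```

The plain-English request is translated by the SDK into a regular Spark transformation, so the result is an ordinary DataFrame that fits into existing pipelines.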



Required profile

Experience

Spoken language(s):
English
