Profasee Senior Python Engineer (Data Pipelines & APIs)

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • Proven experience with Python programming (5+ years).
  • Strong knowledge of SQL and experience with relational databases like PostgreSQL or MySQL.
  • Experience in building and maintaining data pipelines, APIs, and ETL processes.
  • Excellent communication skills and ability to work effectively in a remote, dynamic team.

Key responsibilities:

  • Develop and maintain data pipelines, storage systems, and APIs for data access.
  • Collaborate with data science and ML teams to provide clean and usable data.
  • Monitor and optimize pipeline performance, scalability, and error handling.
  • Support ML engineering efforts with data engineering expertise.

Silver.dev Information Technology & Services Startup https://silver.dev/
2 - 10 Employees

Job description

Architect the Pipes. Fuel the Models. Move the Market.

Location: Remote-first (strong, reliable internet required)

Team: Data Engineering & ML (reporting to CTO)

What we are looking for

We are looking for an entrepreneurial, business-minded Python Engineer to help us disrupt the ecommerce industry. We believe it is time for technology and data to help merchants deliver the best pricing to their customers, no matter what. This is an opportunity to join a startup led by a proven founder in an industry that is growing exponentially, to work alongside a small and focused Data Science (DS) & Machine Learning (ML) Engineering team, and to work on something that is not only exciting and fun, but also creative in ways that most people will never experience.

What you will do

You will work side by side with ecommerce experts and the DS/ML team to build, test, and maintain systems that collect, manage, and convert raw data into usable information for the DS/ML team to interpret. You will be responsible for building and maintaining data pipelines, storage systems, and APIs for accessing the data. You will also support the ML engineering team with data engineering expertise.

The ideal candidate writes well-tested code that is easy to read and understand, communicates effectively with team members, asks for help and feedback when needed, and shares knowledge.

  • Write code and tests for pulling data from 3rd-party integrations and loading it into a structured database for each data source (ETL).

  • Provide API access to the data collected from 3rd parties.

  • Analyze data sources to help decide which are useful for the DS/ML team.

  • Maintain the data pipeline architecture, automation, scalability, and error handling.

  • Monitor pipeline performance and stability.
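
As a rough illustration of the ETL responsibilities above, here is a minimal extract/transform/load sketch in Python. The payload, the field names (`sku`, `price`), and the in-memory SQLite store are all hypothetical stand-ins for a real 3rd-party API response and a production database such as PostgreSQL:

```python
import json
import sqlite3

def extract(raw_payload: str) -> list[dict]:
    """Parse the raw JSON payload pulled from a (hypothetical) 3rd-party API."""
    return json.loads(raw_payload)

def transform(records: list[dict]) -> list[tuple]:
    """Normalize records into typed rows for the structured store."""
    return [(r["sku"], float(r["price"])) for r in records if "sku" in r]

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Upsert rows into the per-source table; return the total row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS products (sku TEXT PRIMARY KEY, price REAL)")
    conn.executemany("INSERT OR REPLACE INTO products VALUES (?, ?)", rows)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]

# Stubbed payload standing in for a real API response.
payload = '[{"sku": "A1", "price": "19.99"}, {"sku": "B2", "price": "5.00"}]'
conn = sqlite3.connect(":memory:")
count = load(transform(extract(payload)), conn)
print(count)  # → 2
```

In a real pipeline each stage would also log failures and surface metrics, which is what the monitoring responsibility above refers to.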

Who you are

  • You enjoy building the tools and systems for delivering a production product.

  • You are comfortable in a small, dynamic team, where you will be designing and leading initiatives.

  • You are self-motivated and comfortable working in a 100% remote environment.

  • You can balance the need to move quickly with the desire to build for the future.

  • You are willing to bring your expertise to the table and willing to let others bring their expertise as well.

Requirements

  • 5+ years of software development experience

  • Proficient with the Python programming language

  • Proficient with SQL and relational databases (PostgreSQL, MySQL, etc.)

  • Proficient at integrating with APIs, including data formats (JSON, CSV, XML, etc.), rate limiting, and error handling

  • Experience with distributed task queues (e.g., Celery), multiprocessing, and job scheduling

  • Understanding of the infrastructure used to build a production application (e.g., NGINX, RabbitMQ, error and performance monitoring tools, AWS services such as ECS/EKS)

  • Strong communication skills, especially with non-software developers

  • Experience with containerization (Docker/Kubernetes)

  • Experience designing and delivering the architecture required to run code in production

  • Experience with ETL pipelines
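
The "rate limiting and error handling" requirement often comes down to patterns like retry with exponential backoff when a 3rd-party API rejects a call. A minimal sketch, assuming a `flaky_fetch` stub in place of a real HTTP client and treating `RuntimeError` as a stand-in for a rate-limit error:

```python
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 0.01):
    """Retry fn() with exponential backoff; re-raise after max_attempts failures."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # wait 1x, 2x, 4x, ...

# Stub standing in for a rate-limited 3rd-party endpoint: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return {"status": "ok"}

result = call_with_retries(flaky_fetch)
print(result)  # → {'status': 'ok'}
```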

Good to have

  • Experience working in early-stage startups and/or developing your own projects

  • Experience with NoSQL and non-relational databases

  • Experience with machine learning and data science

  • Experience with distributed computing frameworks such as Apache Spark, Apache Iceberg, or Hadoop

  • Experience with data orchestration tools such as Apache Airflow, Prefect, or Luigi

Interview Process

  • Silver Screening Interview

  • Silver Technical Interview

  • Logic Quiz

  • Client Screening Interview

  • Client Live Coding Interview

  • Client Behavioral Interview

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Self-Motivation
  • Teamwork
  • Communication
