πŸ‡΅πŸ‡± Data Engineer

Work set-up: 
Full Remote
Salary: 
15 - 20K
Experience: 
Senior (5-10 years)

Offer summary

Qualifications:

  • At least 3 years of experience as a Data Engineer
  • Hands-on experience with AWS services such as Redshift, S3, Lambda, and DynamoDB
  • Proficiency in SQL, data pipeline development, and schema design
  • Knowledge of Python for building microservices and APIs

Key responsibilities:

  • Develop and maintain Python-based backend services and microservices.
  • Design and optimize cloud-native applications using AWS tools.
  • Automate tasks and workflows using Python scripts and AWS orchestration.
  • Build data access layers and integrate data across systems.

edrone
51 - 200 Employees

Job description

We're a hardworking, fun-loving, get-things-done type of team dedicated to providing unique marketing automation solutions for our clients. We understand the challenges of eCommerce and the importance of seamless customer service and satisfaction. We roll up our sleeves, act fast, and learn together. We're looking for a Data Engineer who will do the same! πŸš€

Sounds interesting? Keep on reading!

πŸš€ What’s in it for you:

  • Be part of a small, fast-paced team that values innovation, efficiency, and a positive work culture. We thrive on challenges, embrace change, and keep things moving.

  • We value initiative and ownership: if something makes sense, we act on it quickly and take full responsibility for delivering it.

  • Direct responsibility for projects, regular 1:1s with your leader, blameless postmortems, and code reviews

  • B2B contract (15-20k) & coverage of all accounting service costs

  • Hybrid or remote work, or a modern, well-equipped office – whatever you prefer!

  • 26 paid days off so you can relax properly!

  • Benefits: MultiSport card, LuxMed medical package, group accident insurance, English and Portuguese classes, and Hedepy, a portal for mental health and development


πŸš€ How you will spend your time:

Backend System Development

  • Design, build, and maintain robust Python-based services and microservices
  • Develop and optimize RESTful APIs and background services supporting core business logic and integrations
  • Ensure code quality, reusability, and scalability through modular design and adherence to best practices

Cloud-Native Application Engineering

  • Develop serverless and containerized applications using AWS Lambda, ECS, and other cloud-native tools
  • Leverage AWS services (S3, RDS, DynamoDB, Step Functions, etc.) to support backend operations and workflows
  • Collaborate with DevOps to provision, deploy, and monitor cloud infrastructure

Automation and Task Orchestration

  • Automate recurring tasks, background jobs, and workflows using Python scripts and AWS orchestration tools
  • Build and maintain task schedulers and asynchronous workers for time-sensitive operations
  • Implement monitoring, logging, and alerting systems for observability and proactive issue resolution

Data Access and Integration

  • Build data access layers and connectors for interfacing with relational and NoSQL databases
  • Develop data integration scripts or services to move and sync data between systems when needed
  • Write efficient, production-grade SQL and Python code to support internal tools and services

Contribute to Innovation and Excellence

  • Stay informed on modern Python practices, libraries, and AWS developments
  • Take initiative in proposing improvements and new ideas to enhance our platform

πŸš€ Who you are:

  • 3+ years of experience as a Data Engineer
  • Hands-on experience with schema design, complex SQL query optimization, and running data pipelines in production
  • Experienced with AWS services (Redshift, Aurora, DynamoDB, S3, Glue, Lambda, Step Functions, etc.) for building data pipelines and scalable cloud-native applications
  • dbt experience (or a strong SQL/ELT background and eagerness to learn dbt quickly)
  • Familiarity with data orchestration tools (e.g., Airflow, Step Functions): scheduling, monitoring, and troubleshooting data pipelines
  • Ability to build and maintain RESTful APIs/microservices in Python (e.g., FastAPI, Flask) and an understanding of basic backend architecture
πŸš€ It’s nice if you have:

  • Experience in Java

                                    πŸ“How does the recruitment process look like:

                                    1. A 30minute phone interview with the People and Culture Partner Milena Micor, where we aim to get to know you a little better!
                                    2. A technical online interview with the Data Team Lead Krystian Andruszek and another panelist.
                                    3. 30minute conversation with our CTO Maciej Mendrela – to align on vision, culture and expectations.
                                    4. Decision regarding the offer and welcome on board! ⭐

                                        • After each stage, you will always receive feedback regarding your candidacy.

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English
Check out the description to know which languages are mandatory.

Other Skills

  • Teamwork
  • Problem Solving
