Data Engineer III

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

Over 5 years of Data Engineering experience; proficient in Python or Java.

Key responsibilities:

  • Design and build data pipelines
  • Ensure data accuracy and security
MRSOOL Scaleup https://mrsool.co/
201 - 500 Employees

Job description

Who are we❓

Welcome to the world of Mrsool! 🌍✨ Where on-demand delivery meets unparalleled user needs to deliver anything you desire. As one of the largest delivery platforms in the Middle East and North Africa (MENA) region, Mrsool has captivated users with its unique and seamless experience, earning it the highest ratings among all major delivery platforms on both Apple's App Store and Google's Play Store. 🌟📲

What sets Mrsool apart is its commitment to providing an unmatched "order anything from anywhere" experience. 🌐📦 This extraordinary feat is made possible by our extensive fleet of dedicated on-demand couriers. With their unwavering dedication, they ensure that your desired items reach your doorstep, no matter where you are. 🚗🛵

Whether it's a late-night craving, a forgotten item, or a special gift for a loved one, Mrsool is here to deliver, quite literally. 😋🎁 We take pride in the convenience we offer, empowering you to get what you need when you need it, all at the tap of a button. 💪💫

The Job in a Nutshell💡

We are actively seeking a highly motivated and skilled Senior Data Engineer to establish robust foundational data infrastructures.

Your mission is to develop and deliver data solutions essential for empowering data scientists, analysts, and Product Managers across the company to gain a deeper understanding of Mrsool's business.

In this role, you will take charge of designing and implementing data architectures, creating systems for moving and transforming data, and ensuring its accuracy. You'll collaborate closely with cross-functional teams to bolster our business objectives and offer insights and recommendations for data-driven decision-making.

This role presents a unique opportunity to be a crucial part of enhancing data-driven decision-making processes within the organization and building data intensive applications.

If you're eager to take on this rewarding opportunity, we’d love to hear from you. Apply today!

What You Will Do💡
    • Find the best cloud solutions for data storage, processing, and orchestration while keeping costs in check.
    • Design, build, and deploy batch and real-time data pipelines, storage, and model schemas, leveraging performance tuning techniques, conceptual schemas, and modern technology.
    • Develop and maintain data models to support efficient data retrieval and analysis. Create and optimize data marts for various business units.
    • Work closely with product managers, data analysts, and other business stakeholders to gather requirements and ensure alignment with data strategies. Collaborate with backend engineers to integrate data solutions into applications.
    • Design, develop, and maintain critical data infrastructure, datasets, and pipelines.
    • Ensure data is stored safely and securely, adhering to frequently changing regulations (e.g. GDPR) and best practices for user data storage and security.
    • Take ownership of critical data pipelines, manage their SLAs, and constantly improve pipeline efficiency and data quality.
    • Facilitate data integration and transformation requirements for moving data between applications, ensuring interoperability with database, data warehouse, and data mart environments.
    • Assist in designing and managing the technology stack used for data storage and processing.

Requirements

What are We Looking For❓
  • Possess over 5 years of hands-on experience in Data Engineering, specializing in the development of scalable storage solutions and robust schema layers.
  • Proficient in a programming language such as Python or Java, along with their respective standard data processing libraries.
  • Demonstrated expertise in building and troubleshooting data pipelines using distributed data frameworks such as Apache Spark and Apache Flink.
  • Extensive background working with relational databases (e.g., AWS RDS, Aurora), adept in SQL and data warehousing, and proficient in designing ETL/streaming pipelines.
  • Proven track record of integrating data from core platforms into a centralized warehouse or data lake.
  • Adhere to rigorous standards in code quality, implement automated testing, and champion other engineering best practices.
  • Well-versed in establishing secure systems and access models for handling highly sensitive data.
  • Exhibit strong cross-functional communication skills, proficient in extracting requirements, and skilled in architecting shared datasets.
  • Possess a genuine passion for creating exceptional tools that provide a delightful user experience.

Benefits

What We Offer You❗

    • Inclusive and Diverse Environment: We foster an inclusive and diverse workplace that values innovation and provides flexibility. Whether you prefer remote, in-office, or hybrid work arrangements, we accommodate your needs.
    • Competitive Compensation: Our compensation packages are competitive and include potential share options for certain roles. 
    • Personal Growth and Development: We are committed to your professional development, offering regular training and an annual learning stipend to help you advance your career in a fast-paced, dynamic environment.
    • Autonomy and Mentorship: You’ll enjoy a degree of autonomy in your role, supported by mentorship and ambitious goals that drive both your personal success and the company's growth.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Lateral Communication
