
Backend Engineer, Optimized Checkout & Link Data Engineering

Extra holidays · Fully flexible · 4-day week
Remote: Full Remote
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • Bachelor's degree in Computer Science or Engineering; Master's degree preferred
  • 5+ years of experience with data pipelines (Hadoop/Spark/Pig)
  • Strong SQL proficiency
  • Scala/Java coding skills
  • Knowledge of systems such as Hadoop, Spark, Presto, and Airflow
  • Experience with AWS cloud is a plus

Key responsibilities:

  • Conceptualize data architecture for large-scale projects
  • Ensure data quality and excellence
  • Optimize data pipelines and systems
  • Conduct SQL data investigations and optimizations
  • Mentor team members, facilitate code reviews
Stripe · Information Technology & Services · Large
https://stripe.com/
1001 - 5000 employees
HQ: San Francisco

Job description


Who we are
About Stripe

Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.

About the team

The Optimized Checkout & Link team at Stripe builds best-in-class checkout experiences across web and mobile that delight consumers and streamline checkout flows for merchants. Based across North America, we're a diverse team deeply passionate about redefining the payment experience: creating outstanding value for merchants, increasing revenue, lowering costs, and growing their businesses. We work on Checkout, Payment Links, Elements, Payment Methods, and Link, each playing a crucial part in augmenting the economic landscape of the internet. Our days are filled with exciting challenges and collaborative problem-solving as we strive to simplify payment options, create unique business solutions, and enhance checkout ease. Join us in crafting the future of digital commerce.

What you’ll do

We’re looking for people with a strong background in data engineering and analytics to help us scale while maintaining correct and complete data.

Responsibilities
  • Conceptualize and own the data architecture for multiple large-scale projects, evaluating design and operational cost-benefit tradeoffs within systems
  • Advocate for data quality and excellence across our platform
  • Create and contribute to frameworks that improve the efficacy of logging data, working with data infrastructure to triage and resolve issues
  • Gather requirements, understand the big picture, and create detailed proposals in technical specification documents
  • Productize data ingestion from various sources and data delivery to various destinations, creating well-orchestrated data pipelines
  • Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
  • Conduct SQL data investigations, data quality analyses, and optimizations
  • Contribute to peer code reviews and help the team produce high-quality code
  • Mentor team members by giving and receiving actionable feedback
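To give a flavor of the SQL data investigations and data-quality checks described above, here is a loose, hypothetical sketch. It uses Python's standard-library sqlite3 with invented table and column names (checkout_events, amount_usd); it is an illustration of the kind of check, not Stripe's actual stack or data:

```python
import sqlite3

# Hypothetical example data (names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE checkout_events (
        event_id   INTEGER PRIMARY KEY,
        merchant   TEXT,
        amount_usd REAL
    );
    INSERT INTO checkout_events VALUES
        (1, 'acme',   19.99),
        (2, 'acme',   NULL),
        (3, 'globex', 5.00);
""")

# Check 1: null rate for a required column (NULL amount is a quality defect).
null_rate = conn.execute(
    "SELECT AVG(amount_usd IS NULL) FROM checkout_events"
).fetchone()[0]

# Check 2: duplicate event IDs (should be zero for a key column).
dupes = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT event_id FROM checkout_events
        GROUP BY event_id HAVING COUNT(*) > 1
    )
""").fetchone()[0]

print(f"null_rate={null_rate:.2f} dupes={dupes}")
```

In production such checks would typically run inside a pipeline framework (e.g., as Airflow tasks over Spark/Presto tables) rather than against SQLite.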
Who you are

We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements
  • Bachelor's degree in Computer Science or Engineering; Master's degree preferred
  • A strong engineering background and an interest in data
  • 5+ years of experience writing and debugging data pipelines using a distributed data framework (e.g., Hadoop, Spark, Pig)
  • Strong data modeling and database design skills, both relational and non-relational
  • Very strong SQL proficiency, preferably with SQL query optimization experience
  • Strong coding skills in Scala or Java, preferably for building performant data pipelines
  • Strong understanding of and practical experience with systems such as Hadoop, Spark, Presto, Iceberg, and Airflow
  • Well versed in software production engineering practices: version control, peer code reviews, automated testing, and CI/CD
  • Excellent communication skills
  • Experience with AWS cloud is preferred
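As a small, hypothetical illustration of the SQL query optimization skill listed above: inspecting a query plan before and after adding an index. This sketch uses SQLite's EXPLAIN QUERY PLAN via Python's standard-library sqlite3, with invented table and index names; the same workflow applies (with different syntax) to engines like Presto or Spark SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, merchant TEXT, amount REAL)")

query = "SELECT * FROM payments WHERE merchant = 'acme'"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Add an index on the filtered column, then re-check the plan.
conn.execute("CREATE INDEX idx_payments_merchant ON payments(merchant)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before)  # full table scan
print(plan_after)   # search using idx_payments_merchant
```

Each plan row's last column is a human-readable description; after the index is created it names idx_payments_merchant instead of a table scan.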

 

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Soft Skills

  • Mentoring
  • Excellent Communication

