Requirements summary:
Bachelor's degree in Computer Science or a related field such as physics or mathematics.
At least 5 years of experience with Spark, Python, Java, C++, or Scala development.
Over 5 years of SQL experience and expertise in schema design and data modeling.
Experience with large-scale data processing platforms like Databricks and data lake architectures.
Key responsibilities:
Design and develop scalable analytics data pipelines and models.
Collaborate with teams to define data assets and architecture strategies.
Implement data quality frameworks and evaluate data tools for lineage and integration.
Support operational stability through occasional on-call work.
Dropbox is the one place to keep life organized and keep work moving. With more than 700 million registered users across 180 countries, we're on a mission to design a more enlightened way of working. Dropbox is headquartered in San Francisco, CA, and has offices around the world.
To learn more about working at Dropbox, visit dropbox.com/jobs
In this role you will build large, scalable analytics pipelines using modern data technologies. This is not a “maintain existing platform” or “make minor tweaks to current code base” kind of role. We are effectively building from the ground up and plan to leverage the most recent Big Data technologies. If you enjoy building new things without being constrained by technical debt, this is the job for you!
Our Engineering Career Framework is viewable by anyone outside the company and describes what’s expected for our engineers at each of our career levels. Check out our blog post on this topic and more here.
Responsibilities
Help define company data assets (data models) and the Spark, SparkSQL, and HiveSQL jobs that populate them
Help define and design data integrations and data quality frameworks, and evaluate open-source and vendor tools for data lineage
Work closely with Dropbox business units and engineering teams to develop a long-term Data Platform architecture strategy that is efficient, reliable, and scalable
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
On-call work may be necessary occasionally to help address bugs, outages, or other operational issues, with the goal of maintaining a stable and high-quality experience for our customers.
Requirements
5+ years of Spark, Python, Java, C++, or Scala development experience
5+ years of SQL experience
5+ years of experience with schema design, dimensional data modeling, and medallion architectures
Experience with the Databricks platform and data lake architectures for large-scale data processing and analytics
Excellent product-strategy thinking and communication skills, with the ability to influence product and cross-functional teams by identifying data opportunities that drive impact
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience
Experience designing, building and maintaining data processing systems
Preferred Qualifications
7+ years of SQL experience
7+ years of experience with schema design, dimensional data modeling, and medallion architectures
Experience with Airflow or other similar orchestration frameworks
Experience building data quality monitoring using Monte Carlo or similar tools
Compensation
Poland Pay Range
183 600 zł – 248 400 zł (PLN)
Required profile
Experience
Level of experience: Senior (5-10 years)
Spoken language(s):
English