DataOps Engineer

EXTRA HOLIDAYS - EXTRA PARENTAL LEAVE - FULLY FLEXIBLE
Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

  • Proficiency in Python and PySpark
  • 3+ years of experience with Apache Spark
  • Experience automating and orchestrating data workflows with tools such as Apache Airflow or AWS Step Functions

Key responsibilities:

  • Develop tools for automating and streamlining data processing workflows
  • Ingest raw data and build the global data core for life sciences
  • Design and implement the data assembly line to derive insights faster and with fewer errors
Veeva Systems (Computer Software / SaaS, SME) - https://www.veeva.com/
5001 - 10000 employees

Job description


Your mission

Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.

At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company – we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.

As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.

Join us in transforming the life sciences industry, with a commitment to making a positive impact on customers, employees, and communities.

The Role

Veeva OpenData supports the industry by providing real-time reference data across the complete healthcare ecosystem to support commercial sales execution, compliance, and business analytics. We deliver value to our customers through constant innovation, using cloud-based solutions and state-of-the-art technologies to achieve product excellence and customer success. The OpenData Global Data Tools team delivers the tools and data processing pipelines that build the global data core for life sciences in 100+ countries.

As a DataOps Engineer on the Global Data Tools team, you will design the data assembly line that makes it possible to derive insights from data faster and with fewer errors. You will be responsible for creating the tools and processes used to store, manage, and process all compiled data to build the OpenData Reference.
What You'll Do
    • Build DataOps tools to automate and streamline data processing workflows (e.g., reusable software libraries, tools to orchestrate data processing tasks and their dependencies, and components to enable CI/CD integrations)
    • Adopt solutions and tools that adhere to the DataOps best practices
    • Continually strive to reduce wasted effort, identify gaps and correct them, and improve data development and deployment processes
    • Develop the ingestion pipelines for raw data (a sketch of this kind of step follows this list)
    • Put in place the building blocks to deliver the data core for life sciences
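
For illustration only, here is a minimal sketch of the kind of ingestion step this role involves. It is not Veeva's actual code: it assumes PySpark with the delta-spark package, and every path, column, and table location in it is hypothetical.

    # Hypothetical raw-data ingestion step: read raw CSV files, apply light
    # normalization, and land the result as a Delta table. All paths and
    # column names are illustrative, not Veeva's.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("raw-ingestion-sketch")
        # Delta support assumes the delta-spark package is installed.
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Read the raw files with a header row; the bucket path is made up.
    raw = (
        spark.read
        .option("header", "true")
        .csv("s3://example-bucket/raw/records/")
    )

    # Stamp ingestion time and drop duplicates on a hypothetical key column.
    cleaned = (
        raw.withColumn("ingested_at", F.current_timestamp())
           .dropDuplicates(["record_id"])
    )

    # Append into a Delta table so downstream steps get ACID guarantees,
    # matching the Delta Lake requirement listed below.
    cleaned.write.format("delta").mode("append").save(
        "s3://example-bucket/core/records/"
    )

In a real pipeline, a step like this would run as one task in an orchestrated workflow, as in the next sketch.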
Requirements
    • Proficient in Python and PySpark
    • 3+ years of experience working with Apache Spark
    • Previous experience building tools and libraries to automate and streamline data processing workflows
    • Experience running data workflows through DevOps pipelines
    • Experience orchestrating data workflows using state-of-the-art tools (e.g., Airflow, AWS Step Functions, or similar from other cloud vendors) and spawning jobs in a cloud-managed Spark cluster (e.g., EMR, Databricks); see the sketch after this list
    • Experience with the Delta Lake architecture and the Delta table format
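
As a companion to the orchestration requirement above, here is a minimal Airflow sketch, again hypothetical rather than Veeva's setup. It assumes Airflow 2.4+ with the apache-airflow-providers-apache-spark package and a configured spark_default connection; all DAG, task, and script names are made up.

    # Hypothetical DAG: two dependent Spark jobs submitted in sequence,
    # mirroring the ingest-then-build flow described in this posting.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import (
        SparkSubmitOperator,
    )

    with DAG(
        dag_id="data_core_sketch",        # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+ argument name
        catchup=False,
    ) as dag:
        ingest_raw = SparkSubmitOperator(
            task_id="ingest_raw",
            application="jobs/ingest_raw.py",  # hypothetical PySpark script
            conn_id="spark_default",
        )

        build_core = SparkSubmitOperator(
            task_id="build_core",
            application="jobs/build_core.py",  # hypothetical PySpark script
            conn_id="spark_default",
        )

        # Airflow runs build_core only after ingest_raw succeeds.
        ingest_raw >> build_core

The same dependency graph could equally be expressed as an AWS Step Functions state machine; Airflow appears here only because the posting names it first.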
Nice to Have
    • Hands-on experience using DevOps tools to deploy and administer clusters in a managed Apache Spark platform in the cloud (e.g., Databricks, AWS EMR)
    • Previous experience with the Scala or Kotlin programming languages
    • Experience with Amazon Redshift
    • Previous experience in the Life Sciences sector
Perks & Benefits
    • Benefits package including Restricted Stock Units (RSUs), family health insurance, and contributions to private pension plans
    • Annual allocations for continuous learning, development & charitable contributions
    • Fitness reimbursement
    • Work anywhere
#RemotePortugal

Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.

Veeva is committed to fostering a culture of inclusion and growing a diverse workforce. Diversity makes us stronger. It comes in many forms. Gender, race, ethnicity, religion, politics, sexual orientation, age, disability and life experience shape us all into unique individuals. We value people for the individuals they are and the contributions they can bring to our teams.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Industry: Computer Software / SaaS
