Data Engineer

Remote: Full Remote
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • Bachelor's degree in a related field preferred
  • 4-6 years of experience with Big Data technologies
  • Proficiency in Databricks and SQL
  • Strong programming skills in Python
  • Experience with BI tools such as Tableau

Key responsibilities:

  • Develop and maintain ETL/ELT pipelines
  • Optimize data workflows for large volumes
  • Build and manage data warehouses and lakes
  • Collaborate with analysts to ensure data quality
  • Implement data governance policies and compliance

Endpoint Clinical (Pharmaceuticals, SME)
https://www.endpointclinical.com/
501-1000 employees

Job description

About Us:

Endpoint is an interactive response technology (IRT®) systems and solutions provider that supports the life sciences industry. Since 2009, we have been working with a single vision in mind: to help sponsors and pharmaceutical companies achieve clinical trial success. Our solutions, realized through the proprietary PULSE® platform, have proven to optimize the supply chain, minimize operational costs, and ensure timely and accurate patient dosing. Endpoint is headquartered in Raleigh-Durham, North Carolina, with offices across the United States, Europe, and Asia.

Position Overview:
The Data Engineer plays a critical role in designing, implementing, and maintaining the data infrastructure that drives our business intelligence, analytics, and data science initiatives. In this role, the Data Engineer will work closely with cross-functional teams to ensure data is accurate, accessible, and optimized for various business needs. This position requires expertise in Databricks, SQL, Python, Spark, and other Big Data tools, with a strong emphasis on ELT/ETL processes. The engineer will collaborate with various stakeholders to ensure data quality and to build efficient, scalable data solutions.

Responsibilities:
  • Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and other big data technologies.
  • Optimize data workflows to handle large volumes of data efficiently.
  • Build and manage data warehouses and data lakes to store structured and unstructured data.
  • Utilize SQL, Python, and Spark for data extraction, transformation, and loading (ETL) processes.
  • Work closely with data analysts and data scientists to understand their data needs and ensure the availability of clean, reliable data.
  • Integrate data from various sources, ensuring consistency and accuracy across the data ecosystem.
  • Implement data quality checks to ensure data accuracy, completeness, and consistency.
  • Develop and enforce data governance policies and procedures to maintain high data quality standards.
  • Develop and support BI tools and dashboards, providing business insights and data-driven decision-making support.
  • Work with stakeholders to understand reporting requirements and deliver actionable insights.
  • Automate repetitive data processing tasks to improve efficiency and reduce manual work.
  • Continuously monitor and improve data pipeline performance, addressing bottlenecks and optimizing resources.
  • Document data processes, workflows, and architecture for future reference and knowledge sharing.
  • Ensure compliance with data security and privacy regulations such as GDPR and HIPAA.

Education:
  • Bachelor's degree in Computer Science, Software Engineering, Mathematics, or a related technical field is preferred.

Experience:
  • 4-6 years of technical experience with a strong focus on Big Data technologies in any of these areas: software engineering, integrations, data warehousing, data analysis, or business intelligence, preferably at a technology or biotech/pharma company.
  • Proficiency in Databricks for data engineering tasks.
  • Advanced knowledge of SQL for complex queries, data manipulation, and performance tuning.
  • Strong programming skills in Python for scripting and automation.
  • Experience with Big Data tools (e.g., Spark, Hadoop) and data processing frameworks.
  • Familiarity with BI tools (e.g., Tableau, Power BI) and experience in developing dashboards and reports.
  • Experience with cloud platforms and tools such as Azure Data Factory (ADF) or Databricks.
  • Familiarity with data modeling and data architecture design.
  • Understanding of machine learning concepts and their application in data engineering.

Skills:
  • Keen attention to detail and bias for action.
  • Excellent organizational skills and proven ability to multi-task.
  • Ability to influence without authority and lead successful teams.
  • Strong interpersonal skills with the ability to work effectively with a wide variety of professionals.
  • Ability to lead back-end data initiatives – identify data sources, write scripts, transfer and transform data, and automate processes.
  • Ability to understand source data, its strengths, weaknesses, semantics, and formats.
  • Excellent knowledge of logical and physical data modeling concepts (relational and dimensional).

Endpoint Clinical does not accept unsolicited resumes from search firms or any other third parties. Any unsolicited resume sent to Endpoint Clinical will be considered Endpoint Clinical property, and Endpoint Clinical will not pay a fee should it hire the subject of any unsolicited resume.

Endpoint Clinical is an equal opportunity employer (AA/M/F/Veteran/Disability).

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Pharmaceuticals
Spoken language(s): English

Other Skills

  • Organizational Skills
  • Influencing Skills
  • Social Skills
  • Leadership Development
  • Detail Oriented
