
Senior Analytics Engineer

Remote: Full Remote
Experience: Senior (5-10 years)
Work from: United States (all time zones)

Offer summary

Qualifications:

  • 5+ years of experience in analytics engineering or similar roles
  • Expert-level SQL and strong Python skills
  • Bachelor's degree in Computer Science or a related field
  • Experience with dbt and modern data warehouses

Key responsibilities:

  • Design and implement data warehouse architecture
  • Build and maintain data models and ETL pipelines
Acryl Data · Startup · 11-50 employees
https://www.acryldata.io/sign-up

Job description

We're seeking our first Senior Analytics Engineer to establish and lead our data infrastructure and analytics practice. This is a unique opportunity to shape our data strategy from the ground up and build scalable foundations that will support our rapid growth. As the first hire in this role, you'll have significant autonomy in architectural decisions and establishing best practices.

About Us

Acryl Data is the company behind DataHub, the leading open-source metadata platform. Originally developed at LinkedIn, DataHub has grown into the largest open-source metadata community with over 12,000 data practitioners and deployments across 3,000+ organizations worldwide.

Through our flagship product DataHub Cloud, we provide enterprise-grade data catalog and observability solutions that enable seamless data discovery, robust data observability, and federated governance across an organization's entire data ecosystem. Our customers range from innovative startups to Fortune 10 companies, all benefiting from our expertise in bringing clarity and control to complex data environments.

Founded by the original creators of LinkedIn DataHub and Airbnb Dataportal, and backed by top-tier venture capital firms, Acryl Data combines deep technical expertise with a proven track record in building industry-leading data platforms.

Role Overview

As the Senior Analytics Engineer, you'll architect our entire data stack, establish data modeling frameworks, and build reliable data pipelines that will serve as the backbone of both our internal analytics and customer-facing data products. You'll work directly with our leadership team to drive product analytics initiatives and create scalable data resources that deliver value to our customers. This role combines strategic thinking with hands-on implementation, requiring someone who can both design the big picture and execute on the details while balancing the needs of internal stakeholders and external customers.

Key Responsibilities
  • Design and implement our company's first data warehouse architecture and modeling framework with a focus on product usage analytics, user behavior patterns, and engagement metrics
  • Establish best practices for data modeling, testing, and documentation that will scale with our growth
  • Build and maintain our initial suite of data models and ETL pipelines using modern data stack technologies
  • Create foundational data documentation and governance standards
  • Partner directly with leadership to understand business requirements and translate them into technical solutions
  • Drive data literacy across the organization through education and enablement
Required Qualifications
  • 5+ years of experience in analytics engineering or similar technical roles
  • Deep expertise in data modeling design and implementation, with proven ability to architect complex data models from scratch
  • Extensive experience setting up and optimizing data warehouses and transformation layers
  • Expert-level SQL skills and advanced knowledge of data modeling techniques (dimensional modeling, data vault)
  • Strong proficiency with dbt (data build tool) and modern data warehouse platforms (Snowflake, BigQuery, or Redshift)
  • Demonstrated ability to work effectively in fast-paced, ambiguous environments
  • Experience working directly with business stakeholders to gather requirements and provide solutions
  • Track record of building production-grade data pipelines from the ground up
  • Strong Python programming skills for data manipulation and automation
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related field
Must-Have Traits
  • Comfortable with ambiguity and able to create structure in undefined situations
  • Self-directed and able to prioritize high-impact work with minimal guidance
  • Experience working in fast-paced environments
  • Strong communication skills with the ability to translate between technical and business contexts
  • Proactive problem-solver who can anticipate future needs and scale considerations
  • Enthusiasm for building things from scratch and establishing best practices
  • Experience with data visualization tools (Looker, Tableau, Power BI, Hex, or similar)
Preferred Qualifications
  • Experience working with data catalogs (DataHub is a huge plus!)
  • Previous experience as a first/early data hire at a startup
  • Knowledge of data governance and security best practices
  • Track record of implementing data quality frameworks
  • Contributions to open-source projects or data communities
Technical Skills
  • Languages & Tools: SQL, Python, dbt, Git
  • Data Warehouses: Snowflake, BigQuery, or Redshift
  • ETL/ELT Tools: Airflow, Fivetran, or similar
  • BI Tools: Looker, Tableau, Power BI, or similar

This position is open to candidates in all time zones of the United States and can be done remotely. 

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s): English

Other Skills

  • Adaptability
  • Communication
  • Problem Solving
