Lead Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)
Work from: United States

Offer summary

Qualifications:

3+ years of experience with RDBMS and T-SQL, NoSQL, data lakes, Spark, Python, or Scala; familiarity with database schema design, cloud-based service-oriented architecture, the testing pyramid, and Agile/Scrum.

Key responsibilities:

  • Leading a team managing an AI self-service portal and microservices
  • Developing software engineering best practices and troubleshooting code issues
Ex Parte Startup https://www.exparte.com/
2 - 10 Employees

Job description

Company Description

Ex Parte provides our customers with the data and insight to make smart and informed decisions on the most important legal issues facing their organizations.

We are looking for talented, enthusiastic senior data engineers who share our passion for big data, AI, and machine learning and are excited by seemingly impossible challenges. As an early employee, you must be highly entrepreneurial and thrive in a fast-paced environment where the solutions aren’t predefined.

Every year, corporations spend more than $250B on litigation in the United States alone. And yet, critical decisions, such as whether to litigate or settle, where to file suit, or which attorney to hire, are all made the same way they were 100 years ago.

We are applying artificial intelligence, machine learning, and natural language processing to provide our customers with the insight they need to make highly informed decisions and gain a winning advantage. Think of it like Moneyball, but for a market more than 20x the size of Major League Baseball.

Job Description

You’ll lead a team that owns the AI self-service portal and the related set of microservices. You’ll work with a cross-functional team of engineers to contribute to our data platform. The role also offers the opportunity to grow into either more back-end or ML/AI work depending on your interests and experience.

In this role, you can expect to:

· Work directly with product owners and data experts to build products that solve complex client problems

· Build and support a distributed platform that serves all Ex Parte data

· Work across our product, primarily on the data pipelines

· Interface directly with internal teams

· Evaluate software and implementation options and document them for technical teams

· Work with data analysts to identify data-collection efficiencies and opportunities to automate manual workflows

· Integrate qualitative best practices into program design and development.

· Be a part of a distributed team (we’re in North America and Europe)

· Work with Azure cloud and Databricks

· Develop technical architectures and specific implementations to meet business needs.

· Guide the team’s software engineering best practices by documenting standards and completing code reviews.

· Troubleshoot new and existing code and provide feedback and solutions to structural issues in the codebase as they arise.

· Advise on the feasibility of nonfunctional requirements and ensure the successful implementation of features while meeting those requirements.

Qualifications

Requirements:

· 3+ years of experience with RDBMS and T-SQL

· 3+ years of experience with NoSQL and data lakes

· 3+ years of experience with Spark, Python, or Scala

· Strong familiarity with map/reduce programming models

· Proficiency in writing production-quality code

· Deep expertise in database schema design, optimization, and scalability

· Experience with Azure or AWS cloud-based service-oriented architecture

· Solid understanding of testing pyramid (unit, integration, black box, service)

· Experience working in an Agile/Scrum environment

· Strong analytical and problem-solving skills

· Good time management and organizational skills

· Ability to work on challenging issues independently or in a team environment

· Ability to learn and adapt quickly to new technologies and environments

· Strong communication skills

Nice-to-haves:

· Experience with Databricks or Azure ML

· Experience applying machine learning algorithms to solve complex data mining problems

· Experience with BI tools

· Understanding of cloud platforms, providers, and DevOps

· Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field

Additional Information

All your information will be kept confidential according to EEO guidelines.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English

Other Skills

  • Communication
  • Time Management
  • Analytical Thinking
  • Organizational Skills
  • Open Mindset
  • Teamwork
