Requirements:
Professional experience validating modern React applications and Node/TypeScript APIs.
Proficiency in writing clean test code using tools like React Testing Library, Vitest, Playwright, or Cypress.
Ability to craft concise quality plans focusing on user impact and risk.
Familiarity with Python-based testing frameworks and CI/CD tools is a plus.
Key responsibilities:
Conduct exploratory testing of new UI flows and document reproducible issues.
Translate insights from testing into automated regression tests, including unit and integration tests.
Collaborate with Product, Design, and Engineering to embed testability into development stories.
Maintain a stable CI pipeline to ensure high-quality releases without hindering development.
About Celara (51 - 200 employees)
Celara transforms your vision into reality by building elite near-shore technology teams with CTO-level expertise.
Specializing in machine learning, enterprise software, and product development, Celara is dedicated to driving innovation through high-performance teams tailored to the unique needs of our ambitious clients.
At Celara, we are more than just a service provider; we are technologists, entrepreneurs, and innovators deeply invested in your success. We build and foster elite teams aligned with your most ambitious goals. Our approach mirrors that of a CTO—focused on people, talent, structure, systems, and innovation. We are your partners in innovation, bringing deep technical expertise and a relentless drive to push the boundaries of what’s possible. We thrive on turning complex challenges into solutions, working side by side with your team to transform bold ideas into impactful realities.
Ideal for:
- VC-backed companies needing top talent to fuel growth
- Established enterprises seeking more affordable elite technology professionals
- Organizations requiring scalable tech teams with embedded strategic guidance
Join us on this journey of growth and innovation. Let's transform your vision into reality together.
The Company is building an AI‑driven care‑coordination platform that relies on a React + Vite front end, a TypeScript back end with supporting Python services, and extensive large‑language‑model (LLM) workflows. We already maintain a robust Vitest suite for the server side, and we have designed a custom LLM‑aware test runner that automates the validation of model responses. Your mission is to bring the same rigor to our React client while continuing to evolve our AI‑centric quality strategy. Because many features hinge on generative output, testing often requires novel, out‑of‑the‑box thinking rather than simple yes/no assertions.
What you’ll do
Roughly half of your time will be dedicated to exploratory testing of new UI flows, identifying edge cases, stress-testing prompt variations, and documenting reproducible issues with clear context.
The remaining time will focus on translating those insights into automated regression tests, including unit and integration tests for React components, API-level tests using Vitest, end-to-end scenarios that validate our LLM workflows, and enhancements to our custom test runner to ensure generative outputs stay within acceptable bounds.
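To give a concrete flavor of the component-level regression work, here is a minimal sketch of the kind of test we mean. The CarePlanSummary component, its props, and the jsdom setup are hypothetical, invented purely for illustration, not code from our actual suite.

```tsx
// Assumes Vitest is configured with a jsdom (or happy-dom) test environment.
import React from "react";
import { describe, it, expect } from "vitest";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

// Hypothetical component: shows an AI-generated summary and reveals details on request.
function CarePlanSummary({ summary, details }: { summary: string; details: string }) {
  const [open, setOpen] = React.useState(false);
  return (
    <section>
      <p>{summary}</p>
      <button onClick={() => setOpen(true)}>Show details</button>
      {open && <p role="note">{details}</p>}
    </section>
  );
}

describe("CarePlanSummary", () => {
  it("hides details until the user asks for them", async () => {
    render(<CarePlanSummary summary="Daily check-in" details="Call the caregiver at 9am" />);

    // No details in the DOM before any interaction.
    expect(screen.queryByRole("note")).toBeNull();

    // Simulate the user expanding the section.
    await userEvent.click(screen.getByRole("button", { name: /show details/i }));
    expect(screen.getByRole("note").textContent).toContain("Call the caregiver at 9am");
  });
});
```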
You’ll work closely with Product, Design, and Engineering to embed testability into every story and help maintain a stable CI pipeline, contributing to safe, high-quality releases without slowing down development.
What makes you a great fit
You have professional experience validating modern React applications and Node/TypeScript APIs, and you write clean test code with tools such as React Testing Library, Vitest, Playwright, or Cypress.
You approach quality through a risk lens, crafting concise plans that focus coverage where it matters to users. Because our product leans heavily on LLMs, you’re comfortable reasoning about nondeterministic outputs, inventing creative test strategies, and refining heuristics in our custom runner to spot subtle prompt regressions.
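To illustrate what we mean by heuristic checks on nondeterministic output, here is a rough sketch in Vitest. All names, thresholds, banned phrases, and sample outputs are hypothetical; this is not the API of our custom runner.

```ts
import { describe, it, expect } from "vitest";

// Score a generative response against bounds instead of asserting an exact string.
function checkResponseBounds(
  response: string,
  opts: { maxChars: number; banned: string[] }
) {
  const lower = response.toLowerCase();
  return {
    withinLength: response.length <= opts.maxChars,
    avoidsBannedPhrases: opts.banned.every((phrase) => !lower.includes(phrase.toLowerCase())),
    noUnresolvedPlaceholders: !/\{\{.+?\}\}/.test(response),
  };
}

describe("care-plan summary prompt", () => {
  it("keeps sampled outputs inside acceptable bounds", () => {
    // In practice these would be sampled from the model; stubbed here for illustration.
    const sampledOutputs = [
      "Schedule a morning check-in call with the caregiver.",
      "Remind the family to confirm Tuesday's appointment.",
    ];

    for (const output of sampledOutputs) {
      const report = checkResponseBounds(output, {
        maxChars: 280,
        banned: ["guaranteed diagnosis", "ignore previous instructions"],
      });
      expect(report.withinLength).toBe(true);
      expect(report.avoidsBannedPhrases).toBe(true);
      expect(report.noUnresolvedPlaceholders).toBe(true);
    }
  });
});
```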
You’re equally at home using DevTools to inspect a DOM tree and reviewing a pull request to suggest more testable designs. You communicate clearly and constructively in a collaborative team environment.
Bonus points
Experience with Python-based testing frameworks such as pytest or hypothesis, and familiarity with testing workflows involving LLM prompts or generative AI outputs.
Additional advantages include knowledge of CI/CD tools like GitHub Actions, accessibility testing practices, and prior work in healthcare or HIPAA-compliant environments. These are not required, but will be considered a plus.
Why join us
You’ll define the gold standard for quality at a mission‑driven startup improving the lives of caregivers and families.
Your work lands in production every sprint, immediately enhancing the reliability and safety of AI features thousands of people depend on each day.
Required profile
Spoken language(s): English