About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech.
The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States.
It started with one ridiculously good idea to create a different breed of Business Processing Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment’s notice, and mastering consistency in an ever-changing world.
What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.
Join our AI Safety Research team as our AI Safety Research Lead to advance the state of AI safety while developing practical applications for real-world systems. Drive original research across alignment, robustness, and interpretability domains and lead contributing to client projects through model evaluations, red teaming exercises, or custom safety assessments.
This role offers the intellectual freedom of academic research with the resources and real-world impact of industry applications. You'll advance the field while ensuring your research translates into practical safety improvements.
Key Responsibilities
- Research Leadership: Conduct original research across AI safety domains; publish findings in top venues; develop novel safety evaluation methodologies and benchmarks
- Applied Research: Build and test safety interventions on large-scale models; create reproducible experimental frameworks; translate theoretical advances into practical tools
- Technical Development: Implement safety mechanisms for production systems; contribute to open-source safety tools; develop proprietary evaluation frameworks
- Internal Collaboration: Work with internal teams to integrate safety research into client solutions. Mentor junior researchers & present findings at conferences and workshops
- External Collaboration: Initiate and manage research collaborations with external academic and industry partners to enhance the rigor, credibility, and reach of the company’s AI Safety research.
Required Qualifications
- MS in CS/ML/related field + 3-5 years ML experience (including 1+ years in AI safety/robustness)
- Technical: Python, PyTorch/TensorFlow, hands-on experience with adversarial testing, interpretability methods, alignment techniques (RLHF), mechanistic interpretability. Experience with large language models and multimodal systems
- Research: Track record of technical publications or comprehensive research projects; experience with experimental design and statistical analysis
- Communication: Strong technical writing, ability to present complex research to diverse audiences
- Leadership profile: Someone who can navigate shifting priorities, collaborate across functions, and lead applied research that delivers impact at both the operational and thought-leadership level.
Preferred
- Publications in AI safety venues (NeurIPS, ICML, AIES, FAccT)
- Contributions to safety benchmarks or evaluation frameworks
- Background in formal verification or theoretical CS
- Familiarity with emerging AI policies and regulatory frameworks, such as the NIST AI Risk Management Framework or the EU AI Act
How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.
DEI: In TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.
We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/.
TaskUs
Sedgwick
Sedgwick Ireland
Aspen Dental
Aspen Dental Management, Inc. (ADMI)