Join Project Babel, an innovative AI initiative focused on improving multilingual capabilities in artificial intelligence. We’re seeking skilled AI Translation Evaluators to help assess the quality of translated content used in training cutting-edge language models.
As an evaluator, you will compare original English texts with their translations in your native language, assessing accuracy, fluency, and overall quality. Your feedback will directly support the creation of high-quality synthetic datasets used to refine AI language understanding and generation.
Key Responsibilities
- Evaluate translations of English prompts and responses into your native language.
- Assess the overall quality, correctness, and naturalness of translated content.
- Provide structured feedback on issues related to meaning, grammar, tone, and accuracy.
- Work within established guidelines to ensure consistent evaluation standards.
Requirements
- Native-level fluency in Estonian
- Strong command of English (written and comprehension)
- Experience working with Large Language Model (LLM) data is preferred
- Comfortable reviewing potentially sensitive or harmful content
This role is a project-based opportunity with CrowdGen, where you will join the CrowdGen Community as an Independent Contractor. If selected, you will receive an email from CrowdGen about an account created with your application email address. Log in to this account, reset the password, complete the setup requirements, and proceed with your application for this project-based role.
Make an impact on the future of AI – apply today and start contributing from the comfort of your own home.