Job Description
Note: This is a remote position open to candidates in the USA. Rex.zone is a company focused on AI/ML training workflows, seeking individuals to produce and validate high-quality training data for language model evaluation. The role involves performing data labeling for NLP and computer vision tasks, ensuring compliance with annotation guidelines, and collaborating with stakeholders to improve model performance.
Responsibilities
- Perform data labeling for NLP and computer vision tasks using clear taxonomies and annotation tools
- Complete RLHF evaluation by ranking outputs, selecting preferences, and writing concise justifications
- Run QA evaluation passes to detect label noise, guideline drift, and ambiguity in prompt evaluation setups
- Support named entity recognition by applying consistent span boundaries and categories
- Contribute to content safety labeling for policy-aligned datasets
- Track common failure modes to improve training data quality and reduce rework
- Collaborate asynchronously with reviewers, project leads, and engineering stakeholders
Skills
- STEM foundation (engineering, CS, math, physics, statistics, data science, or related)
- Ability to read detailed instructions and apply rules consistently under QA review
- Strong written communication for explaining RLHF choices and prompt evaluation rationales
- Attention to detail; basic proficiency with spreadsheets, web tools, or annotation platforms
- Ability to work full-time remotely with reliable internet