Overview
We are seeking a detail-oriented and responsible AI Content Moderator to review, analyze, and classify user-generated content flagged by AI systems. This role ensures that all content aligns with our platform policies while balancing user safety, freedom of expression, and brand integrity.
Key Responsibilities
Content Review & Decision-Making
- Review text, images, video, and audio content flagged by AI tools.
- Accurately apply moderation guidelines to approve, restrict, or remove content.
- Escalate complex or ambiguous cases to senior moderators or policy teams.
- Identify patterns of harmful content and provide feedback to improve AI model accuracy.
Quality & Compliance
- Ensure moderation decisions follow internal policies, legal requirements, and community standards.
- Maintain high accuracy rates and meet productivity benchmarks.
- Participate in regular training to stay up to date on evolving policies and safety risks.
Risk & Safety Management
- Detect violations such as hate speech, harassment, misinformation, graphic violence, self-harm content, adult content, and spam.
- Assess potential real-world risks and flag high-severity cases promptly.
Collaboration & Feedback
- Report recurring issues, new trends, and policy gaps to product, safety, and engineering teams.
- Provide structured feedback to help refine AI moderation systems.
- Collaborate with global teams to maintain consistency in moderation practices.
Job Types: Full-time, Permanent, Fresher
Pay: ₹300,000.00 - ₹500,000.00 per year
Work Location: Remote