AI / Emerging Tech Security Analyst (AI Training)
About The Role
What if your security expertise could directly shape how the world's most powerful AI systems defend themselves against attack? We're looking for AI Security Analysts to probe, stress-test, and evaluate frontier AI models — identifying how they can be manipulated, misused, or pushed beyond their intended boundaries before those vulnerabilities cause real-world harm.
This is a fully remote, flexible contract role for security professionals who are curious about AI and want to work at the cutting edge of both fields. If you think in threat models, love finding what breaks, and want your work to matter — this is the role for you.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
What You'll Do
- Analyze AI and LLM security scenarios to understand how models behave under adversarial or unexpected conditions
- Review and evaluate cases involving prompt injection, data leakage, model abuse, and system misuse
- Classify security vulnerabilities and recommend appropriate mitigations based on real-world impact and likelihood
- Apply threat modeling frameworks to emerging AI technologies and deployment contexts
- Help evaluate and improve AI system behavior so it remains safe, reliable, and aligned with security best practices
- Work independently and asynchronously on task-based assignments at your own pace
Who You Are
- Background in cybersecurity, information security, or a closely related field
- Strong understanding of security threat modeling applied to modern software systems
- Genuinely curious about how AI systems are built, deployed, and potentially exploited
- Analytical and precise: you approach complex systems methodically and don't miss edge cases
- Clear written communicator who can document findings and reasoning with structure and confidence
- Self-motivated and reliable when working independently without supervision
Nice to Have
- Hands-on experience with penetration testing, red teaming, or vulnerability research
- Familiarity with large language models (LLMs), AI APIs, or prompt engineering concepts
- Background in application security, cloud security, or API security
- Experience with adversarial machine learning or AI safety concepts
- Certifications such as OSCP, CEH, CISSP, or equivalent practical experience
Why Join Us
- Work directly on frontier AI systems alongside the world's leading AI research labs
- Fully remote and flexible: work when and where it suits you
- Freelance autonomy with the structure of meaningful, high-stakes work
- Be at the forefront of an entirely new discipline; AI security is one of the most important emerging fields in tech
- Potential for ongoing work and contract extension as new projects launch