At Variance, we are teaching machines to make the hardest judgment calls at scale. We build AI agents for the high-precision gray area of stopping fraud, scams, and abuse. This isn't another sales tool or a customer service system. We're solving real problems in investigations and fraud prevention to protect innocent people from being harmed.
We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments.
We’re looking for a Research Engineer to help push that frontier forward. You’ll design evals, study failures, build new research loops, and turn research ideas into production capabilities.
This role sits at the intersection of research and engineering: part model builder, part experimentalist, part systems engineer.
You might be a fit if you:
- Care deeply about protecting people from fraud, scams, and abuse
- Have strong opinions about model quality, evaluation, and experimental rigor
- Want to work on core model and agent behavior
- Are excited to train, fine-tune, and improve models for hard real-world judgment tasks
- Think in tight research loops: hypothesis, experiment, evaluation, failure analysis, iteration
- Thrive in ambiguous, fast-moving environments where the path is not obvious and the feedback loop is short
- Are motivated by the challenge of making AI systems work in adversarial, regulated, and high-consequence settings
- Want to help define what trustworthy AI means in real-world use cases
What you'll do:
- Train, fine-tune, and improve models for fraud, scams, abuse, and other high-stakes judgment workflows
- Own research threads focused on improving agent capability, reliability, and decision quality
- Build proprietary benchmarks, datasets, and evals that reflect real customer workflows, regulatory constraints, and real failure modes
- Design and run experiments across post-training, retrieval, tool use, planning, memory, and long-horizon agent behavior
- Study where models break, why they break, and how to make them more robust
- Prototype new training strategies, agent architectures, and evaluation methods, then turn the best ideas into production systems
- Work closely with founders and engineering to translate research advances into deployed product capabilities
- Push the boundary of what AI agents can do in regulated industries
Success looks like:
- Our models get materially better at making hard judgment calls in production
- Our models are trusted at scale
- We develop evals and training loops that compound over time
- We understand failure modes more clearly and improve system behavior faster
- New research ideas turn into real product capabilities quickly
What we look for:
- Experience training, fine-tuning, or evaluating modern ML systems
- Strong programming skills and comfort working in research-heavy codebases
- Familiarity with LLMs, agent systems, post-training, reinforcement learning, retrieval, or adjacent areas
- Ability to design clean experiments and draw reliable conclusions from noisy results
- Strong engineering judgment and a bias toward building
- Interest in fraud, risk, trust and safety, compliance, or other regulated and adversarial domains
We believe in ownership, urgency, and craft. We enjoy spirited debate, wild ideas, and building things we’re proud of. We’re fully in-person in San Francisco.
Benefits:
- Competitive salary and meaningful equity
- Platinum-level medical, dental, and vision insurance
- Unlimited PTO, sick leave, and parental leave
- Up to $100 per month in reimbursement for personal health and wellness expenses
- 401(k) plan

Compensation range: $250K - $400K