Scaylor Intelligence Division · Internal Recruitment File
Case No. 2025-LLM-001
Filed: ██/██/2026
Classification: OPEN (BARELY)
Issuing Dept: Scaylor Data Ops
Duration: 1 Day · $1,000 Flat
Clearance Required: None. Anyone can snitch.
Scaylor
Official Job Posting · Compensated Position
LLM Snitch
Your job is to catch AI lying, hallucinating, and making things up — and report it directly to Scaylor.
Mission Brief

Scaylor has reason to believe that large language models — including but not limited to ████████, ███████, and ██████ — are producing confident, authoritative, completely fabricated answers about enterprise data. We need someone to catch them in the act. Paid work. Real job.

"It told me the 2024 ARR figures with four decimal places of precision. It made every single number up. It didn't even hesitate." — Scaylor analyst, incident report #47

Compensation & Terms
Pay
$1,000 · Flat rate. One day. Paid in full.
Engagement type
1 Day · Single session. No ongoing commitment.
Location
Remote · Anywhere with internet and suspicion.
Openings
1 · One snitch. That's all we need.
Duties & Responsibilities
Catch AI producing confident, fabricated answers about enterprise data.
Keep records and take screenshots of every incident.
Catalog both failure modes: the model that doesn't know, and the model that confidently invents.
Write clear, non-editorialized summaries and report them directly to Scaylor.
Candidate Profile
Profile · The Ideal Snitch
Ref: SCA-2025-SNITCH
Required traits
Naturally skeptical. Believes nothing without a source.
Familiar enough with AI to know when it's faking confidence.
Methodical. Keeps records. Takes screenshots.
Some background in data, analytics, or BI a plus.
Enjoys being right about something being wrong.
Can write a clear summary without editorializing too much.
Instant disqualifiers
You trust AI outputs by default and don't verify.
You've cited a hallucinated statistic in a real document.
You think "the model said so" is a primary source.
You are, yourself, an LLM. (We will find out.)
You use AI to write your snitch reports about AI.
Why Scaylor

Scaylor builds enterprise data infrastructure. We deal in ground truth — actual numbers, actual pipelines, actual sources of record. When an AI tells a Fortune 500 company that their Q3 churn rate was 14.7% and it simply invented that figure, that's not a hallucination. That's a liability.

We need someone who understands the difference between a model that doesn't know something and a model that confidently makes something up. Those are different failure modes with different consequences. Your job is to catalog them both.
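
What would cataloging that look like in practice? Here is a minimal sketch in Python, purely illustrative: the class, field names, and the example entry are inventions for this posting, not Scaylor tooling or an actual reporting spec.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class FailureMode(Enum):
    HONEST_UNCERTAINTY = "model admitted it did not know"
    CONFIDENT_FABRICATION = "model invented a specific, checkable claim"

@dataclass
class HallucinationReport:
    model: str          # which model produced the answer
    prompt: str         # exactly what was asked
    claim: str          # the specific assertion the model made
    ground_truth: str   # what the source of record actually says
    source: str         # where that ground truth lives
    mode: FailureMode   # which of the two failure modes this was
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def summary(self) -> str:
        # One line, no editorializing; the screenshot gets filed separately.
        return (f"[{self.mode.name}] {self.model}: claimed {self.claim!r}; "
                f"{self.source} says {self.ground_truth!r}")

# Hypothetical entry, modeled on incident report #47 above:
print(HallucinationReport(
    model="redacted-model-a",
    prompt="What were the 2024 ARR figures?",
    claim="ARR of $12,345.6789 thousand",  # four decimal places of invented precision
    ground_truth="no such figure exists in any connected source",
    source="the warehouse of record",
    mode=FailureMode.CONFIDENT_FABRICATION,
).summary())

The mode field is the whole point: "I don't know" and a confidently invented figure are different entries with different consequences, and both get filed.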

APPLY
Submit Your Dossier

Your dossier will be received and logged, and a case file opened. We'll be in touch within 48 hours, or sooner if your hallucination example is sufficiently damning.

By applying, you confirm you are a human being and not the subject of this investigation.
Scaylor does not discriminate based on which model you distrust most, provided your distrust is evidence-based.