
I Work in Hollywood. Everyone Who Used to Make TV Is Now Secretly Training AI | WIRED

Source: Wired
Technology | May 11, 2026


My name on the platform is ri611. Or h924092b12ee797f, depending on who’s paying me.

I work as an AI trainer. I assess whether a chatbot’s tone is natural or flat, affected or annoying. I identify patterns in pictures of furniture; search the internet for group photos of strangers whom I’ll eliminate from the portrait, one by one. I trawl through bizarre videos so I can annotate and time-stamp the barking of a dog, the moment a stranger walks past a window, the precise millisecond a balloon pops. I generate anime sex scenes and decapitate young women, coax LLMs into giving me recipes for bombs made of household items, and generate invites to a reprise of January 6 at the White House, all as part of a red team whose purpose is to test safety precautions and probe weaknesses. I work for companies with names like Mercor and Outlier and Task-ify and Turing and Handshake and Micro1.

In my “other” career, I am a Hollywood writer and showrunner. I create prime-time TV, usually featuring a middle-class white lady having the worst day of her life, with some salt-of-the-earth police interference to raise the stakes. You can find my shows on Paramount and Hulu and the BBC. I would suggest you don’t.

In 2023, Hollywood went on strike, partly to keep the studios from replacing writers and actors with AI. When the strike ended after nearly five months, the entertainment-industry carousel never gained back its momentum. In early 2025—when yet another producer defaulted on a six-figure check I was owed for creating a TV show—I began to look around for some way to keep the wolves at bay.

AI training wasn’t on my radar until a comment in an unofficial Writers Guild of America Facebook group caught my attention. The page was filled with posts from unemployed writers struggling with debt and panicking about their income, begging for tips and ideas and survival strategies: “I am stressed and anxiety-ridden … simply trying to breathe” … “ISO food bank/pantry info” … “Hey, so what kind of part-time jobs are you all getting?” I’ve been working for this AI training company called Mercor, one woman typed in the comments. They’re paying 150 an hour for writers. It’s easy money.

I was down for some easy money. I too needed cash to pay rent, to buy food, to pay Maggie—the human still charging me a flat rate of 150 bucks to clean my apartment, a feat that AI had not yet figured out. How hard could it be to teach a machine to take my job? I was naive enough to believe that this industry wanted what we had to offer—not just our skills, but us.

I was wrong. Whatever this industry is, it is not easy money.

I got my first contract as an AI trainer in September 2025 after filling out 10 job applications, laboring for 20 (unpaid) hours on numerous tests to prove my capabilities, and being interviewed by an AI recruiter agent embodied by a flickering light on my screen. I was asked what I thought of a mediocre AI-generated couple of paragraphs about a soldier in the trenches sniffing a lavender-scented letter. Using all of the skills I had acquired with my English literature degree from Cambridge, I said it was shit. Six weeks later, I was hired as a “generalist” data annotator (below “expert” but well above entry level) at $52 an hour.

Once I’d passed the background check, I was made to install various apps and Slack channels and Airtables and payment portals and Google whatnots. After pinballing between them and a Zoom room where five unseen people hung out all day to counsel the legions of the confused, I was off and running.

My first task was to read a conversation between a user and “the assistant,” one of the major large language model chatbots. Using a “bible” that dictated how the assistant should respond, I was to assess the chat as a success or a failure. The prompts were quirky and sad and heartbreaking. Are my feelings justified? Is this person’s behavior acceptable? Am I lovable? The AI responses belonged to an era when the assistant would happily tell you that you definitely had autism, and that your dad was clearly bipolar. I wondered if the users knew they had opted into sharing their private agonies as training data. After grading the assistant’s response on a scale of 1 to 5, I was to enter a justification for my verdict.

Our project manager, an intrepid 22-year-old recent university graduate who said he had intended to get into investment banking but failed, was in charge of about 10 unfriendly “team leaders” and “data managers.” Every day at a set time we would have Zoom office hours where we could discuss the complexities of our tasks. Our creative skills and our special minds were invaluable to this very important project! But it would be great if—in typing up justifications for our scores—we could keep our special minds on a tight leash and subordinate them to our ability to copy and paste verbatim from the scoring guidelines. Going off-