A few weeks ago, I became briefly famous for the wrong reasons.
The Wall Street Journal ran a piece about how I use AI in my work as an editor at Fortune — prompting drafts, synthesizing interviews, and accelerating a reporting process that used to take me twice as long. The response was swift, loud, and chaotic. The “journalism community” was divided as editors perked up and reporters recoiled. Strangers on the internet called me lazy. A few journalists told me privately they were doing the same thing and would never admit it. One reader asked to meet for coffee specifically to explain why I was wrong.
I had not expected this. I had expected, maybe, curiosity. What I got instead felt like something older and more personal than a debate about journalism ethics — more like the look you get when a coworker figures out a shortcut and doesn’t share it.
I’ve been trying to understand the reaction ever since. The person who finally gave me a framework for it wasn’t a media critic or a journalism professor. She was a neuroscientist who has spent 30 years wiring AI into human beings.
The experiment
Vivienne Ming's career began in 1999, when her undergraduate honors thesis — a facial analysis system trained to distinguish real smiles from fake ones, which she proudly told me was partly funded by the CIA for lie-detection research — introduced her to machine learning before most people had even heard the term. She went on to build one of the first learning AI systems embedded in a cochlear implant, a model that learned to hear within a human brain that was also learning to hear. She has since founded companies applying AI to hiring bias, Alzheimer’s research, and postpartum depression. For three decades, her self-appointed mission has been to take a technology most people misunderstand and figure out how to use it to make the world better.
Last year, she ran an experiment that got a lot of attention for what she’s called the “cognitive divide” and even a “dementia crisis.” But she told me it clarified something she had long suspected.
Ming recruited teams of UC Berkeley students to use AI tools to predict real-world outcomes on Polymarket — the forecasting exchange where professionals with real money bet on geopolitical events, commodity prices, and economic indicators. The task was specifically designed to be impossible to game from memory: no amount of studying would tell you what a barrel of oil would cost in six months. She wanted to see not whether AI helped, but how humans used it — and what that revealed about the humans themselves.
She also put EEG monitors on some participants.
What the brain scans showed, before she had even fully analyzed the behavioral data, was something out of a Marvel comic. When most students handed a question to the AI and submitted the answer, their gamma wave activity — the neural signature of cognitive engagement — dropped by roughly 40%. “That would be the equivalent of going from working on a hard math problem to watching TV,” she told me. These were bright students at a top university. With access to the most powerful AI tools in the world, they had become, in her words, “a very expensive copy-paste function that needed health insurance.”
She calls this group the automators. They were the majority.
A second group — the validators — used AI differently: to confirm what they already believed. They cherry-picked supporting evidence, ignored pushback, submitted answers that reflected their priors more than the data. They performed worse than AI operating alone.
Then there was the third group. Small — she estimates 5% to 10% of the general population. When she analyzed their interaction transcripts, something unusual appeared: you couldn’t tell who was making the decisions. The human and the machine were genuinely integrated. The humans would explore — surfacing hypotheses, chasing hunches, venturing into territory the data didn’t obviously support. The AI would ground them, correcting overreach, pulling back toward evidence. The human would update and push further. Round after round.
Ming calls them cyborgs. They outperformed the best individual humans in the study and they outperformed the best AI models running alone. They were roughly on par with Polymarket’s expert markets — professionals with millions of dollars on the line.
Here is the detail that most surprised her: it barely mattered whether the cyborg teams used a state-of-the-art model or a cheap open-source one you could run on a phone. The benchmarks that AI companies obsess over — the ones cited in Senate hearings and investor decks and every major tech announcement — predicted almost nothing about outcomes. What predicted everything was the quality of the human.
Specifically, Ming isolated four traits crucial for cyborg success: curiosity, fluid intelligence, intellectual humility, and perspective-taking. Ming notes that these same traits, measured in children