This Startup Wants You to Pay Up to Talk With AI Versions of Human Experts
It was probably inevitable that when AI hoovered up the world’s knowledge and learned to talk like a human being, people would use it to seek out personal guidance. It’s an enticing concept—AI is always available and generally costs less than a human—but the drawbacks are obvious. Large language models are prone to inaccuracies and outright hallucinations. There are privacy issues associated with sharing one’s secrets and woes with a big company. The wisdom dispensed by AI is not crisply sourced, and almost all of it is ripped from creators who never see a dime in compensation. Plus, it’s downright dystopian for human beings to be advised by robots.
This week, a new company launches that claims to resolve all of those issues except the last one. Onix, cofounded and led by a former WIRED contributor named David Bennahum, describes itself as a Substack for chatbots. Just as you subscribe to a writer on Substack, you can subscribe to an AI doppelganger of a celebrated expert, called an “Onix.” These bots are trained to converse with subscribers, delivering the expert’s knowledge and advice as they would in a face-to-face office appointment. The bots even attempt to project the unique personalities of the experts (though I found the conversations rather dry).
Bennahum tells me that his company has spent years creating technology that protects users and experts, a system he calls “Personal Intelligence.” The bots store information, encrypted, on the user’s device. If a government demands that the Canada-based company hand over dirt on a user, all it can come up with is the person’s email address. Since the experts themselves train the dupes with their personal content, there’s theoretically no intellectual property issue. Bennahum also claims that because the models have guardrails limiting the conversation to the subject of the consultations, hallucinations are kept to a minimum. During my testing, though, when I asked a bot therapist who it liked in the NBA playoffs (a change of subject it should have shut down), it called my jailbreaking pivot a “fun change of pace” and then hallucinated that we were in the middle of last year’s conference finals. I drew another Onix away from our exchange about ketamine therapy and into a discussion of how a romantic split broke up the indie band the Mendoza Line, though it tried to cast the separation as a “powerful expression of their neurobiology in distress.”
Well, Onix is still in beta, so it’s not perfect. In this initial stage, a limited number of invited testers, drawn from a waitlist, have joined. After a shakedown period, Onix will be open to all.
The company isn’t exactly breaking new ground. The idea of a chatbot standing in for a human is fairly common, as is the idea of cashing in on it. For instance, Manhattan psychologist Becky Kennedy has built a parenting-advice business around a chatbot named Gigi, trained on her acumen and knowledge. Kennedy’s company pulled in $34 million last year. So if you are an expert, Onix might sound pretty good: imagine a bot with your persona making money by interacting with thousands of clients, no effort required on your part. As an Onix white paper puts it, “The expert’s knowledge base becomes a capital asset that generates revenue independent of their time.”
Onix hopes to eventually have many thousands of experts offering versions of themselves. But for now, it’s starting with a highly vetted group of 17, concentrated in health and wellness. Though most of these experts have impressive professional résumés, they are notable as marketers and influencers as well. Some have books or podcasts to promote, or supplements or medical devices to sell.
One expert on the platform, Michael Rich, counsels kids and their parents on the overuse of media and its effects. Naturally, his opinions on screen time dominate chats with his Onix. When I spoke to Rich, he told me that he agreed to transfer his knowledge to Onix because of its privacy protections, and also because of the company’s clear communication that it doesn’t provide actual medical treatment. “It’s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it,” said Rich. Bennahum confirms that engaging with a bot representing, say, a pediatrician is in no way akin to a doctor’s visit. “It’s meant to augment [a user’s] ability to be thoughtful around whatever pediatric journey they’re on,” he says. Indeed, a disclaimer appears when you access the system, noting that you are receiving guidance, not medical treatment. Still, in a world where countless people treat Claude and ChatGPT like therapists, and where many can’t afford real health care, this warning seems destined to be widely ignored.
Another Onix expert I spoke to, David Rabin, said that while he was originally concerned