
Microsoft’s chief scientific officer, one of the world’s leading A.I. experts, doesn’t think a 6 month pause will fix A.I.—but has some ideas of how to safeguard it

Source: Fortune
Business | May 12, 2026

Eric Horvitz, Microsoft’s first chief scientific officer and one of the leading voices within the rapidly evolving sector of artificial intelligence, has spent a lot of time thinking about what it means to be human.

Now, perhaps more than ever, philosophical questions rarely raised in the workplace are bubbling up to the C-suite: What sets humans apart from machines? What is intelligence, and how do you define it? Large language models are getting smarter, more creative, and more powerful faster than we can blink. And, of course, they are getting more dangerous.

“There will always be bad actors and competitors and adversaries harnessing [A.I.] as weapons, because it’s a stunningly powerful new set of capabilities,” Horvitz says, adding: “I live in this, knowing this is coming. And it’s going faster than we thought.”

Horvitz speaks much more like an academic than an executive: He is candid and visibly excited about the possibilities of new technology, and he welcomes questions many other executives might prefer to dodge. Horvitz is one of Microsoft’s senior leaders in its ongoing, multibillion-dollar A.I. efforts: He has led key ethics and trustworthiness initiatives to guide how the company will deploy the technology, and spearheads research on its potential and ultimate impact. He is also one of more than two dozen individuals who advise President Joe Biden as a member of the President’s Council of Advisors on Science and Technology, which met most recently in early April. It’s not lost on Horvitz where A.I. could go off the guardrails, and in some cases, where it is doing exactly that already.

Just last month, more than 20,000 people—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter urging companies like Microsoft, which earlier this year started rolling out an OpenAI-powered search engine to the public on a limited basis, to take a six-month pause. Horvitz sat down with me for a wide-ranging discussion where we talked about everything from the letter, to Microsoft laying off one of its A.I. ethics teams, to whether large language models will be the foundation for what’s known as “AGI,” or artificial general intelligence. (Some portions of this interview have been edited or rearranged for brevity and/or clarity.)

Fortune: I feel like now, more than ever, it is really important that we can define terms like intelligence. Do you have your own definition of intelligence that you are working off of at Microsoft?

Horvitz: We don’t have a single definition. I do think that Microsoft [has] views about the likely beneficial uses of A.I. technologies to extend people and to empower them in different ways, and then we’re exploring that in different application types. It takes a whole bunch of creativity and design to figure out how to basically harness what we’re considering to be these [sparks] of more general intelligence.

That also gets into the whole idea of what we call responsible A.I., which is, well, how can this go off the rails? The Kevin Roose article in the New York Times—I heard it was a very widely read article. Well, what happened there exactly? And can we understand that? In some ways, when we field complex technologies like this, we do the best we can in advance in-house. We red-team it. We have people doing all sorts of tests and try different things out to try to understand the technology. We characterize it deeply in terms of the rough edges, as well as the power for helping people out and achieving their goals, to empower people. But we know that one of the best tests we can do is to put it out in limited preview and actually have it in the open world of complexity, and watch carefully without having it be widely distributed to understand that better. We learned quite a bit from that as well. And some of the early users, I have to say, some were quite intensive testers, pushing the system in ways that we didn’t necessarily all push the system internally—like staying with a chat for, I don’t know how many hours, to try to get it to go off the rails, and so on. These kinds of things happened in limited preview. So we learn a lot in the open world as well.

Let me ask you something about that: Some people have pushed back against Microsoft and Google’s approach of going ahead and rolling this out. And there was that open letter that was signed by more than 20,000 people—asking companies to sort of take a step back, take a six-month pause. I noticed that a few Microsoft engineers signed their names on that letter. And I’m curious about your opinion on that—and if you think these large language models could be existentially dangerous, or become a threat to society?

I really actually respect [those that signed the letter]. And I think it’s reasonable that people are concerned. To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months, which I am not sure would even be feasible.
