
Nvidia’s Jensen Huang says ‘We’ve achieved AGI.’ But no one can agree on what that means. Why the most important term in tech remains hotly debated.

Source: Fortune
Business | March 30, 2026

Last week, Nvidia CEO Jensen Huang made headlines when he told podcaster Lex Fridman that AGI—artificial general intelligence—had already been achieved.


AGI has long been the ultimate goal of many artificial intelligence researchers. That’s been the case even though there is no universally accepted definition of the term. It generally means AI that is as intelligent as humans, but there is a fierce debate over exactly how to define and measure “intelligence.”

In this case, Fridman had offered Huang a very unusual metric for AGI: Could AI start and grow a technology business to the point where it was worth $1 billion? Fridman asked if Huang thought AGI by this definition could be achieved within the next five to 20 years. Huang said he didn’t think that amount of time was necessary. “I think it’s now. I think we’ve achieved AGI,” he said. He then hedged, noting the company didn’t necessarily have to remain that valuable. “You said a billion,” Huang told Fridman, “and you didn’t say forever.”

Few AI researchers agree with the definition of AGI that Fridman offered Huang, which was both more specific (a company worth $1 billion) and narrower than most AGI definitions (which tend to refer to matching a vast range of human cognitive skills, not all of which might be needed to build a successful business). But AI researchers also disagree with one another over what a better definition should be. The term remains stubbornly amorphous even though several leading AI companies, with collective market valuations of more than $1 trillion, say that AGI is what they are racing toward. Some computer scientists avoid the term entirely, precisely because they say it is perpetually undefined and unmeasurable. Others say tech companies like using the term for cynical reasons: because it is ill-defined, it is easy for them to build hype by claiming big strides toward the fabled milestone.

The buzz over Huang’s AGI remarks only serves to highlight this quandary at the heart of the AI boom.

Trying to measure AGI

In fact, just days before Fridman’s podcast dropped, researchers at Google DeepMind, among them cofounder Shane Legg, who helped popularize the term AGI in the early 2000s, published a new research paper proposing a more scientific way to define general intelligence and to assess whether AI models have achieved it. The paper, “Measuring Progress Toward AGI: A Cognitive Framework,” draws on decades of research in psychology, neuroscience, and cognitive science to construct what its authors call a “Cognitive Taxonomy.”

The taxonomy identifies 10 key cognitive faculties—including perception, reasoning, memory, learning, attention, and social cognition—that the researchers argue are essential for general intelligence. The framework then proposes evaluating AI systems across all 10 faculties and comparing their performance to a representative sample of human adults with at least the equivalent of a secondary education.

The paper’s key insight is that today’s AI models have a “jagged” cognitive profile: They may exceed most humans in some areas, like mathematics or factual recall, while dramatically trailing even average people in others, like learning from experience, maintaining long-term memories, or understanding social situations. An AI model would need to at least match median human performance across all 10 areas to be considered AGI, the Google DeepMind researchers suggest.
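To make that bar concrete: under a framework like this, a single weak faculty is enough to fail the AGI test, no matter how superhuman the strong ones are. The sketch below is purely illustrative; the faculty names come from the article, but the scores and the scoring code are hypothetical stand-ins, not anything published by the DeepMind researchers.

```python
# Illustrative sketch only. Scores are normalized so that 1.0 equals the median
# performance of the human comparison group described in the framework.
model_scores = {
    "perception": 1.4,        # well above the median human
    "reasoning": 1.2,
    "memory": 0.6,            # long-term memory lags badly
    "learning": 0.5,          # learning from experience lags badly
    "attention": 1.1,
    "social_cognition": 0.7,
    # ...the remaining faculties of the taxonomy would appear here
}

def meets_agi_bar(scores: dict[str, float]) -> bool:
    """AGI under this framework requires matching the human median in *every*
    faculty, so one weak area is enough to fail."""
    return all(score >= 1.0 for score in scores.values())

print(meets_agi_bar(model_scores))  # False: the profile is "jagged"
```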

The researchers also announced a contest with a $200,000 prize pool on the popular machine learning competition site Kaggle for outside researchers to help build evaluations for the five cognitive faculties where existing benchmark tests are weakest.

The DeepMind paper is only the latest in a string of recent attempts to put the measurement of intelligence on more rigorous footing.

Last year, a team led by Dan Hendrycks at the Center for AI Safety that included deep learning pioneer Yoshua Bengio published its own AGI framework and metrics. That paper also divided general intelligence into 10 separate cognitive domains, drawing on a framework for human intelligence developed by the psychologists Raymond Cattell, John Horn, and John Carroll that is widely considered the most empirically validated model of human cognition. It produced “AGI Scores” for existing AI models; the most capable system tested, OpenAI’s GPT-5, which was released in August 2025, scored just 57%, falling far short of matching a well-educated adult across all the cognitive dimensions.
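The article does not spell out how those AGI Scores are aggregated; one simple reading, assumed here purely for illustration, is that each of the 10 cognitive domains contributes equally to the total. The domain names and numbers below are invented placeholders, not figures from the Hendrycks paper.

```python
# Hypothetical sketch: assumes the 10 domains are weighted equally.
# Every value is a made-up percentage of well-educated-adult performance.
domain_scores = {
    "reasoning": 75,
    "math_knowledge": 90,
    "general_knowledge": 85,
    "reading_and_writing": 80,
    "working_memory": 60,
    "long_term_memory": 20,
    "visual_processing": 55,
    "auditory_processing": 50,
    "processing_speed": 45,
    "learning_efficiency": 30,
}
agi_score = sum(domain_scores.values()) / len(domain_scores)
print(f"AGI Score: {agi_score:.0f}%")  # 59% with these invented numbers
```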

One of the most ambitious practical attempts to highlight what today’s AI systems still cannot do is the ARC-AGI benchmark, created by well-known machine learning researcher François Chollet. Chollet’s core argument is that intelligence should be measured not by what a system already knows, but by how efficiently it can learn new skills.

The ARC-AGI benchmark consists of visual puzzle tasks involving grids of colored cells: a solver is shown a handful of example input and output grids and must infer the underlying transformation rule well enough to apply it to a new grid.
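A toy example in the spirit of those puzzles (not an actual ARC-AGI task, and simplified here to tiny binary grids) shows why the format rewards sample-efficient learning rather than memorized knowledge.

```python
# Toy ARC-style task: a few demonstration pairs, then a fresh test input.
train_pairs = [
    ([[0, 1], [1, 0]], [[1, 0], [0, 1]]),   # rule here: invert every cell
    ([[1, 1], [0, 1]], [[0, 0], [1, 0]]),
]
test_input = [[0, 0], [1, 1]]

def apply_inferred_rule(grid):
    # A human (or a genuinely general system) infers "flip 0s and 1s" from just
    # two demonstrations; that sample efficiency is what Chollet wants to measure.
    return [[1 - cell for cell in row] for row in grid]

print(apply_inferred_rule(test_input))  # [[1, 1], [0, 0]]
```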
