Hitting the ‘GenAI wall’: Where generative AI stops working, and what it means for your talent strategy
As companies race to deploy generative AI across their organizations, many executives are betting on a transformative promise: that GenAI will allow employees from one function to seamlessly take on work traditionally performed by specialists in another, and perform it at a level that matches the specialists themselves.
The logic is appealing: If a marketing professional can suddenly perform data analysis, or if an engineer can produce compelling marketing content, companies could achieve unprecedented workforce flexibility and efficiency. But new research suggests that this vision has critical limits that executives must understand before reorganizing their talent strategies.
In a field experiment conducted at IG, a leading U.K. fintech company, researchers from Harvard Business School, Stanford University, and the Stanford Digital Economy Lab examined whether GenAI could enable professionals from different occupational backgrounds to perform tasks at the same level as specialists. The experiment recruited employees from three distinct groups: web analysts who regularly write content for the company’s website (the “insiders”); marketing specialists who work in related functions but don’t typically write web articles (“adjacent outsiders”); and technology specialists—data scientists and software developers—whose work is entirely unrelated to content creation (“distant outsiders”).
All participants were asked to complete two sequential tasks: first, conceptualizing an article (outlining its structure, keywords, and key points), and then executing it by writing the full article. Some participants had access to IG’s bespoke GenAI tools; others did not.
The results reveal what we call the “GenAI wall effect”—a threshold beyond which GenAI can no longer meaningfully bridge the expertise gap between specialists and non-specialists. Understanding where this wall emerges is essential for any company seeking to leverage AI for workforce transformation.
When GenAI breaks down occupational boundaries—and when it doesn’t
The experiment yielded a striking pattern. For the conceptualization task, creating the article’s outline and structure, GenAI effectively eliminated performance differences across all three groups. Without AI assistance, web analysts significantly outperformed both marketing and technology specialists. But when equipped with GenAI, marketing specialists and technology specialists produced conceptualizations statistically indistinguishable from those of the experts. GenAI acted as a powerful equalizer for this type of abstract, structured work.
This pattern aligns with what, in a different experiment, we have called AI’s “jagged technological frontier”: the uneven boundary between tasks where AI performs well with minimal human guidance and those where it cannot.
However, when it came to execution, a task that falls outside that frontier, the results diverged dramatically. Marketing specialists, when aided by GenAI, were able to produce articles of comparable quality to the web analysts. But technology specialists could not. Even with full access to the same AI tools, they consistently underperformed.
This is the GenAI wall in action: The technology could bridge the gap between “adjacent” outsiders and insiders but hit a hard limit when the knowledge distance became too great.
Why knowledge distance determines GenAI’s effectiveness
Post-experiment interviews revealed why this wall emerged. The three groups approached the tasks with fundamentally different mental models rooted in their professional backgrounds. Web analysts and marketing specialists shared a common vocabulary around customer engagement, conversion optimization, and audience targeting—they understood intuitively what makes marketing content effective. Technology specialists, meanwhile, approached the writing task as they would technical documentation: prioritizing brevity, clarity, and directness.
This difference proved consequential when editing GenAI outputs. Marketing specialists used the AI-generated content as a foundation they could evaluate and refine because they possessed the foundational knowledge to judge quality. Technology specialists, lacking this domain expertise, often made edits that inadvertently degraded the content—removing “marketing spin” they didn’t recognize as valuable, shortening articles below optimal length for SEO, and eliminating calls-to-action they viewed as unnecessary.
As one data scientist candidly admitted: “GenAI suggested some catchy hooks… Actually, I didn’t fully understand what it was doing because I never wrote an article like that. I added random stuff to make it more ‘marketing.’” Another explained removing large portions of AI-generated content because he “prefer[red] articles that are clear and direct”—precisely the opposite of what effective marketing content requires.
In essence, professionals with domain expertise knew the destination and used GenAI to help chart the course.