Your trusted advocate or your rebellious Frankenstein: how you deploy agentic AI determines which one you get
In April 2025, an AI agent named “Sam,” working in customer support for the developer-tools company Cursor, told users that their licenses worked on only one device. Subscriptions were canceled, complaints flooded Hacker News, and the company scrambled to clarify that no such policy existed. Sam had invented it. The technology took on a life of its own, a Frankensteinian turn in which Cursor’s creation began speaking for its creator: the autonomous system asserted a fact that customers experienced as a unilateral decision by the company. By the time the startup discovered the mistake, the damage had been paid in the slowest and most expensive currency in business: trust. Cursor’s mistake was not deploying agentic AI but deploying it in the wrong place.
The most consequential decision for CEOs implementing agentic systems is determining where to deploy agents in the customer journey. The underlying models are converging quickly, but the autonomous technology built on top of them is not. Vendors differ meaningfully in embedded governance standards, orchestration, integrations, and reliability. Even the strongest vendor advantage cannot save a firm that deploys a capable agent in the wrong place. Winners decide how close to the customer their agents can get, and how clearly they draw the line between work the AI does and work humans still own.
This article presents a proximity framework for that decision, drawing on conversations with senior technology leaders across thirteen industries and on a pattern in public data. The deployments getting the most coverage—chatbots, virtual assistants, customer-facing AI—are not the ones generating the most durable returns. The deployments that work tend to be invisible.
C.H. Robinson illustrates the point. Its 30-agent system handles over 318,000 tracking updates per month and responds to 100% of inbound carrier requests, up from 60% before automation. The company is now handling roughly one-third more freight with roughly one-third fewer employees than in 2019. The agentic system runs quietly in the background, handling the work that determines whether shipments arrive on time without touching the part of the relationship that customers actually care about. The deployment works precisely because customers experience only the result.
Firms that calibrate proximity correctly will quietly compound their advantages. Those that misjudge it will pay first in complaint data, then in churn, and eventually in reputational damage that outlasts any model upgrade.
The Cost of Deploying in the Wrong Place
The 2025 National Customer Rage Survey found that 88% of e-commerce customers who believed they had interacted directly with AI viewed the experience unfavorably.
Consumer complaints filed with the Consumer Financial Protection Bureau (CFPB) nearly doubled over two years, from about 770,000 before the public launch of OpenAI’s ChatGPT to over 1,500,000 after, with the increase concentrated at high-exposure firms. In low-exposure areas that retained human-in-the-loop operations, such as mortgage and student loans, complaint volumes stayed flat.
The increase translates to a replacement-cost exposure of tens of millions of dollars per affected firm—before accounting for reputational damage, repeat complaints, or the erosion of trust that compounds after a first failure.
The greatest risk is not deploying automation where it is possible, but deploying it where customers are vulnerable, frustrated, and least equipped to handle failure.
Where You Deploy Determines the Return
Agentic AI delivers the most durable value when applied to tasks with low emotional vulnerability and high reversibility. Emotional vulnerability, which rises with proximity to the customer, raises the cost of any single failure because a customer who is frustrated, confused, or stressed reads a generic AI response as indifference instead of efficiency. Reversibility lowers that cost, as an error caught before it reaches the customer is, by definition, invisible. Together, the two variables describe the gap between a deployment that compounds advantage and one that compounds liability.
The risks are highest when customers lack voice and visibility. When customers cannot tell whether AI is shaping the interaction, cannot easily escalate, and cannot see how a decision was made, errors surface not as complaints in the moment but later in churn data, regulatory filings, and the slow erosion of trust.
Our proximity framework classifies the degree to which an agentic AI system directly affects the customer experience, distinguishing three categories—direct, mediated, and background—each with a distinct risk profile, business case, and set of governance requirements.
Direct Proximity
In direct-proximity deployments, the customer interacts with the AI agent; the system is the product. E-commerce customer service, banking chat assistants, real estate inquiry agents, and patient-facing health