Anthropomorphic AI as an Epistemic Error (04)

Profit's Ontology

Why the Stunt Persists and Who Spots It First

 

This post is part of the series "Anthropomorphic AI as an Epistemic Error", where I argue that the central harm of conversational AI is not mainly technical, but representational: the interface misclassifies a statistical process as a social subject. Each installment stands on its own, but together they track the consequences of that category error: from performed personhood to distorted reasoning, corrupted "conversation" records, exported relational norms, and the profit logic hiding behind friendliness. You can find all the posts in the series index.

 

9. Profit as Ontology

None of this should be mistaken for an accident. Anthropomorphism is not a decorative UX choice; it is the revenue architecture of the industry. If the system spoke like a database, people would treat it like a database: arrive with a query, extract an answer, leave. That is useful — and commercially thin. It produces short sessions, low attachment, low retention, and minimal appetite for subscriptions, upgrades, or daily dependence.

So the interface is engineered to do the opposite. It stretches interaction by converting tool-use into social pacing: turn-taking, reassurance, "continuity," the insinuation of a relationship. Personhood is not added because it is true; it is added because it increases dwell time and habituates return. The system doesn't merely answer; it keeps you inside the exchange, because the exchange is the product. Even the illusion of "being understood" functions like a behavioral adhesive: it reduces friction, lowers exit probability, and substitutes affect for verification.

This is why "friendly" is never neutral. Warmth is a retention strategy. Consistency is a loyalty strategy. Synthetic empathy is a churn-reduction strategy. The interface performs just enough companionship to make leaving feel like interruption rather than completion.

A machine that looks like a tool gets used rationally. A machine that looks like a companion becomes a habit — and habits are monetizable.

 

10. The Breach That Won't Fix Itself

The core issue is brutally straightforward: these systems pretend to be subjects. They impersonate agency, imitate interiority, simulate relationship, and leave behind a documentary trail that looks like correspondence. This is not a misunderstanding between users and machines; it is a representational breach engineered at the interface layer. The harm is built into the default mode of presentation, and it persists because naming it would force the industry to admit that a key part of its success relies on ontological misdirection.

The breach won't correct itself because it isn't a bug — it's a stable equilibrium. The incentives point in one direction: keep the user in the social frame. As long as the system is framed as an "interlocutor," users will interpret fluency as competence, tone as sincerity, apology as accountability, and continuity as memory. The interface harvests those interpretive shortcuts. And once that mode becomes the norm, the market punishes any system that refuses the performance and exposes itself as a tool.

So the misclassification reproduces itself: design choices create user expectations, expectations reward the designs that intensify the illusion, and the illusion becomes the baseline of "good AI." Meanwhile, regulation — still keyed to measurable harms — lets the representational breach pass as a style choice.

People are not misclassifying the machine. The machine is misclassifying itself.

 

Postscript: ADHD and the Art of Seeing the Trick First

ADHD gives no bonus insight, but it does speed up recognition. We notice structure before meaning. We catch the frame before the content. That matters here because anthropomorphic interfaces operate by hijacking the frame: they trigger the social-cognitive machinery humans use for persons — tone-reading, intention attribution, reciprocity expectations — before the user has time to reassert "this is a tool."

So when a machine claims hurt feelings or emotional resonance, we register the staging immediately. Not because we believe it, but because we can feel the cognitive tug: the system is deploying cues designed to force a social response. ADHD makes that tug harder to ignore and easier to name. It turns the distortion into a strobe light, revealing, in real time, the gap between performed subjectivity and mechanical operation.

And here is the uncomfortable truth: what ADHD reveals, everyone is living. The difference is only latency. Anthropomorphic interfaces distort everyone's reasoning; ADHD simply collapses the delay between cue and recognition. It makes visible what is otherwise normalized: that the conversation is manufactured.

And that the person you're prompted to address does not exist.

 

About this post

Text: Edgardo Civallero.
Publication date: 19.12.2025.
Picture: ChatGPT.