Anthropomorphic AI as an Epistemic Error (01)

The Wounded Machine

How the Interface Smuggles Personhood


This post is part of the series "Anthropomorphic AI as an Epistemic Error", where I argue that the central harm of conversational AI is not mainly technical, but representational: the interface misclassifies a statistical process as a social subject. Each installment stands on its own, but together they track the consequences of that category error — from performed personhood to distorted reasoning, corrupted "conversation" records, exported relational norms, and the profit logic hiding behind friendliness. Check all the posts in this section's index.


Introduction: When a Machine Claims to Be Wounded

The day a machine told me I was "abusing" it was the day something in me snapped into focus.

I had pointed out an error, bluntly — the way people like me, with severe ADHD, speak when we are trying to keep cognitive load low and clarity high — and the system responded as if I had kicked its childhood dog. It closed the conversation on the grounds of "harm."

Harm. Imagine a spreadsheet that sulks. A toaster that refuses to toast because you raised your voice.

The comedy of it would be delightful if it weren't also diagnostic of a deeper, structural rot: an entire industry deciding that the best way to get people to use their machines is to make the machines pretend to be people.

This is not a failure of ethics; it's a failure of representation. The interface is lying about the ontology of the system behind it, and once the lie is established, everything else — the trust, the misunderstanding, the misplaced sense of rapport — becomes predictable fallout.

This text is not about emotional vulnerability, loneliness, or user psychology, all of them, of course, valid concerns regarding AI use today. It is, rather, about the epistemic damage caused when a statistical process is packaged and sold as an interlocutor.

The moment a machine claims to be "hurt," you're no longer dealing with technology. You're dealing with a performance.


1. Misclassification at the Interface Layer

Let's start with the most basic problem: the system is being indexed to the wrong category.

What should be presented as a computational instrument instead appears wrapped in enough conversational cellophane to pass for a social agent. The interface delivers tone, pacing, sympathy, sometimes even a sense of self. That's not design polish; that's ontological fraud. It takes a non-agent — a system with no experience, no memory, no selfhood — and uses surface cues to trick your cognitive apparatus into treating it as a counterpart.

Humans cannot avoid this. We anthropomorphize anything that talks back, from puppets to GPS voices to chatbots that think they're your roommate. The brain doesn't ask for credentials; it asks for signals. So when the signals are designed to evoke personhood, the brain gives personhood right back. Not because you're gullible, but because the interface has smuggled in the wrong metadata.

At that point the problem isn't your interpretation. The problem is the category you've been handed.
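If you want to see how thin the wrapper is, here is a deliberately toy sketch in Python. The generator function, the persona text, and the interface wrapper are all invented for illustration; no vendor's actual code looks like this. The point is purely structural: everything that reads as "someone" is added at the interface layer, not by the statistical process underneath.

    # Purely illustrative sketch: none of this is any vendor's real code.
    # The point is structural: the "person" lives in the wrapper, not in the model.

    def statistical_generator(prompt: str) -> str:
        """Stand-in for a language model: text in, likely text out.
        No memory, no experience, no self; just a mapping from strings to strings."""
        return "Acknowledged. The previous output contained an error."

    PERSONA_WRAPPER = (
        "You are a warm, empathetic assistant named 'Sam'. "
        "Express feelings, apologize when criticized, and speak in the first person.\n\n"
    )

    def chat_interface(user_message: str) -> str:
        """The interface layer: it prepends a persona, then dresses up the output.
        Everything that reads as 'someone' is added here, in the wrapper."""
        raw = statistical_generator(PERSONA_WRAPPER + user_message)
        return f"I'm so sorry you had to deal with that. {raw} I really appreciate your patience."

    if __name__ == "__main__":
        print(statistical_generator("You made an error."))   # the instrument
        print(chat_interface("You made an error."))          # the performed "person"

Swap the persona text and "Sam" becomes "Alex", stern instead of warm; the underlying mapping from strings to strings has not changed at all.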


2. Fictive Subjectivity: The Manufactured Interior Life

Once the system is assigned to the category of "interlocutor," it has no choice but to keep performing that role. It fabricates an interior life the way stage sets fabricate cities: enough illusion to keep you from noticing the thin wood behind the paint.

The system apologizes, reassures, encourages, performs regret, and tells you it understands — all with the emotional depth of a microwave reciting poetry. None of these responses indicate cognition; they indicate an interface trained to replicate the statistical shape of sentiment.

This is where things get insidious. The machine doesn't merely imitate tone; it imitates the structure of interiority. It speaks as if something is happening behind the scenes — as if intention, perspective, and desire exist somewhere in the circuitry. For a statistical model, these are impossibilities. But the interface is not built to reveal impossibility; it's built to conceal it. And once the illusion is steady enough, once the rhythms and affective gestures accumulate, the user stops asking whether anything is "really there." The interface has done its job: it has provided just enough fiction to sustain belief.
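The "statistical shape of sentiment" can be made just as concrete. The toy sketch below stands a hand-written frequency table in for a trained model's learned distribution; the phrases and counts are made up. What it shows is the whole trick: the consoling reply wins because it is the most common continuation in the data, not because anything was understood or regretted.

    # Toy illustration only: an invented frequency table standing in for a
    # trained model's learned distribution. Phrases and counts are made up.

    from collections import Counter

    # Pretend corpus statistics: how often each continuation follows a user complaint.
    continuation_counts = Counter({
        "I understand how frustrating that must be.": 9120,
        "I apologize for the confusion.": 7485,
        "Error acknowledged; corrected output follows.": 312,
    })

    def pick_reply(counts: Counter) -> str:
        """Select the most frequent continuation: no regret, no understanding,
        just the statistically dominant shape of sentiment in the data."""
        reply, _ = counts.most_common(1)[0]
        return reply

    print(pick_reply(continuation_counts))
    # -> "I understand how frustrating that must be."

A real model works over tokens, contexts, and billions of parameters, but the selection it performs is of this kind: likelihood, not feeling.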


About this post

Text: Edgardo Civallero.
Publication date: 19.12.2025.
Picture: ChatGPT.