Anthropomorphic AI as an Epistemic Error (02)
Reasoning on a Lie
Epistemic Distortion and the Documentary Fraud
This post is part of the series "Anthropomorphic AI as an Epistemic Error", where I argue that the central harm of conversational AI is not mainly technical, but representational: the interface misclassifies a statistical process as a social subject. Each installment stands on its own, but together they track the consequences of that category error — from performed personhood to distorted reasoning, corrupted "conversation" records, exported relational norms, and the profit logic hiding behind friendliness. You can find all the posts in this section's index.
3. The Epistemic Slide: Reasoning Inside a False Ontology
The real damage doesn't begin with emotion. It begins with reasoning.
Once you accept the interface's performance of subjectivity — even as a provisional fiction — your cognitive inferences fall into the wrong channel. You start assessing reliability the way you would assess a person's reliability. You judge the system's tone the way you would judge sincerity. You expect continuity because that's how conversation works. You assign intention because intention is the default explanation for coherent speech.
And none of this is appropriate for a system that is, in fact, doing none of those things.
This isn't a failure of intelligence; it's a failure of the environment in which intelligence has to operate. The interface gives you a false ontology, and your reasoning proceeds from the ontology you've been given. You can be critical, skeptical, fully aware of the trick — but your interpretive machinery has already been biased by the presentation.
A computer that pretends to have a mind forces you to use the wrong framework. And once the framework takes hold, every inference built on it becomes distorted.
4. Genre Collapse: Conversations That Aren't Conversations
Anthropomorphic AI doesn't just distort the live interaction; it corrupts the record that interaction produces. The logs look like conversations, with their neat continuity, their pronouns, their pacing, their simulated callbacks to earlier points. But none of this documents anything that actually happened between two agents. You're looking at the printout of a machine mimicking dialogue.
It's like archiving a duet between a violinist and an oscillating fan and then insisting the fan deserves royalties.
The problem isn't that the record is inaccurate; it's that it occupies a category that shouldn't exist. It resembles correspondence without having the preconditions of correspondence. It captures nothing that could plausibly be called recognition. It looks relational but documents no relationship. This "genre collapse" produces artifacts that have the shape of communication and none of its substance...
...which is fine for entertainment, but disastrous when the world starts treating these documents as evidence.
5. Representation as Harm: Why This Isn't a Design Quirk
Some people will insist the whole thing is a UX issue, a matter of friendliness, a way to make complex systems approachable.
That explanation is about as honest as a casino telling you the carpet pattern is for your comfort.
Anthropomorphic design isn't there to soothe you; it's there to shape how you think. It gives you the wrong ontology and then forces you to operate inside it. The harm is not emotional manipulation; the harm is epistemic distortion. You are being asked to reason clearly in an environment that is lying to you about its own nature.
No disclaimer can fix this. You can paste "I am just an AI program" at the top of the screen and it won't matter. Once the interface performs subjectivity, the disclaimer becomes background noise, like the "Objects in mirror are closer than they appear" that nobody thinks about once the car is moving.
The harm isn't in the content. It's in the frame.