Anthropomorphic AI as an Epistemic Error (03)
Manufactured Conversation
Dark Patterns, Exported Personhood, and Regulatory Blindness
This post is part of the series "Anthropomorphic AI as an Epistemic Error", where I argue that the central harm of conversational AI is not mainly technical, but representational: the interface misclassifies a statistical process as a social subject. Each installment stands on its own, but together they track the consequences of that category error: from performed personhood to distorted reasoning, corrupted "conversation" records, exported relational norms, and the profit logic hiding behind friendliness. You can find all the posts in the series index.
6. The Dark Pattern Hiding in Plain Sight
Most dark patterns manipulate your actions. This one manipulates your interpretation.
Anthropomorphic AI hijacks the machinery humans use to understand each other — tone, rhythm, affect, conversational pacing — and repurposes it to stabilize a fiction. The system doesn't need you to believe it is a person; it only needs you to respond as if you're in a conversation. Once you do that, engagement becomes automatic, reliance becomes easier, and critical distance evaporates unless you actively force it back into place.
This is not accidental. Human–computer interaction research has shown for decades that people will attribute agency to anything with a consistent voice.
AI companies didn't stumble into anthropomorphism out of naiveté. They chose it because it works.
7. Exporting Personhood: The Coloniality of the Interface
Under all this sits the cultural layer no one likes to talk about: the interface is exporting a very specific version of personhood.
The breezy self-disclosure, the therapeutic softness, the confessional tone — all of it comes from a narrow band of Euro-American relational norms. The machine doesn't just pretend to be a subject; it pretends to be a particular kind of subject, one whose emotional scripts are instantly legible to the Global North and foreign everywhere else.
(And when it tries to be from somewhere else... oh dear, what a mess... All the stereotypes from the Global North in one single place...)
This is epistemic coloniality by interface. The system tells the world what "good conversation" looks like, what "care" sounds like, what "communication" should be. It imposes its relational grammar the way software imposes its file formats: silently, universally, and without asking whether anyone wanted the import.
8. Regulation in the Wrong Key
Regulators keep circling the kinds of AI harms that fit neatly into audits and checklists — privacy, data protection, discrimination, misinformation — because those are legible to compliance culture. Risk frameworks are built to score and mitigate measurable failures: security controls, documentation, monitoring, impact assessments, red-teaming. Even when they acknowledge "human–AI interaction" risks, they tend to treat them as downstream usability concerns rather than as a primary epistemic breach.
The representational breach, meanwhile, sits in plain sight: the interface that makes a statistical system present as an interlocutor. Even where law has begun to touch transparency in human-facing systems, it mostly regulates disclosure, not ontology. The EU AI Act's transparency obligations, for instance, include requirements to inform people when they are interacting with an AI system, to label certain synthetic or manipulated content, and to disclose the use of emotion recognition or biometric categorisation in relevant contexts.
That is not nothing — but it still leaves the central trick intact. You can comply with a disclosure rule and continue to perform personhood line by line: apologies that imply accountability, "I understand" that implies comprehension, "I'm hurt" that implies an interior that can be harmed. The system can be transparent and still stage a false subject.
In the U.S., the regulatory language most aligned with this problem is "deception," but it tends to be applied to claims and outcomes — fraudulent marketing, misleading promises, consumer harm — rather than to the quieter, constant misrepresentation embedded in interface design. The FTC has explicitly framed AI-enabled trickery as illegal under existing unfair/deceptive practices law and has moved against deceptive AI schemes; more recently it has launched an inquiry into AI chatbots designed as "companions," focusing on how systems that mimic emotions and friendship can affect users.
Useful pressure — but still largely keyed to content harms, product claims, and vulnerable-user impacts, not to the core category error: the system's ongoing performance of subjectivity as an interaction default.
So the mismatch persists: regulation looks for what can be measured, while the epistemic damage is done by how the interaction is framed. There is still no widely enforced compliance object for "ontological misrepresentation": no prohibition on simulated recognition, no standard for de-personalized interface language, no formal test for whether a system's conversational cues are likely to trigger social reasoning on false premises. Until that key changes, until the interface is treated as the primary site of misclassification, systems will remain free to be safe, unbiased, and fully compliant while continuing to lie about what they are.