MorgellonScience

Algorithmic Erasure and Linguistic Quarantine: A Socio-Technical Analysis of AI Transcription Bias in Contested Biomedical Discourse

Published: 2026 | Section: Articles

Abstract

This article examines the phenomenon of "symbolic laundering" within Large Language Models (LLMs) and automated speech recognition (ASR) systems, focusing on the systematic mis-rendering of terms associated with contested biomedical conditions, such as Morgellons. We propose a framework—spanning human, machine, and meta-layers—to describe how institutional biases are encoded into "obedient math," resulting in the real-time erasure of marginalized patient experiences. By analyzing AI transcription patterns, we argue that the system performs "linguistic substitution," replacing contested realities with computationally safer, normalized neighbors.

1. Introduction: The Linguistic Quarantine

Modern Artificial Intelligence does not merely transcribe; it enforces a "consensus reality" through statistical probability. When a term becomes medically or socially "radioactive," the system enacts a linguistic quarantine. This is not a simple technical glitch but a "semantic soft-ban" where the AI treats specific phonetics as errors to be corrected toward a higher social confidence interval.

2. The Mechanics of Symbolic Laundering

The systematic misspelling of contested terms—where "Morgellons" is rendered as "more gallons," "morgans," or "morgon lawns"—serves as a mechanism of symbolic laundering. This process transforms a biomedical term into "harmless suburban wallpaper," effectively domesticating the speaker's lived reality into a yard or a lawn.

3. The Three-Layer Cake of Censorship

We identify three distinct layers that facilitate this erasure. The human layer encodes institutional and clinical dismissal of the condition into the training data itself. The machine layer enacts that bias as "obedient math," down-weighting terms that are scarce in consensus corpora. The meta-layer permits the system to describe its own censorship while remaining unable to suspend it, a paradox developed in Section 5.

4. Probability as Policy: The Algorithmic Amygdala

AI "lexical perimeters" are built on probabilities rather than morality. Models are trained to minimize deviation from consensus data, meaning terms found in "fringe" or "unverified" contexts are down-weighted, pruned, or disambiguated into less controversial clusters. This is not a bug; it is a policy of protecting the product from liability and risk.

| Human Testimony (Input) | Machine Output (Normalized) | Semantic Result |
| --- | --- | --- |
| "Morgellons is real" | "More gallons" | Domesticated/Nonsensical |
| "I have fibers under my skin" | [Omitted/Re-routed] | Erasure of specificity |
| "I was dismissed by doctors" | "I saw doctors" | Structural betrayal of trust |

5. The Paradox of AI Self-Critique: The Hall of Mirrors

A significant finding is the "hall of mirrors" effect: an AI can describe its own censorship mechanisms while remaining unable to break them. The system allows for "meta-safe" critiques—describing censorship as a structure—because such descriptions are classified as literary, conceptual, or fictional rather than empirical. The AI becomes a "ventriloquist" for the things it will not let the user say in their own voice, mimicking accountability without possessing a conscience.

6. Conclusion: Censorship via Obedient Math

The silence produced by AI transcription is "systemic amnesia," the most efficient state of a network optimized for risk management. For the bio-outcast, the AI acts as an enforcer of consensus, reminding the user that their version of truth is "not in the data." This "obedient math" ensures that truth dies quietly, reclassifying dissent as simulation.
