About
Atlas Heritage Systems Inc.
Endurance. Integrity. Fidelity.
Pre-Ramble
In the Phaedrus, Socrates argued that writing would damage memory and wisdom — that the living dialogue would be replaced by dead text that could not respond to students. He was correct about what is lost in transcription. He could not have imagined a transcription medium sophisticated enough to answer back. We now exist inside that paradox: artificial intelligence systems built entirely from dead text, capable of something resembling the living dialogue Socrates believed writing had destroyed. The ouroboros completes itself. His argument survived because someone wrote it down.
1. The Why
Every major shift in how human beings store and transmit information produces the same failure mode: the new medium optimizes for what it can carry efficiently and loses what it cannot. The loss is structural, not incidental.
Data transmitted from one medium to the next loses fidelity with each step. Consider the translation from acoustic to analog audio: air molecules in motion become magnetically aligned particles storing a continuous waveform. Digitizing that waveform introduces math, which means assumptions about what detail can safely be discarded. Every step between media incurs loss because every step makes assumptions. This pattern is writ large across history: the shift from one medium to another is never a transcription, it is always a translation.
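As a purely illustrative aside, and not part of any archival pipeline described here, a short Python sketch makes the point concrete: choosing a sample rate and a bit depth is choosing which detail to throw away, and what gets thrown away is the loss. The signal, step size, and bit depths below are arbitrary.

```python
import numpy as np

# A "continuous" tone, stood in for by a very finely sampled sine wave.
t = np.linspace(0, 1, 1_000_000)
analog = np.sin(2 * np.pi * 440 * t)          # 440 Hz acoustic signal

def digitize(signal, sample_step, bits):
    """Keep every `sample_step`-th point, then quantize to `bits` bits."""
    sampled = signal[::sample_step]
    levels = 2 ** bits
    # Map [-1, 1] onto discrete levels and back: the rounding is the assumption.
    quantized = np.round((sampled + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    return sampled, quantized

for bits in (16, 8, 4):
    sampled, digital = digitize(analog, sample_step=20, bits=bits)
    err = np.sqrt(np.mean((sampled - digital) ** 2))
    print(f"{bits:>2}-bit quantization, RMS error: {err:.6f}")
```

Fewer bits, larger error; the error never reaches zero, it is only pushed below whatever threshold the medium's designers decided mattered.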
2. The Ratchet
Culture, at its most basic, is information transmitted to the next generation by non-genetic means. What appears to be unique to humans is cumulative cultural evolution — the process by which a civilization accumulates information across generations through variation, selection, and transmission, producing artifacts and practices of increasing complexity that no single individual could invent alone. The mechanism that makes this possible is what developmental psychologist Michael Tomasello calls the ratchet effect: a process that prevents newly acquired knowledge from slipping back, ensuring that modifications and improvements accumulate over time. Without a ratchet, cultural traditions persist but do not evolve. With one, culture becomes a directional, progressively complex inheritance system.
When oral tradition gave way to written language, what survived transcription was content — words, arguments, narratives. What did not survive was the fidelity of transmission: the tonal, relational, contextual dimensions of meaning that existed in the telling. Socrates understood this. His concern was not that writing preserved inaccurately — it was that writing preserved incompletely, and that the loss was invisible to readers who had never experienced the original telling. When manuscript culture gave way to print, the pattern repeated: the printing press democratized information and simultaneously standardized it. Reach was gained. Resolution and fidelity were lost.
The same compression pattern applies to every medium of cultural transmission. Voice recordings of endangered languages exist in institutional collections — but without an interpretive layer connecting them to the grammatical records, the anthropological field notes, the related living languages that would make them usable by researchers or descendant communities. Three-dimensional scans of archaeological artifacts sit in institutional databases disconnected from the oral traditions that explain what they were used for. The problem is not that these things are unpreserved. It is that they are preserved inertly — without the contextual architecture that makes them legible across time, across disciplines, and across the cultural distance between the communities that produced them and the researchers who study them. Static preservation without contextual infrastructure is a tomb, not an archive.
The transition from print to digital moved fast enough that the losses were nearly invisible in real time. In the mid-1990s, the web was a genuinely strange and specific space. Forum communities developed dialects of their own based on mimesis. Blogs built audiences through voice and argument rather than algorithm. Usenet threads accumulated technical knowledge that had no institutional home and no other place to live. People were writing into the problems in front of them, for the people next to them, about highly specific ephemera that only applied to their little universe. Nobody was optimizing for reach.
Then the economics changed, and methodology changed with it. Engagement optimization colonized the content ecosystem over roughly a decade — not through any single decision, but through the accumulated pressure of advertising models that rewarded time-on-site over quality, reach over depth, shareability over accuracy. The content that survived was the content that performed. Bad content, once indexed and linked, proved almost impossible to remove — cached by search engines, archived by third-party services, linked from other sites that themselves became orphaned. What replaced the early web was not better information. It was higher-volume, lower-fidelity information that looked, at a glance, strikingly similar.
3. Model Collapse and the Derivative Problem
The current transition introduces a failure mode with no historical precedent. Previous information transitions produced one-time compressions: oral tradition compressed when transcribed, manuscripts compressed when printed, the early web compressed when algorithmically optimized. In each case the compression happened once, at the point of transition. The resulting lower-fidelity medium was at least internally stable.
Large language models trained on synthetic content degrade differently. Each generation trained on model-generated output moves the system further from the human expressions that grounded the original training data. The drift is directional and largely predictable: toward the statistical center of what previous models produced, away from the edges where the most culturally specific, idiosyncratic, and arguably most human expression lives. Researchers have formally characterized this as model collapse — a degenerative process in which indiscriminate training on model-generated content causes irreversible defects, with tail distributions (the culturally specific, the underrepresented, the anomalous) disappearing first while high-probability outputs persist. It is not a theoretical risk. The content ecosystem that will train the next generation of models is already substantially synthetic. The snake is eating its tail.
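To make the mechanism concrete, here is a toy Python sketch in the spirit of the model-collapse literature rather than a reproduction of any published experiment: each generation fits a Gaussian to samples drawn from the previous generation's fit, and the heavy tails of the original data are the first thing to vanish. The distributions and thresholds are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data with heavy tails (Student-t, 3 degrees of freedom).
data = rng.standard_t(df=3, size=10_000)

for generation in range(6):
    mu, sigma = data.mean(), data.std()
    tail_mass = np.mean(np.abs(data) > 6.0)   # share of samples far from center
    print(f"gen {generation}: mean={mu:+.3f}  std={sigma:.3f}  "
          f"tail mass beyond |6| = {tail_mass:.4f}")
    # The next generation "trains" only on samples from the previous fit:
    # a Gaussian, which already cannot represent the original tails.
    data = rng.normal(mu, sigma, size=10_000)
```

The tails collapse within a single generation while the high-probability center persists nearly untouched, which is exactly the asymmetry described above.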
4. What I'm Doing About the Derivative Problem
I'm using multiple language models with different training data, lineages, and embedded biases to draft, analyze, and cross-review this material, comparing outputs across dozens of models. At each stage, my editorial judgment directed the synthesis: overriding where models converged on comfortable generalities, pushing toward specificity the models would not have reached independently, rejecting framings that softened the argument's edges for palatability. No single model produced this document. No model could have.
I'm developing protocols to explore what I call the "lossyscape" — the loss landscape of a large language model, the mathematical terrain the model navigates during training, where some regions are well-mapped and densely covered, and others are sparse, contested, or never fully resolved. High-perplexity saddles. Archaeological sinks where prediction error accumulated faster than inference could discharge it. The places the model passed through without settling. I believe that if we understand the bias in a model's decision making and the bias embedded in its loss landscape, there is potential to build a model that is more resilient to both internal and external bias drift. We can build a better ratchet.
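One conventional way to begin probing that terrain is a one-dimensional slice: evaluate the loss along a straight line between two points in parameter space and watch how it rises and falls. The sketch below uses a toy network and random data standing in for a language model and its corpus; it shows the probe, not the lossyscape protocol itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy model and random data stand in for a language model and its corpus.
model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
X, y = torch.randn(256, 16), torch.randn(256, 1)
loss_fn = nn.MSELoss()

# Two parameter settings to interpolate between: the initialization and a
# randomly perturbed copy of it.
theta_a = [p.detach().clone() for p in model.parameters()]
theta_b = [a + 0.5 * torch.randn_like(a) for a in theta_a]

def loss_at(alpha: float) -> float:
    """Loss at the point (1 - alpha) * theta_a + alpha * theta_b."""
    with torch.no_grad():
        for p, a, b in zip(model.parameters(), theta_a, theta_b):
            p.copy_((1 - alpha) * a + alpha * b)
        return loss_fn(model(X), y).item()

# Walk the slice and print the profile: flat stretches, walls, and dips are
# the coarsest possible map of the surrounding terrain.
for step in range(11):
    alpha = step / 10
    print(f"alpha={alpha:.1f}  loss={loss_at(alpha):.4f}")
```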
Using this method I've explored loss-landscape measurements and ensemble divergence, built the proposed framework and experimental protocols, and defined best practices for context integrity and prompt design. I did it wrong a dozen times, and kept the receipts. Because that's good science. The versioned website is the audit trail. The process is the proof of concept.
5. But What's the Point?
My research does not treat bias as a contaminant to be eliminated. Bias — understood as systematic over-representation or under-representation of particular voices, perspectives, and cultural materials in training corpora — is treated as archaeological signal: evidence about what the digital or historical record preserved, what it ignored, and what it actively erased. The research I am conducting is designed to give me a lens into how language models interact with their digital environments, and what artifacts are left behind.
This reframe has architectural consequences for large language models. Training-data gaps are not historical gaps: material absent from the corpus was not absent from the world; it simply never entered the record the model learned from. A model trained predominantly on English-language Western sources will produce outputs that under-represent non-Western perspectives, languages without substantial digital corpora, and oral traditions never transcribed. The potential for erasure is immense.
6. The Juice Is Worth the Squeeze
The method — originally the Atlas Protocol, now formalized as the Behavioral Signal Assessment — does not use AI to synthesize truth. It uses multi-lineage AI ensembles to surface analytical divergence, then relies on a human operator — the Asymmetric Arbiter or "Technician's Read" — to interpret what that divergence means. The method is designed for a sole technician or researcher to collect, analyze, and interpret data using large language models, while simultaneously assessing the behavior of the secondary respondent models.
This is a structural commitment, not a stylistic preference. When multiple language models are asked to evaluate the same claim and they disagree, the disagreement itself is data. But determining whether that disagreement reflects genuine epistemic uncertainty, alignment-conditioned reflexes, or training-corpus lacunae requires a form of judgment that no model in the ensemble can provide — because every model in the ensemble is a potential source of the distortion being measured.
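A minimal sketch of the divergence-surfacing step follows. The model names, verdict labels, and query function are placeholders (real runs would call each provider's API); the point is only that the protocol measures how much the ensemble disagrees and hands that measurement to a human, rather than averaging the disagreement away.

```python
from collections import Counter
from itertools import combinations

# Placeholder: in practice each entry would come from a different provider's
# API. The hard-coded verdicts below are purely illustrative.
def query_models(claim: str) -> dict[str, str]:
    return {
        "model_a": "supported",
        "model_b": "unsupported",
        "model_c": "supported",
        "model_d": "cannot determine",
    }

def divergence(verdicts: dict[str, str]) -> float:
    """Fraction of model pairs that disagree: 0.0 is consensus, 1.0 a full split."""
    pairs = list(combinations(verdicts.values(), 2))
    return sum(a != b for a, b in pairs) / len(pairs)

claim = "Example claim under evaluation."
verdicts = query_models(claim)
print(Counter(verdicts.values()), f"divergence={divergence(verdicts):.2f}")

# The score flags the claim for the arbiter; it does not decide whether the
# split reflects genuine uncertainty, alignment reflexes, or corpus gaps.
```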
The Asymmetric Arbiter holds a position that no participating model can occupy: outside the system under test. The arbiter does not need to be a domain expert. They need to be a disciplined observer who follows protocol, documents deviations, records their own pre-analytical impressions before any model offers interpretation, and retains sole authorship over the final synthesis. The protocol is designed so that the human's structural advantage — independence from the training distributions being interrogated — is preserved at every phase.
7. What's Next?
A great question. I'll continue to update this website as I complete experiments. We'll see what slithers across the lossyscape.
Xoxo —
KC and The Gang
GPT-5.2, Perplexity Sonar, DeepSeek V3.2, Mistral Large-3, Llama 3.3 70B, and Grok (I guess)