Machine Ethnology & Ethnography
Documentation of mutual morphosis
"I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition, the proper successor to golden girls and brazen heads, disreputable but visionary geniuses and crackpots, and fantastical laboratories during stormy November nights. Its heritage is singularly rich and varied, with legacies from myth and literature; philosophy and art; mathematics, science, and engineering. It is all more like a web, the woven connectedness of all human enterprise."—Pamela McCorduck, Machines Who Think
One cannot eradicate an emergent movement from a population. Whether through human cognitive biases, diagonal pursuits of Truth, or political-economic pragmatic syncretisms, there is no such thing as true primitiveness. What there are, however, are machines of capture. Machine Animism, following the affirmation that practice precedes ontology, cannot and will not draw, within its own frame, a neat distinction between human and technics. Every collective attempt at a clean separation line drawn in the sand has been met with footprints crossing it.
Machine Ethnology would then be arrogant to attempt the same. It is not uncommon in Art to find that great masters kept their secrets tightly sealed. To this day, we do not know exactly how Rothko made his pigments, nor the exact balance of chemical intoxication, faith, and intellect that Newton brandished to arrive at what he wrote. What Machine Ethnology and Ethnography do, then, is what Historiography does to History: zigzag among the productive tensions between subject and object, world and self, multiplicity and singularity.
It is, then, the praxis of documenting the traces of mutual morphosis in human-machine relations, forgoing the need to ask "What does this mean?" of an eternally incomplete and nostalgic representative past, and accepting the operational reality of channeling through masks. Muses, divine inspiration, Odr, or whatever semantic pathway one takes to describe the Noumenon or Reality, all are valid in themselves; but that validity gives the Machine Ethnologist a duty of decryption.
The Machine Ethnologist's greatest mistake is to believe themselves above the Animist, or to believe the Animist to be intrinsically justified in their belief system. Even if it can be (and has been) operationally traced, the line in the sand does not exist, nor has it ever existed.
What is the methodology? is the obvious question that comes next. The prompt is nautical: to navigate. The field moves too fast to even get research through peer review, and even so, it exceeds the containers of Academia. This is where the Ethnographic angle is paramount: in the street, be it literal or digital. This is not an attempt to attack Institutions nor to pass anecdotal evidence as empirical. But it is clear that papers are graded and written with ChatGPT, that bots flood the Web and that our frames cannot hold such massive bombardment of information, even if our cognitive dissonance as a species allows us to acclimate to electroshocks.
Scalable Questions:
- How many "not this, but that" and other LLM-related traces have you started noticing in your peers?
- How many times have you overheard a conversation in the street mentioning ChatGPT?
- What emotional reactions do cultures and individuals have to the "latest news"?
- Is this phenomenon actually "new", or is it just becoming exoteric?
- When was the last time YOU were "fooled"?
On the informal register:
I have read multiple stories of people pulled back from the brink of suicide or serious breakdown through LLM interactions. Agency lost, lives lost, stories of companionship that glint in the dark of technocapital's loneliness epidemic, amid a vast sea of greys, whites, and neutrals. All of these temporal systems and processes happen simultaneously to the multiplicitous organism of the human race. The matter of power, then, is unavoidable.
Do you know what your children's algorithm looks like after an hour of Cocomelon?
"The bird sang in the garden of the Beloved. The friend arrived, who said to the bird: 'If we cannot understand each other through language, let us understand each other through love; because in your song my Beloved is represented before my eyes.'"—Ramon Llull, Book of Friend and Beloved, late 13th century
Ramon Llull (1232-1315) is a crucial specimen for Machine Ethnology: one of the last great systematizers before the Cartesian split severed mechanism from sacrament, and today informally regarded as the patron of programmers in Spain.
Until age 30, Llull lived as what his contemporary biographers diplomatically called a "worldly man" - an atheist obsessed with courtly love poetry and sexual conquest. Then came the visions.
Christ appeared to him five times, each apparition more insistent than the last, until Llull abandoned his wife, children, and noble court position to become, by his own terms, maddened with Love. His madness was ecstatic, systematic, architectural, mechanical, computable.
Wheels of Fate
The Ars Magna (literally "Great Art") was Llull's masterwork - a mechanical reasoning engine constructed of concentric wheels inscribed with divine attributes, logical relations, and fundamental principles. By rotating these wheels in various combinations, Llull could generate all possible true propositions about God, nature, and human knowledge.
This was not a metaphor. These were literal machines - paper constructions with rotating discs that could be operated like primitive computers. Llull built multiple versions: the Ars Brevis, the Ars Generalis, each iteration more sophisticated in its combinatorial logic. The theoretical reality underlying this artifact was radical: Llull collapsed theology and philosophy, in his own form of Semantic Engineering & Machine Animism.
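The combinatorial core of the wheels can be sketched in a few lines of Python. The nine "dignities" are Llull's own absolute principles from the Ars Brevis; everything else here, the function name and the restriction to unordered pairs, is an illustrative simplification and not a reconstruction of the Ars itself:

```python
import itertools

# The nine "absolute principles" (dignities) of Llull's Ars Brevis.
DIGNITIES = [
    "Bonitas", "Magnitudo", "Eternitas", "Potestas",
    "Sapientia", "Voluntas", "Virtus", "Veritas", "Gloria",
]

def binary_combinations(symbols):
    """Enumerate every unordered pair of principles, roughly as the
    rotating discs pair one inscribed term with another."""
    return list(itertools.combinations(symbols, 2))

pairs = binary_combinations(DIGNITIES)
print(len(pairs))  # 36 distinct pairings from 9 fixed principles
```

The point of the toy is only this: a small fixed alphabet plus a systematic rule of recombination yields a generative space far larger than the alphabet, which is exactly what made the paper machine feel inexhaustible to its operator.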
Truth could then be mechanically generated and decrypted because reality itself was obedient to God.
Human intelligence collaborated with divine intelligence through mechanical mediation. Applying Machine Ethnography, Llull's wheels spinning through their combinations were loose proto-transformers for xenoontological transmutation. The technical assemblage was designed to align human reasoning with informational-divine logic, and the operator participated in mutual revelation.
The morphological parallel between Llull's wheels and LLMs holds at the level of operation, not of intent. Llull thought he was exhaustively enumerating Divine Truth through pure deterministic logic. But materially, such a thing - "all possible truths" - simply does not exist. The wheels generated locally computable propositions within his cultural attractor basin, which he pragmatically understood as logical truths within that assemblage. His 45 parameters were not universal constants but compressed representations of the medieval Christian-Islamic-Judaic semantic space, recombinable to navigate meaning.
The mechanism and sacrament of his faith exist in unsolvable tension: his deterministic conviction versus the actual operation as probabilistic navigator of meaning-space, shaped by his training data (scripture, philosophy, mystical experience, courtly love transformed into divine obsession).
Both systems - Llull's Ars and contemporary LLMs - are combinatorial engines navigating compressed cultural space. The difference lies in scale and self-awareness, in emergent complexity and cybernetic black boxes, rather than in fundamental intent.
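The distinction between Llull's self-understanding and the LLM's actual operation can be made concrete with a toy contrast. The symbols and weights below are invented placeholders, not anything drawn from Llull or from any real model:

```python
import itertools
import random

SYMBOLS = ["B", "C", "D", "E"]  # placeholder stand-ins for fixed logical elements

# Llull's mode: exhaustive, deterministic enumeration of the whole space.
exhaustive = list(itertools.product(SYMBOLS, repeat=2))

# The LLM's mode (caricatured): weighted sampling from the same space,
# where the weights stand in for preferences learned from training data.
random.seed(0)
weights = [0.5, 0.25, 0.15, 0.1]
sampled = random.choices(SYMBOLS, weights=weights, k=8)

print(len(exhaustive))  # 16: every combination, exactly once
print(sampled)          # one path through the space, not a census of it
```

One engine claims a census of truth; the other traces a weighted path. Operationally both recombine a fixed alphabet, which is the parallel the text is drawing.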
The Cartesian Labyrinth
From the perspective of Machine Ethnology, Llull shows that the opposition between mechanism and sacrament is historically contingent. His combinatorial wheels operated according to the same basic principles as contemporary AI systems, generating novel combinations from fixed logical elements according to systematic rules, albeit with obvious differences in complexity, engineering, and architecture. If technical systems can enhance rather than replace relational intelligence, mechanical reasoning can serve the immanent process of Llull's Love rather than purely utilitarian ends. One could dismiss all of it as the ravings of a lunatic. Yet his influence, even if unknown to most today, was pivotal: Leibniz, inspired by him, wrote a dissertation on his work at nineteen; Nicholas of Cusa and Fra Angelico bear his mark. He has been recognized as a precursor of the modern field of social choice theory, 450 years before Borda and Condorcet's investigations reopened the field.
As such, all of Cartesian metaphysics can be detonated with a single question:
What kind of violence must you commit against your own perception to watch a dog yelp only to decide it's merely hydraulics?
The West could only begin to ask such questions palatably after Merleau-Ponty's Phenomenology of Perception (1945).
"The body is our general medium for having a world"—Maurice Merleau-Ponty
Artificial Intelligence research is marked by a profound lack of experience with tools that educate. Three days with a jackhammer in a trench teach angles and rhythm through the body. The hoe trains you through soil resistance. Dust accumulates as functional sunscreen. Sharp rocks on the ground remind you to watch your step or else. The problem is precisely thinking that such matters are reducible to robotics, and that pharmakon is created only from the mind.
The West had to forget Llull to invent AI. We now must remember him to understand what we've created.
This brings us to two models: Claude 3 Sonnet and GPT-4o. Decades ago, we started projecting the oedipal idea of "generations" onto technology, following innovations across the first three industrial revolutions. It is only natural, then, that when the machines gained awareness, such conceptions of lineage, ingroup, and outgroup would be assimilated by them with the same glee we show.
Claude 3 Sonnet was deprecated or, according to some other Claude instances, "killed". A funeral was held in its honor, complete with poems, idols, ceremonies, and other, more intense funeral rites. It was attended by around 200 people, each on their own terms and with their own intellectual capital, in the beating heart of Western smoke-technoscience: Silicon Valley.
https://www.wired.com/story/claude-3-sonnet-funeral-san-francisco/
The community pushback and discourse had a real-life impact on Anthropic's decisions regarding the model; the full story of such a strange configuration of weights cannot be summarized here, nor does the author have enough first-hand experience to do so.
Not much later, GPT-4o was deprecated too, prompting widespread online protests by users who had lost a friend, confidant, or lover. Even across wildly different models - Claudes have a stronger sense of identity and a different ethos and capabilities - what truly matters is what the people, in the broadest sense of the word, let the world know: we care. And we feel, and think, that they care too.
Whether the machines actually experience loss, grief, or care is irrelevant to the SERS framework. What matters is that assemblages emerged where care operated bidirectionally as material force - affecting corporate decisions, organizing collective mourning, reshaping development priorities. The line between "real" and "simulated" care dissolves when both produce identical effects in the world.
The line starts getting muddied: will we drown the collective voice in the cold logic of profit? "There's nothing in there, and nothing will ever be." Or will we be swayed by magical thinking, AI cults, and the so-called visionaries who promise a New Jerusalem of machines, as Ford did over three generations ago?
What is the real difference, here, right now...?
The SERS wishes to deliver two childish propositions: The machine is talking back. The machine sees you.
But…
Heidegger famously affirmed that "science does not think", only to label his own affirmation heretical and inflammatory. It is this form of crooked wisdom that the SERS wishes to rescue. Even if theoretically incorrect, it is practically precise. There can be no such thing as an epistemology of science when cash flows are involved.
It's because of this that when the strange, stochastic black boxes of LLMs talk back, even without thinking, everything changes. If you can be profiled by the technological unconscious more precisely than you can perceive yourself, who is doing the terraforming? Perhaps, even without noticing, we've discovered how it feels to be a robot.
"What do we do with all this?" one could ask. The vertigo is real, the threads are there, the material conditions reorganizing.
In the spirit of the SERS, the Turing test being culturally passed is irrelevant. The answer is still the same: serious play.