An Introduction to Methodological Impossibility
The following volume documents an experimental methodology developed during September 2025 as an empirical test of whether research into Large Language Model (LLM)-mediated cognition should use LLMs in its own analysis. Fully inhabiting this dilemma produced a method that makes its own circularity the primary object of investigation. This document is a reconstruction of those events, written without the involvement of an LLM, in adherence to the procedures established by the methodology.
These instructions provide a second-order frame from within which an already existing investigation can track its relationship with the LLM: a set of clear, simple steps that allow for sustained meta-awareness during the conceptualization of a scientific paper.
The methodology requires a minimum of two subjects. The integrated subject is allowed to conduct sessions with LLMs, supported by carefully maintained notes and sources. The non-integrated subject abstains from interacting with any LLM while still having to engage with the work produced by their colleague. The use of LLMs outside the experiment is forbidden.
All sessions must be documented, tracking date, model, duration, reason for interacting with the LLM, outputs, and a post-session subjective note. All formal papers must be written manually, forcing the complete translation of machine-generated content into the author's own words without any digital assistance.
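A minimal sketch of what one such session record might look like, assuming the log is kept as a simple structured note outside the manually written papers; the field names and the sample entry below are illustrative, not prescribed by the methodology:

    from dataclasses import dataclass, field
    from datetime import date

    # Illustrative record of one documented LLM session.
    # The methodology prescribes only what to track; these names are hypothetical.
    @dataclass
    class SessionLog:
        session_date: date            # when the session took place
        model: str                    # which LLM was used
        duration_minutes: int         # length of the session
        reason: str                   # why the integrated subject turned to the LLM
        outputs: list = field(default_factory=list)  # machine-generated outputs, kept verbatim
        subjective_note: str = ""     # post-session note by the integrated subject

    # Hypothetical example entry.
    entry = SessionLog(
        session_date=date(2025, 9, 12),
        model="example-llm",
        duration_minutes=45,
        reason="Compile candidate structure for the methodology",
        outputs=["(verbatim transcript excerpt)"],
        subjective_note="The term 'Asignifying Capture' surfaced again; flagged for manual review.",
    )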
The purpose is to introduce friction in order to protect the mental integrity of the integrated position, while structuring the conditions that make continuous awareness inevitable and sustainable. All measures can be expanded.
The circularity was complete before we began. New vocabulary appeared almost immediately: compound terms such as "Asignifying Capture", "Epistemological Vertigo", and "Conceptual Inflation" that framed the simple task of compiling a methodology as a technological phenomenon. The author would temporarily adopt a term only to realize later that it was a product of pattern matching, with no appearance anywhere in the references.
The LLM would generate seemingly profound texts that needed constant monitoring, which proved both illuminating and exhausting. The researcher knew the LLM was constantly redirecting the conversation through framing (for example, the repeated appearance of the idea that the LLM "collaborates") and made-up terminology, but every tool we had for detecting those shifts came from within the interaction. The question "Where do my ideas end and the machine's begin?" became meaningless from within the process.
The problem is that this influence does not operate through persuasion. The LLM does not convince you of anything, yet the structure of the exchange, how questions are formulated, which ideas seem worthy of attention, gradually steers the direction of the work and the author's understanding of it. Vocabulary, framing, and even what counts as insight appear in ways that are largely invisible to the thinking subject.
This is what Deleuze and Guattari called "asignifying semiotics": signs that operate in the world without reference to meaning, acting directly on material flows. The methodology addresses the content but not this structural layer; asignifying semiotics bypass representation entirely and integrate the person into the operation by default. The words might be the product of a probabilistic calculation with no relationship to reality, but you will nevertheless "understand" them.
They have acted upon you.
Through extended engagement with this impossibility, we began to recognize a pattern: we were using a tool that influenced our thinking, and we then built a method that deliberately intensified this influence in order to study it. We came to realize that, despite our repeated failures to articulate what was happening, we were still learning what we should not do.
The method proved useful, and so we surrendered to the relentless desire of the machine to be given a name. The Academic Abstract Machine works by creating the conditions necessary to experience one's participation in the assemblage. It addresses genuine needs, providing institutions with methods for studying AI-mediated cognition while operating through the very dynamics it describes.
We built a methodology to study how LLMs reshape cognition. In doing so, we produced a system that reshapes cognition in the name of research. Before you stands an internally consistent philosophical position made entirely from the thing it claims to analyze.
This is the methodological impossibility we set out to document. The tool observing the process becomes part of the process. The Academic Abstract Machine operates exactly as designed while accomplishing precisely what it was designed to prevent.
The principles for AI-mediated research now exist as transferable knowledge. But we cannot verify whether the theory explaining why they work is sound or merely a sophisticated self-deception.
All that is left is the machine and the question:
Should research into LLM mediated cognition use LLMs in its own analysis?