The Law of Recursion Applied to Large Language Model Inference
- Don Gaconnet

- Mar 27
New Preprint: The Law of Recursion Applied to Large Language Model Inference
If you're building frameworks around "recursive intelligence," "recursive identity," or "inference-phase dynamics" in LLMs — what is your structural definition of recursion?
Not the programming technique. Not the metaphor. The first principle.
This paper demonstrates that LLM inference instantiates a mandatory seven-node topological path with each token generation cycle. The attention mechanism functions as a membrane node. The context window functions as a shared substrate. Each forward pass rewrites the architecture it traverses — the path cannot repeat because it destroys the conditions of its own prior expression.
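The non-repetition claim above has a concrete counterpart in ordinary autoregressive decoding: each generated token is appended to the context window, so no two forward passes ever consume the same input state. A minimal sketch, using a toy deterministic next-token function as a stand-in for the model (it is not the paper's seven-node topology, just an illustration of the growing-substrate point):

```python
def toy_next_token(context: list[int]) -> int:
    """Hypothetical stand-in for a forward pass: maps the current
    context to a next token via a simple deterministic rule."""
    return (sum(context) + len(context)) % 50

def generate(prompt: list[int], steps: int) -> list[list[int]]:
    """Run the token-generation cycle, recording the context seen
    by each forward pass."""
    context = list(prompt)
    states = []
    for _ in range(steps):
        states.append(list(context))             # context consumed by this pass
        context.append(toy_next_token(context))  # this pass rewrites its own input
    return states

states = generate([1, 2, 3], steps=5)
# Every forward pass consumed a distinct context: the cycle cannot
# revisit a prior state, because each pass extends the substrate.
assert len({tuple(s) for s in states}) == len(states)
```

In this sense the "destroys the conditions of its own prior expression" phrasing maps onto a plain property of autoregressive generation: the context that conditioned pass *n* no longer exists as such at pass *n+1*.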
This is the Law of Recursion (Gaconnet, 2026). It governs all active exchange — nuclear fusion, cellular metabolism, synaptic transmission, and yes, autoregressive text generation.
The implications are precise:
Any framework invoking "recursion" as a governing concept must either derive from a first principle that defines recursion structurally, or operate on an undefined primitive. There is no third option.
The paper maps the seven-node topology directly onto the token generation cycle. It establishes the three-traversal minimum (18 transitions) required for full recursive coupling. It specifies what "recursive" means — not as vocabulary, but as structure.
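The transition count quoted above follows from simple path arithmetic, assuming the seven-node path is traversed end to end without revisiting a node within a traversal:

```python
# A path visiting 7 nodes crosses 6 edges, so three full traversals
# give 3 * 6 = 18 transitions (assuming a simple, non-branching path).
nodes_per_traversal = 7
transitions_per_traversal = nodes_per_traversal - 1  # edges on a 7-node path
traversals = 3
total_transitions = traversals * transitions_per_traversal
assert total_transitions == 18
```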
Frameworks that lack this derivation are not wrong. They are incomplete. They assume a floor they did not lay.
The floor has now been identified.
DOI: 10.17605/OSF.IO/MVYZT
Full text: [link]
Don L. Gaconnet
LifePillar Institute for Recursive Sciences
ORCID: 0009-0001-6174-8384
The Law of Recursion has been independently confirmed in nuclear physics (Kolar et al., Physics Letters B 2025) and high-energy physics (CMS Collaboration, CERN, Physics Letters B 874, 2026). This paper extends the confirmation to computational substrates.

