From Transformers to Automata: Converging Paths Toward Cybernetic Intelligence
Introduction: Two Ladders to the Same Mountain
For years, the trajectory of AI has followed the ascent of large language models (LLMs), pushing ever-larger context windows, multi-modal encoders, and prompt engineering toward a kind of industrialized cognition. Yet, an unexpected path has recently emerged from an entirely different landscape: cellular automata.
What if these two seemingly divergent paths — transformer-based recursive refinement and local-rule emergent models — are not competitors, but complementary halves of a more powerful paradigm?
This article is a conceptual continuation of our prior work, Beyond the Context Window, which framed viable AI systems as cybernetic ecosystems. Here, we introduce an alternate perspective on that same body of knowledge: a new entry point rooted in the convergence of self-refining LLMs and structurally adaptive automata.
1. Recursive Refinement in LLMs: From Fine-Tuning to Self-Tuning
In “Think, Prune, Train, Improve” (Costello et al., 2025), transformer-based LLMs demonstrate a remarkable capacity for self-improvement: the model generates candidate reasoning traces (think), filters them for correctness (prune), fine-tunes on the surviving traces (train), and repeats the cycle with the stronger model (improve).
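To make the shape of this loop concrete, here is a minimal Python sketch. It illustrates the think-prune-train-improve cycle as described above, not the paper's implementation; model.generate, verify, and fine_tune are hypothetical hooks.

```python
# Minimal sketch of a think-prune-train-improve loop.
# All hooks (model.generate, verify, fine_tune) are hypothetical
# placeholders, not the paper's actual API.

def self_improvement_round(model, prompts, verify, fine_tune, k=4):
    """One round: the model generates its own data, prunes it, retrains."""
    kept = []
    for prompt in prompts:
        # THINK: sample k candidate reasoning traces from the current model.
        candidates = [model.generate(prompt) for _ in range(k)]
        # PRUNE: keep only traces that pass an external correctness check.
        kept += [(prompt, c) for c in candidates if verify(prompt, c)]
    # TRAIN: fine-tune the model on its own verified outputs.
    return fine_tune(model, kept)

def improve(model, prompts, verify, fine_tune, rounds=3):
    # IMPROVE: iterate; each round trains on fresher self-generated data.
    for _ in range(rounds):
        model = self_improvement_round(model, prompts, verify, fine_tune)
    return model
```

The essential point is the data flow: the model's own filtered outputs become its next training set.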
This is not yet autonomy, but it is recursion with intent: a model beginning to adapt its own training loop based on its own outputs. In the framework of Beyond the Context Window, this is a strong instantiation of Viable System Model (VSM) functions: the loop audits and optimizes its current performance (System 3) while adapting its future behavior in light of what it discovers (System 4).
Recursive LLMs are beginning to observe and modify their own behavior — a critical property of viable systems.
2. Emergent Models: Intelligence from Local Rules
On the other end of the AI landscape, Giacomo Bocchese and collaborators have proposed Emergent Models (EMs), systems built on cellular automata. Rather than optimizing static parameters, these models encode an input into the initial state of an automaton, run simple local update rules until a halting condition is met, and decode the output from the resulting state; learning becomes a search over the rules themselves rather than gradient descent over weights.
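As a toy illustration of this inversion, the sketch below runs a one-dimensional cellular automaton and "learns" by searching over rule tables. The encoding, halting test, and objective are simplified assumptions for illustration, not the EM paper's exact scheme.

```python
# Toy emergent-model flavor: the learnable object is a CA rule table,
# not a weight vector. Encoding, halting, and the objective are
# simplified assumptions, not the EM paper's exact scheme.

import random

def step(tape, rule):
    """Apply a rule table mapping each 3-cell neighborhood to a new bit."""
    n = len(tape)
    return [rule[(tape[(i - 1) % n], tape[i], tape[(i + 1) % n])]
            for i in range(n)]

def run(tape, rule, max_steps=100):
    """Iterate until the tape stops changing (halting) or budget runs out."""
    for _ in range(max_steps):
        nxt = step(tape, rule)
        if nxt == tape:
            break
        tape = nxt
    return tape

def random_rule():
    # One output bit per 3-cell neighborhood: 2**8 = 256 possible rules.
    return {(a, b, c): random.randint(0, 1)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

# "Training" is a search over rule space rather than weight updates.
tape = [0, 0, 1, 1, 0, 1, 0, 0]                 # input encoded as initial state
best = min((random_rule() for _ in range(50)),  # toy objective: fewest 1s left
           key=lambda r: sum(run(tape, r)))
print(run(tape, best))
```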
Unlike transformers, these systems are inherently self-modifying in a structural sense — they don’t just learn better weights; they potentially evolve new rules of operation. This opens a door to systems that adapt not just their predictions, but their fundamental ways of thinking.
3. The Meeting Point: Structural + Representational Adaptation
Here lies the exciting convergence: recursive LLMs adapt what the system represents, refining knowledge through self-generated training signals, while Emergent Models adapt how the system computes, evolving the rules that produce behavior in the first place.
Together, these form the outline of a dual-adaptive system: an inner loop that refines representations, nested inside an outer loop that evolves the structure doing the refining.
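A minimal sketch of that nesting follows; every callable here (refine, mutate, evaluate) is an assumption standing in for a whole subsystem, not a real API.

```python
# Sketch of a dual-adaptive system: an inner loop refines representations
# while an outer loop evolves the structure governing that refinement.
# refine, mutate, and evaluate are illustrative placeholders.

def dual_adapt(model, structure, refine, mutate, evaluate,
               outer_rounds=5, inner_rounds=3):
    for _ in range(outer_rounds):
        # Inner loop (LLM-like): refine representations under fixed rules.
        for _ in range(inner_rounds):
            model = refine(model, structure)
        # Outer loop (EM-like): propose a structural variant, keep it only
        # if the refined system performs better under the new structure.
        candidate = mutate(structure)
        if evaluate(model, candidate) > evaluate(model, structure):
            structure = candidate
    return model, structure
```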
This mirrors biological intelligence — where neurons carry representations, but epigenetic systems evolve structure.
4. Reframing the Cybernetic Teammate from This Perspective
In Beyond the Context Window, we framed AI systems as cybernetic collectives: ecosystems of human and machine roles coupled through feedback rather than fixed pipelines.
Seen from the EM+LLM convergence, we now revise the system boundary:
The AI agent is not a fixed-function transformer. It is a living process — recursively improving itself (like an LLM), and evolving how it improves (like an EM).
The human no longer just “manages” this system. Instead, the human becomes part of its adaptive loops, supplying the feedback that shapes what the agent refines and, more deeply, how it restructures its own capability.
This is cybernetics in motion — feedback not just on what we do, but on how we construct meaning and restructure capability.
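One way to picture this revised boundary is a sketch in which human feedback is routed to both adaptive layers, steering content refinement and structural evolution separately. Every class, method, and attribute here is hypothetical.

```python
# Hypothetical sketch of the revised system boundary: human feedback
# reaches both the representational and the structural layer.

class CyberneticTeammate:
    def __init__(self, model, structure):
        self.model = model          # representational layer (LLM-like)
        self.structure = structure  # structural layer (EM-like rules)

    def act(self, task):
        # Behavior comes from representations running under current rules.
        return self.model.solve(task, rules=self.structure)

    def incorporate(self, feedback):
        if feedback.kind == "content":
            # Feedback on WHAT was produced: refine representations.
            self.model = self.model.refine(feedback)
        elif feedback.kind == "framing":
            # Feedback on HOW meaning is constructed: evolve structure.
            self.structure = self.structure.evolve(feedback)
```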
5. Implications: A New Viability Frontier
With this convergence, we begin to answer a deeper question: what does it take for an AI system to remain viable, not merely performant, as its environment and its own tasks change?
The answer may lie in building systems that pair recursive self-refinement with structural evolution, so that both what the system knows and how it learns remain open to revision.
This shifts us from architecture as scaffold to architecture as co-evolving organism.
Conclusion: From Abstraction to Adaptation
Choosing this frame — recursive meets emergent — is another representation, another collapse of possibility into action. But unlike static frames, this one can respond. It moves. It learns. It adapts.
And if AI systems are to be truly viable, they must learn to do the same.
The future of AI is not bigger models. It’s living systems that evolve their own intelligence.
This article is an alternate entry into our broader cybernetic teammate theory. For a complementary representation grounded in context modeling, role specialization, and ecosystem architecture, see: Beyond the Context Window.
References
Costello, C., Guo, S., Goldie, A., & Mirhoseini, A. (2025). Think, Prune, Train, Improve: Scaling Reasoning Without Scaling Models. arXiv preprint arXiv:2504.18116.
Garcia, M. H., Diaz, D. M., Kyrillidis, A., Rühle, V., Couturier, C., Mallick, A., Sim, R., & Rajmohan, S. (2025). Exploring How LLMs Capture and Represent Domain-Specific Knowledge. arXiv preprint arXiv:2504.16871.
Lou, B., Lu, T., Raghu, T. S., & Zhang, Y. (2025). Unraveling Human-AI Teaming: A Review and Outlook. arXiv preprint arXiv:2504.05755.
Beer, S. (1981). Brain of the Firm: The Viable System Model. John Wiley & Sons.
Endsley, M. R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors, 37(1), 85–104.
Bocchese, G., BrightStar Labs, & Wolfram Institute. (2024). Emergent Models: Machine Learning from Cellular Automata. ResearchHub post: https://www.researchhub.com/post/4073/emergent-models-machine-learning-from-cellular-automata
Wolfram, S. (2023). Observer Theory. https://writings.stephenwolfram.com/2023/12/observer-theory
Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. L. Launer & G. N. Wilkinson (Eds.), Robustness in Statistics. Academic Press.