An open exploration of viable human-AI systems.
# Cybernetic Intelligence Hypothesis v2: From Agentic Control to Collective Viability
## Abstract
This revised hypothesis reflects a major theoretical advance in our understanding of viable human-AI systems. Where v1 was framed around internal self-modeling, human oversight, and runtime orchestration, Cybernetic Intelligence v2 integrates new empirical insights from emergent LLM societies, agentic bias formation, and decentralized coordination. We now propose that intelligence is not only viable within agentic boundaries but also inherently collective, conventional, and socially emergent. This update replaces the top-down controller paradigm with a multi-agent ecosystem framework, drawing on cybernetic systems theory, observer-dependence, and autopoietic dynamics to model scalable, sustainable cognition.
## 1. Background: Why the Hypothesis Needed to Evolve
The original Cybernetic Intelligence Hypothesis (2024) proposed that viable AI systems emerge when:

- agents maintain internal self-models,
- humans provide meaningful oversight, and
- a runtime orchestration layer coordinates behavior.
However, findings from Baronchelli et al. (2024) have revealed a critical new dynamic:
> When multiple LLMs interact continuously, they spontaneously develop shared conventions, biases, and task specialization, without any orchestrator.
This demands a theoretical upgrade.
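To make this dynamic concrete, the sketch below implements a minimal naming game of the kind studied in that literature, assuming pairwise interactions and simple word inventories; the agent count, vocabulary, and round budget are illustrative choices, not values from the cited paper:

```python
import random
from collections import Counter

# Minimal naming game: agents with no central coordinator converge
# on a shared convention purely through pairwise interactions.

AGENTS = 20
ROUNDS = 3000
WORDS = ["glorp", "zib", "tand", "mefo"]  # candidate conventions (invented)

# Each agent remembers the set of words it currently considers viable.
inventories = [set() for _ in range(AGENTS)]

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(AGENTS), 2)
    # Speaker reuses a known word, or adopts one if its inventory is empty.
    if not inventories[speaker]:
        word = random.choice(WORDS)
        inventories[speaker].add(word)
    else:
        word = random.choice(sorted(inventories[speaker]))
    if word in inventories[hearer]:
        # Success: both collapse to the shared word (local alignment).
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:
        # Failure: the hearer learns the word for future rounds.
        inventories[hearer].add(word)

# After enough rounds, one word dominates the population.
print(Counter(w for inv in inventories for w in inv))
```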
## 2. Core Hypothesis (v2)
> A cybernetically viable AI system does not emerge from model performance alone, but from the recursive, decentralized coordination among interacting agents, human and artificial, embedded in a feedback-rich environment.
>
> Intelligence arises not as a capability, but as a systemic property: co-constructed, maintained, and evolved across observers, agents, and representations.
This system:

- allocates roles through feedback and confidence rather than central assignment,
- shares memory across agents instead of indexing it centrally,
- routes tasks implicitly through peer negotiation, and
- detects and recovers from failure via runtime observability loops.
## 3. Updates to Theoretical Architecture
| Original Element | Revised Perspective |
|---|---|
| Meta-Controller | Emergent conventions + local adaptation |
| Role Allocation | Self-assigned via feedback and confidence |
| Memory Graph | Shared across agents, not centrally indexed |
| Task Routing | Implicit through peer negotiation |
| Optimization Strategy | Distributed variation + social convergence |
| Failure Recovery | Detected via runtime observability loops |
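Read as a runnable pattern, the revised column suggests a bid-and-feedback loop. The sketch below is a minimal illustration under assumed names and update rules (the role list, the confidence update, and the 0.7 success rate are placeholders, not a specified protocol):

```python
import random
from dataclasses import dataclass, field

ROLES = ["planner", "coder", "reviewer"]  # illustrative roles

@dataclass
class Agent:
    name: str
    # Per-role confidence, updated from task outcomes.
    confidence: dict = field(default_factory=lambda: {r: 0.5 for r in ROLES})

    def bid(self, role: str) -> float:
        # Peer negotiation: agents bid with noisy confidence rather
        # than being assigned roles by a central meta-controller.
        return self.confidence[role] + random.gauss(0, 0.05)

    def feedback(self, role: str, success: bool, lr: float = 0.1) -> None:
        # Local adaptation: move confidence toward the observed outcome.
        target = 1.0 if success else 0.0
        self.confidence[role] += lr * (target - self.confidence[role])

def route(task_role: str, agents: list[Agent]) -> Agent:
    # Implicit task routing: the highest bid wins; no global index needed.
    return max(agents, key=lambda a: a.bid(task_role))

agents = [Agent(f"agent-{i}") for i in range(4)]
for _ in range(200):
    role = random.choice(ROLES)
    winner = route(role, agents)
    success = random.random() < 0.7   # stand-in for a real task outcome
    winner.feedback(role, success)

for a in agents:
    print(a.name, {r: round(c, 2) for r, c in a.confidence.items()})
```

Over repeated rounds, agents that win and succeed at a role bid higher for it, so specialization emerges from the feedback loop rather than from role engineering.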
## 4. Observer Theory Extended
Wolfram’s Observer Theory (2023) emphasized that system laws depend on what an observer can perceive. In Cybernetic Intelligence v2, we go further:
> The abstraction gap itself becomes a function of shared observation.
This means:

- what a system "is" depends on what its observers, human and artificial, can jointly distinguish, and
- agents that share an observational frame converge on the same abstractions, narrowing the gap between model and world (sketched below).
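As a toy sketch (the event stream and filters are invented purely for illustration), two observers projecting the same events through different filters recover different regularities, while a shared projection makes them agree:

```python
# Observer-dependent abstraction: the same underlying event stream
# yields different "laws" under different coarse-grainings.

stream = [(i % 3, i % 2) for i in range(12)]  # shared underlying events

def observe(events, keep):
    # An observer is a projection: it keeps only what it can perceive.
    return [tuple(e[i] for i in keep) for e in events]

def period(seq):
    # The smallest cycle length consistent with the observed sequence.
    for p in range(1, len(seq)):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p
    return len(seq)

obs_a = observe(stream, keep=[0])   # sees only the first coordinate
obs_b = observe(stream, keep=[1])   # sees only the second

print("observer A period:", period(obs_a))  # 3
print("observer B period:", period(obs_b))  # 2

# A shared observation channel closes the abstraction gap: applying
# the same projection, both observers agree on the regularity.
shared = observe(stream, keep=[0, 1])
print("shared period:", period(shared))     # 6
```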
## 5. Implications for Viable System Modeling
The Viable System Model (Beer, 1981) remains foundational, but must now be mapped onto emergent collectives. One natural mapping:

- System 1 (operations): the individual agents doing the work,
- System 2 (coordination): emergent conventions that damp conflict between agents,
- System 3 (control): runtime observability loops,
- System 4 (intelligence): shared social memory scanning the environment, and
- System 5 (policy): human-set purpose and values.
This is viability as an ecosystem of minds, not a supervisory hierarchy.
## 6. Research and Design Priorities (v2)
- ✅ Build LLM ecosystems, not isolated agents
- ✅ Embrace emergent task division, not role engineering
- ✅ Focus on runtime observability and social memory (a minimal metric is sketched below)
- ✅ Develop tools for human co-perception and feedback
- ✅ Treat bias as a dynamic convention, not just a training artifact
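As one concrete (and assumed, not prescribed) observability signal, the entropy of the conventions currently in play can be tracked at runtime: falling entropy signals convergence on a shared convention, while a sudden rise can flag drift or fragmentation. The convention names and snapshots are illustrative:

```python
import math
from collections import Counter

def convention_entropy(choices: list[str]) -> float:
    # Shannon entropy (bits) over the distribution of conventions in use.
    counts = Counter(choices)
    total = len(choices)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Snapshots of which convention each agent used at three points in time.
snapshots = [
    ["zib", "glorp", "tand", "zib", "mefo", "glorp"],   # early: diverse
    ["zib", "zib", "glorp", "zib", "zib", "glorp"],     # converging
    ["zib", "zib", "zib", "zib", "zib", "zib"],         # settled
]

for t, snap in enumerate(snapshots):
    print(f"t={t} entropy={convention_entropy(snap):.2f} bits")
```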
## 7. Conclusion: The System Is the Teammate
The updated hypothesis shifts us from cognitive engineering to ecological design.
We are not just building smarter models. We are:

- cultivating multi-agent ecosystems,
- designing feedback-rich environments in which conventions can form, and
- creating the conditions for humans and AIs to co-evolve.
The true Cybernetic Teammate is not an agent — it’s the viable system of interactions that forms when humans and AIs co-evolve.
## References

- Baronchelli, A., et al. (2024). Emergent social conventions and collective bias in LLM populations.
- Beer, S. (1981). Brain of the Firm (2nd ed.). Wiley.
- Wolfram, S. (2023). Observer Theory. Stephen Wolfram Writings.