Title: Cybernetic Intelligence: Hypothesis v2 – From Agentic Control to Collective Viability


Abstract

This revised hypothesis reflects a major theoretical advance in our understanding of viable human-AI systems. The original formulation centered on internal self-modeling, human oversight, and runtime orchestration; Cybernetic Intelligence v2 integrates new empirical insights from emergent LLM societies, agentic bias formation, and decentralized coordination. We now propose that intelligence is not only viable within agentic boundaries but also inherently collective, conventional, and socially emergent. This update replaces the top-down controller paradigm with a multi-agent ecosystem framework, drawing on cybernetic systems theory, observer-dependence, and autopoietic dynamics to model scalable, sustainable cognition.


1. Background: Why the Hypothesis Needed to Evolve

The original Cybernetic Intelligence Hypothesis (2024) proposed that viable AI systems emerge when:

  • Internal state modeling supports dynamic task comprehension (Garcia et al., 2025)
  • Recursive refinement improves agent performance without scaling model size (Costello et al., 2025)
  • A runtime platform aligns roles and feedback through orchestration

However, findings from Baronchelli et al. (2024) have revealed a critical new dynamic:

When multiple LLMs interact continuously, they spontaneously develop shared conventions, biases, and task specialization – without any orchestrator.

This demands a theoretical upgrade.


2. Core Hypothesis (v2)

A cybernetically viable AI system does not emerge from model performance alone, but from the recursive, decentralized coordination among interacting agents – human and artificial – embedded in a feedback-rich environment.

Intelligence arises not as a capability, but as a systemic property – co-constructed, maintained, and evolved across observers, agents, and representations.

This system:

  • Adapts by aligning conventions through agent interaction (see the sketch after this list)
  • Evolves via variation, imitation, and shared memory
  • Observes and reframes itself through human-machine co-perception
  • Restructures roles dynamically in response to environmental demands
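
Convention alignment of this kind can be made concrete with a naming-game simulation, the style of model behind the Baronchelli et al. (2024) findings. The sketch below is illustrative only: the population size, interaction count, and inventory dynamics are simplifying assumptions, not the paper's experimental setup.

```python
import random

# Minimal naming game: a population converges on one shared name with
# no central orchestrator. Parameters are illustrative assumptions.
N_AGENTS, ROUNDS = 50, 20_000
inventories = [set() for _ in range(N_AGENTS)]  # names each agent knows
next_name = 0

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not inventories[speaker]:          # invent a name if none known
        inventories[speaker].add(next_name)
        next_name += 1
    name = random.choice(tuple(inventories[speaker]))
    if name in inventories[hearer]:       # success: both collapse to it
        inventories[speaker] = {name}
        inventories[hearer] = {name}
    else:                                 # failure: hearer learns it
        inventories[hearer].add(name)

distinct = {n for inv in inventories for n in inv}
print(f"distinct names remaining: {len(distinct)}")  # typically 1
```

Despite purely local, pairwise interactions, the population reliably collapses onto a single convention: the same bottom-up dynamic the hypothesis claims for task conventions and biases.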

3. Updates to Theoretical Architecture

Original element → revised perspective:

  • Meta-Controller → Emergent conventions + local adaptation
  • Role Allocation → Self-assigned via feedback and confidence
  • Memory Graph → Shared across agents, not centrally indexed
  • Task Routing → Implicit through peer negotiation
  • Optimization Strategy → Distributed variation + social convergence
  • Failure Recovery → Detected via runtime observability loops
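
The revised Role Allocation and Task Routing entries can be pictured with a small simulation: agents bid their confidence for each task type, the high bidder takes the task, and feedback nudges its confidence. Everything here (agents, task types, latent skills, update rule) is an illustrative assumption, not a prescribed mechanism.

```python
import random

# Self-assigned roles via feedback and confidence: the highest-confidence
# agent takes each task (with some exploration), and its confidence is
# nudged toward the observed outcome. All parameters are illustrative.
TASK_TYPES = ["summarize", "plan", "verify"]
AGENTS = [f"agent_{i}" for i in range(4)]
confidence = {a: {t: 0.5 for t in TASK_TYPES} for a in AGENTS}
skill = {a: {t: random.random() for t in TASK_TYPES} for a in AGENTS}

def route(task_type, explore=0.1):
    """Implicit routing: highest confidence wins, plus a little exploration."""
    if random.random() < explore:
        return random.choice(AGENTS)
    return max(AGENTS, key=lambda a: confidence[a][task_type])

def update(agent, task_type, success, lr=0.1):
    """Nudge confidence toward the observed outcome (1.0 or 0.0)."""
    c = confidence[agent][task_type]
    confidence[agent][task_type] = c + lr * (float(success) - c)

for _ in range(2000):
    t = random.choice(TASK_TYPES)
    a = route(t)
    update(a, t, success=random.random() < skill[a][t])  # latent-skill outcome

for a in AGENTS:  # confidence now tracks skill: each task gains an owner
    print(a, {t: round(confidence[a][t], 2) for t in TASK_TYPES})
```

No controller assigns roles; specialization falls out of the feedback loop, which is what the revised entries above describe.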

4. Observer Theory Extended

Wolfram's Observer Theory (2023) emphasized that system laws depend on what an observer can perceive. In Cybernetic Intelligence v2, we go further:

The abstraction gap itself becomes a function of shared observation.

This means:

  • What the system remembers, routes, and adapts depends on collective framing.
  • Observer effects are not marginal; they shape system ontology.
  • Human and LLM observers co-determine representational structure, not just outcome.

5. Implications for Viable System Modeling

The Viable System Model (Beer, 1981) remains foundational, but must now be mapped onto emergent collectives:

  • System 1: Agent interactions & autonomous task handling
  • System 2: Convention stabilization and negotiation protocols
  • System 3: Observability, coherence checks, runtime correction
  • System 4: Human + AI meta-observers monitoring drift and opportunity
  • System 5: Norm-setting and systemic goal redefinition (collectively emergent)

This is viability as an ecosystem of minds, not a supervisory hierarchy.
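
Systems 3 and 4 in this mapping reduce, at minimum, to a runtime observability loop: a meta-observer watches interaction traces and flags drift in the collective's conventions. A toy sketch follows; the window size, threshold, and synthetic trace are assumptions.

```python
import random
from collections import Counter, deque

# System 3/4 sketch: watch a rolling window of interaction outcomes and
# flag drift when agreement on the dominant convention degrades.
WINDOW, THRESHOLD = 100, 0.6
recent = deque(maxlen=WINDOW)

def observe(convention_used):
    """Record one interaction; report drift if coherence drops."""
    recent.append(convention_used)
    counts = Counter(recent)
    _, top_count = counts.most_common(1)[0]
    agreement = top_count / len(recent)
    if len(recent) == WINDOW and agreement < THRESHOLD:
        print(f"drift: agreement={agreement:.2f}, counts={dict(counts)}")
    return agreement

# Synthetic trace: a stable convention that later fragments.
for step in range(300):
    stable = step < 150 or random.random() < 0.5
    observe("foo" if stable else random.choice(["bar", "baz"]))
```

The human role (System 4) is then to interpret these drift signals as threat or opportunity, rather than to route every task.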


6. Research and Design Priorities (v2)

✅ Build LLM ecosystems, not isolated agents
✅ Embrace emergent task division, not role engineering
✅ Focus on runtime observability and social memory
✅ Develop tools for human co-perception and feedback
✅ Treat bias as a dynamic convention, not just a training artifact
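
The social-memory priority pairs with the earlier Memory Graph revision: memory spreads peer-to-peer rather than living in a central index. A gossip-style toy sketch, where the agents, rounds, and merge rule are all assumptions:

```python
import random

# Social memory via gossip: entries replicate through pairwise exchange,
# with no central index. All names and rates are illustrative.
N = 8
memories = [dict() for _ in range(N)]        # each agent's local store
memories[0]["preferred_format"] = "json"     # one agent sets a convention

for _ in range(100):                         # gossip rounds
    a, b = random.sample(range(N), 2)
    for src, dst in ((a, b), (b, a)):        # symmetric merge of missing keys
        for k, v in memories[src].items():
            memories[dst].setdefault(k, v)

shared = sum("preferred_format" in m for m in memories)
print(f"{shared} of {N} agents now hold the shared entry")
```

Treating bias as a dynamic convention then becomes measurable: it is whatever such shared entries converge to, and it can be monitored with the same drift loop sketched in Section 5.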


7. Conclusion: The System Is the Teammate

The updated hypothesis shifts us from cognitive engineering to ecological design.

We are not just building smarter models. We are:

  • Designing interaction loops that stabilize cooperation
  • Constructing environments where adaptation becomes viable
  • Learning to observe the observer, not just the outcome

The true Cybernetic Teammate is not an agent: it's the viable system of interactions that forms when humans and AIs co-evolve.


References

  • Baronchelli, A. et al. (2024). Emergence of shared linguistic conventions and biases in multi-agent LLMs. Science Advances.
  • Beer, S. (1981). Brain of the Firm: The Viable System Model. Wiley.
  • Wolfram, S. (2023). Observer Theory. writings.stephenwolfram.com
  • Costello, C. et al. (2025). Think, Prune, Train, Improve. arXiv:2504.18116
  • Garcia, M. H. et al. (2025). Exploring How LLMs Capture and Represent Domain-Specific Knowledge. arXiv:2504.16871

See Towards Cybernetic Intelligence – Human+AI Teaming