Cybernetic Intelligence

An open exploration of viable human-AI systems.



🧠 Geometric & Topological Foundations

  1. A Geometric Modeling of Occam’s Razor in Deep Learning arXiv:2406.01115 → Shows how Fisher Information Matrix (FIM) curvature reflects “effective” model complexity; essential for detecting geometric shifts in reasoning (first sketch after this list).

  2. From Tokens to Thoughts (Hodge 2024) arXiv:2506.XXXX → Demonstrates semantic loops and Hodge lattice structure in LLMs; loop fragmentation is a signal of semantic instability (second sketch below).

  3. Wilson Loops in Attention Circuits (conceptual basis) → Not a specific paper yet, but the idea echoes Anthropic’s toy-model work; suggests that cyclic semantic flow can serve as a coherence marker (third sketch below).
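
A minimal sketch of item 1's core quantity: a diagonal Fisher information estimate as a cheap curvature / effective-complexity proxy. The toy PyTorch model and random inputs are assumptions for illustration; the paper's geometric treatment of the FIM goes well beyond this diagonal approximation.

```python
# Diagonal Fisher information estimate as a curvature / complexity proxy.
# Toy model and random inputs are placeholders; the paper's geometric
# construction is far richer than this diagonal approximation.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 4))
x = torch.randn(256, 16)                       # placeholder inputs

fisher_diag = [torch.zeros_like(p) for p in model.parameters()]
for xi in x:
    logits = model(xi)
    # Sampling labels from the model's own predictive distribution gives a
    # Monte Carlo estimate of the true Fisher (vs. the empirical Fisher).
    y = torch.distributions.Categorical(logits=logits).sample()
    loss = F.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, model.parameters())
    for fd, g in zip(fisher_diag, grads):
        fd += g.pow(2) / len(x)                # E[grad^2] ~ diag(FIM)

# Scalar "effective complexity" proxy: total curvature mass (trace of FIM).
print("trace of FIM estimate:", float(sum(fd.sum() for fd in fisher_diag)))
```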
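
For item 2, one crude observable in the same spirit: the number of independent loops (first Betti number) of a k-nearest-neighbour graph over token embeddings, compared across two checkpoints. The random embeddings, k = 4, and cosine kNN construction are all illustrative assumptions, not the paper's Hodge-theoretic machinery.

```python
# Loop fragmentation as a coarse instability signal: track the first Betti
# number b1 = |E| - |V| + #components of a kNN graph over hidden states.
# A drop in b1 between checkpoints means loops are fragmenting.
import numpy as np
import networkx as nx

def knn_graph(emb: np.ndarray, k: int = 4) -> nx.Graph:
    """Undirected kNN graph over rows of `emb` using cosine similarity."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    sim = (emb @ emb.T) / (norms @ norms.T)
    g = nx.Graph()
    g.add_nodes_from(range(len(emb)))
    for i, row in enumerate(sim):
        for j in np.argsort(-row)[1:k + 1]:    # position 0 is the node itself
            g.add_edge(i, int(j))
    return g

def betti_1(g: nx.Graph) -> int:
    return (g.number_of_edges() - g.number_of_nodes()
            + nx.number_connected_components(g))

emb_before = np.random.randn(64, 32)           # placeholder hidden states
emb_after = emb_before + 0.5 * np.random.randn(64, 32)
print("b1 before:", betti_1(knn_graph(emb_before)),
      "b1 after:", betti_1(knn_graph(emb_after)))
```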
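
For item 3, since no paper pins down the construction, here is one literal reading of the Wilson-loop analogy: transport a token distribution around a closed cycle of attention maps and measure how far the round trip sits from the identity. The row-stochastic random matrices stand in for real attention heads.

```python
# A "Wilson loop" over attention maps: multiply row-stochastic matrices
# around a closed cycle and use the normalised trace of the holonomy as a
# coherence marker. Purely illustrative; the matrices are random stand-ins.
import numpy as np

def row_stochastic(n: int, rng: np.random.Generator) -> np.ndarray:
    a = rng.random((n, n))
    return a / a.sum(axis=1, keepdims=True)    # rows sum to 1, like attention

rng = np.random.default_rng(0)
n_tokens = 8
loop = [row_stochastic(n_tokens, rng) for _ in range(4)]  # a 4-step cycle

holonomy = np.linalg.multi_dot(loop)           # transport around the loop
wilson = np.trace(holonomy) / n_tokens         # normalised "Wilson loop"
drift = np.linalg.norm(holonomy - np.eye(n_tokens))
print(f"normalised trace: {wilson:.3f}  deviation from identity: {drift:.3f}")
```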


🔬 Uncertainty & Self-Modifying Dynamics

  1. Uncertainty Quantification for Language Models arXiv:2505.13026 → Introduces token-level uncertainty tracking via attention entropy and KL divergence; useful for flagging ambiguous transitions (first sketch after this list).

  2. Darwin Gödel Machine (Zhang et al., 2025) arXiv:2505.22954 → Implements a reflexive self-improvement loop; essential for ECA-guided internal reconfiguration logic under regime stress (loop skeleton below).
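
A sketch of the kind of token-level signal item 1 tracks: Shannon entropy of each token's attention row, plus KL divergence between the same row at consecutive layers. The random attention tensors and the median-based flagging rule are assumptions; the paper's exact estimators may differ.

```python
# Token-level uncertainty signals: per-token attention entropy and
# layer-to-layer KL shift, on stand-in attention tensors.
import numpy as np

def entropy(p: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    return -(p * np.log(p + eps)).sum(axis=-1)

def kl(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

rng = np.random.default_rng(0)
layers, tokens = 2, 16
attn = rng.random((layers, tokens, tokens))
attn /= attn.sum(axis=-1, keepdims=True)       # each row is a distribution

h = entropy(attn[1])                           # per-token attention entropy
shift = kl(attn[1], attn[0])                   # layer-to-layer shift
flagged = np.where((h > np.median(h)) & (shift > np.median(shift)))[0]
print("tokens flagged as ambiguous transitions:", flagged)
```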
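
For item 2, a skeleton of the archive-based propose/evaluate loop the paper describes. `Agent`, `mutate`, and the toy `evaluate` benchmark are hypothetical stand-ins, not the paper's actual self-modification machinery, which rewrites agent code rather than a scalar genome.

```python
# Archive-based self-improvement skeleton: sample a parent from a growing
# archive, propose a mutated child, keep it if it clears the bar. All names
# and the scalar "genome" are stand-ins for the paper's code-level edits.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    genome: float          # stand-in for an agent's self-modifiable code
    score: float

def evaluate(genome: float) -> float:
    return -abs(genome - 1.0)   # toy benchmark: peak fitness at genome == 1

def mutate(parent: Agent) -> Agent:
    child_genome = parent.genome + random.gauss(0.0, 0.3)
    return Agent(genome=child_genome, score=evaluate(child_genome))

archive = [Agent(0.0, evaluate(0.0))]
for _ in range(200):
    parent = random.choice(archive)    # open-ended: any parent, not just the best
    child = mutate(parent)
    if child.score > min(a.score for a in archive):
        archive.append(child)          # grow the archive, never overwrite

print("best agent:", max(archive, key=lambda a: a.score))
```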


🧬 Symbolic + Geometric Reasoning

  1. Semantic Rings and Concept Formation in Transformers (Mario et al.) → Not yet published, but mirrors the CIv6 work; aligns well with detecting symbol drift or collapse in internal algebraic motifs (first sketch after this list).

  2. Neural Sheaf Cohomology for Language Models → A speculative direction, not yet treated in a full paper, but you’re already invoking de Rham / sheaf duality in concept (second sketch below).

  3. Token Attribution from First Principles (Sakabe et al., 2024) arXiv:2404.05755 → Introduces a principled method for decomposing model outputs into layer-wise token attributions, without saliency approximations. Crucial for tracking semantic instability and attribution drift across layers, key precursors to structural break events in transformer dynamics (third sketch below).
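
For item 1, a minimal symbol-drift detector: compare each concept vector across two checkpoints and flag collapsed cosine similarity. The concept list, embeddings, and the 0.5 threshold are illustrative assumptions.

```python
# Symbol drift detection: cosine similarity of concept vectors across two
# checkpoints, with one vector deliberately resampled to simulate collapse.
import numpy as np

rng = np.random.default_rng(1)
concepts = ["ring", "loop", "motif", "lattice"]
before = rng.standard_normal((4, 64))
after = before.copy()
after[2] = rng.standard_normal(64)             # simulate one symbol collapsing

cos = (before * after).sum(axis=1) / (
    np.linalg.norm(before, axis=1) * np.linalg.norm(after, axis=1))
for name, c in zip(concepts, cos):
    status = "DRIFTED" if c < 0.5 else "stable"  # 0.5 is an assumed threshold
    print(f"{name:8s} cos={c:+.2f}  {status}")
```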
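
For item 2, a toy cellular-sheaf computation in the Hansen-Ghrist sense: assemble the coboundary over a small graph and read off dim H^0, the space of globally consistent sections. This is standard applied-topology machinery used speculatively here; nothing in it comes from an LLM paper.

```python
# Cellular sheaf on a 3-cycle: coboundary delta, sheaf Laplacian L = delta^T
# delta, and dim H^0 = dim ker(delta) as a count of globally consistent
# sections. Restriction maps here are random stand-ins.
import numpy as np

d = 2                                          # stalk dimension
edges = [(0, 1), (1, 2), (2, 0)]               # a 3-cycle graph
rng = np.random.default_rng(2)
# Restriction maps R[e][v]: stalk at vertex v -> stalk on edge e.
R = {e: {e[0]: np.eye(d), e[1]: rng.standard_normal((d, d))} for e in edges}

delta = np.zeros((len(edges) * d, 3 * d))      # coboundary C^0 -> C^1
for k, (u, v) in enumerate(edges):
    delta[k*d:(k+1)*d, u*d:(u+1)*d] = -R[(u, v)][u]
    delta[k*d:(k+1)*d, v*d:(v+1)*d] = R[(u, v)][v]

L = delta.T @ delta                            # sheaf Laplacian
h0 = delta.shape[1] - np.linalg.matrix_rank(delta)
print("dim H^0 (globally consistent sections):", h0)
```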
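
For item 3, a worked illustration of why layer-wise attribution can be exact rather than a saliency approximation: in a residual network, each block's write into the stream contributes additively to any linear readout. This shows the flavour only; it is not the cited paper's actual method.

```python
# Exact layer-wise decomposition in a toy residual network: the final logit
# splits exactly into the baseline plus one additive term per layer, with
# no gradient or saliency approximation involved.
import numpy as np

rng = np.random.default_rng(3)
d, n_layers = 8, 3
x0 = rng.standard_normal(d)                    # initial token state
W = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_layers)]
readout = rng.standard_normal(d)

x, contribs = x0, []
for Wl in W:
    update = np.tanh(Wl @ x)                   # block writes into the stream
    contribs.append(readout @ update)          # that write's share of the logit
    x = x + update

print("logit:                ", readout @ x)
print("baseline + layer terms:", readout @ x0 + sum(contribs))  # matches exactly
```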


🔍 Still Missing (To Seek or Write)