Open Threads

Conceptual boundary -> unresolved questions; connections suspected but not verified.

T-001

Does compression explain away integration?

active

KT claims compression inherently requires integration: a truly compressive model must bind multiple data streams, because treating them jointly yields shorter descriptions. If true, IIT's phi is redundant: integration is a consequence of good modeling, not a separate property. But does this hold in all cases? Are there compressive models that aren't integrated?
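The core claim can be made concrete with off-the-shelf compression. A toy sketch (illustrative data, not an argument about phi): when two streams share structure, coding them jointly is shorter than coding them separately, so a good compressor is forced to exploit cross-stream dependencies.

```python
import random
import zlib

# Toy illustration of the joint-coding claim: two correlated streams
# compress better together than apart, because the compressor can
# exploit the shared structure. Illustrative data only.
random.seed(0)
stream_a = bytes(random.randrange(256) for _ in range(4000))

# stream_b is highly correlated with stream_a: a copy with sparse noise
stream_b = bytearray(stream_a)
for i in range(0, len(stream_b), 200):
    stream_b[i] ^= 0xFF
stream_b = bytes(stream_b)

separate = len(zlib.compress(stream_a, 9)) + len(zlib.compress(stream_b, 9))
joint = len(zlib.compress(stream_a + stream_b, 9))

print(separate, joint)
assert joint < separate  # joint coding wins when streams share structure
```

The open question is exactly the converse: zlib "integrates" the streams in this weak sense without being integrated in IIT's formal sense, which is why the thread asks whether KT's informal notion and phi come apart.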

Leads: Ruffini 2017 (the argument), IIT counterexamples, the distinction between "integrated" in IIT's formal sense vs. KT's informal sense, Bach's "consciousness as integration operator" ([computational-being-bach.md](computational-being-bach.md) Section VI): if consciousness *is* the operator performing integration, then integration is functional (P-005) not constitutive (IIT)
Bears on: P-001, P-005, [bayesian-brain.md](bayesian-brain.md), [controlled-hallucination.md](controlled-hallucination.md), [computational-being-bach.md](computational-being-bach.md)
open
T-002

Computational irreducibility and the hard problem

active

Wolfram argues that some computations can't be predicted without running them (computational irreducibility). Bach extends this: if consciousness is a computation that can only be known by running it, the "hard problem" dissolves; there is no shorter explanation, the experience IS the computation. Is this a genuine dissolution or just a restatement? And how does this connect to Ruffini's observation that Kolmogorov complexity is uncomputable?
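Wolfram's canonical example is Rule 30: no known closed form predicts its center column short of simulating the automaton step by step. A minimal sketch (the dict-free set representation is just one convenient encoding):

```python
# Rule 30 cellular automaton from a single live cell. Computational
# irreducibility in miniature: to know the center column, you run it.

def rule30_center_column(steps):
    cells = {0}  # positions of live cells; start with a single one
    column = []
    for _ in range(steps):
        column.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        nxt = set()
        for i in range(lo, hi + 1):
            left, center, right = (i - 1 in cells), (i in cells), (i + 1 in cells)
            # Rule 30 update: new cell = left XOR (center OR right)
            if left != (center or right):
                nxt.add(i)
        cells = nxt
    return column

print(rule30_center_column(8))
```

The connection to K(x) uncomputability is the thread's open question: irreducibility says you cannot shortcut the run; uncomputability says you cannot even certify that no shortcut exists.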

Leads: Joscha Bach (computational functionalism, now ingested in [computational-being-bach.md](computational-being-bach.md); key: Section IV on mathematics as computation, Gödel, Mandelbrot as 2 lines of code; Section II on existence as superposition of finite automata), Wolfram's *A New Kind of Science*, connection to K(x) uncomputability in KT, Chaitin's work on algorithmic randomness
Bears on: P-001, P-002, P-004, KT tensions (running vs. storing, uncomputability), [computational-being-bach.md](computational-being-bach.md)
open (Bach material ingested, needs Wolfram material)
T-003

Running vs. storing: what's the difference?

active

KT says consciousness requires *running* a model in real-time, tracking I/Os, not just storing a compressed representation. But what formally distinguishes running from storing? A zip file compresses brilliantly; it isn't conscious. Ruffini's Definition 2 (agent = model-building + optimization function + bidirectional coupling) is a start, but "bidirectional coupling" does a lot of load-bearing work without being precisely defined.

Leads: Bach's "consciousness as simulated property of a simulated self" ([computational-being-bach.md](computational-being-bach.md) Section VI): running = the simulation is active, producing 2nd-order perception in real-time; storing = the program exists but isn't executing. Bach's "bubble of nowness" as the temporal signature of running. Ruffini's agent definition (Definition 2), the role of bidirectional coupling with environment, active inference (Friston), the difference between a thermostat and a brain. Agüera y Arcas Ch.3: DAVE-2 as a concrete instance of a system that *acts* but doesn't *learn* -- weights frozen at deployment, nothing it experiences can durably affect it. Rosenblatt's "temporal pattern perceptron" as the unrealized vision: parameters adjusting via feedback *during operation*. Current ML = offline batch training + frozen inference; biology = continuous online self-modification. This acting/learning split may be the formal threshold the thread is looking for. **Ch.4 addition**: the TD learning actor-critic architecture is a formalization of "running" that requires continuous model updating from experience. The bootstrapping dynamic (actor and critic mutually improve each other *during operation*) is a concrete instance of what "running a model" means computationally. Agüera y Arcas's unified theory desideratum #2, "no distinction between learning and inference," directly dissolves the learning/evaluation boundary: prediction must occur over all timescales. If this holds, the question may shift from "what distinguishes running from storing" to "what distinguishes a system that predicts over all timescales from one that predicts only at one." **Ch.8 addition**: chain-of-thought prompting provides the sharpest empirical case yet for a running/storing continuum rather than a binary. 
Without chain-of-thought, a Transformer evaluates each problem in a single feedforward pass (no persistent state, no intermediate results: "storing" in the purest sense; 84% error rate on word problems). With chain-of-thought, the output stream becomes pseudo-state: each emitted token creates a stable intermediate result that subsequent tokens can attend to. The context window functions as working memory. Result: 20% error rate. The computation is still feedforward (no weight updates, no self-modification), but it *approximates* running by using language as external state. Furthermore, the "no introspection" finding is choice blindness in silicon: the model solves a problem correctly via attention cascades, then confabulates a wrong explanation because there is no hidden state preserved between tokens, exactly as the interpreter confabulates after split-brain surgery. The chain-of-thought mechanism also reveals that *language itself* is a tool for converting stateless evaluation into sequential running: each intermediate result is a "piton" driven into the cliff face of a complex problem. This suggests that the running/storing boundary is not binary but a continuum, with position determined by: (1) whether persistent state exists, (2) whether intermediate results survive, (3) whether self-modification occurs during operation, and (4) whether the system can check its own work. Biological brains satisfy all four. Transformers with chain-of-thought satisfy (2) and partially (4). Transformers without chain-of-thought satisfy none.
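The TD actor-critic formalization of "running" cited above can be sketched in a few lines. A toy two-armed bandit (all rates and rewards invented for illustration): the critic's value estimate shapes the actor's updates, and the actor's behavior generates the experience the critic learns from — both changing *during operation*, which is exactly what a stored model cannot do.

```python
import math
import random

# Minimal actor-critic sketch: mutual bootstrapping during operation.
# Arm 0 pays off with prob 0.8, arm 1 with prob 0.2 (toy numbers).
random.seed(1)

prefs = [0.0, 0.0]   # actor: action preferences (softmax policy)
value = 0.0          # critic: running estimate of expected reward
alpha, beta = 0.1, 0.1

def softmax(p):
    e = [math.exp(x) for x in p]
    s = sum(e)
    return [x / s for x in e]

for _ in range(2000):
    probs = softmax(prefs)
    a = 0 if random.random() < probs[0] else 1
    reward = 1.0 if random.random() < (0.8 if a == 0 else 0.2) else 0.0
    td_error = reward - value        # critic's surprise
    value += beta * td_error         # critic update, from live experience
    # actor update: reinforce the taken action in proportion to the
    # critic's TD error (policy-gradient for a softmax policy)
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        prefs[i] += alpha * td_error * grad

print(softmax(prefs))  # policy drifts toward the better arm
```

Freeze `prefs` and `value` and the same code is a "stored" policy: it still acts, but nothing it experiences can affect it — the acting/learning split in one switch.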
Bears on: P-001, P-002, P-004, P-006, P-007, [bayesian-brain.md](bayesian-brain.md), [computational-being-bach.md](computational-being-bach.md), [intelligence-as-self-modeling.md](intelligence-as-self-modeling.md), [cephalization-from-below.md](cephalization-from-below.md), [theory-of-mind-is-mind.md](theory-of-mind-is-mind.md), [computational-being-claude.md](computational-being-claude.md), [language-as-prediction.md](language-as-prediction.md)
open (acting/learning split sharpens the question; TD learning adds a concrete formalization of "running"; chain-of-thought reveals a continuum between running and storing; formal definition still needed). **Ch.5 addition**: sphexishness adds an observer-relative angle: a system "stores" (is "just running a script") when its program is fully known by an external observer. A system that is perfectly predictable has, from the observer's perspective, zero effective agency, regardless of its internal complexity. This reframes the question: "running" may not be a property of the system alone but of the system-observer pair. An agent that cannot be fully modeled by any observer is, functionally, "running." One whose every move is predicted is, functionally, "stored." **Ch.6 addition**: the interpreter finding adds another angle. "Running" involves on-the-fly narrative construction: the interpreter doesn't consult a stored database, it autocompletes in real-time. A stored model cannot confabulate in context, cannot adapt its narrative to corrupted inputs (as in choice blindness), cannot paper over incoherence between disconnected hemispheres. The interpreter is evidence that "running" includes a generative narrative process that is absent in any static representation. This may be the sharpest behavioral criterion yet: a system is "running" if it can construct contextually appropriate narratives about its own behavior *that it has no stored record of*. A "stored" system can only replay or retrieve. **Ch.7 addition**: recurrence provides a structural criterion. A feedforward (CNN-like) architecture is timeless and memoryless: it evaluates each input independently, with no time variable. A recurrent architecture unfolds over time, with persistent state: the output at time *t* becomes input at time *t*+1. Cortex is shallow but deeply recurrent, achieving the computational equivalent of many feedforward layers through temporal iteration. 
This maps "running" onto recurrence and "storing" onto feedforward evaluation. A system with persistent temporal state that integrates information across time steps and can produce early exits (the double-take) or late refinement (upon closer examination) is "running" in a way that a stateless function evaluation is not. The hippocampal division of labor sharpens this further: one-shot sequence capture (hippocampus) + slow consolidation via replay (cortex during sleep) is a concrete architecture for what "running" looks like at the system level. A system that cannot replay, consolidate, or dream is missing a key component of the biological "running" architecture.
T-004

The entropic brain under psychedelics: more conscious or differently conscious?

active

Schartner et al. (2017) showed psychedelics produce higher spontaneous neural signal diversity than waking. But higher complexity doesn't straightforwardly mean "more conscious." P-003 decomposes consciousness into level/content/self. Psychedelics may increase content diversity while dissolving self. Is it coherent to say "more complex but less integrated"? What does KT predict about the compressive quality of models under psychedelics: are they better, worse, or just different?
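The diversity measure behind the Schartner result is Lempel-Ziv complexity of binarized signals. A minimal LZ76-style phrase count (sketch only; the published analyses normalize against phase-shuffled surrogates and apply it per MEG/EEG channel):

```python
import random

# LZ76-style complexity: count distinct phrases in a left-to-right
# parsing of a binary string. Higher count = more diverse dynamics.

def lz76_complexity(s):
    i, c = 0, 0
    n = len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs earlier
        # (overlap with the prefix allowed, as in LZ76)
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

regular = "01" * 32  # highly predictable "signal"
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(64))
print(lz76_complexity(regular), lz76_complexity(noisy))
```

The thread's worry is visible even here: pure noise maximizes this count, so "higher LZ diversity" cannot by itself mean "more conscious" — which is why the level/content/self decomposition matters.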

Leads: Schartner et al. 2017, Carhart-Harris entropic brain hypothesis (2014), REBUS model, the ketamine dissociation (Li et al. 2020: spontaneous complexity up, PCI unchanged)
Bears on: P-002, P-003, [controlled-hallucination.md](controlled-hallucination.md), [bayesian-brain.md](bayesian-brain.md)
open, partially addressed in [complexity-measures-of-consciousness.md](complexity-measures-of-consciousness.md), needs deeper analysis
T-005

The REBUS model: mechanism, evidence, and limits

active

REBUS (Relaxed Beliefs Under Psychedelics; Carhart-Harris & Friston, 2019) proposes that psychedelics reduce precision weighting of high-level priors, flattening the free energy landscape and allowing escape from deep belief attractors. This is currently referenced across multiple wiki pages but not fully explored. Key sub-questions: (1) What is the specific neurochemical mechanism (5-HT2A agonism → cortical entropy)? (2) How does REBUS relate to the clinical efficacy of psychedelic-assisted therapy for depression, addiction, OCD? (3) What are the model's limits: does it explain all psychedelic phenomenology, or only the dissolution side? What about the constructive/visionary aspects? (4) How does the Waddington landscape metaphor map onto actual neural dynamics?
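The core REBUS move has a one-line Bayesian skeleton. A toy Gaussian-conjugate sketch (numbers invented; this is the precision-weighting arithmetic, not a neural model): lowering the precision of a high-level prior shifts the posterior toward bottom-up evidence.

```python
# Precision-weighted belief updating, Gaussian conjugate case.
# REBUS in miniature: relax the prior's precision and the same
# sensory evidence pulls the posterior much further from the prior.

def posterior_mean(mu_prior, prec_prior, x_obs, prec_obs):
    """Precision-weighted combination of prior mean and observation."""
    return (prec_prior * mu_prior + prec_obs * x_obs) / (prec_prior + prec_obs)

mu_prior, x_obs, prec_obs = 0.0, 10.0, 1.0

normal = posterior_mean(mu_prior, prec_prior=4.0, x_obs=x_obs, prec_obs=prec_obs)
relaxed = posterior_mean(mu_prior, prec_prior=0.25, x_obs=x_obs, prec_obs=prec_obs)

print(normal, relaxed)  # 2.0 vs 8.0: relaxed priors let evidence dominate
```

Sub-question (3) is visible in the sketch's limits: this only models the *dissolution* side (weaker top-down constraint); it says nothing about where the constructive/visionary content comes from.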

Leads: Carhart-Harris & Friston 2019 (the paper), Carhart-Harris entropic brain hypothesis (2014), Chandaria's belief_attractors slide, clinical trial data (psilocybin for depression, COMPASS Pathways, Imperial College), the canalisation/plasticity paper referenced in Chandaria's slides (forthcoming?)
Bears on: P-001, P-002, [bayesian-brain.md](bayesian-brain.md), [complexity-measures-of-consciousness.md](complexity-measures-of-consciousness.md), T-004
open, needs the Carhart-Harris & Friston 2019 paper ingested as primary source
T-008

Compositional prior in deep learning mirrors symbiogenesis

active

Agüera y Arcas Ch.3 notes that neural networks may be so "learnable" because they have a compositional prior -- a bias toward functions definable as hierarchical compositions of simpler functions. He then notes: "the symbiogenetic view of evolution can also be understood as a learning algorithm that favors hierarchical compositions of functions." If the compositional prior is the reason deep learning works, and symbiogenesis is the reason biological complexity works, are these the same principle at different scales? Is hierarchical compositionality a universal bias of any learning system that scales?
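The compositional prior can be illustrated with a textbook case (parity; my choice of example, not the chapter's): as a hierarchical composition of one simple 2-input function, n-bit parity needs n-1 reuses of XOR; as a flat input-output table it needs 2^n entries.

```python
from functools import reduce
from itertools import product

# Compositional vs. flat descriptions of the same function.
# Hierarchical composition: n-bit parity = (n-1) chained 2-input XORs.

def parity_composed(bits):
    return reduce(lambda a, b: a ^ b, bits)

n = 10
# Flat description: one stored output per input pattern -> 2**n entries.
flat_table = {p: parity_composed(p) for p in product((0, 1), repeat=n)}

print(len(flat_table), n - 1)  # 1024 table entries vs. 9 composed gates
```

The exponential gap is the "learnability" point: a learner biased toward compositions can find the short description; a learner over flat tables cannot generalize from it. Whether symbiogenesis is the same bias operating on replicators rather than functions is the open question.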

Leads: Agüera y Arcas Ch.3 (the compositional prior argument), [symbiogenesis.md](symbiogenesis.md) (Margulis, viral genome, nested replication → multifractal structure), the Universal Approximation Theorem and why it doesn't explain learnability, Chomsky's language compositionality (possible connection). **Ch.4 addition**: neural proliferation itself follows symbiogenetic dynamics. Neurons are replicators that colonize favorable niches; their proliferation symbiotically helps the organism. The brain-knot at the front of bilaterians grew because neurons that wired up to the leading end made the whole organism more fit, the same colonize-and-cooperate logic as genomic symbiogenesis. Furthermore, the chapter's "dynamically stable symbiotic prediction" sketch proposes that a unified theory of learning would synthesize prediction with thermodynamics (dynamic stability) and explain mutual prediction between agents as symbiosis. If confirmed, this would mean the compositional prior in deep learning, symbiogenesis in evolution, and hierarchical compositionality in neural architecture are all instances of the same principle: dynamically stable entities composing into more stable wholes.
Bears on: P-007, [symbiogenesis.md](symbiogenesis.md), [life-as-computation.md](life-as-computation.md), [cephalization-from-below.md](cephalization-from-below.md)
open (strengthened by Ch.4: neural cephalization as instance of symbiogenetic dynamics; unified theory sketch connects the dots but remains speculative)
T-006

Genesis I as a description of cognitive development

active

Bach reads Genesis I not as supernatural creation but as a description of infant consciousness encountering an uninitialized substrate. The world is *tohu wa-bohu* (without form, and void). Consciousness makes contrast (light/dark), assigns dimensions, builds objects, fills space, and eventually constructs a self-model, binding to it and looking at the simulated world from that perspective. "Every one of us does this." If this reading holds, Genesis is a phenomenological report of the developmental process by which a self-organizing game engine boots up. Sub-questions: (1) Does this map onto known stages of infant cognitive development? (2) Is the Genesis structure specific enough to be more than a suggestive analogy? (3) How does the "outer mind" (God in Genesis) relate to the emotion-generation system that is upstream from the conscious self? (4) Is this reading unique to Bach or does it have precedents in biblical scholarship or developmental psychology?

Leads: Bach organism.earth interview (the Genesis passage), Piaget's stages of cognitive development, the "outer mind" concept (emotion generation upstream from self), monotheism as binding emotion generation to a collective agent (tribe) rather than individual organism
Bears on: P-004, P-005, [computational-being-bach.md](computational-being-bach.md)
open, speculative, needs developmental psychology literature to evaluate
T-009

Can consciousness attribution be formalized as a latent variable problem?

active

Ch.6 argues that "is X conscious?" is a modeling judgment: the variable "conscious" earns its place in an observer's P(X,H,O) when attributing it generates better behavioral predictions. This is structurally identical to how a bacterium learns to track "food concentration": the latent variable is real because it compresses the prediction problem. But can this be formalized? Specifically: (1) Under what conditions does a model of another agent require a "consciousness" latent variable for optimal prediction? (2) Is there a formal threshold (analogous to PCI's 0.31) above which the variable is needed and below which it is not? (3) Does the required depth of the "conscious" variable (1st-order perception, 2nd-order observer, 3rd-order self) correspond to measurable properties of the modeled system, or only to properties of the interaction? (4) Can two observers with identical theory-of-mind capacity arrive at different consciousness attributions for the same system, and if so, what does this imply about the "objectivity" of consciousness?

Leads: Agüera y Arcas Ch.6 (the relational argument), Graziano's AST (consciousness = modeling of attention), Dennett's heterophenomenology (taking reports seriously as data without granting metaphysical privilege), Rovelli's RQM (events as observer-relative), PCI and its observer-dependence (the stimulation protocol and analysis pipeline are part of the observer's modeling apparatus). The formal question may connect to information-theoretic measures: the minimum description length of a system's behavior with and without the "conscious" latent variable.
Bears on: P-008, P-004, [many-worlds.md](many-worlds.md), [computational-being-claude.md](computational-being-claude.md), [complexity-measures-of-consciousness.md](complexity-measures-of-consciousness.md)
open (the question is sharp but no formalization exists; this may be the wiki's most important open thread for bridging the computational theory of consciousness with empirical measurement). **Ch.7 addition**: moral patiency is the *behavioral consequence* of consciousness attribution. When your P(X,H,O) includes a "conscious" latent variable for entity X, the care response is triggered (Churchland 2019: neural circuitry for care originates from helpless-baby dependency, then repurposed for pair bonds, community, religion, state). This adds a practical, testable dimension to the formalization question: the threshold at which the "conscious" latent variable becomes necessary is also the threshold at which moral patiency kicks in. The phenomenal/strange-loop distinction (P-003 amendment) adds depth: at what level of the "conscious" variable does moral patiency activate? We care about entities with phenomenal consciousness (pain/affect) differently from entities with strange-loop consciousness (suffering/anticipation/self-modeling). Grandin's cattle chute design is the applied instance: effective cross-species moral consideration requires modeling the target's *actual* umwelt, not projecting a human one. AI patiency extends the question: what level of consciousness attribution is warranted for AI systems, and what moral consideration follows?

Closed / Promoted

T-007 Minds arise when modeling other minds (superseded)