
Complexity Measures of Consciousness

How do you measure consciousness from the outside? Three converging research programs attempt this, each using algorithmic complexity as a proxy for conscious state, but they measure different things, ground themselves in different theories, and generate different predictions. The fact that they converge on similar empirical orderings is striking. The places where they diverge are where the interesting questions live.

This page synthesizes: Massimini/Casali’s Perturbation Complexity Index (PCI), Schartner/Seth/Carhart-Harris’s spontaneous signal diversity measures, and Ruffini’s Kolmogorov Theory (KT) as a unifying computational framework. It then distinguishes all three from Tononi’s Integrated Information Theory (IIT), which is often conflated with them but makes fundamentally different claims.


Two ways to measure: perturb or observe

This is the critical methodological distinction: the two families of measures are different measurements probing different properties.

PCI: poke the brain and compress the echo

Method: Deliver a TMS pulse to the cortex. Record the spatiotemporal EEG response. Binarize it. Compress with LZW. The normalized compressed length is PCI.
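The compress-and-normalize step can be sketched in a few lines. This is a minimal illustration, not the published PCI pipeline (which binarizes significance-thresholded, source-localized TMS-evoked activity): binarize a signal at its median, count Lempel-Ziv phrases, and normalize by the expected phrase count for a random string of the same length.

```python
import numpy as np

def lz_phrases(s: str) -> int:
    """Count phrases in an LZ78-style incremental parse of a binary string."""
    seen, w, count = set(), "", 0
    for ch in s:
        w += ch
        if w not in seen:
            seen.add(w)
            count += 1
            w = ""
    return count + (1 if w else 0)

def normalized_lz(signal: np.ndarray) -> float:
    """Binarize at the median, then normalize the phrase count by the
    asymptotic value for a random binary string of length n, n / log2(n)."""
    med = np.median(signal)
    binary = "".join('1' if x > med else '0' for x in signal)
    n = len(binary)
    return lz_phrases(binary) / (n / np.log2(n))
```

A stereotyped oscillatory response scores low; a spatiotemporally diverse one scores near 1, mirroring the low-PCI / high-PCI contrast.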

What it measures: The brain’s capacity for integrated, differentiated information processing. A perturbation that propagates widely (integration) and non-stereotypically across regions (differentiation) yields high PCI. A perturbation that stays local or produces repetitive waves yields low PCI.

Key paper: Casali et al. (2013), validated at scale by Casarotto et al. (2016) on N=540 recordings, establishing PCI* = 0.31 as a threshold separating conscious from unconscious states.

Spontaneous signal diversity: listen to the brain’s ongoing chatter

Method: Record resting-state MEG or EEG. Compute complexity of the spontaneous signal using:

  • LZc (Lempel-Ziv complexity): temporal diversity of individual channels
  • ACE (Amplitude Coalition Entropy): entropy of which channels are most active over time
  • SCE (Synchrony Coalition Entropy): entropy of which channels are synchronized over time
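The coalition-entropy idea behind ACE and SCE can be sketched as follows (the published measures add an amplitude-threshold binarization and a normalization against surrogate data, omitted here): treat each time point's pattern of active channels as one symbol and take the Shannon entropy of the symbol distribution.

```python
from collections import Counter
import numpy as np

def coalition_entropy(binarized: np.ndarray) -> float:
    """Shannon entropy (bits) of the observed channel 'coalitions'.
    binarized: (channels, time) array of 0/1 activity states; each
    column is one coalition."""
    counts = Counter(map(tuple, binarized.T))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```

Two equally frequent coalitions give 1 bit; a frozen pattern gives 0 bits; diverse, non-repeating coalitions push the entropy toward log2 of the number of time points.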

What it measures: The diversity of content currently being processed. Not the brain’s capacity to be conscious, but how rich and varied its current processing is.

Key papers: Schartner, Seth & Barrett (2015) on propofol anesthesia; Schartner, Carhart-Harris, Barrett, Seth & Muthukumaraswamy (2017) on psychedelics.

Why the distinction matters

| Property | PCI (perturbational) | Spontaneous LZc/ACE/SCE |
| --- | --- | --- |
| Measures | Capacity (architecture) | Content diversity (state) |
| Hardware needed | TMS + EEG | MEG or EEG alone |
| Theoretical grounding | IIT: integration + information | Entropic brain hypothesis |
| Clinical use | Detecting covert consciousness in unresponsive patients | Tracking state changes (anesthesia depth, psychedelic intensity) |

The ketamine dissociation is the smoking gun: Li, Mashour et al. (2020) showed that under sub-anesthetic ketamine, spontaneous signal diversity goes up while PCI stays unchanged. The architecture is intact (capacity preserved), but the content is wilder (diversity increased). PCI and spontaneous complexity are measuring different dimensions of P-003.


The complexity ordering of consciousness states

Synthesizing across both measurement traditions, consciousness states can be ranked by complexity. The crucial finding: psychedelics are the only states that exceed normal waking baseline on spontaneous measures.

| State | PCI range | Spontaneous signal diversity | Interpretation |
| --- | --- | --- | --- |
| Psychedelic (LSD, psilocybin, ketamine sub-anesthetic) | ~Normal waking | Above waking | Priors relaxed, content diversity increases; architecture intact |
| Normal waking | 0.44-0.67 | Baseline (high) | Reference state: priors actively constraining prediction |
| REM sleep | 0.35-0.56 | High (comparable to waking) | Conscious experience present; sensory input disconnected |
| Ketamine anesthesia (higher doses) | ~0.3-0.5 | Elevated | Unique among anesthetics: doesn't collapse PCI |
| Minimally conscious state (MCS) | 0.32-0.49 | Variable | Above the 0.31 threshold; some islands of consciousness |
| NREM sleep | 0.12-0.31 | Low | Stereotyped slow-wave responses; minimal conscious content |
| Propofol/Xenon/Midazolam anesthesia | 0.008-0.31 | Low | Dramatic collapse of both capacity and content |
| Vegetative state (VS/UWS) | 0.19-0.31 | Low | Some overlap with NREM |

The PCI* = 0.31 threshold cleanly separates conscious from unconscious states in clinical validation. Above 0.31: someone is home. Below: likely not. This has real clinical stakes: it can detect covert consciousness in patients diagnosed as vegetative on the basis of behavior alone.

The psychedelic result deserves emphasis. Schartner et al. (2017) tested psilocybin, LSD, and ketamine (sub-anesthetic). All three produced reliably higher spontaneous MEG signal diversity than normal waking. This was the first time any brain state had been shown to exceed the waking baseline on any complexity measure. The effect correlated with subjective intensity of the experience.

In predictive processing terms: psychedelics reduce the precision of top-down priors (REBUS model), releasing the brain from its usual predictive constraints. The result is a wider exploration of model-space, more diverse content, less canalized by habitual predictions. The architecture (PCI) stays intact; the content (LZc) explodes.


Ruffini’s KT: the computational-theoretic layer

Ruffini’s Kolmogorov Theory (KT) sits underneath both PCI and spontaneous complexity measures, providing a unified computational interpretation. The central claim: consciousness is what it’s like to run a compressive model of the world.

The framework

  1. Brains are Turing machines that compress their I/O streams
  2. A “model” = a short program that generates the data (Definition 1)
  3. Structured experience arises when an agent tracks I/Os using such a model (Hypothesis 2)
  4. The more compressive the model, the richer the experience
  5. “Reality” = the simplest program the brain can find that accounts for the data

The mathematical backbone is Kolmogorov complexity K(x): the length of the shortest program generating string x. This goes beyond Shannon entropy (statistical) to define the information content of individual objects.

KT’s reinterpretation of the measurements

PCI through KT: A brain running compressive models has tightly coupled networks (the coupling IS the model). Perturb any node and the disturbance propagates through the model’s structure, producing complex echoes. A brain not running models (anesthesia) has structurally present but functionally idle connections: perturbations die locally. PCI indexes modeling quality, not “integrated information” per se.

Spontaneous complexity through KT: A brain tracking complex-looking world data will itself produce complex-looking data streams. LZW complexity of the EEG is an upper bound on K(x). Conscious brains produce what Ruffini calls “rare sequences”: high apparent complexity (high LZW), low true algorithmic complexity (the underlying generative model is simple). The gap between apparent and true complexity is the signature of deep modeling.
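A toy illustration of the apparent-vs-true gap (my construction, not an example from Ruffini's paper): a generator a few lines long, hence with low true algorithmic complexity, whose output resists compression and so has high apparent complexity.

```python
import zlib

def tiny_generator(n: int, seed: int = 1) -> str:
    """A short program (low true complexity) emitting a bit stream with
    high apparent complexity: bit 16 of a linear congruential generator."""
    x, bits = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % (1 << 31)
        bits.append('1' if x & (1 << 16) else '0')
    return "".join(bits)

bits = tiny_generator(4000)
apparent = len(zlib.compress(bits.encode(), 9))  # hundreds of bytes
trivial = len(zlib.compress(b'0' * 4000, 9))     # tens of bytes
```

The compressor sees near-random data (the string looks complex), yet the ten-line generator is the short description the compressor cannot find. That gap is what KT takes as the signature of deep modeling.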

The psychedelic case in KT: The brain is running more diverse but potentially less compressive models. It’s searching model-space rather than tracking a single best model. Apparent complexity goes up (LZW increases) because the model-exploration generates more varied patterns. Whether the models are better (more compressive) or just different (exploring suboptimal regions) is an open question. See T-004.

The key insight: compression explains integration

This is KT’s sharpest contribution and its strongest argument against needing IIT as a separate framework. A truly compressive model must integrate multiple data streams, because treating visual + auditory + proprioceptive data jointly allows shorter descriptions than treating them separately. The mutual information between streams is exploitable for compression only if the model binds them.
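The claim is directly checkable with off-the-shelf compression in a toy setting. A sketch under invented assumptions (two artificial "sensory streams," one echoing the other plus small noise): compressing the streams separately ignores their mutual information, while a model that encodes one stream as a residual against the other exploits it.

```python
import zlib, random

random.seed(0)
# Two correlated streams: stream_b echoes stream_a plus small noise.
stream_a = bytes(random.randrange(256) for _ in range(4000))
stream_b = bytes((a + random.randrange(4)) % 256 for a in stream_a)

# Separate models: compress each stream on its own.
separate = len(zlib.compress(stream_a, 9)) + len(zlib.compress(stream_b, 9))

# Integrated model: keep stream_a, encode stream_b only as its residual
# against stream_a -- the residual carries ~2 bits of entropy per byte.
residual = bytes((b - a) % 256 for a, b in zip(stream_a, stream_b))
integrated = len(zlib.compress(stream_a, 9)) + len(zlib.compress(residual, 9))

assert integrated < separate  # binding the streams gives the shorter description
```

The "integrated" description is shorter precisely because the model binds the two streams; treat them independently and the shared structure is paid for twice.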

Integration isn’t a separate property to be measured (phi). It’s a consequence of good compression. This dissolves IIT’s central puzzle (how to compute phi for real systems) by reframing: you don’t need phi if you can measure modeling quality directly.

KT’s limitations

  • K(x) is uncomputable. The theory's central quantity can't be calculated in general (a consequence of the undecidability of the halting problem). LZW and similar measures are weak upper bounds. The gap between K(x) and LZW(x) is exactly what matters, and it's the thing we can't measure.
  • “Running” vs. “storing” is underspecified. See T-003.
  • The hard problem is axiomatized away. KT explains the structure of experience, not why there is experience. This is by design (the “real problem” strategy), but it means KT is a theory of structured consciousness, not of consciousness per se.

How these differ from IIT

This section exists because the response “that’s just IIT” is common and wrong. The theories share ancestry (Tononi & Edelman’s information + integration insight) but diverge in fundamental ways.

What IIT actually claims

IIT (Tononi, 2004/2016) makes an identity claim: consciousness IS integrated information (phi). Phi is defined as the amount of information generated by a system above and beyond its parts. IIT is:

  • Metaphysical: it claims to say what consciousness is, not just how to measure it
  • Panpsychist: any system with phi > 0 has some consciousness, including thermostats and photodiodes
  • Substrate-independent in principle, substrate-dependent in practice: phi is defined for any causal system, but the “exclusion postulate” means only the local maximum of phi is conscious
  • Computationally intractable: computing phi for any real neural system is infeasible (super-exponential scaling)

Where KT, PCI, and predictive processing diverge from IIT

| Dimension | IIT | KT / PCI / Predictive Processing |
| --- | --- | --- |
| Core claim | Consciousness = integrated information (phi) | Consciousness arises from running compressive models of I/O streams |
| Type of claim | Metaphysical identity | Mechanistic / computational |
| On panpsychism | Embraces it (phi is everywhere) | Rejects it (consciousness requires modeling, which requires agent-environment coupling) |
| Integration | A fundamental, irreducible property to be measured | A consequence of compression (not separate) |
| Computability | Phi is intractable for real systems | K(x) is also uncomputable, but proxy measures (LZW, PCI) are practical |
| On the hard problem | Claims to dissolve it (consciousness = phi) | Sets it aside ("real problem" strategy) |
| Falsifiability | Difficult (what would disprove that consciousness = phi?) | Generates testable predictions about complexity signatures |
| Clinical utility | Limited (can't compute phi) | PCI is already used clinically (PCI* = 0.31 threshold) |
| On psychedelics | Unclear prediction (higher phi? lower? depends on how integration changes) | Clear prediction: higher spontaneous diversity (confirmed), architecture preserved |

The substantive disagreement

IIT says integration is constitutive of consciousness: without it, there is no experience. KT says integration is consequential: it falls out of good modeling. This isn’t a semantic distinction. It generates different predictions:

  • IIT predicts that a system with high information but low integration (e.g., a million independent processors) has low consciousness, even if it models the world beautifully.
  • KT predicts that what matters is modeling quality. If a distributed system with low formal integration nonetheless builds excellent compressive models, it should have rich experience.

Neither prediction has been tested, because we can’t compute phi for real systems and we can’t compute K(x). The disagreement is real but currently empirically underdetermined.

Seth’s cautionary note

From Controlled Hallucination: “An instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.” IIT’s ambition to solve the hard problem (consciousness IS phi) led to a framework that’s theoretically elegant but empirically stuck. The “real problem” approaches (PCI, spontaneous complexity, KT) are generating actual clinical tools and testable predictions precisely because they set aside the metaphysics.


The Entropic Brain Hypothesis

For completeness: the theoretical framework that predicted the psychedelic result before it was measured.

Carhart-Harris et al. (2014) proposed that consciousness states lie on an entropy spectrum:

  • Primary states (psychedelic, early childhood, REM, psychosis) = higher entropy, less constrained, more disordered
  • Secondary consciousness (normal adult waking) = entropy-suppressed, constrained by top-down models (especially the default mode network)

Normal waking is not maximal consciousness: it’s consciousness under constraint. The default mode network (DMN) acts as an entropy-suppressing system, enforcing the narrative self, temporal continuity, and habitual prediction. Psychedelics disrupt DMN function, releasing the system toward higher entropy.

This maps onto the REBUS model: psychedelics flatten the free energy landscape by reducing precision on high-level priors. The entropic brain hypothesis is the macroscopic description; REBUS is the computational mechanism; KT provides the information-theoretic formalization.

References

  • Casali, A. G. et al. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105.
  • Casarotto, S. et al. (2016). Stratification of unresponsive patients by an independently validated perturbational complexity index. Annals of Neurology, 80(5), 718-729.
  • Schartner, M. M., Carhart-Harris, R. L., Barrett, A. B., Seth, A. K., & Muthukumaraswamy, S. D. (2017). Increased spontaneous MEG signal diversity for psychoactive doses of ketamine, LSD and psilocybin. Scientific Reports, 7, 46421.
  • Schartner, M. M., Seth, A. K., & Barrett, A. B. (2015). Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia. PLoS ONE, 10(8), e0133532.
  • Li, D. et al. (2020). Increased signal diversity/complexity of spontaneous EEG, but not evoked EEG responses, in ketamine-induced psychedelic state in humans. PLoS ONE, 15(11), e0242056.
  • Ruffini, G. (2017). An algorithmic information theory of consciousness. Neuroscience of Consciousness, 2017(1), nix019.
  • Carhart-Harris, R. L. et al. (2014). The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs. Frontiers in Human Neuroscience, 8, 20.
  • Tononi, G. et al. (2016). Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450-461.
  • Carhart-Harris, R. L. & Friston, K. (2019). REBUS and the anarchic brain. Pharmacological Reviews, 71(3), 316-344.