concept · active · consciousness, computation, epistemology · created 2026-04-09 · updated 2026-04-13 · P-001 P-002 P-003

Controlled Hallucination

Perception is not a window onto reality. It is the brain’s best guess about the causes of sensory signals: a “controlled hallucination” in which top-down predictions are continuously reined in by bottom-up sensory evidence, “a fantasy that coincides with reality” (Chris Frith, Making Up the Mind, 2007).

The term captures a specific and testable claim: what we experience as the world is carried by perceptual predictions flowing from deep brain regions toward sensory surfaces, not by sensory signals flowing inward. Sensory input carries only prediction errors: the discrepancies between what the brain expected and what it received. Consciousness lives on the prediction side of this equation.
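A toy sketch makes the division of labor concrete (names and gains are illustrative, not Seth’s formalism): the percept is the running top-down prediction, and the sensory signal enters only as an error term.

```python
import numpy as np

def perceive(signal, prior, n_steps=50, lr=0.1):
    """Iteratively refine a percept: the percept IS the prediction;
    the sensory signal contributes only via the prediction error."""
    percept = prior                    # experience starts as the top-down guess
    for _ in range(n_steps):
        error = signal - percept       # bottom-up traffic: only the discrepancy
        percept += lr * error          # prediction nudged toward the evidence
    return percept

rng = np.random.default_rng(0)
signal = 1.0 + 0.3 * rng.standard_normal()   # noisy sensory input
print(perceive(signal, prior=0.0))           # percept converges on the inferred cause
```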

Source: Anil K. Seth, “The hard problem of consciousness is a distraction from the real one” (Aeon, 2016/2026). Extended in Being You: A New Science of Consciousness (2021).


The real problem strategy

Seth reframes the consciousness debate by introducing a third option alongside Chalmers’s easy and hard problems:

| Approach | Question | Risk |
| --- | --- | --- |
| Easy problem | How does the brain produce behavior? | Ignores consciousness entirely |
| Hard problem | Why is there experience at all? | May be unanswerable; can stall experimental progress |
| Real problem | How do specific properties of consciousness map onto biological mechanisms? | Requires precise phenomenological descriptions |

The analogy is with biology: nobody solved “the hard problem of life.” Biochemists once doubted that mechanism could explain aliveness. Instead of answering the metaphysical question, they explained metabolism, homeostasis, reproduction, and the mystery dissolved. The real problem strategy bets that consciousness will follow the same trajectory.

This is a pragmatic and methodological claim, not a philosophical one. Seth isn’t saying the hard problem is meaningless. He’s saying that progress doesn’t require solving it first.

Consciousness is not one thing

A foundational move: decompose consciousness into separable aspects, each with distinct mechanisms.

Conscious level: being conscious at all. The difference between dreamless sleep and vivid awareness. Not the same as wakefulness: dreams are conscious but asleep; vegetative states are awake but unconscious. Level correlates with the complexity of brain activity: not its amount, but its spatiotemporal structure.
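As a crude stand-in for the complexity measures this points at (PCI-style metrics, discussed later), a toy LZ78-style phrase count distinguishes two signals with the same amount of activity but different structure:

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """LZ78-style parse: count distinct phrases in a left-to-right scan.
    Higher counts = richer spatiotemporal structure, not more activity."""
    seen, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:      # novel phrase: record it, start a new one
            seen.add(phrase)
            count += 1
            phrase = ""
    return count

rng = np.random.default_rng(0)
burst = "1" * 500 + "0" * 500                       # same number of ones...
spread = "".join(rng.permutation(list(burst)))      # ...shuffled into rich structure
print(lz_phrase_count(burst), lz_phrase_count(spread))   # regular < structured
```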

Conscious content: what populates experience when you are conscious. The sights, sounds, emotions, thoughts. Every conscious experience is unique (massively informative in the information-theoretic sense) and unified (integrated into a single scene). This is where Tononi and Edelman’s insight lands: consciousness is simultaneously high-information and high-integration.

Conscious self: the experience of being you. Further decomposable into:

| Layer | What it is |
| --- | --- |
| Bodily self | Experience of being and having a particular body |
| Perspectival self | Perceiving from a first-person point of view |
| Volitional self | Experiences of intention and agency |
| Narrative self | Continuity over time, the “I,” autobiographical memory |
| Social self | Self-experience refracted through perceived minds of others |

These layers usually operate as a seamless whole, but they can dissociate: in neurological damage, psychedelic states, meditation, and depersonalization. The fact that they can come apart reveals that the unified self is a construction, not a given.

Dissociation evidence

The case for separability rests on the fact that each aspect can be independently manipulated or lost. If consciousness were a single monolithic property, this would be impossible.

Conscious level vs. wakefulness:

| State | Conscious level | Wakefulness | The dissociation |
| --- | --- | --- | --- |
| Normal waking | On | On | Baseline |
| Dreaming | On | Off | Conscious experience happening, but asleep |
| Vegetative state | Off | On | Eyes open, sleep-wake cycles, but nobody home |
| General anesthesia | Off | Off | Both gone |
| Ketamine (low dose) | Altered | On | Conscious level itself admits of gradations and qualitative shifts |

Conscious content vs. sensory input:

| Phenomenon | Sensory input | Conscious content | The dissociation |
| --- | --- | --- | --- |
| Normal perception | Present | Matches input | Baseline |
| Binocular rivalry | Constant | Alternates | Same input, different experience |
| Change blindness | Changes | Unchanged | Input changes, experience doesn’t notice |
| Hallucination (psychosis) | Absent | Present | No input, vivid experience |
| Masking | Present | Absent | Input arrives, never becomes conscious |

Binocular rivalry is the cleanest demonstration: feed different images to each eye, sensory input stays constant, but conscious content flips. Content isn’t dictated by the signal: it’s selected by the brain’s predictions.
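A two-hypothesis Bayes computation shows the selection mechanism in miniature (numbers invented for illustration): when the likelihoods tie, the posterior simply tracks the prior.

```python
def posterior_face(prior_face, lik_face, lik_house):
    """Posterior probability that the percept is 'face' given ambiguous input."""
    p_face = prior_face * lik_face
    p_house = (1 - prior_face) * lik_house
    return p_face / (p_face + p_house)

# Rivalry: the input supports both interpretations equally well...
lik_face = lik_house = 0.5
# ...so conscious content follows the prior, which alternates over time:
for prior in (0.8, 0.5, 0.2):
    print(prior, "->", round(posterior_face(prior, lik_face, lik_house), 2))
```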

Self layers dissociating:

| Layer | Normal case | Dissociation example |
| --- | --- | --- |
| Bodily self | “This is my body” | Rubber hand illusion: brain “adopts” a fake hand. Phantom limbs: body ownership persists without the limb. |
| Perspectival self | “I’m perceiving from here” | Out-of-body experiences: perspective detaches from body location |
| Volitional self | “I did that” | Anarchic hand syndrome: your hand acts, but “you” didn’t initiate it |
| Narrative self | “I’m the same person I was yesterday” | Severe amnesia (Korsakoff’s): bodily and perspectival self intact, but autobiographical continuity gone |
| All self layers | Unified “I” | Psilocybin ego dissolution: conscious level = on, content = rich, but self collapses. Experience without an experiencer. |

The psilocybin case is particularly telling: you have conscious level (you’re awake), you have content (intensely so), but the self disintegrates. This is direct evidence that self is a construction that can be switched off without switching off consciousness itself.

Experimental evidence for predictions over errors

The controlled hallucination thesis predicts that consciousness should depend more on top-down predictions than on bottom-up sensory signals. Seth’s lab and others have tested this directly:

  • TMS disruption of top-down signaling (Pascual-Leone & Walsh, 2001): interrupting predictions abolished conscious perception of motion, even though bottom-up signals remained intact.
  • Binocular rivalry experiments (Seth lab): people consciously see what they expect, not what violates expectations. Prior beliefs win.
  • Alpha rhythm phase-locking: the brain imposes its predictions at preferred phases within the ~10 Hz alpha oscillation over visual cortex. This is a mechanistic handle on how predictive perception is implemented, not just that it occurs.

The biological masked autoencoder

The computational mechanism underlying “controlled hallucination” has a surprisingly direct artificial analog: the masked autoencoder. An autoencoder trained to reconstruct images from which 75% of the content has been randomly masked (patches of pixels, in the standard setup) learns, without any labels, to build rich internal representations of everything in its training data. It learns to see generically by learning to predict what’s missing.
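A minimal sketch of the masking objective, assuming flattened 28x28 inputs and a toy MLP encoder/decoder rather than He et al.’s ViT-based architecture:

```python
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    """Toy MAE: reconstruct 784-pixel images from 25% of their pixels."""
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x, mask_ratio=0.75):
        mask = (torch.rand_like(x) < mask_ratio).float()   # 1 = hidden pixel
        visible = x * (1 - mask)                           # zero out 75% of input
        recon = self.decoder(self.encoder(visible))
        # Score only what the model had to predict: the masked region.
        loss = ((recon - x) ** 2 * mask).sum() / mask.sum()
        return recon, loss

model = TinyMaskedAutoencoder()
batch = torch.rand(32, 784)        # stand-in for real image data
recon, loss = model(batch)
loss.backward()                    # one unsupervised training step
```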

Human vision operates on the same principle. The retinal fovea, where we resolve enough detail to read, covers only a few degrees of visual arc (barely a handful of words). The wider visual field is low-resolution and crisscrossed with blood vessels. Our impression of a sharp, stable, panoramic visual world is a construction: the brain predicts (hallucinates) everything outside the fovea, and each saccade (approximately five per second) provides an opportunity to test those predictions against reality.

Gaze-contingent display experiments (1970s-1990s) demonstrate this dramatically. A subject reads text on a display. An eye tracker ensures that wherever the subject looks, the correct text is shown, but everywhere else, the letters are randomized. To an onlooker, the screen is an illegible jumble. To the subject, if the window of clear text is merely eighteen characters wide (about three characters left of fixation, fifteen to the right), the entire page looks clear and steady. The subject is reading a hallucination that happens to be correct wherever they check.
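The paradigm is easy to simulate in miniature (a hypothetical moving_window helper; strings stand in for the display):

```python
import random

def moving_window(line: str, fixation: int, left=3, right=15) -> str:
    """One display frame: correct text inside the window around the
    fixation point, randomized letters everywhere else."""
    lo, hi = max(0, fixation - left), min(len(line), fixation + right + 1)
    out = []
    for i, ch in enumerate(line):
        if lo <= i < hi or not ch.isalpha():
            out.append(ch)                 # window (and spaces) left intact
        else:
            out.append(random.choice("abcdefghijklmnopqrstuvwxyz"))
    return "".join(out)

line = "the entire page looks clear and steady to the reading subject"
for fix in (5, 25, 45):                    # three successive fixations
    print(moving_window(line, fix))
```

Each printed frame is an illegible jumble to an onlooker, yet a reader whose fixation tracks the window never encounters anything but correct text.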

The gaze-contingent result is not a metaphor for the controlled hallucination thesis. It is the controlled hallucination thesis, measured in the lab. Vision is an actively maintained reconstruction in which sensory input acts as an error-correction signal, not as the signal itself. After a saccade, uncertainty in the newly observed region drops; the moment the eyes move away, uncertainty begins to grow again (a dynamic reminiscent of quantum measurement, where an unobserved particle’s probability distribution spreads).
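A sketch of that fixate/drift dynamic, with hypothetical decay and growth constants:

```python
import numpy as np

def update_uncertainty(sigma, fixated, decay=0.2, growth=0.05):
    """Per-region uncertainty: collapses where the fovea lands,
    grows everywhere the eyes are not."""
    sigma = sigma + growth       # unobserved regions drift upward
    sigma[fixated] *= decay      # saccade target gets error-corrected
    return sigma

sigma = np.ones(8)               # eight regions of the visual field
rng = np.random.default_rng(1)
for _ in range(5):               # roughly one second at five saccades/sec
    sigma = update_uncertainty(sigma, fixated=rng.integers(8))
print(sigma.round(2))            # low where recently fixated, rising elsewhere
```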

Deep Dream (Mordvintsev, Olah, and Tyka 2015) provides a striking visual analog from the artificial side. By enhancing activity in the semantic layers of a CNN during image processing, Deep Dream produces hallucinatory imagery: animal faces emerging from clouds, eyes in tree bark. Suzuki et al. (2017) hypothesize that psychedelic visual experience arises from a similar mechanism: heightened top-down prediction at the expense of bottom-up error correction, causing the generative model to “dream” onto the visual field.
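The core of Deep Dream is gradient ascent on a layer’s activations; a minimal PyTorch sketch (pretrained VGG16 and the layer cut are illustrative choices):

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

# Truncate a pretrained CNN at an intermediate "semantic" layer.
model = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise

opt = torch.optim.Adam([img], lr=0.05)
for _ in range(20):
    opt.zero_grad()
    activations = model(img)
    # Amplify the layer's activity: pure top-down "prediction" with no
    # sensory error term to rein it in.
    loss = -activations.norm()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)        # keep the image in valid pixel range
```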

The convergence is noteworthy: unsupervised prediction in artificial neural nets, saccadic vision in biological ones, and the hallucinatory distortions under psychedelics all point to the same underlying computational principle. Learning, perceiving, and hallucinating are not three different operations. They are the same operation (prediction from a generative model) under different regimes of sensory constraint.

The Blair Witch sensation: what raw vision looks like

If you could experience your visual input feed in a rawer, less “hallucinated” form, it would resemble the shaky, grainy found footage of The Blair Witch Project: a flashlight beam jumping spasmodically, illuminating a tree branch here, a bit of a face there, the corner of a structure, a dark something on the ground. That horror trope produces perceptual claustrophobia: no matter where you look, the important stuff is happening offscreen, to one side, or above, or behind. If you don’t normally feel that near-constant panic, it is not because you see so much more than the flashlight illuminates. It is because your controlled hallucination is good enough to make you feel that you see everything “offscreen,” even though you can’t. You have confidence that your continually updated prediction models every behaviorally relevant feature of your environment, well beyond the narrow cone of the foveal beam.

The Portia jumping spider illustrates the extreme case. Its high-resolution front-facing eyes have a very narrow field of view. A single-frame visual system (the CNN model) would be nearly useless for Portia: understanding a scene requires moving the eyes around dynamically and reconstructing a model of the world over time, like a blind person “seeing” a face via fingertip touch, but using a single fingertip. Large predators (birds, frogs, mantises) are Portia’s main threat, apparently because they are too big to recognize before it is too late. The spider’s temporal reconstruction fails when the object exceeds the integration window.

Human vision operates on the same principle as Portia’s, just at higher bandwidth and with a wider (but still narrow) fovea. The gaze-contingent display experiments above are the laboratory confirmation: an eighteen-character window of clear text is sufficient to produce the experience of a fully legible page. Everything outside the flashlight beam is hallucinated, and the hallucination is good enough that we never notice. Controlled hallucination is not merely a metaphor for how vision works. It is a literal description of what your brain is doing right now as you read this sentence.

Hallucination as a spectrum

If normal perception is a controlled hallucination, then pathological hallucination is what happens when control is lost, when priors dominate too aggressively over sensory evidence.

Different types of hallucination map onto different levels of the predictive hierarchy:

  • Simple hallucinations (geometric patterns, textures): over-eager predictions at low cortical levels
  • Complex hallucinations (objects, faces, narratives): over-weighted predictions at higher levels of abstraction

This has clinical traction: it targets the mechanism underlying symptoms of psychosis and psychedelic states, the way antibiotics target the cause of infection rather than merely suppressing symptoms.

The connection to The Bayesian Brain is direct: Chandaria’s precision weighting framework explains why hallucinations happen (reduced precision on prediction errors = priors run unchecked). Seth provides the phenomenological mapping of what happens at each level.
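A one-function Gaussian update makes the failure mode explicit (toy numbers): shrink the precision on prediction errors and the posterior collapses onto the prior.

```python
def update(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted Bayesian update (Gaussian conjugate case)."""
    k = obs_precision / (prior_precision + obs_precision)
    return prior_mean + k * (obs - prior_mean)

# Healthy perception: precise sensory evidence pulls the percept to the input.
print(update(prior_mean=0.0, prior_precision=1.0, obs=1.0, obs_precision=4.0))  # ~0.8
# Reduced error precision: the prior runs unchecked -> hallucination regime.
print(update(prior_mean=0.0, prior_precision=1.0, obs=1.0, obs_precision=0.1))  # ~0.09
```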

Beast machines: the body as first prior

The deepest layer of the controlled hallucination is not about the external world at all. It’s about the body.

Interoceptive inference: the brain continuously predicts its own physiological state (heartbeat, blood pressure, gastric tension, temperature). These predictions carry the highest expected precision because getting them wrong is fatal. Before you perceive a coffee cup, you perceive (predict) your own continued viability.

Seth’s rubber-hand illusion variant demonstrates this: a virtual hand pulsing in synchrony with the participant’s heartbeat induces stronger ownership than one pulsing out of sync. The brain decides what is “my body” using the same Bayesian machinery it uses for everything else, but with physiological signals given privileged weight.

Active inference adds a further twist: for the body, accurate perception matters less than effective regulation. The brain doesn’t just predict internal states; it acts to make those predictions come true (maintaining homeostasis). This is why bodily self-experience has a distinctive quality compared to perceiving external objects: it’s control-oriented, not just epistemic.
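A sketch of the asymmetry, with a hypothetical gain constant: rather than revising the prediction, the system emits actions until the body matches it.

```python
def active_inference_step(setpoint, state, gain=0.5):
    """Instead of updating the belief (perception), emit an action that
    drags the physiological state toward the predicted setpoint."""
    error = setpoint - state         # interoceptive prediction error
    action = gain * error            # e.g. shiver, sweat, adjust heart rate
    return state + action

temp = 35.0                          # degrees C, below the predicted 37.0
for _ in range(10):
    temp = active_inference_step(setpoint=37.0, state=temp)
print(round(temp, 2))                # regulation makes the prediction come true
```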

“I predict myself therefore I am.” Descartes backwards: we are not minds that happen to have bodies. We are bodies whose self-models generate the experience of being a mind. Beast machines.

Andy Clark’s The Experience Machine (2023) provides two striking examples of H-modeling in action. A construction worker shoots a four-inch nail through the roof of his mouth into his brain; he experiences only mild toothache and takes Advil for six days before an x-ray reveals the nail. Meanwhile, a second construction worker jumps onto a plank, a seven-inch nail pierces his boot, and he arrives at the emergency room in agony requiring fentanyl — when the boot comes off, the nail has passed harmlessly between his toes. The first worker’s model assigned low probability to injury in the absence of dramatic feedback; the second’s model assigned overwhelming probability to serious harm given the visual evidence. Neither was “faking.” Both were running unconscious inference over their interoceptive state. Pain is not a readout of tissue damage. It is a latent variable in P(X,H,O) — a survival-grounded model that can be dramatically wrong.
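The two cases reduce to a two-hypothesis Bayes computation (all numbers invented for illustration): pain tracks the posterior over damage, not the tissue.

```python
def p_damage(prior_damage, p_evidence_given_damage, p_evidence_given_none):
    """Posterior probability of tissue damage; pain tracks this estimate."""
    num = prior_damage * p_evidence_given_damage
    den = num + (1 - prior_damage) * p_evidence_given_none
    return num / den

# Worker 1: no dramatic feedback -> low prior, weak evidence, mild "toothache".
print(p_damage(prior_damage=0.01,
               p_evidence_given_damage=0.3, p_evidence_given_none=0.2))   # ~0.01
# Worker 2: nail visibly through the boot -> overwhelming evidence,
# agonizing pain despite zero actual damage.
print(p_damage(prior_damage=0.05,
               p_evidence_given_damage=0.99, p_evidence_given_none=0.01))  # ~0.84
```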

The nail cases map directly onto the P(X,H,O) framework: H is the organism’s estimate of its own internal state, computed by the same kind of statistical inference as X. The inside/outside distinction is not fundamental; it is a topological convention within a joint model. See Intelligence as Self-Modeling for the first-principles derivation.

IIT: a cautionary note

Seth acknowledges the Tononi-Edelman insight (consciousness = information + integration) as foundational, but warns against its extension into Integrated Information Theory’s hard-problem ambitions:

  • IIT claims consciousness is integrated information (phi), leading to panpsychism
  • Computing phi is intractable for any real complex system (see the combinatorics sketch below)
  • “An instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress”

This is methodological critique, not dismissal. The information + integration insight remains valuable. The overreach into metaphysics is the problem.
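One source of the intractability is elementary combinatorics: even the simplest version of phi requires a search over system bipartitions, and exact IIT formulations are far worse.

```python
def bipartitions(n: int) -> int:
    """Number of ways to cut a system of n elements into two nonempty parts."""
    return 2 ** (n - 1) - 1

for n in (10, 302):                # a toy network; C. elegans's 302 neurons
    print(n, bipartitions(n))      # 511, then a 91-digit number
```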

Related

  • Cephalization from Below: the evolutionary origin of the predictive machinery; saccadic vision as the biological training regimen that produces the controlled hallucination
  • Intelligence as Self-Modeling: P(X,H,O) as the first-principles account of what the beast machine computes; pain as latent variable is a direct consequence of H-modeling under survival dynamics
  • The Bayesian Brain: the computational framework underlying controlled hallucination; Chandaria provides the variational inference machinery, Seth maps it to phenomenology
  • Theory of Mind Is Mind: the social self layer (self-experience refracted through perceived minds of others) is theory of mind applied to self-construction; the illusionism critique (Dennett/Harris/Sapolsky) maps onto the real-problem strategy: explain the mechanism, don’t debunk the phenomenon
  • Computational Being (Bach): Bach’s dissociation of consciousness from personal self via meditation/psychedelics maps onto Seth’s self decomposition; Bach adds the panpsychism critique and the software/simulation framing (P-004)
  • Complexity Measures of Consciousness: empirical operationalization of the ideas here; PCI as the consciousness thermometer Seth advocates for, plus the psychedelic result (higher than waking) and KT as unifying framework
  • P-001: Perception is inference: this essay provides direct experimental evidence (TMS disruption, binocular rivalry, alpha phase-locking)
  • P-002: Experience is a constrained construction: “controlled hallucination” is the mechanistic restatement of this prior
  • P-003: Consciousness is not one thing: the level/content/self decomposition originates here; dissociation evidence is the empirical backbone

References

  • Seth, A. K. (2016/2026). “The hard problem of consciousness is a distraction from the real one.” Aeon.
  • Seth, A. K. (2021). Being You: A New Science of Consciousness. Faber & Faber.
  • Frith, C. (2007). Making Up the Mind: How the Brain Creates Our Mental World. Wiley-Blackwell.
  • Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. Science, 282(5395), 1846-1851.
  • Massimini, M. et al. (2005). Breakdown of cortical effective connectivity during sleep. Science, 309(5744), 2228-2232.
  • Pascual-Leone, A., & Walsh, V. (2001). Fast backprojections from the motion to the primary visual area necessary for visual awareness. Science, 292(5516), 510-512.
  • Allen, M., & Tsakiris, M. (2019). The body as first prior. The Interoceptive Mind.
  • Agüera y Arcas, B. (2025). What Is Intelligence? Chapter 4. Antikythera.
  • Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going Deeper into Neural Networks. Google Research Blog.
  • Suzuki, K., et al. (2017). A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology. Scientific Reports.
  • Freeman, J. & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience.