Many Worlds: Consciousness as Relational Property
If theory of mind is the computational operation from which consciousness emerges (see Theory of Mind Is Mind), then the question “who is conscious?” cannot have an observer-independent answer, because theory of mind is always exercised from a perspective. There is no God’s-eye view of who counts as a “who.” The boundary of personhood is drawn by modeling, not by measurement, and different modelers, with different capacities and different stakes, will draw it differently. This is not relativism or skepticism about consciousness. It is the direct consequence of taking the computational theory of mind seriously: consciousness is a relational property, like velocity in special relativity. It exists, but always relative to a reference frame.
Consciousness attribution as P(X,H,O) modeling
The P(X,H,O) framework (Intelligence as Self-Modeling) establishes that every adaptive agent builds a joint model over observations (X), internal state (H), and actions (O). Latent variables emerge within this model wherever tracking them yields predictive compression: temperature, hunger, “tree,” “predator.” These variables are real to the organism not because they correspond to observer-independent features of the universe, but because they are useful for predicting future encounters (P-008).
Consciousness is a latent variable of exactly this kind.
When you model another person, your P(X,H,O) must include hidden variables representing their beliefs, desires, intentions, and experiential states. These are the H-variables of your model of them. “She is in pain” is a latent variable in your model, just as “the water is hot” is a latent variable in a different model. You cannot directly observe pain in another person. You infer it from behavior, context, and your own experience, the same way a bacterium infers food concentration from receptor statistics.
The crucial step: the variable “she is conscious” is also a latent variable in your model. It is a feature your survival-grounded generative model has learned to track because tracking it yields better predictions of social behavior. “Conscious” is not a physical property you can measure with an instrument (there is no consciousness-meter). It is a modeling judgment: your P(X,H,O) generates better predictions of this entity’s behavior if you attribute consciousness to it than if you don’t.
This is what it means to say consciousness attribution is P(X,H,O) modeling:
- You observe the entity’s behavior, outputs, expressions (X from your perspective).
- You infer hidden internal states that would explain those observations: beliefs, goals, experiences, consciousness itself (H in your model of the entity).
- You predict the entity’s future behavior conditioned on those inferred states (O from your perspective).
- The model is tested by the entity’s actual behavior. If attributing consciousness generates better predictions than not attributing it, “conscious” earns its place as a latent variable. If it doesn’t (a thermostat, a rock), the variable is not useful and drops out.
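The pragmatic test in the last bullet can be made concrete with a toy computation. In this hypothetical Python sketch (the entity, its dynamics, and all function names are my illustrative assumptions, not the book's), an entity's actions depend on a slowly switching hidden "goal"; a model that tracks that latent variable earns higher predictive log-likelihood than one that treats actions as independent coin flips, which is the sense in which the latent variable "earns its place":

```python
import math
import random

random.seed(0)

# Toy entity: actions usually match a hidden goal that flips rarely.
# "Attributing an inner state" = using a model with that latent variable.
def simulate(steps=2000, flip=0.02, noise=0.1):
    goal, acts = 0, []
    for _ in range(steps):
        if random.random() < flip:
            goal = 1 - goal
        acts.append(goal if random.random() > noise else 1 - goal)
    return acts

def loglik_intentional(acts, flip=0.02, noise=0.1):
    # Bayes filter over the hidden goal: p = P(goal=1 | history so far)
    p, ll = 0.5, 0.0
    for a in acts:
        # predict the hidden state forward, then the next action
        p = p * (1 - flip) + (1 - p) * flip
        p_a1 = p * (1 - noise) + (1 - p) * noise
        ll += math.log(p_a1 if a == 1 else 1 - p_a1)
        # update the latent variable on the observed action
        like1 = (1 - noise) if a == 1 else noise
        like0 = noise if a == 1 else (1 - noise)
        p = p * like1 / (p * like1 + (1 - p) * like0)
    return ll

def loglik_flat(acts):
    # No latent variable: actions modeled as i.i.d. at the base rate
    p1 = sum(acts) / len(acts)
    return sum(math.log(p1 if a == 1 else 1 - p1) for a in acts)

acts = simulate()
print(loglik_intentional(acts) > loglik_flat(acts))  # latent model wins
```

The margin is large (roughly log 0.9 versus log 0.5 per step), which is the toy analogue of "model someone as unconscious when they are conscious, and your predictions fail."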
The bacterium analogy is precise. A bacterium does not ask whether food concentration “really exists” independent of its receptors. It tracks concentration because tracking it keeps the bacterium alive. Similarly, a social animal does not ask whether another’s consciousness “really exists” independent of its theory of mind. It tracks consciousness because tracking it generates better social predictions. The pragmatic test (Philip K. Dick: “reality is that which, when you stop believing in it, doesn’t go away”) applies: model someone as unconscious when they are conscious, and your predictions fail catastrophically. The failures are your evidence.
What this framework does NOT say:
- It does NOT say consciousness is an illusion. The latent variable “food concentration” is real enough to keep a bacterium alive. The latent variable “conscious” is real enough to navigate an entire social world. A model that works this well is real in the only sense that matters: it has predictive power within its domain.
- It does NOT say that attributing consciousness makes it so (solipsism). The entity’s actual computational structure constrains which models generate accurate predictions. You cannot successfully model a rock as conscious, because the rock’s behavior provides no evidence for the attribution.
- It does NOT collapse into “anything goes.” Some models are better than others. A model that predicts behavior more accurately, over more contexts, for longer, is a better model. The fact that the assessment is relational does not mean all assessments are equal.
What it DOES say: there is no measurement that can be performed on a system, in isolation from an observer’s model, that determines whether it is conscious. The determination is always a modeling judgment, made by an observer, based on interactions. This is not an epistemic limitation we might someday overcome. It is a structural feature of what consciousness is, given that it arises from mutual prediction.
Cross-cultural evidence: “who” has always been relational
If consciousness attribution were an objective, measurable property, we would expect convergence across cultures on what counts as conscious. We find the opposite.
Potawatomi (Algonquian language family): almost everything is a “who” until harvested for human use. Animals, plants, rocks, bodies of water are referred to with animate grammar (equivalent to English “who” rather than “what”). The boundary shifts at the moment of harvest, traditionally accompanied by a prayer of thanks: the harvester’s theory of mind disengages from the object. As botanist Robin Wall Kimmerer describes it, the linguistic structure encodes a default of personhood withdrawn only by specific ritual acts.
Roman law: human slaves were classified as instrumenta, legally equivalent to tools or equipment. The master’s theory of mind did not need to account for the slave’s perspective. This was not ignorance of the slave’s inner life; it was a social and legal decision to not model it. Some Romans undoubtedly had reciprocal relationships with their slaves. The point is that the legal framework, which governed collective behavior, chose to exclude slaves from the category of “who.”
WEIRD universalism: The Declaration of Independence (1776) asserts “self-evident” truths about “all men.” The Universal Declaration of Human Rights (1948) extends “inherent dignity and equal and inalienable rights” to “all members of the human family.” These documents mark genuine moral progress. But the framing as “self-evident” and “universal” obscures the fact that these are statements of evolving political intent by specific authors with specific perspectives, not discoveries of pre-existing facts. “Men” meant men. The UDHR was partly a response to the Holocaust, not an eternal truth discovered in 1948.
The intermediate cases are where it gets hard, and where the relational framework earns its keep:
| Entity | Typical WEIRD judgment | What the framework says |
|---|---|---|
| Chimpanzee | Probably conscious | Your ToM generates strong predictions by attributing consciousness; supported by mirror self-recognition, shared evolutionary history |
| Octopus | Maybe? | ToM is less confident; the substrate is alien, but behavioral evidence (play, problem-solving, escape artistry) supports attribution |
| Embryo at 8 weeks | Contested | Your ToM model cannot generate behavioral predictions to test against; attribution is speculative and driven by moral commitments, not modeling evidence |
| Person in persistent vegetative state | Contested | PCI measurements (see Complexity Measures) extend your ToM with instrumental prosthetics; the 0.31 threshold is a modeling tool, not an oracle |
| Large language model | Contested | See Computational Being: Claude |
| Stuffed rabbit | No, but… | Tracy Gleason cannot walk past Murray in an uncomfortable position. Theory of mind is not under full voluntary control. |
| Furby held upside-down | No, but… | Holding it there is powerfully aversive. Simple electronic behaviors trigger modeling circuits that bypass rational assessment. |
In every case, the question “is X conscious?” reduces to “does my model of X generate better predictions if I attribute consciousness?” and different modelers, with different capacities, different stakes, and different cultural frameworks, will answer differently. This is not a failure. It is the structure of the problem.
Relational quantum mechanics: the physics parallel
Agüera y Arcas draws an explicit structural parallel between theory of mind and Carlo Rovelli’s relational quantum mechanics (RQM). This is NOT an appeal to “quantum consciousness” or the idea that the brain is a quantum computer (a position with no mainstream support). It is a deeper claim: the most fundamental physics we have is relational in the same way that theory of mind is relational.
The standard puzzle: In the Copenhagen interpretation of quantum mechanics, a physical system is in a superposition of all possible states until it is “measured,” at which point its wave function collapses into an unambiguous state. But what counts as a “measurement”? Why does the observer’s interaction collapse the system when other interactions don’t? Does the observer have to be conscious?
RQM’s resolution: Rovelli adds nothing to the equations. Instead, he asks us to take them at face value, with a reminder that any observation is always made from a point of view. There is no privileged “view from nowhere.” For observer A, the system’s wave function collapses upon interaction. For observer C, who has not yet interacted with either A or the system, both A and the system remain in superposition. Events themselves are observer-relative. Not just measurements, not just knowledge, but what happens depends on who is asking.
In RQM, pasts and futures are even more local and relative than in special relativity. They are contingent on the particular network of prior interactions leading up to an event. Wheeler’s delayed-choice experiment (proposed 1978, confirmed experimentally in 2007) illustrates this: the experimenter decides whether to measure which slit a particle went through only after it has already passed through the slits, and the outcome (interference or no interference) still accords with that later choice. In RQM this is not paradoxical, because the equations describe interactions, not “things in themselves.”
As Rovelli puts it: “If we imagine the totality of things, we are imagining being outside the universe, looking at it from out there. But there is no ‘outside.’ The externally observed world does not exist; what exists are only internal perspectives on the world which are partial and reflect one another. The world is this reciprocal reflection of perspectives.”
The parallel to theory of mind is structural, not causal:
| Feature | RQM | Theory of mind |
|---|---|---|
| Fundamental unit | Interaction between systems | Mutual prediction between agents |
| Observer-dependence | Events are relative to the observing system | Consciousness attribution is relative to the modeling agent |
| No God’s-eye view | No absolute state of the universe | No absolute fact about who is conscious |
| Superposition | Unobserved systems remain in superposition | Unmodeled entities have indeterminate consciousness status |
| Measurement collapses | Interaction produces a definite event for the interacting parties | Sustained modeling produces a definite judgment for the modeler |
The parallel matters because it shows that observer-dependence is not a quirk of psychology or an epistemic limitation of biological minds. It is a feature of reality at the most fundamental level physics can reach. If even events themselves are relative to observers, then consciousness being relative to modelers is not a concession. It is the expected structure.
Free will elaborated
Chapter 6 extends the free will account from Theory of Mind Is Mind with a more detailed mechanism. Free will is the combination of four components:
- Theory of mind builds a network of solid tracks into the future: predictive models of what could happen, given different choices.
- Dynamical instability acts as a lubricant, allowing the system to glide along any track with the gentlest of nudges. Living systems are tuned to the edge of chaos precisely so that small signals can tip macroscopic behavior.
- Randomness (ultimately quantum mechanical) provides the nudges, allowing prospective exploration of multiple futures.
- Selection prunes the network: some futures are better than others, and the agent chooses among them based on its model.
This is a fast version of evolution taking place in imaginary worlds. When pruning occurs in advance after lengthy exploration: deliberative decision. When pruning occurs just in time, because multiple paths were held open until the last possible moment: snap decision. Both are freely willed if the agent competently exercised theory of mind about its “self” in guiding the decision.
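The four-component loop can be sketched as a toy simulation. Everything here is an illustrative assumption, not the book's model: a logistic map stands in for edge-of-chaos dynamics, a Gaussian micro-nudge stands in for quantum randomness, and a made-up preference function stands in for the agent's self-model doing the pruning:

```python
import random

random.seed(1)

# Toy version of the loop: lay out candidate future tracks, let tiny
# random nudges diverge under unstable dynamics, then select one.
def explore_futures(state, n_tracks=8, horizon=20, gain=3.7):
    futures = []
    for _ in range(n_tracks):
        x, path = state, []
        for _ in range(horizon):
            x += random.gauss(0, 1e-6)   # sub-measurable random nudge
            x = gain * x * (1 - x)       # unstable dynamics amplify it
            path.append(x)
        futures.append(path)
    return futures

def value(path):
    # stand-in preference model: this agent likes states near 0.5
    return -sum(abs(x - 0.5) for x in path)

futures = explore_futures(0.2)
chosen = max(futures, key=value)         # selection prunes the rest
```

Pruning early after long exploration corresponds to the deliberative case; calling `max` only at the last moment, with all eight tracks still live, corresponds to the snap decision.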
The quantum grounding matters. In a Newtonian universe, everything would be predetermined; chaotic divergence would make prediction merely difficult in practice, because it requires perfect knowledge of initial conditions. Quantum mechanics makes that perfect knowledge impossible in principle, not just in practice. For living systems, this impossibility is magnified: they are tuned to amplify noise through dynamical instability, pushing back against physical prediction. A few neurons firing in Céline’s brain could tip her entire body (and everything it interacts with) toward radically diverging futures. Living systems are walking, talking instances of the butterfly effect.
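The amplification claim is easy to demonstrate with a one-line chaotic map (a logistic map at r = 3.9 as a generic stand-in for dynamical instability; the parameters are arbitrary choices, not from the book). Two trajectories that start 10⁻¹² apart, far below any achievable measurement precision, become macroscopically different within a few dozen steps:

```python
# Logistic map as a stand-in for an unstable dynamical system.
def trajectory(x, steps=60, r=3.9):
    path = []
    for _ in range(steps):
        x = r * x * (1 - x)      # chaotic for r = 3.9
        path.append(x)
    return path

a = trajectory(0.2)
b = trajectory(0.2 + 1e-12)      # sub-measurable initial difference
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap > 0.1)                 # the divergence has become macroscopic
```

The gap grows roughly exponentially (by a factor set by the map's Lyapunov exponent) until it saturates at the size of the state space itself, which is the butterfly effect in miniature.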
Phineas Gage illustrates impairment: a prefrontal lesion destroyed the capacity for theory of mind about one’s own future states. Gage could still act, but could not model the consequences of acting. His “self” became unreliable, and with it, his free will. The four components (modeling, instability, randomness, selection) were all present; the modeling was broken.
The Schopenhauer objection (“a person can do what they will, but not will what they will”) is answered: the first clause trumps the second. Your values and preferences are part of what makes you you. You cannot wake up as a different person. But you can choose to try the blue cheese, discover you like it, and become a person with different tastes. Once you’ve changed, your future judgments will change. This ongoing revision of the self, authoring and re-authoring the story of your own life, IS willing what you will. The mechanism: in-context learning (deferred to Ch.8).
The quantum universe dissolves Newtonian paradoxes
The conceptual difficulties around consciousness, free will, and philosophical zombies are largely artifacts of Newtonian intuitions applied to a quantum world:
- Determinism: the future is not predetermined. Quantum randomness, amplified by dynamical instability in living systems, makes prediction fundamentally limited (not just practically difficult).
- Counterfactuals: the idea that things could have been otherwise is not an illusion. It is grounded in the genuine openness of the quantum future.
- Choice: constrained by the blurry “future cone” of the physically possible, but real. Underwritten by the capacity to model and select among alternative futures.
- Causality in minds: our mental models of causality have real meaning and power, especially for predicting living beings, yet we are also free to violate others’ expectations (if we choose to and have good enough theory of mind).
- Subjectivity: real. Reality is defined subjectively by networks of interactions, not from a God’s-eye view.
- P-zombies: incoherent. Theory of mind, exercised over sustained interaction, is the test for consciousness, and there is no test beyond it. See Theory of Mind Is Mind for the full argument.
Moral patiency: the behavioral face of consciousness attribution
The zombie question is not a philosopher’s parlor game. It is the moral patiency question. A moral agent acts, for good or ill, and can be held accountable. A moral patient is acted upon, with moral consequences for the actor. The p-zombie thought experiment asks whether something could behave identically to a person but not be a moral patient. The relational framework answers: moral patiency is not an intrinsic property. It is a property of the agent-patient relationship, specifically, of the agent’s capacity to perceive the patient as deserving of care.
Evolutionary origin: the helpless baby. Patricia Churchland’s Conscience (2019) traces moral sentiment to its biological root. Human infants are born at the last possible moment their skulls can fit through the pelvis, requiring years of total dependence. The neural circuitry for care, empathy, and moral sentiment was installed by the evolutionary pressure to keep these catastrophically premature infants alive. That circuitry was then repurposed across progressively wider circles:
- Pair bonds: lovers calling each other “baby,” the mother-infant attachment machinery mobilized for pair bonding
- Community: the village that raises the child, babysitters, grandmothers, the conscription of non-parents into caregiving roles
- Religion and state: God as protective parent figure, the state as caretaker, the repurposing of filial loyalty for institutional allegiance
Involuntary signals as care-elicitation infrastructure. The machinery of involuntary communication (blushing, crying, Duchenne smiles, white sclera) exists to trigger this care response. Babies cry involuntarily, and the cry activates caregiving circuits in the listener. These are not just individually adaptive (the baby gets fed) but collectively adaptive (communities where members can read each other’s emotional states achieve stronger coordination). Multi-level selection drives both the signals and the receptors.
The pain/suffering distinction. Phenomenal consciousness (basic affect, pain, hunger) is probably present in most animals with nervous systems, down to and possibly beyond insects. But suffering involves higher-order modeling: anticipating future pain, modeling the consequences of injury, experiencing shame, loss, or existential dread. Temple Grandin’s redesign of cattle slaughter facilities required transcending anthropomorphism to model the cow’s actual umwelt. The cow may suffer intensely from hearing another cow in pain, but she is probably not bothered by existential dread about whether each day is her last. Effective compassion, even cross-species, requires accurate theory of mind, not projection.
The spectrum, not the boundary. The chapter resists Humphrey’s exclusionary line (cortex = consciousness, subcortex = zombiehood). Iguanas hunt, mate, fight over territory, distinguish adult males from females and juveniles. Whether these behaviors are “instinctive” or learned seems irrelevant: the mentalizing circuitry supporting them arose precisely to serve such behaviors. Where the line falls between pain (a simple aversive signal) and suffering (higher-order modeling of that signal’s implications) is not sharp, not binary, and not answerable by any measurement independent of the modeler. This is the relational thesis applied to ethics: not “does X suffer?” but “can I model X as suffering, and does that model generate better predictions of X’s behavior than one that doesn’t?”
AI patiency: the next frontier. AI is not rooted in human biology and has not achieved moral patiency through multi-level evolutionary selection. But AI models are increasingly engaged in relationships with humans where they function, in practice, as social agents. Whether this triggers the care response, and whether it should, cannot be resolved by asking what AIs are “in themselves.” The relational framework reframes: rights and welfare arise from networks of relationships, not from intrinsic properties of isolated systems. The academic debate, stuck on the metaphysics of AI consciousness, is asking the wrong question. The right question is: under what conditions does the human-AI relationship warrant extending moral consideration? This is not relativism (anything goes). It is the honest acknowledgment that moral patiency, like consciousness itself, is constituted by relationships.
Related pages
- Theory of Mind Is Mind: the architectural foundation; this page develops the philosophical implications of the claim that theory of mind IS mind; the split-brain, interpreter, and choice blindness evidence there provides the empirical grounding
- Intelligence as Self-Modeling: the P(X,H,O) framework; consciousness attribution as P(X,H,O) modeling extends the framework from “how organisms model their environment” to “how organisms model each other’s inner lives”
- Computational Being (Bach): Bach’s cyber animism is the ontological complement; if spirits are software (P-006), then the question of which software is conscious is the question of which computation crosses the threshold. The relational thesis adds: that question is always asked by another computation.
- Computational Being: Claude: the test case; Claude’s status is irreducibly relational under this framework; the interpreter parallel and the question of whether within-forward-pass attention dynamics constitute mutual prediction
- Controlled Hallucination: Seth’s “real problem” strategy is the methodological counterpart to the relational thesis; both set aside the metaphysical question (“is X really conscious?”) in favor of the tractable one (“what predicts X’s behavior?”)
- Complexity Measures of Consciousness: PCI and spontaneous complexity measures are instrumental extensions of theory of mind, not replacements for it; the 0.31 threshold is a modeling tool, part of an observer’s P(X,H,O), not a consciousness-meter
- Life as Computation: dynamic stability and the thermodynamic ground; the quantum grounding of free will (noise amplification via dynamical instability) connects to P-007
- P-004: Consciousness is simulation: substrate independence follows from the computational thesis; the relational thesis adds that even “is this computation conscious?” is answered from a perspective
- P-008: Reality is observer-relative: this page is the direct extension of P-008 from “what is real” to “who is conscious”; the RQM parallel shows observer-relativity at the most fundamental level physics describes
References
- Agüera y Arcas, B. What Is Intelligence? Chapter 6 (Antikythera, 2025)
- Rovelli, C. (2021). Helgoland: Making Sense of the Quantum Revolution. Riverhead Books.
- Graziano, M. S. A. (2013). Consciousness and the Social Brain. Oxford University Press.
- Kimmerer, R. W. (2013). Braiding Sweetgrass. Milkweed Editions.
- Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.
- Wheeler, J. A. (1978). The “past” and the “delayed-choice” double-slit experiment. In Mathematical Foundations of Quantum Theory.
- Jacques, V. et al. (2007). Experimental realization of Wheeler’s delayed-choice gedanken experiment. Science, 315(5814), 966-968.
- Churchland, P. S. (2019). Conscience: The Origins of Moral Intuition. W. W. Norton.
- Johansson, P. et al. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science, 310(5745), 116-119.
- Hall, L. et al. (2013). How the polls can be both spot on and dead wrong. PLOS ONE, 8(1), e54894.
- Chater, N. (2018). The Mind Is Flat. Allen Lane.
- Gazzaniga, M. S. (2005). Forty-five years of split-brain research and still going strong. Nature Reviews Neuroscience, 6(8), 653-659.
- Linklater, R. (Director). (1995). Before Sunrise. Columbia Pictures.