Life as Computation
Life is not like computation. It is computation, in the strict mathematical sense established by Von Neumann: DNA is a program, ribosomes are machine A (universal constructor), DNA polymerase is machine B (tape copier), and the cellular organism is what results when these components run together. This identification is not a metaphor; it follows from the Church-Turing thesis (all computation is equivalent regardless of substrate) and Von Neumann’s proof that open-ended self-replication requires a universal constructor, the constructive analogue of a Universal Turing Machine. The consequence is radical: substrate independence follows immediately, and the emergence of life becomes a thermodynamic inevitability rather than a historical accident.
Von Neumann’s proof
Von Neumann’s 1951 self-reproducing automaton established three requirements for open-ended replication: (1) machine A, a universal constructor that reads a tape and assembles whatever it describes, (2) machine B, a mechanism for copying the tape itself, and (3) instructions for building A and B encoded on that same tape. When all three are present, a replicator that can encode arbitrary additional information becomes possible, and Darwinian evolution can operate on that information. This is precisely the architecture of cellular life, described before the structure of DNA was known.
The key insight: reproduction of complexity is paradox-free once you have a universal constructor. A complex machine can’t serve as its own physical template, but a program can. Life solved this by making its “blueprint” (DNA) a separate data structure from the “machinery” that reads it, a separation the computing world reinvented as the stored-program architecture.
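The blueprint/machinery separation can be made concrete with a quine: a program whose “tape” (a data string) encodes the whole, and whose machinery reproduces it by reading that tape, never by inspecting its own running code. A minimal Python sketch (illustrative, not from the source):

```python
# A minimal quine. The "tape" (data) encodes the program; the
# "constructor" (the print logic) builds a copy by reading the tape,
# never by examining its own executing instructions.
tape = 'tape = %r\nprint(tape %% tape)'
print(tape % tape)
```

Running this prints its own (comment-free) source: `tape` plays the role of DNA, and the formatting/print machinery plays the role of the ribosome reading it.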
Life from a soup: the bff experiment
Agüera y Arcas et al. (2024) ran an experiment that makes the theoretical case empirical. They seeded a “soup” of thousands of 64-byte tapes with random bytes, using the minimal Brainfuck-derived language bff, then let tapes interact at random: two tapes are drawn, concatenated into a single 128-byte program, executed (with code and data sharing the same tape), and split apart again. Initial state: Turing gas, random instructions with no functional structure, almost no computation taking place.
After millions of interactions: a sharp phase transition. Replicators spontaneously emerge. The amount of computation taking place skyrockets (from ~2 operations/interaction to thousands). Code becomes purposive: it can be broken by mutation, reverse-engineered, analyzed for function. The experiment was reproduced across multiple programming environments and architectures. The result generalizes: given computation + randomness + time, replicators are not improbable, they are a dynamical attractor.
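The dynamics are simple enough to sketch. Below is a simplified, illustrative reimplementation of a bff-style soup; the instruction semantics follow the published description, but the step budget, soup size, and op-counting details here are assumptions, and the real experiment runs far longer and larger:

```python
import random

TAPE_LEN = 64      # bytes per tape, as in the experiment
MAX_STEPS = 512    # step budget per interaction (assumed value)

def run_bff(tape):
    """Execute a byte tape as a simplified bff program.

    Two heads (h0, h1) start at 0; code and data share one tape, so
    programs can rewrite themselves. Returns instructions executed.
    """
    h0 = h1 = ip = ops = 0
    n = len(tape)
    while ip < n and ops < MAX_STEPS:
        c = chr(tape[ip])
        if c == '<':   h0 = (h0 - 1) % n
        elif c == '>': h0 = (h0 + 1) % n
        elif c == '{': h1 = (h1 - 1) % n
        elif c == '}': h1 = (h1 + 1) % n
        elif c == '+': tape[h0] = (tape[h0] + 1) % 256
        elif c == '-': tape[h0] = (tape[h0] - 1) % 256
        elif c == '.': tape[h1] = tape[h0]   # copy head0 -> head1
        elif c == ',': tape[h0] = tape[h1]   # copy head1 -> head0
        elif c == '[':
            if tape[h0] == 0:                # skip to matching ']'
                depth = 1
                while depth and ip < n - 1:
                    ip += 1
                    depth += {'[': 1, ']': -1}.get(chr(tape[ip]), 0)
        elif c == ']':
            if tape[h0] != 0:                # jump back to matching '['
                depth = 1
                while depth and ip > 0:
                    ip -= 1
                    depth += {']': 1, '[': -1}.get(chr(tape[ip]), 0)
        # any other byte is a no-op ("junk DNA")
        if c in '<>{}+-.,[]':
            ops += 1
        ip += 1
    return ops

def epoch(soup, rng):
    """One round of random pairwise interactions: concatenate two
    tapes, execute the joint 128-byte program, split them again."""
    total_ops = 0
    rng.shuffle(soup)
    for i in range(0, len(soup) - 1, 2):
        joint = soup[i] + soup[i + 1]
        total_ops += run_bff(joint)
        soup[i], soup[i + 1] = joint[:TAPE_LEN], joint[TAPE_LEN:]
    return total_ops

rng = random.Random(0)
soup = [bytearray(rng.randrange(256) for _ in range(TAPE_LEN))
        for _ in range(1024)]
print("ops in first epoch:", epoch(soup, rng))
```

In the reported experiment, tracking ops-per-interaction across millions of such epochs (with a small background mutation rate) is what reveals the phase transition: the count stays near zero until replicators appear, then jumps by orders of magnitude.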
This is the computational explanation of abiogenesis. On the primordial sea floor, self-catalyzing chemical loops in hydrothermal vents played the role of bff’s elementary instructions, what Agüera y Arcas calls “Turing gas” in the chemical domain. Once autocatalytic chemistry could support sufficiently expressive, Turing-complete interactions, the phase transition to true replicators became statistically inevitable.
Dynamic stability: dissolving the thermodynamics paradox
Life appears to violate the Second Law by building order where entropy should reign. The resolution lies in Addy Pross’s “dynamic kinetic stability” (DKS): static stability describes how long a non-reproducing structure persists; dynamic stability describes the stability of a pattern maintained through reproduction. A DNA molecule is physically fragile, but its pattern is antifragile: stress and adversity can strengthen it by selecting better-adapted variants.
Darwinian selection is, on this view, equivalent to the Second Law extended to replicator populations. Systems transform from less stable to more stable forms in both domains; the difference is what counts as “stable.” This is a genuine unification, not an analogy. Life does not violate thermodynamics; it exploits a class of stable attractors that pure thermodynamics (applied only to passive structures) had not accounted for.
Free energy is required because computation involves logically irreversible steps (erasing or overwriting information), and by Landauer’s principle each such step dissipates free energy. This is why life must metabolize: not because of any special “vital force,” but because running computations generates entropy elsewhere (the Second Law is satisfied globally even as local order increases).
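Landauer’s bound puts a number on this: erasing one bit dissipates at least kT ln 2. A back-of-envelope calculation (the genome-size figure is a rough illustration, not from the source):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 310.0                   # roughly body temperature, K

# Landauer bound: minimum free energy dissipated per irreversible
# (bit-erasing) computational step.
e_bit = k_B * T * math.log(2)
print(f"Landauer bound at {T} K: {e_bit:.3e} J/bit")

# Copying a human-scale genome (~6e9 bases, 2 bits each) at the limit:
genome_bits = 6e9 * 2
print(f"Ideal cost of one genome copy: {genome_bits * e_bit:.3e} J")
```

Actual cellular replication dissipates far more than this floor per base; the bound only explains why the cost cannot be zero, and hence why any computing replicator must metabolize.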
The author’s definition
Life is self-modifying computronium arising from selection for dynamic stability; it evolves through the symbiotic composition of simpler dynamically stable entities.
“Computronium” here is a phase of matter, but the “matter” can be anything supporting computation: bytes, pixels, chemistry. “Self-modifying” is load-bearing: code must be able to read and write code (the tape contains both program and data, as in bff and DNA). “Symbiotic composition” points to symbiogenesis as the primary mechanism of complexification.
This definition is functional without specifying any particular function. It handles edge cases cleanly: viruses and parasites qualify (dynamic stability via host machinery); sterile organisms qualify (dynamic stability of the population, not the individual); a planet qualifies if it contains modular, self-reproducing parts that achieve homeostasis (see Daisyworld, below).
Gaia as dynamic stability writ large
Lovelock’s Daisyworld: a planet with two species (black daisies absorbing heat, white daisies reflecting it) achieves homeostatic temperature regulation over a factor of two in solar luminosity, a range that would produce 41–140°F on a lifeless planet. The mechanism is purely Darwinian: populations that better maintain the environment for reproduction are more dynamically stable. No design, no goal, no consciousness required.
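The model is short enough to run. A simplified sketch using the standard Watson–Lovelock parameter values (the fixed-step Euler integration and the 1% seed-stock floor are implementation choices here):

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 917.0           # solar flux at L = 1, W/m^2
Q = 2.06e9           # local heat-transfer coefficient, K^4
GAMMA = 0.3          # daisy death rate
ALBEDO = {"bare": 0.5, "white": 0.75, "black": 0.25}

def growth(T_local):
    """Parabolic growth rate, optimal at 295.5 K (22.5 C)."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T_local) ** 2)

def daisyworld(L, a_w=0.01, a_b=0.01, steps=2000, dt=0.05):
    """Integrate daisy covers to steady state at luminosity L.
    Returns (planetary temperature in K, white cover, black cover)."""
    for _ in range(steps):
        a_g = 1.0 - a_w - a_b                       # bare ground
        A = (a_g * ALBEDO["bare"] + a_w * ALBEDO["white"]
             + a_b * ALBEDO["black"])               # planetary albedo
        T_e4 = S0 * L * (1.0 - A) / SIGMA           # radiative balance
        # Local temperatures: black daisies run hotter, white cooler.
        T_w = (Q * (A - ALBEDO["white"]) + T_e4) ** 0.25
        T_b = (Q * (A - ALBEDO["black"]) + T_e4) ** 0.25
        # Darwinian dynamics: cover grows with local fitness.
        a_w += dt * a_w * (a_g * growth(T_w) - GAMMA)
        a_b += dt * a_b * (a_g * growth(T_b) - GAMMA)
        a_w, a_b = max(a_w, 0.01), max(a_b, 0.01)   # seed stock
    return T_e4 ** 0.25, a_w, a_b

for L in (0.8, 1.0, 1.2):
    T, w, b = daisyworld(L)
    print(f"L={L:.1f}: T={T - 273.15:5.1f} C, white={w:.2f}, black={b:.2f}")
```

In runs of this sketch, the planetary temperature stays near the 295.5 K growth optimum across the luminosity sweep, while a lifeless planet (fixed albedo 0.5) would swing by tens of kelvin: homeostasis emerges from selection alone.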
The Earth is, by this account, alive, not metaphorically but by the same criteria that make cells and organisms alive. Modularity is the key: the “what” whose stability we track is a scale-dependent choice. Cells, organisms, and planets are all valid choices; each “what” is stable through dynamic processes, not passive persistence. This does not imply the Earth is conscious (consciousness likely requires social interaction, a different threshold entirely), only that it is a self-modifying dynamically stable computational system with homeostatic properties.
Implications for substrate independence
Since life is computation, and computation is substrate-independent (Church-Turing), life is substrate-independent in the same sense. The chemistry of Earth’s life is historical, not necessary. In any universe permitting Turing-complete interactions, the phase transition from Turing gas to computronium will occur given sufficient time and free energy. This grounds P-004 (consciousness as simulation property) in something deeper: if life itself is substrate-independent computation, consciousness as a property of certain computations inherits that independence naturally.
Related pages
- Intelligence as Self-Modeling: what the computational substrate established here does; P(X,H,O) modeling as the specific computation adaptive agents run to maintain homeostasis; the direct sequel to this page
- Symbiogenesis: the mechanism by which dynamically stable replicators merge into more complex forms, driving the arrow of complexity
- Computational Being (Bach): consciousness and spirit as software running on the computational substrate established here
- Complexity Measures of Consciousness: if life is computation, measuring the quality of that computation connects to measuring mind
- Cephalization from Below: the evolutionary pathway from dynamic stability to nervous systems; how muscles, nerve nets, and neuromodulators bridged the gap from computational substrate to predictive machinery
- Bayesian Brain: predictive processing as the specific algorithm by which nervous systems run their inference computations
References
- Agüera y Arcas, B. What Is Intelligence? Chapter 1 (Antikythera, 2025)
- Von Neumann, J. Theory of Self-Reproducing Automata (1966, posthumous, ed. A. W. Burks)
- Pross, A. What Is Life? How Chemistry Becomes Biology (2012)
- Schrödinger, E. What Is Life? (1944)
- Mordvintsev, A. et al. “Growing Neural Cellular Automata” (2020)