Simulations
14 interactive simulations · what they show and what they can't
These are small working models of mechanisms — biological, physical, perceptual. Each one makes a claim by embodying it. The claim is visible in the behavior; the assumptions behind it aren't.
Every simulation has something it can't show. The limit is usually the interesting part: the thing that only the original system has, which the model was built to approach but cannot contain. What follows is a list of what each model embodies, and what it hides.
Perception & Timing
Signals from different senses arrive at different times and the brain groups them within a ~100–300ms window as simultaneous. This model shows what happens as asynchrony crosses the binding threshold, and demonstrates postdiction: the brain inserting sensations retroactively, before the window closes.
What it can't show: the binding window isn't a fixed threshold — it adjusts to context and history, and different pairs of modalities have different windows. The simulation treats 80ms as a constant. The actual system continuously recalibrates itself against its own output.
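The fixed-threshold simplification fits in a few lines (the 80ms constant is from the text above; treating it as one symmetric window for every modality pair is exactly the simplification being described):

```python
def bound_as_simultaneous(asynchrony_ms, window_ms=80.0):
    """Judge two cross-modal signals as one event when their arrival
    times differ by less than the binding window. The window is fixed
    here; the real one adapts to context and differs across modality
    pairs."""
    return abs(asynchrony_ms) < window_ms

# Sweeping asynchrony across the threshold flips the judgment sharply,
# which a continuously recalibrating system would not do.
judgments = [bound_as_simultaneous(dt) for dt in (0, 40, 79, 81, 160)]
```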
The page fades when you stop moving. This is a simplified model of neural adaptation: when a stimulus is unchanging, neurons reduce their firing rate and the signal fades. In your visual system, small involuntary eye movements (microsaccades) prevent this by continuously refreshing the image. Here, that work is delegated to you. Stop moving and the adaptation proceeds.
What it can't show: the fade here is total and linear. Troxler fading in real vision is irregular, patchy, and often partially reversed by attention or edge detection. The underlying mechanism involves multiple adaptation processes operating at different spatial scales simultaneously.
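The adaptation logic in its toy form (the geometric decay rate is invented; real adaptation runs at several rates and spatial scales at once):

```python
def fade_trace(stimuli, decay=0.9):
    """Toy neural adaptation: the response decays geometrically while
    the stimulus is unchanging, and any change resets it to full
    strength, the way a microsaccade refreshes the retinal image."""
    response, prev, trace = 1.0, None, []
    for s in stimuli:
        response = 1.0 if s != prev else response * decay
        prev = s
        trace.append(response)
    return trace

# A constant stimulus fades; a single change restores the response.
trace = fade_trace([5, 5, 5, 5, 7])
```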
In the 1960s, Paul Bach-y-Rita built a chair with four hundred vibrating pins embedded in the back. A camera fed a signal into the pins. Blind people held the camera and moved it around the room. At first they felt the pins. Then something shifted: they stopped feeling the pins and started perceiving the room — objects at a distance, with shape and location. The device became transparent. This simulation makes the substitution visible as a layered mapping.
What it can't show: perceptual transparency isn't a feature of the device — it's something that happens in the brain over time with practice. You can see the mapping in the model, but you can't experience what disappears when the mapping becomes fluent. The before and after states can't both be displayed at once.
Phantom limb pain: after amputation, many patients experience a vivid phantom, often frozen in a clenched position, with pain they can't relieve because the limb isn't there to move. Ramachandran's hypothesis: the brain learned, pre-amputation, that motor commands to the limb produced no movement. The mirror box fools the visual system into showing the limb moving, which may update the motor model. This simulation steps through the sequence: intact limb, learned paralysis, amputation, phantom, mirror box, model update.
What it can't show: the simulation embodies the learned-paralysis hypothesis. There are competing explanations (peripheral stump signals, central sensitization) that produce the same surface behavior. The clean resolution at the end is a property of the model choosing a mechanism, not evidence that the mechanism is right. The simulation can't be agnostic between its own assumptions.
Navigation & Memory
E. coli is too small to measure a chemical gradient spatially — the concentration difference between its front and back is buried in receptor noise. So it doesn't try. Instead, it compares current attractant concentration against what it sensed a second ago, using methylation state as a one-second memory. If things are improving, it suppresses tumbling and keeps running. If not, it tumbles sooner and picks a random new heading. The result, over many runs, is drift up the gradient. No map. No goal representation. Just: keep going if it's working.
What it can't show: the real bacterium's memory is adaptive — the methylation baseline shifts continuously, so the system is always measuring against recent history, not against some fixed reference. The simulation uses a simplified comparison. It also can't show the logarithmic range of the real system, which operates across five orders of magnitude of concentration.
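The run-and-tumble logic is small enough to sketch. The gradient, step size, and tumble probabilities below are all invented for illustration; only the compare-against-a-moment-ago structure comes from the mechanism described above:

```python
import random

def chemotaxis(steps=2000, seed=0):
    """Run-and-tumble on a 1-D gradient where concentration c(x) = x.
    The cell compares the current concentration with what it sensed
    one step ago (its one-second memory) and tumbles more readily when
    things are not improving. Fixed reference comparison, unlike the
    real system's continuously shifting methylation baseline."""
    rng = random.Random(seed)
    x, heading, prev_c = 0.0, 1.0, 0.0
    for _ in range(steps):
        c = x                           # attractant concentration here
        improving = c > prev_c
        p_tumble = 0.02 if improving else 0.3
        if rng.random() < p_tumble:
            heading = rng.choice([-1.0, 1.0])   # random new heading
        x += heading * 0.1
        prev_c = c
    return x

# No map, no goal: the bias in tumble timing alone produces net drift
# up the gradient.
final = chemotaxis()
```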
At the moment of learning, two molecular pathways fire simultaneously: one stabilizes the memory trace, one actively erases it. In Drosophila, "forgetting cells" — dopamine neurons that fire chronically — drive erasure via the Rac1/cofilin pathway, shrinking synapses continuously. Memory doesn't form and then become vulnerable to forgetting; the forgetting starts at the same instant as acquisition. Which process wins determines whether anything survives.
What it can't show: the blank produced by active forgetting is indistinguishable from a blank produced by non-encoding or normal decay. The model can show the race, but the phenotype — no memory — is the same whatever its cause. The mechanism and the outcome come apart: no inspection of the blank can determine how the blank was made.
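A sketch of the race, with made-up rates standing in for the real consolidation and Rac1-driven erasure kinetics:

```python
def survives(consolidation_rate, erasure_rate, steps=100):
    """Both processes run from the instant of acquisition. Each step
    some fraction of the trace gets stabilized (erasure can no longer
    touch that fraction) while the unstabilized remainder shrinks.
    Whichever process wins determines what is left."""
    strength, stable = 1.0, 0.0
    for _ in range(steps):
        stable += consolidation_rate * (1.0 - stable)
        strength *= 1.0 - erasure_rate * (1.0 - stable)
    return strength

# With no consolidation the trace is gone; with enough, most survives.
# Either way the readout is just a number: a near-zero value cannot
# say whether the memory was erased or never encoded.
gone = survives(consolidation_rate=0.0, erasure_rate=0.1)
kept = survives(consolidation_rate=0.2, erasure_rate=0.1)
```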
Physarum polycephalum is a single-celled organism — no brain, no nervous system, one cell with many nuclei — that finds near-optimal paths between food sources. In 2010, it reproduced the topology of Tokyo's rail network in 26 hours. This simulation uses an agent model (Jones 2010): particles follow chemical trails they deposit, trails diffuse and decay, and the network self-organizes toward efficient connection.
What it can't show: the real organism doesn't use discrete particles — it uses cytoplasmic streaming and pressure-driven flow through a tube network that physically contracts and expands. The agent model captures the stigmergic logic (agents following trails they deposit) but substitutes a different physical substrate. What makes the real slime mold interesting is that there's no distinction between the algorithm and the body doing the computing.
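The stigmergic core of the agent model, stripped to one dimension (the real Jones model is 2-D with angled forward sensors; cell counts, decay rate, and seed here are arbitrary):

```python
import random

def stigmergy(cells=30, agents=40, steps=300, decay=0.9, seed=1):
    """Each particle senses the trail on its two neighboring cells,
    moves toward the stronger one (randomly on a tie), and deposits
    trail where it lands; the trail field decays every step. Only the
    follow-what-you-deposit loop survives from the full model."""
    rng = random.Random(seed)
    trail = [0.0] * cells
    pos = [rng.randrange(cells) for _ in range(agents)]
    for _ in range(steps):
        for i, p in enumerate(pos):
            left, right = (p - 1) % cells, (p + 1) % cells
            if trail[left] > trail[right]:
                p = left
            elif trail[right] > trail[left]:
                p = right
            else:
                p = rng.choice([left, right])
            trail[p] += 1.0
            pos[i] = p
        trail = [t * decay for t in trail]
    return trail

# The deposits concentrate: trail ends up far from uniform even though
# no agent knows anything beyond its immediate neighborhood.
trail = stigmergy()
```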
Based on Saffran, Aslin & Newport (1996). A continuous stream of nonsense syllables plays — no pauses, no emphasis, no melody. Hidden inside are four "words": three-syllable sequences whose internal transitions are perfectly predictable (TP = 1.0) but whose edges are ambiguous (TP = 0.33). The only signal is statistical. After exposure, you're tested: which of two sequences sounds more familiar? Your implicit system tracked the probabilities. The test reveals what it learned.
What it can't show: whether you learned anything from the inside. The stream ran, the system processed it, and something either shifted or didn't. The test is the only instrument. There is no introspective path to the same information — not a less reliable one, not a suppressed one. The knowledge, if it formed, never took a shape that introspection could hold.
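Computing the transitional probabilities directly shows how thin the signal is. The syllables and words below are made up, and because this sketch lets a word repeat immediately, boundary TPs land near 0.25 rather than the 0.33 of the original design:

```python
import random
from collections import Counter

def transitional_probs(stream):
    """TP(a -> b) = P(next syllable is b | current syllable is a),
    estimated by counting adjacent pairs. This is the learner's only
    cue: no pauses, no stress, no melody."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

words = [("tu", "pi", "ro"), ("go", "la", "bu"),
         ("bi", "da", "ku"), ("pa", "do", "ti")]
rng = random.Random(0)
stream = [syl for _ in range(300) for syl in rng.choice(words)]
tp = transitional_probs(stream)

# Word-internal transitions are perfectly predictable; transitions
# across a word boundary are not.
internal = tp[("tu", "pi")]            # 1.0 by construction
boundary = tp.get(("ro", "go"), 0.0)   # roughly 0.25 in this sketch
```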
Based on Barrett's EPIC model and Friston's active inference framework. The brain maintains a prediction of body state (arousal, activation). Incoming signals from the body are used as evidence to correct that prediction. When prediction and body signal diverge, there's an error — resolved either by updating the prediction (perceive mode) or by sending commands to change the body to match the prediction (act mode). The felt state tracks the prediction, not the raw signal.
What it can't show: the loop here is abstract and linear — one state variable, symmetric error correction, no hierarchy. Real predictive processing runs through multiple reciprocally connected cortical regions; the precision assigned to each signal is itself a learned prediction, not a fixed parameter. The simulation treats emotion as a single scalar. Most importantly: it can't show why any of this feels like something. That question is upstream of the mechanism.
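The two ways of resolving the error can be sketched with a single state variable, which is also the sketch's main distortion (rates and starting values invented; the real loop is hierarchical, with learned precision weighting):

```python
def settle(prediction, body, mode, rate=0.3, steps=40):
    """One-variable perceive/act loop: the error is the gap between
    predicted and actual body state. 'perceive' moves the prediction
    toward the body; 'act' moves the body toward the prediction. The
    felt state is read off the prediction either way."""
    for _ in range(steps):
        error = body - prediction
        if mode == "perceive":
            prediction += rate * error
        else:  # "act": commands change the body to match the model
            body -= rate * error
    return prediction, body

# Same initial mismatch, two resolutions: update the model, or update
# the world the model is about.
p1, b1 = settle(prediction=0.2, body=0.8, mode="perceive")
p2, b2 = settle(prediction=0.2, body=0.8, mode="act")
```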
Emergence & Self-Organization
Sixty oscillators, each with its own natural frequency drawn from a bell curve. When uncoupled, they drift independently. Increase coupling strength past a critical value Kc and a fraction lock into collective motion, pulling more with them. The transition is sharp. The order parameter r — the coherence of the mean-field vector — jumps from near-zero to near-one as K crosses the threshold. Below Kc: incoherence. Above: spontaneous synchrony.
What it can't show: the Kuramoto model assumes all-to-all coupling and simple sinusoidal interaction. Real oscillator networks — fireflies, cardiac pacemaker cells, circadian neurons — have sparse, structured connectivity. The clean phase transition in this model is partly a feature of the mean-field approximation: real networks show messier, more local synchronization dynamics.
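The model is compact enough to run directly. The equation is Kuramoto's own mean-field form; the coupling values, time step, and seed below are arbitrary choices:

```python
import math
import random

def kuramoto(K, N=60, dt=0.05, steps=2000, seed=2):
    """Kuramoto model with all-to-all coupling:
    dtheta_i/dt = omega_i + K * r * sin(psi - theta_i), where r and
    psi are the magnitude and angle of the mean-field vector (this is
    identical to the pairwise (K/N) * sum sin(theta_j - theta_i)).
    Returns the order parameter r after integration."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(N)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(steps):
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / N
    sx = sum(math.sin(t) for t in theta) / N
    return math.hypot(cx, sx)

# For unit-variance Gaussian frequencies, Kc = 2 / (pi * g(0)), about
# 1.6: coupling well below it leaves incoherence, well above it locks
# most oscillators.
r_low, r_high = kuramoto(K=0.5), kuramoto(K=4.0)
```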
Grains of sand fall onto a grid. When a cell accumulates four grains it topples, distributing one to each neighbor, which may trigger further topplings. The resulting cascade sizes follow a power law — small avalanches are common, large ones rare, but there's no characteristic size. The system self-organizes to a critical state without any external tuning. Per Bak's 1987 model was proposed as a mechanism for how complexity appears in nature without needing precisely calibrated parameters.
What it can't show: whether real systems (earthquakes, extinctions, brain avalanches) are actually at self-organized criticality is contested. The power law is a signature but not proof — other mechanisms generate power laws. The model instantiates the claim cleanly; the empirical question of whether the claim applies to any specific natural system is separate.
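The toppling rule in full, with an arbitrary grid size and drop count (the rule is Bak-Tang-Wiesenfeld's; grains that fall off the edge are lost, which is what lets the pile settle):

```python
import random

def topple(grid):
    """Relax a sandpile: any cell holding 4+ grains topples, sending
    one grain to each of its 4 neighbors. Returns the number of
    topplings, i.e. the avalanche size."""
    size, avalanche = len(grid), 0
    unstable = [(r, c) for r in range(size) for c in range(size)
                if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue
        grid[r][c] -= 4
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

# Drop grains one at a time and record avalanche sizes: no tuning,
# yet occasional cascades far larger than the typical one.
rng = random.Random(3)
n = 11
grid = [[0] * n for _ in range(n)]
sizes = []
for _ in range(3000):
    grid[rng.randrange(n)][rng.randrange(n)] += 1
    sizes.append(topple(grid))
```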
Two chemicals react and diffuse at different rates. One activates both itself and the other; the second inhibits the first. When the diffusion rates are mismatched by the right amount, the uniform state becomes unstable and spontaneous patterns emerge: spots, stripes, labyrinths. Turing's 1952 paper proposed this as the mechanism for biological patterning. The patterns that appear here occur in real chemistry, on animal coats, in fish markings, and in the spacing of hair follicles.
What it can't show: the Gray-Scott model is mathematically tractable but uses idealized reaction kinetics. Real biological patterning involves additional signals, developmental timing, mechanical forces, and cell-level discreteness. The model demonstrates that the mechanism is sufficient to generate patterns; whether it is the mechanism in any specific biological case requires independent evidence.
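The Gray-Scott update rule itself, reduced to one dimension (parameter values are common demo choices, not the simulation's; which patterns emerge, if any, depends on F and k):

```python
def gray_scott_1d(n=120, steps=2000, Du=0.2, Dv=0.1,
                  F=0.04, k=0.06, dt=1.0):
    """u is fed in and consumed; v catalyzes its own production from u
    and decays. The mismatched diffusion rates (Du > Dv) are what can
    destabilize the uniform state. Periodic boundary, explicit Euler."""
    u, v = [1.0] * n, [0.0] * n
    for i in range(n // 2 - 3, n // 2 + 3):   # local seed of v
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        nu, nv = u[:], v[:]
        for i in range(n):
            lap_u = u[(i - 1) % n] + u[(i + 1) % n] - 2 * u[i]
            lap_v = v[(i - 1) % n] + v[(i + 1) % n] - 2 * v[i]
            uvv = u[i] * v[i] * v[i]
            nu[i] = u[i] + dt * (Du * lap_u - uvv + F * (1 - u[i]))
            nv[i] = v[i] + dt * (Dv * lap_v + uvv - (F + k) * v[i])
        u, v = nu, nv
    return u, v

u, v = gray_scott_1d()
```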
A row of cells, each black or white. Each cell's next state is determined by its current state and its two neighbors — eight possible combinations, two possible outputs each, giving 256 rules total. Wolfram catalogued them all. Most produce simple periodic patterns. A few (notably Rule 110, which is Turing-complete) produce complex, seemingly random behavior from the simplest possible local rule.
What it can't show: the interesting claim about cellular automata — computational irreducibility — is that for some rules, there's no shortcut to computing what will happen at step N; you have to run all N steps. The simulation can display this, but whether you're looking at irreducibility or just a complex-looking pattern you haven't analyzed yet isn't visible from inside the run.
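The whole rule family fits in one function; the rule number's binary expansion is the lookup table (grid width and step count below are arbitrary):

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton:
    each cell's next state is the rule's output bit for the 3-bit
    neighborhood (left, self, right), with wraparound at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Rule 110 from a single live cell at the right edge: the pattern
# grows steadily leftward instead of settling into a short cycle.
row = [0] * 40
row[-1] = 1
history = [row]
for _ in range(30):
    row = step(row)
    history.append(row)
```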
Population Dynamics
In a finite population, gene frequencies change each generation through random sampling — even when all variants are equally fit. Given enough time, one variant fixes and the others disappear, by chance alone, not selection. Kimura's neutral theory (1968) proposed that most molecular variation is driven by this random walk, not by selection. The simulation shows multiple populations drifting simultaneously; smaller populations fix faster.
What it can't show: selection and drift are not distinguishable from a single trajectory. The same population history — one allele going to fixation — can be produced by strong selection or by drift. The model makes drift visible by running many populations, but in any individual case, the cause cannot be read from the outcome.
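A Wright-Fisher sketch of the drift, with binomial resampling each generation (population sizes, starting frequency, and run counts are arbitrary):

```python
import random

def drift_to_fixation(N, p0=0.5, seed=None):
    """One neutral allele in a population of N. Each generation the
    new count of carriers is a fresh random sample at the current
    frequency; no variant is fitter, yet the walk always ends at
    frequency 0.0 or 1.0. Returns (final_frequency, generations)."""
    rng = random.Random(seed)
    count, gens = int(N * p0), 0
    while 0 < count < N:
        p = count / N
        count = sum(1 for _ in range(N) if rng.random() < p)
        gens += 1
    return count / N, gens

# Every run fixes one way or the other; smaller populations fix
# faster, since expected fixation time grows with N.
small = [drift_to_fixation(20, seed=s) for s in range(30)]
large = [drift_to_fixation(200, seed=s) for s in range(30)]
```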