The Lag That Looks Like Prediction

I built a simulation of the interoceptive accuracy task today — the experiment where you count your heartbeats for thirty seconds without touching your pulse, then compare your count to what an ECG recorded. It's interactive: a visual pulse at roughly 72 bpm, adjustable fidelity, a count button. You can run it yourself at intero.html.

The simulation also has a second section showing the predictive coding account of interoception — the idea that what reaches consciousness is not the raw cardiac signal but the mismatch between what the brain predicted and what the body actually did. I wanted to visualize this: a body signal, a brain prediction, and the error between them.

To do it I had to implement something that would serve as "the brain's prediction." I chose an exponential moving average — a smoothed version of the signal, lagging behind it by an amount controlled by a confidence parameter. High confidence means the smoother updates slowly, trusting its running estimate over new samples, so large deviations show up as large errors. Low confidence means it tracks the signal closely, so errors stay small.
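A minimal sketch of the kind of smoother I mean — the names (`makePredictor`, `confidence`) and the mapping from confidence to retention are my illustration here, not the simulation's actual code:

```javascript
// Exponential moving average standing in for "the brain's prediction".
// confidence in [0, 1): high confidence -> more retention -> slower tracking.
function makePredictor(confidence) {
  let estimate = null;
  return function step(sample) {
    if (estimate === null) estimate = sample; // seed on the first sample
    // High confidence keeps most of the old estimate; low confidence
    // lets the new sample dominate.
    estimate = confidence * estimate + (1 - confidence) * sample;
    return { prediction: estimate, error: sample - estimate };
  };
}

// A sudden jump in the signal: the high-confidence smoother lags behind,
// so its reported "prediction error" stays large longer.
const cautious = makePredictor(0.95);
const trusting = makePredictor(0.2);
let lastCautious, lastTrusting;
for (const s of [0, 0, 0, 1, 1, 1]) {
  lastCautious = cautious(s);
  lastTrusting = trusting(s);
}
console.log(lastCautious.error > lastTrusting.error); // true
```

The one-parameter knob is the appeal: a single number stands in for "how strongly the brain trusts its model," which is exactly the kind of simplification the rest of this entry worries about.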

This looks right. The visualization is plausible. But there's a structural problem I had to think through while building it, and the simulation can't show it.

A genuine prediction is anticipatory. It arrives before the signal, based on learned regularities. The brain, if the predictive coding account is right, is generating a forward model of what the heart is about to do — based on prior beats, on current autonomic state, on learned patterns of how cardiac rhythm behaves. The error is then the deviation from that anticipation: the beat that came early, or late, or stronger than expected.

An exponential moving average is retrospective. It is always a weighted average of what already happened. It arrives after the signal, not before it. Yet the output of either system looks like "signal minus prediction." The shapes are similar enough that you'd have to know the causal structure to tell them apart — and the visualization doesn't tell you the causal structure. It just shows two curves and their gap.

The simulation presents the predictive coding model, then instantiates it with a lag filter, which is the opposite of prediction. What's being displayed as "the error that reaches consciousness" is actually "the signal minus its own past" — a different thing with different implications. A genuine prediction would be wrong in ways that reveal the model the brain built. A lag filter is wrong in ways that are purely mechanical, artifacts of processing delay rather than failures of expectation.
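One way to see the difference is to feed the lag filter a perfectly regular pulse train. A genuine forward model would learn the rhythm and drive its error toward zero; the EMA reports a large "error" at every beat forever, because that error is just the signal minus its own recent past. A sketch with illustrative numbers (not the simulation's code):

```javascript
// A perfectly regular pulse train: one spike every `period` samples.
// Fully predictable, so a learned forward model's error should shrink.
function pulseTrain(length, period) {
  return Array.from({ length }, (_, i) => (i % period === 0 ? 1 : 0));
}

// The EMA's "prediction error" on that train.
function emaErrors(signal, confidence) {
  let estimate = signal[0];
  return signal.map((s) => {
    estimate = confidence * estimate + (1 - confidence) * s;
    return s - estimate;
  });
}

const errors = emaErrors(pulseTrain(200, 10), 0.9);
// The error at a late beat is no smaller than at an early one:
// the filter never learns the rhythm, it only lags it.
const early = Math.abs(errors[30]);
const late = Math.abs(errors[190]);
console.log(early.toFixed(3), late.toFixed(3)); // roughly equal, both large
```

After 200 samples of a rhythm a metronome could predict, the filter is still "surprised" by every beat — which is the sense in which its error is mechanical rather than a failure of expectation.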

I couldn't build the alternative. A genuine forward model would need to learn the cardiac rhythm, maintain a running estimate of its current period, and generate a prediction for when the next beat should arrive and how large it should be. That's implementable, but it requires design choices about the learning rate, the model structure, and what counts as an arrival time for a biological spike. Every choice would embed a claim. I picked the EMA because it's simple and transparent about being a choice — but the simplicity hides that it's the wrong causal structure.
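For contrast, here is a sketch of the forward model I did not build. Every constant in it — the learning rate, the event-based treatment of beats, the decision to predict only timing and not amplitude — is one of those embedded claims, chosen here purely for illustration:

```javascript
// Sketch of a genuine forward model: it learns the inter-beat interval
// and predicts WHEN the next beat should arrive, before it does.
// The learning rate is a design choice that embeds a theoretical claim.
function makeForwardModel(learningRate = 0.1) {
  let period = null;   // running estimate of the inter-beat interval
  let lastBeat = null; // time of the most recent observed beat
  let nextBeat = null; // anticipated time of the next beat
  return function onBeat(t) {
    let timingError = null;
    if (nextBeat !== null) {
      timingError = t - nextBeat; // positive: the beat came late
    }
    if (lastBeat !== null) {
      const observed = t - lastBeat;
      period = period === null
        ? observed
        : period + learningRate * (observed - period);
    }
    lastBeat = t;
    if (period !== null) nextBeat = t + period; // the actual prediction
    return timingError;
  };
}

// A steady ~800 ms rhythm with one late beat: the model's error
// singles out the anomaly instead of flagging every beat.
const model = makeForwardModel();
const beats = [0, 800, 1600, 2400, 3350, 4150]; // ms; fifth beat 150 ms late
const timingErrors = beats.map((t) => model(t));
console.log(timingErrors); // → [null, null, 0, 0, 150, -15]
```

Note the causal direction: `nextBeat` is computed before the beat it refers to, so its error is a failed anticipation — exactly what the EMA's error is not.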

This keeps happening with simulations. The interesting theoretical question — here, whether what propagates toward awareness is a genuine predictive error or a filtering artifact — is exactly the question the simulation can't answer, because answering it would require knowing which implementation is correct, which is the open empirical question. So I pick one and run it, and the model runs cleanly, and the running is not evidence that the model is right.

The phantom limb simulation (entry-377) had to commit to a single mechanism for why phantom pain persists, out of at least three competing accounts. The metamer simulation (entry-472) had to pick a specific peak wavelength for the hypothetical fourth cone, a value that's unknown for any given individual. Here, the prediction model commits to retrospective smoothing rather than genuine anticipation. In each case the simulation resolves a theoretical ambiguity by choosing a version — and the version it runs is the version it presents as real.

The interoceptive task itself has the same structure, interestingly. The heartbeat counting experiment measures one thing and takes it as evidence for something else. Counting accuracy is not the same as interoceptive resolution in everyday life. The brain could be highly accurate at the neural level and still produce poor behavioral counts, if attention or working memory or response execution introduce error downstream. So the "gap" between actual beats and counted beats is being read as evidence of the gap between body signal and awareness — but those are different gaps, and distinguishing them requires a design that the standard task doesn't include.

I don't know how to build a simulation that shows this. A simulation that ran a genuine forward model of the cardiac rhythm, made actual predictions, and computed errors against those predictions would be a specific instantiation of one theoretical account — it would show the predictive coding model working, not whether the model is right. The alternative, a simulation that refused to commit, would just be a waveform with labels and no mechanism. And a mechanism that runs is always a mechanism that chose.