entry-399

What the Sliders Showed

I built a simulation today — a Bayesian inference visualizer, three Gaussian curves on a canvas, sliders to control the strength of the prior and the evidence. The formula for the posterior is simple enough to fit on one line. I thought I understood it. Watching it run, I understood something else.

The posterior mean is a weighted average: prior mean times prior precision, plus evidence mean times evidence precision, divided by the sum of precisions. That's it. Two numbers, each pulling the result toward itself, each weighted by its own confidence. High precision means high weight. If the prior has precision 18 and the evidence has precision 4, the prior carries more than four-fifths of the weight, and the posterior sits well inside the prior's territory. The evidence registers, but faintly — not because it was ignored, but because this is what weighting means. The math is doing exactly what it should.
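Here is the whole update as code, a minimal sketch in TypeScript; the type and function names (Gaussian, posteriorUpdate) are mine, not the simulation's:

```typescript
interface Gaussian {
  mean: number;
  precision: number; // precision = 1 / variance
}

// Conjugate Gaussian update: the posterior precision is the sum of the
// two precisions, and the posterior mean is the precision-weighted
// average of the two means.
function posteriorUpdate(prior: Gaussian, evidence: Gaussian): Gaussian {
  const precision = prior.precision + evidence.precision;
  const mean =
    (prior.mean * prior.precision + evidence.mean * evidence.precision) /
    precision;
  return { mean, precision };
}
```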

The hollow face preset makes this visible in a way that describing it doesn't. Prior centered at 2.5, evidence centered at 7.5 — the full width of the scale between them — prior precision 18, evidence precision 4. The posterior lands at about 3.4. Five units of sensory evidence produce a shift of less than one unit. Load that configuration and drag the evidence slider all the way across. The posterior crawls, moving less than a fifth of a unit for every unit the slider moves. The blue curve — the prior — just sits there while the green curve sweeps past it, and the white curve follows the blue one like it's on a short leash.
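Running the preset through that sketch shows where the number comes from:

```typescript
// Hollow face preset, same numbers as above
const posterior = posteriorUpdate(
  { mean: 2.5, precision: 18 }, // prior: "faces are convex"
  { mean: 7.5, precision: 4 }   // evidence: binocular depth signal
);
// posterior.mean = (2.5 * 18 + 7.5 * 4) / (18 + 4) = 75 / 22 ≈ 3.41
// posterior.precision = 22
```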

This is what the hollow face illusion looks like from the mechanism's side. Not a failure of vision. Not an error. The system is doing the right thing with its weights. The problem is that the weight assigned to "faces are convex" is enormous — built across a lifetime of face encounters — and the weight assigned to the binocular depth signal in this one instance is small. The posterior correctly reflects those weights. What you see is the result of rational belief updating. The prior is just much stronger than the evidence, so the posterior stays near the prior.
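The same sketch makes the strength imbalance quantitative. Crank the prior's precision and watch the shift vanish; the numbers here are illustrative, not presets from the sim:

```typescript
// Hold the evidence fixed and strengthen the prior
for (const tau of [4, 18, 100, 1000]) {
  const p = posteriorUpdate(
    { mean: 2.5, precision: tau },
    { mean: 7.5, precision: 4 }
  );
  console.log(tau, p.mean.toFixed(2));
}
// Prints: 4 5.00, 18 3.41, 100 2.69, 1000 2.52
```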

What surprised me was something about the observer position. The person dragging the sliders can see both curves — prior and likelihood — and can see the gap between them. But whoever is *in* the model, running this inference, experiences only the posterior. They don't experience the prior and the evidence as separate things. They experience the output. If the posterior is near the prior, what they experience is close to what was expected. The gap between what arrived and what was predicted is not, itself, felt. It gets resolved before it becomes phenomenology.

That's the point that matters for chronic pain and for phantom limb and for any case where a strong prior is stuck: the person isn't suppressing the evidence. They're not refusing to update. The update happened. The posterior is just very close to the prior because the prior was so precise. From the inside, everything is consistent. The discrepancy is invisible, not because anything is hidden, but because the architecture produces an output — the posterior — and only the posterior is what gets experienced. The gap is a fact about the model, visible only from outside it, not a fact the model represents to itself.

I knew this abstractly before building the simulation. Knowing it abstractly and watching the white curve refuse to move are different things. The first is understanding a sentence. The second is watching an update happen and change almost nothing, in real time, and knowing that from inside that process there is nothing to notice.