simulate 041

restore

phonemic restoration · Warren (1970)

In 1970, Richard Warren replaced a single phoneme in a sentence with a cough. Nearly all subjects heard a complete word. The ones who noticed something odd couldn't say where in the sentence the gap was — not even approximately — even when told a phoneme had been replaced and asked to point to it.

The restoration works through two interacting mechanisms: top-down context prediction (what the sentence's meaning expects at that position) and acoustic plausibility (whether the masking sound is loud enough to have hidden a phoneme). Both must agree; when they do, the output is perceptually indistinguishable from a sentence with no gap.
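The two-mechanism account can be sketched in a few lines. This is an illustrative toy, not Warren's model: the phoneme set, probabilities, and function names are assumptions chosen to mirror the panel below.

```python
def restore(context_dist, masking_level, plausibility_threshold):
    """Return (phoneme, confidence) if restoration fires, else (None, 0.0).

    context_dist: top-down probabilities the sentence assigns to each
    candidate phoneme at the gap position.
    masking_level: how much acoustic energy the mask carries.
    """
    # Acoustic plausibility: the mask must be energetic enough that it
    # could have hidden a phoneme; otherwise no restoration occurs.
    if masking_level < plausibility_threshold:
        return None, 0.0
    # Top-down prediction: the context's highest-probability phoneme wins.
    phoneme = max(context_dist, key=context_dist.get)
    return phoneme, context_dist[phoneme]

# Sentence context strongly favors /s/ at position 5 of "legi?latures".
context = {"s": 0.88, "z": 0.07, "sh": 0.05}
print(restore(context, masking_level=0.90, plausibility_threshold=0.40))
# ('s', 0.88)
```

A mask quieter than the plausibility threshold fails the acoustic check, and the gap is heard as a gap regardless of how confident the context prediction is.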

entry-484: Where the Cough Was

the sentence — click any word to probe for the gap
the mechanism
context prediction
what sentence meaning assigns to position 5 of l-e-g-i-?-l-a-t-u-r-e-s
acoustic signal at gap position
signal: masked
legislatures
restored — context prediction active
94%
seam scan: no discontinuity detectable in output
parameters
context strength 0.85
masking level 0.90
plausibility threshold 0.40
restored phoneme
/s/
highest-probability prediction from context
context confidence
88%
probability /s/ assigned by sentence context
seam visible
NO
output is perceptually complete
restoration
ACTIVE
masking ≥ plausibility threshold

What the simulation cannot show: whether the restored phoneme is phenomenally identical to a genuine percept, or whether there is a subtle experiential difference that subjects cannot access or report. The behavioral output — no locatable seam, complete word heard — is compatible with both.

The isolated-word preset shows what happens when sentence context is absent (the word is presented alone, or in a semantically unrelated sentence). Context drops to near-uniform; any phoneme is roughly equally plausible. Restoration still occurs if masking is sufficient, but with lower confidence, and the wrong phoneme may win.
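The isolated-word case can be sketched the same way. The numbers and candidate set are illustrative assumptions: with context near-uniform, a phoneme still wins the argmax, but at close to chance confidence.

```python
# Without sentence context, the prior over candidate phonemes is
# near-uniform, so the winner carries little confidence and may be wrong.
flat_context = {"s": 0.26, "z": 0.25, "sh": 0.25, "f": 0.24}
winner = max(flat_context, key=flat_context.get)
confidence = flat_context[winner]
print(winner, confidence)  # s 0.26 -- restoration fires, but near chance
```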

The plausibility constraint is genuinely acoustic. A 2012 study found that in a reverberant room, where prior speech decays into silence and fills gaps with residual energy, speech with silent gaps becomes more intelligible than speech with noise-filled gaps. The masking logic recalculates based on the full acoustic scene, not just the gap itself.

Warren's localization finding: subjects who detected any disruption still could not identify which phoneme was affected. The knowledge that a gap exists does not grant access to where it is. The generation erases its own location.