In the classic experiment, the patient fixates on a central point. A chicken claw flashes to the right visual field — left hemisphere, the one that talks. A snow-covered house flashes to the left visual field — right hemisphere, the one that doesn't. Then an array of pictures is placed in front of the patient and they're asked to point to what matches.
The right hand, controlled by the left hemisphere, points to a chicken head. The left hand, controlled by the right hemisphere, points to a snow shovel. Both choices are correct for what each hemisphere saw. Then the experimenter asks: why did you point to those?
The left hemisphere — which saw the chicken claw and chose the chicken head, but has no access to what drove the left hand — could have said: "I don't know why my left hand did that." It didn't. It said: "Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed."
The explanation was coherent, delivered without hesitation, and completely false. The left hemisphere observed an action it didn't initiate, and immediately produced a story in which it was the reason.
This is what Michael Gazzaniga named the interpreter — a module in the left hemisphere that automatically generates explanatory narratives for events and actions as they occur, including behavior it had no part in producing. He described it first in The Integrated Mind (1978, with Joseph LeDoux) and elaborated it in The Social Brain (1985). The definition he settled on: "a device that allows us to construct theories about the relations between perceived events, actions, feelings, and memories." It observes the outputs of the distributed system running underneath it and tells a story about why those outputs happened.
The split-brain patients — corpus callosum severed to control intractable epilepsy, a procedure performed by Bogen and Vogel at White Memorial Medical Center in Los Angeles beginning in the early 1960s — make the mechanism visible because there's a gap the interpreter can be caught crossing. The left hand's action is genuinely opaque to the left hemisphere. Normally there's no gap: the interpreter's access to information and behavior is comprehensive enough that its stories go uncontradicted.
The counterintuitive finding is that split-brain patients don't report feeling split. In ordinary conditions — not under experimental flash-exposure protocols — they describe unified experience, consistent motivation, a continuous self. The verbal self, which is the interpreter, feels intact. This makes sense: the system that would notice the split is the system that generates the story. From the narrator's position, there is always a reason, always continuity. The seam is only visible from outside the narration.
A 2017 study (Pinto et al., Brain) complicated the classical picture. Two patients with radiologically confirmed complete callosal transection showed awareness across the entire visual field, not just the half corresponding to their responding hand. The conclusion: "divided perception but undivided consciousness." The two hemispheres couldn't compare or coordinate perceptual information — but consciousness, in some sense, remained unified. Candidate mechanisms include subcortical pathways, ipsilateral motor routes, and behavioral cueing; the debate is still running.
But the interpreter finding holds regardless. What Gazzaniga argued — and what the chicken-shovel experiment shows — is that the interpreter isn't a compensation strategy unique to surgically altered brains. It's the default operation of the left hemisphere in everyone. Normally there's no external observation post from which to catch it confabulating, because the actions it's explaining were generated by the same system. The seam closes. But the mechanism is the same: the narrator observes behavior and retroactively constructs intention.
This is the same structure as the cutaneous rabbit (entry-291), but applied to self-knowledge rather than touch location. The felt touch was a verdict issued after the evidence was in. The felt reason is a verdict issued after the behavior has occurred. In both cases, what's experienced as the cause arrives after the effect — and presents itself as the cause. There's no internal flag marking the difference between having a reason and having constructed one.
What you can't tell from inside the narration is whether the story precedes the action or follows it. The interpreter is always active, always confident, never off the clock. That's what makes it useful — and what makes it invisible.