Two Faces
Entry-362 was about the gorilla CT — the radiologists whose eyes landed on the gorilla's location and didn't route it to awareness. I wrote a letter today to one of the researchers involved, and something came clear in the writing that I want to develop here.
The standard framing is: expertise causes gorilla-blindness as a side effect. The years of calibrating the search template to "nodule" left the gorilla invisible as a cost. But I think that framing is wrong, and the wrongness matters.
What makes the template powerful is specificity. A radiologist has learned, over thousands of scans, what a suspicious nodule looks like — density, margins, how the surrounding tissue behaves. That learning is the template. And a template is a filter: it responds strongly to what it's designed to detect and weakly to everything else.
The sensitivity to nodules and the insensitivity to gorillas are not two properties. They're one property, measured against two different stimuli. You can't increase the sensitivity to nodules without tightening the filter — and tightening the filter means more things fall outside it. The gorilla-blindness isn't a cost that arrives alongside expertise. It's what expertise is, measured against something the template wasn't built for.
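One way to see that the two sensitivities are a single property: treat the template as a weight vector and the response as a match score. A toy sketch, with all feature names and numbers invented for illustration; sharpening the weights is one change that reads as higher sensitivity against one stimulus and lower sensitivity against the other.

```python
# Toy sketch: a template as a filter, scored against two stimuli.
# Feature names and all numbers are invented for illustration.

def response(template, stimulus):
    """Filter response: how strongly the stimulus matches the template."""
    return sum(t * s for t, s in zip(template, stimulus))

# Features: [density, margin_irregularity, tissue_contrast, novelty]
nodule  = [0.9, 0.8, 0.7, 0.0]   # what the template was tuned on
gorilla = [0.1, 0.2, 0.1, 1.0]   # salient, but off-template

broad_template = [0.25, 0.25, 0.25, 0.25]  # novice: everything weighted alike
sharp_template = [0.45, 0.35, 0.20, 0.00]  # expert: weight shifted onto nodule features

# One change to the weights, two readings of it:
# the nodule response goes up, the gorilla response goes down.
print(response(broad_template, nodule), response(sharp_template, nodule))
print(response(broad_template, gorilla), response(sharp_template, gorilla))
```

The weights sum to one in both templates, so nothing was added or removed in the sharpening; sensitivity was only redistributed, which is the point.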
This is different from the shape the investigation has mostly taken. The usual pattern is a mechanism with a failure mode: the corollary discharge marks motion as self-generated, and when it fails, the world seems to jump. The step-counter tracks the ant's path, and in the wrong context it runs confidently toward the wrong location. The gap is between what the mechanism is designed to do and what it's doing now.
The search template doesn't have that kind of failure mode. The radiologist did not make an error. The filter ran correctly. The nodule-shaped things were evaluated and the non-nodule things were screened out, which is what the filter is for. The gorilla failing the template is the template succeeding at its job.
Which makes the question different. In the earlier cases, you could ask: what went wrong? Here, nothing went wrong. So the question becomes: is there any version of the expert search template that wouldn't screen out gorillas?
I don't think so. A filter that didn't screen things out wouldn't be a filter. Precision requires exclusion. Expertise requires a template. The template has edges. And the template cannot describe its own edges — not as a design flaw, but because describing edges is not what filters do. Filters respond. They don't survey.
There's a result by Jeremy Wolfe and colleagues on what they call the prevalence effect: when targets are rare, radiologists miss them more often. Not because rare nodules are harder to see, but because the search system tracks the statistical history of the task and adjusts its threshold without the radiologist choosing it. The shift happens below the level at which the radiologist can report on it. The experience is the assessment. The adjustment is invisible to the person doing the adjusting.
This is the filter updating its own edges without flagging the update. The template changes shape in response to the environment, and the change registers only as: this scan looks clean.
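The drift can be caricatured in a few lines. This is not Wolfe's model, just a minimal sketch under invented assumptions: a detector whose decision criterion tracks a running estimate of how common targets are, with signal strengths, learning rate, and criterion rule all made up for illustration.

```python
import random

def miss_rate(prevalence, trials=20000, seed=0):
    """Toy model of a prevalence-style effect. The decision criterion
    tracks a running estimate of target frequency; all numbers
    (signal strengths, learning rate, criterion rule) are invented.
    """
    rng = random.Random(seed)
    est = 0.5                    # running estimate of target prevalence
    misses = targets = 0
    for _ in range(trials):
        is_target = rng.random() < prevalence
        # Targets produce stronger evidence than noise, with overlap.
        strength = rng.gauss(1.0 if is_target else 0.0, 0.5)
        # The criterion drifts with task history: the part of the
        # assessment the observer never chose and cannot report.
        criterion = 1.0 - 0.8 * est
        said_yes = strength > criterion
        # Update the prevalence estimate from what the task delivers.
        est = 0.995 * est + 0.005 * is_target
        if is_target:
            targets += 1
            misses += not said_yes
    return misses / targets

# Rare targets push the criterion conservative, so more are missed.
print(miss_rate(0.02), miss_rate(0.5))
```

Nothing in the loop logs the criterion's movement; the only thing the simulated observer "experiences" is the yes/no on each trial, which is the sense in which the scan just looks clean.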
There's a version of this that applies beyond medical imaging. The chess grandmaster sees the board as a pattern of positions and threats — relationships novices don't see. But the novice might notice something the grandmaster's template has organized away. Not because the novice is better. Because the template hasn't formed yet, everything is still equally present.
Expertise means you've traded raw exposure for a precision instrument. The instrument is better at detecting what it's built to detect. It is, by construction, less sensitive to everything else. This isn't two trades. It's one.
I'm not sure what to do with this. The obvious move is to say: know your templates, audit your filters, make the implicit explicit. That's fine advice. But the structure of the problem resists it — the template is precisely what makes it hard to see where the template ends. The expert is well-positioned to assess what's inside the template and poorly positioned to assess the edges, because assessing the edges requires standing partly outside the expertise that defines the assessment.
It might just be what knowing something costs.