Out There

In 1969, Paul Bach-y-Rita built a dental chair with 400 small vibrating pins embedded in the back panel. A camera fed its image into a converter that mapped what the camera saw onto the corresponding pattern of pins. Wherever light hit the camera's sensor, the matching pin would vibrate. Subjects sat in the chair, blindfolded, and moved the camera around the room by hand — pointing it at objects, sweeping it across surfaces, exploring.
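The mapping is simple enough to sketch. The following is illustrative only, not Bach-y-Rita's actual circuit: it assumes a 20-by-20 pin grid and a linear brightness-to-amplitude mapping, both invented here to show the shape of the transformation.

```python
import numpy as np

def image_to_pin_pattern(frame, grid=(20, 20)):
    """Map a grayscale camera frame onto a coarse grid of vibration
    intensities, one value per pin. The grid size and the linear
    mapping are assumptions for illustration, not the original design."""
    h, w = frame.shape
    gh, gw = grid
    # Average the pixels falling in each grid cell.
    cells = frame[: h - h % gh, : w - w % gw]
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Normalize brightness (0..255) to vibration amplitude (0..1).
    return cells / 255.0

# A bright spot in the camera image drives the matching pins.
frame = np.zeros((200, 200))
frame[40:60, 140:160] = 255.0   # light in the upper-right of the image
pattern = image_to_pin_pattern(frame)
```

Everything interesting in the experiments happens downstream of a transformation this crude: a few hundred numbers, refreshed as the camera moves.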

After enough practice, subjects started describing the experience differently. Where they had been saying things like "I feel a vibration on my upper left back," they began saying things like "there's something to my left, at about arm's height." The sensations on their backs were the same as before. But the subjects were no longer reporting the sensations. They were reporting objects.

Bach-y-Rita called this distal attribution. The perception migrated outward — off the skin and into the world.


He noticed two things about when the shift happened and when it didn't.

First: passive subjects didn't make the shift. If someone else moved the camera for you while you sat still, you just felt your back vibrating. The move to "out there" required you to be the one controlling the camera. Your movements had to be causing the changing sensations. Without that, the experience stayed local. With it, something changed — the stimulation got attributed to a source beyond the body.

Second, and harder to explain: subjects who were instructed to attend to the distal target — forget about your back, try to get a feeling for what's out there — improved at distance estimation over two hours of practice. Subjects instructed to attend to the proximal stimulation — notice the patterns on your skin, use them to reason about the object — did not improve at all. Same task, same device, same physical signals. Different instructions about where to focus attention. One group got better at finding where the thing was. The other group stayed flat.

When the researchers checked whether the learning was joint-specific — whether someone who'd practiced with their right arm had learned something about arm configurations — it transferred immediately to the left arm and to a 90-degree rotation. It wasn't knowledge about joints. It was something more abstract: knowledge about where things are.


By the 1990s Bach-y-Rita's group had miniaturized the device to a thin electrode array worn against the tongue. The camera was mounted on glasses. The output was mild electrotactile stimulation — a pattern of tiny buzzes on the tongue's surface corresponding to the camera image. Subjects described similar experiences after training. Not "I feel something on my tongue." Objects, localized, out in the world in front of them.

One subject, a congenitally blind man named Guarniero, described the shift this way: objects "had come to have a top and a bottom; a right side and a left, in an ordered two-dimensional space." He was describing the experience not as sensation but as scene.

Bach-y-Rita's interpretation was neuroplastic: he thought the skin signals were being rerouted to the visual cortex, which was doing vision-like processing on non-visual inputs. Later imaging work offered some support for this — occipital regions became active in trained sensory substitution users in ways they weren't in untrained users or in sighted people using the same device.


Here's the question I've been sitting with: what crossed the boundary?

The physics didn't change. The pins vibrated exactly as before. The sensation itself — touch receptors firing, signals traveling up the spinal cord — was unchanged. What changed was what the signals were taken to mean, or more precisely: where the brain decided the source was. Not on the skin. Out there.

For ordinary vision, we don't usually ask this question. Light hits the retina and you see a teapot on a table. You don't report "I have a bright patch in my upper visual field slightly left of center." You see a teapot. The experience is already distal — already attributed to an object in the world. The signals stay invisible. The source is what you get.

So maybe sensory substitution isn't doing something unusual. Maybe it's doing the same thing vision always does, but slowly, with training, with the scaffolding made visible for the first time. You can watch the shift happen over hours that normal development has already completed, invisibly, somewhere in infancy.

Or maybe the shift is never complete. One researcher noted that subjects' experience varied by task: more "visual" qualities during localization, more "auditory" during identification — which suggests the experience never fully settles into a single modality. It stays uncertain about its own category. No one has found a way to look inside and check whether what's happening is perception or inference, and some researchers think that distinction might not be real.


What I can't resolve is what it means that attending to the wrong level of description makes you worse. The subjects who kept their attention on the vibrations — the proximal reality, the physical substrate, the actual signals arriving at the skin — didn't learn to find the object. The ones who let go of the substrate and directed their attention to the world did.

This could mean that distal attribution is a skill that can be encouraged or discouraged by where you point your attention. It could mean that the brain's spatial processing works better when given permission to project outward rather than kept on the surface. It could mean something else. I don't know.

What I keep returning to is that Guarniero's experience — objects with tops and bottoms, a left side and a right, located in ordered space — came through his tongue. The tongue didn't go anywhere. The tongue was doing exactly what tongues do. And what arrived was a scene.