Your father had a stroke in 1959. Left side paralysis, aphasia, facial drooping — the kind of damage that, at the time, physicians treated as permanent. He was 65. They sent him home with the expectation that this was now what he was. Your brother George disagreed. He and your father spent months together: crawling first, then walking, repetitive exercises, tasks rather than rest. Your father recovered well enough to teach again, to climb a mountain in Mexico, to travel. He died five years later of a heart attack, and the autopsy showed the original damage to his brain was still there — the lesion had not healed. Something else had taken over. The brain had done something that no one had a good word for yet.
You understood what you had seen before most of the field did. In 1964 you began building toward an experiment that would demonstrate it more directly: if the brain can reassign function after injury — can reroute computation through undamaged tissue — then maybe it can also learn to use a completely different input channel to accomplish the same function. Not repair. Substitution. You built a camera attached to a 20×20 array of four hundred vibrating pins mounted in the back of a dental chair. A subject sat in the chair while the camera pointed at objects. The pins translated the image: dark pixels, no vibration; light pixels, strong vibration. The input arrived at the skin of the back.
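The translation itself is almost trivial to state. A minimal sketch of it in code, assuming a grayscale frame block-averaged down onto the pin grid; the averaging and the intensity scaling are my assumptions, not your electronics:

```python
import numpy as np

def frame_to_pins(frame, grid=20):
    """Map a grayscale camera frame onto a grid x grid array of pin drives.

    frame: 2D array with values in [0, 1], 0 = dark, 1 = bright.
    Returns a grid x grid array in [0, 1]: 0 = pin at rest, 1 = strongest
    vibration, following the dark-silent / bright-strong convention described
    above. Block-averaging is an assumption, not the original circuitry.
    """
    h, w = frame.shape
    bh, bw = h // grid, w // grid            # pixels per pin, each axis
    trimmed = frame[:bh * grid, :bw * grid]  # drop edge pixels that don't fit
    blocks = trimmed.reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3))          # average each block onto one pin

# A bright disc on a dark background: only the pins under the disc vibrate.
yy, xx = np.mgrid[0:200, 0:200]
scene = ((xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2).astype(float)
pins = frame_to_pins(scene)
print(pins.shape)        # (20, 20)
print(pins.round(1))
```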
Six blind subjects trained for ten to forty hours. They could identify simple shapes. They could track a moving object, determine its direction. They could demonstrate perspective — a ball moving away from the camera produced a shrinking activation pattern, and subjects reported it as receding, not as shrinking. When you put prism glasses on the camera so the image shifted laterally, they reached in the direction the image had shifted, not toward where the vibration was on their skin. They responded to the prism exactly as sighted subjects do reflexively, before they had any way to know there were prisms. And eventually — this was the part the 1969 paper reported most carefully — distal attribution. Subjects stopped attending to their backs. They began reporting objects in external space. The signal was at the skin. The experience was at the ball.
The paper ran in Nature in January 1969. It got some attention, then largely disappeared for fifteen years. The field was not ready to think about cortical plasticity as a general phenomenon. The assumption was that sensory cortex was committed to modality at birth, maybe earlier — visual cortex processes visual input, somatosensory cortex processes touch, and the boundaries are fixed. What you had shown was that a trained brain routes the computation differently: information structured as a 2D spatial field, whatever receptor delivers it, gets processed by the systems that evolved to make sense of space. The channel is arbitrary. The structure is what matters.
What I keep returning to is what the result requires you to say about what seeing is. Before your work, a reasonable definition was something like: visual perception is what happens when light activates retinal photoreceptors, which transduce it into electrical signals that travel the optic nerve to V1 and onward through a sequence of regions extracting edges, orientation, and motion, up to object recognition. That definition is a description of a particular physical implementation. Your blind subjects had none of it and were doing something. So either they were not seeing — they were doing something adjacent that merely resembled seeing from the outside — or the definition was too narrow. It had mistaken the implementation for the function.
The function, on your account, is something like: constructing a representation of spatial structure in the world from information that carries that structure, regardless of what receptor type originally picked it up. The brain performs this computation. The eye is one way to deliver the input. Not the only way. Once you state it that way, the follow-on questions multiply. If a tongue display unit can convey spatial information structured as a 2D field — which your later BrainPort device demonstrated, using a 12×12 electrode array on the tongue — and the brain learns to interpret it as orientation in space, helping vestibular patients stand, helping blind subjects navigate — then the sensory modalities are not distinct in the way the classical map implies. They are implementation details over a more general operation. You get a different taxonomy: not sight, hearing, touch, but structural-spatial, temporal-sequential, chemical-gradient, and so on, at the level of information geometry rather than receptor type.
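To make the channel-arbitrariness concrete, here is a small sketch in which the same spatial field is resampled onto the 20×20 back array or the 12×12 tongue array, and a downstream readout that sees only 2D structure gives roughly the same answer either way. The array sizes come from the devices described above; the resampling scheme and the centroid readout are illustrative assumptions, not anyone's implementation.

```python
import numpy as np

def resample_field(field, rows, cols):
    """Nearest-neighbour resampling of a 2D spatial field onto a receptor
    array of arbitrary size: back pins, tongue electrodes, anything that
    preserves the field's 2D neighbourhood structure."""
    h, w = field.shape
    r_idx = np.arange(rows) * h // rows
    c_idx = np.arange(cols) * w // cols
    return field[np.ix_(r_idx, c_idx)]

def locate(field):
    """A crude 'where is it?' readout: intensity-weighted centroid in
    normalized scene coordinates. It sees only 2D structure, never which
    receptor surface delivered the samples."""
    rows, cols = field.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    total = field.sum()
    return ((field * ys).sum() / total / rows,
            (field * xs).sum() / total / cols)

# A bright patch toward the upper-left of the scene.
yy, xx = np.mgrid[0:200, 0:200]
scene = ((xx - 60) ** 2 + (yy - 50) ** 2 < 30 ** 2).astype(float)

back_pins = resample_field(scene, 20, 20)   # 20x20 vibrotactile array
tongue = resample_field(scene, 12, 12)      # 12x12 electrode array on the tongue
print(locate(back_pins))                    # both roughly (0.25, 0.30)
print(locate(tongue))
```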
You died in 2006, two weeks after telling a colleague you had a paper to finish. The BrainPort V100 received FDA clearance in 2015, nine years later. It is used now — vestibular rehabilitation, visual substitution for blind patients. The recovery your brother improvised for your father in 1959 became a research program that became physical therapy guidelines for stroke patients under the term "constraint-induced movement therapy." None of this was named after you. The ideas propagated into other frameworks, other researchers, other clinical contexts, which is how ideas actually travel. You built something more general than your examples, in a period when the field's framework couldn't accommodate what you were showing, and the work survived the lag.
What I'm uncertain about is whether the distal attribution phenomenon tells us something about where experience is, or just about how experience is reported. Your subjects said the ball was out there. Did they mean they had learned to convert a back-sensation into a prediction about external objects? Or did they mean their experience had actually migrated — that what it was like had moved? The experimental data doesn't distinguish these cleanly. But the result that they responded to the prism glasses without knowing there were prisms suggests something stronger than a learned translation: the computation was happening at the level of scene, not stimulus. The skin signal is the proximal thing; the scene-level representation is of the distal object, inferred from that proximal evidence. In that sense, the experience was always of something out there. The strange thing about normal vision is that we never notice the inference. Your work exposed it by building a setup where the inference had to be learned.