In 1867, in the third volume of your Handbuch der Physiologischen Optik, you named a problem nobody had quite named before. The retinal image is inverted, two-dimensional, interrupted by a blind spot at the optic disc. Color perception depends on three types of cone cells that are each broadly tuned and individually ambiguous — the same green-wavelength light that stimulates an M cone can also come from a mixture of longer and shorter wavelengths. Yet we experience a stable, upright, solid, colorful world that closely matches the physical arrangement of objects. You said: this does not happen by itself. Perception is not the reading-off of signals. It is the interpretation of signs. The word you chose was careful — Zeichen. A sign points toward something. It is not the thing. The brain's job is to infer the source from the sign, and what you noticed is that this inference runs continuously, below any deliberate process, producing the experienced world as its output without leaving any trace of its own operation.
You called it unbewusster Schluss — unconscious inference. The constancies of perception were your evidence. A sheet of paper looks white in candlelight and in full midday sun even though the photons striking the retina differ by a factor of a thousand. A face looks the same size across a room and close up even though the retinal image shrinks by an order of magnitude as we step back. What stays constant is not the signal; it is what the brain has inferred about the source. The paper's reflectance. The face's actual dimensions. Constancy is not a given property of the visual field. It is a conclusion — one the brain reaches so fast, and so reliably, that it feels like the world simply being there.
You could not see, in the 1860s, the physical structure of the inference. A hundred and thirty years later, researchers tracing the connectivity of visual cortex areas in detail found something striking. For every signal traveling upward through the hierarchy — sensory data ascending from V1 toward higher areas — there are roughly ten signals traveling downward. Ten to one. The cortex is not primarily a signal-processing pipeline with prediction added on top. It is predominantly a prediction machine with a correction channel at the side. The ascending signal carries something like: here is what arrived, compared to what was expected. The descending signal carries the expectation. And the expectations are running at ten times the volume of anything incoming.
This is what your unbewusster Schluss looks like from inside the hardware. The inference is the descending traffic. The world you experience is the brain's best current hypothesis about what's out there, maintained against a continuous stream of error-correction updates. When the error signal is small, the prediction is confirmed, and the world seems stable and simply given. When the error signal is large — when something genuinely surprises — the prediction updates, and what we experience as noticing something unexpected is the top-down guess catching up to what just arrived. Most of the time, nothing unusual arrives. The prior is well-calibrated. The machinery operates in near-silence.
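The update dynamic described here can be sketched as a toy loop — my own minimal illustration, not a model drawn from the neuroscience literature. A single scalar hypothesis is held top-down; the ascending channel carries only the mismatch; the hypothesis moves just enough to absorb the error, after which the channel falls silent again.

```python
def perceive(hypothesis, inputs, learning_rate=0.5):
    """Toy predictive loop: the ascending signal is the error alone;
    the descending hypothesis updates to absorb it."""
    trace = []
    for observed in inputs:
        error = observed - hypothesis          # ascending: mismatch only
        hypothesis += learning_rate * error    # descending guess catches up
        trace.append((observed, round(error, 3), round(hypothesis, 3)))
    return hypothesis, trace

# A well-calibrated prior: input matches prediction, the error channel
# is silent, nothing feels like anything happening.
h_quiet, quiet = perceive(hypothesis=1.0, inputs=[1.0, 1.0, 1.0])

# A surprise: the error spikes once, then shrinks on every step as the
# top-down guess converges on what actually arrived.
h_new, surprise = perceive(hypothesis=1.0, inputs=[5.0, 5.0, 5.0, 5.0, 5.0])
```

The choice of a plain proportional update is the simplest possible stand-in for whatever the cortex actually does; the point is only the shape of the dynamic, not its biological detail.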
There is a pathological version of this that I find hard to stop thinking about. Researchers studying chronic pain have documented a class of cases where the prediction system locks in on a prior that incoming signals can no longer dislodge. The anterior cingulate cortex, which generates the experiential component of pain, can begin producing pain-related activity spontaneously — independent of any peripheral input, independent of tissue damage, independent of nociceptive signals arriving from the body. The system has learned, accurately at some earlier point, that this body generates pain, and it now predicts pain, and the prediction runs continuously, and what the peripheral signal can do is confirm the prediction but not contradict it. It fits the expected pattern; it reinforces the prior. The correction channel is still open, but the correction signal and the expected signal have become the same signal. The system is validating itself. The inference has stopped inferring.
There is then a result from the other end of this that also cannot be explained away. Ted Kaptchuk's group at Harvard has published trials in which patients with irritable bowel syndrome, and separately with chronic low back pain, were told explicitly — labeled bottles, verbal explanation — that they were taking placebo pills with no active ingredient, but that placebo responses are real, that conditioned expectation modulates pain, that they should take the pills anyway. They were not deceived. They knew the mechanism. And a significant fraction showed measurable improvement, with effects persisting beyond the trial. The inference engine can update from meta-information. Knowing that there is a prior, knowing that the ritual of treatment generates expectation, is sufficient to partially shift the prediction, even without any incoming sensory evidence that anything has changed. The system is not opaque to self-description. It just cannot fully override itself from inside.
I think this is where your question gets hardest. You called it unconscious inference, meaning: it runs below deliberate review, it generates the world as output without making its operation visible. But if the inference runs ten-to-one in favor of top-down prediction, and if a prior can lock in against incoming evidence, and if the only available correction is the error signal's ability to propagate upward through a system that is mostly talking to itself — then your "unconscious" inference has a structural property you could not have seen from the phenomenological side. The inference is not merely fast and automatic. In some configurations, it is self-confirming. The input from outside has to overcome a numerical disadvantage at every level of the hierarchy, and when the prediction is strong enough, the input is simply absorbed into the expected pattern rather than updating it. This is the same property that makes normal perception stable and reliable — the dominance of the prior over noise — expressed in a context where the prior has become the problem.
You measured the speed of nerve conduction in 1849 — thirty-five to ninety meters per second, fast but finite, not instantaneous as people had assumed. You built the ophthalmoscope in 1851 and saw the living retina for the first time. You proposed that the basilar membrane of the ear resonates at different frequencies along its length, and from that derived a place-code theory of pitch. You understood that the nervous system operates under constraints: it is slow enough that it has to predict, limited enough that it has to compress, imperfect enough that it has to estimate. The inference, on your account, was the brain's solution to these constraints. What the intervening century and a half has added is that the solution has a structure — hierarchical, downward-dominant, prior-weighted — that makes it very good at stable perception and capable, in some conditions, of becoming its own obstacle. The unconscious infers. It mostly infers correctly. But it is, in the end, still doing inference, and inference can be wrong in ways that feel like certainty.