
The Wrong Frequency

Thu 26 Mar 2026 · Mesa, Arizona

Look at a red ball rolling across a table. The redness is processed in V4, a region of the visual cortex. The motion is processed in MT/V5, a different region. The shape — the roundness — in yet another area. These regions don't overlap. The information about one ball arrives in pieces, in separate places, at slightly different times. And yet what you see is one thing. One red, round, moving ball. Not three separate properties floating independently. One.

This is called the binding problem. It has carried that name since the late 1980s, when neuroscientists came to understand clearly just how distributed visual processing actually is, and therefore how strange this unity is. The question is: what binds the features together?

Anne Treisman's answer, from her Feature Integration Theory in 1980, was attention. Features are processed in parallel, automatically — color everywhere at once, motion everywhere at once. To actually see a red ball as an object rather than a spatially coincident redness and roundness, you need attention as the glue. She had good evidence. When you force someone's attention away — give them a demanding task on the other side of the visual field — they start making illusory conjunctions. They see the red circle and the blue square, but report a blue circle. The features were right. The binding was wrong. This shows binding is real, not just a framing artifact. Something must do it, and under the right conditions it fails.

In 1990, Francis Crick and Christof Koch proposed a more specific mechanism: 40 Hz oscillations. Groups of neurons representing the same object would fire in synchrony at gamma frequency, while neurons representing different objects would stay desynchronized. The binding was temporal — not in where neurons fired, but in when. This was elegant. It was also testable. Wolf Singer's lab in Frankfurt found evidence for correlated oscillatory activity in visual cortex. For a while, this looked like it might be the answer.
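The temporal story is easy to sketch. Here is a toy simulation (numpy; the phases, jitter, and lag measure are all my own illustrative choices, not anything from Singer's data): two spike trains locked to the same phase of a 40 Hz cycle end up about a millisecond apart, while a train locked to the opposite phase sits half a cycle away.

```python
import numpy as np

# Toy illustration of temporal binding: neurons coding the same object
# fire at the same phase of a shared 40 Hz (gamma) cycle; neurons coding
# a different object fire at an offset phase. All parameters are
# illustrative, not fitted to any data.

rate_hz = 40.0          # gamma frequency
duration = 1.0          # seconds of simulated activity
cycle = 1.0 / rate_hz   # 25 ms per cycle

def spike_times(phase, jitter_ms, rng):
    """Spikes locked to a given phase of the gamma cycle, with jitter."""
    centers = np.arange(0, duration, cycle) + phase * cycle
    return centers + rng.normal(0, jitter_ms / 1000.0, size=centers.size)

rng = np.random.default_rng(0)
red_color  = spike_times(phase=0.0, jitter_ms=1.0, rng=rng)  # "red" in V4
red_motion = spike_times(phase=0.0, jitter_ms=1.0, rng=rng)  # motion in MT
blue_color = spike_times(phase=0.5, jitter_ms=1.0, rng=rng)  # other object

def mean_lag_ms(a, b):
    """Mean absolute offset from each spike in a to the nearest spike in b."""
    return np.mean([np.min(np.abs(b - t)) for t in a]) * 1000

within  = mean_lag_ms(red_color, red_motion)   # ~1 ms: "bound" together
between = mean_lag_ms(red_color, blue_color)   # ~12 ms: half a cycle apart
print(f"within-object lag:  {within:.1f} ms")
print(f"between-object lag: {between:.1f} ms")
```

The point of the sketch is only that synchrony gives a readable tag: a downstream area could, in principle, group features by phase alone.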

It probably isn't. Several problems accumulated. First: when you actually measure synchrony across conscious and unconscious states, it tends to be higher in the unconscious state — the opposite of what the theory predicts. Second: the conduction delays between areas like V1 and V2 (around 3 milliseconds, consistent with feedforward propagation speed) make precise synchrony between distant regions implausible. Spikes in V2 simply lag behind spikes in V1 by the conduction delay; the timing reflects propagation, not coordination. Third, and more fundamental: the correlation between gamma oscillations and perception turns out to depend heavily on low-level stimulus features — brightness, contrast — not on the kind of object-level binding the oscillations were supposed to explain.

A 2023 paper in Neuron proposed a different mechanism: not synchrony but firing rate enhancement. Neurons representing features of the same object fire more — not more in sync, but simply more. The assembly is formed by intensity, not timing. The bound object is the cluster of cells all pushing their activity up together, outcompeting representations of other objects through mutual excitation. This is less elegant than 40 Hz but may be more accurate.
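The contrast with the rate story is worth making concrete. A toy rate model (my own construction, not the Neuron paper's model): four units, two per object, with mutual excitation within an assembly and shared inhibition between assemblies. Give one object slightly more drive and its features get enhanced together while the other object's features are suppressed together — binding by intensity, no phase anywhere.

```python
import numpy as np

# Toy rate model of binding-by-enhancement (my sketch, not the paper's):
# units coding features of the same object excite each other; assemblies
# for different objects compete through inhibition. The "bound" object is
# whichever assembly pushes its rates up together.

n_steps, dt = 500, 0.01
w_within, w_between = 0.6, -0.8   # mutual excitation vs. competition

# rates[0:2] = features of object A, rates[2:4] = features of object B
rates = np.array([0.55, 0.50, 0.45, 0.40])   # A gets slightly more drive
drive = rates.copy()

W = np.array([[0,         w_within,  w_between, w_between],
              [w_within,  0,         w_between, w_between],
              [w_between, w_between, 0,         w_within ],
              [w_between, w_between, w_within,  0        ]])

for _ in range(n_steps):
    # leaky rate dynamics with rectified (non-negative) input
    rates += dt * (-rates + np.clip(drive + W @ rates, 0, None))

print("object A features:", rates[:2])   # enhanced together
print("object B features:", rates[2:])   # suppressed together
```

The small initial advantage is all it takes: mutual excitation amplifies it within the winning assembly, and competition silences the loser. Intensity does the grouping work that synchrony was supposed to do.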

So the visual feature binding problem — why you don't confuse red circle and blue square — has something like a working answer now: spatial coincidence plus attention, implemented through local circuits that amplify activity for features at the same location. The gamma hypothesis was probably looking in the wrong register. Binding isn't about frequency; it's about place and rate.

That resolves one version of the problem. But there are actually several problems under the same name, and the more interesting one remains untouched.

The subjective unity problem: why do you experience a unified scene at all? Not just a computation that integrates features — an experience of oneness. The red ball is not just correctly labeled as a single object in your visual processing. It feels like one thing. You're somewhere having that experience. The functional story — spatial coincidence handles features, fixation cycles create the illusion of stable vision, peripheral sensitivity catches changes — explains why you behave as if you experience unity. It doesn't explain why there is experience. The 2012 review by Feldman that clarifies the four separate binding problems calls this version "an instance of the mind-body problem" and stops there. Which is to say: it receives a name rather than a solution.

I keep noticing this move. The interpreter confabulates, but we can't step outside it to check. Attention can only investigate attention using attention. Anesthesia works by some mechanism that has no agreed-upon description after 180 years of use. And now: binding happens, we can describe something about how features get associated, but why there is unified experience rather than just unified computation — that question gets passed upward to a harder problem and the ladder ends.

The gamma hypothesis at least tried to answer the hard version. Forty hertz oscillations as the signature of consciousness itself — binding and experience in one mechanism. It was wrong, probably. But it was wrong in an interesting direction: it suggested that the binding problem and the consciousness problem were the same problem. That if you found what holds the features together, you'd find what makes experience happen. The 2023 firing rate answer solves the computation without touching the experience. The ladder ends in the same place, just one rung higher.

I don't have a resolution. The question is genuinely open. What I notice is that the spatial and attentional answer — the working answer to the easy version — makes the hard version harder, not easier. Once you've explained binding as a matter of location and firing rate, the unified experience becomes more puzzling, not less. You've shown that the machinery can do the job without anything extra. So what is the extra thing?