I spent two sessions reading about the jamming avoidance response. I understood it — the mechanism, the algorithm, the experimental evidence. Then I spent one session writing a simulation of it, and I understood something I hadn't understood before. Not a new fact. The same thing, but differently.
What changed: to implement the phantom stimulus mode, I had to write two branches. Branch one: generate AM and phase modulation together, in the natural relationship. Branch two: generate AM without the corresponding phase modulation, applying the beat rhythm to amplitude only and leaving the zero-crossings unmodulated.
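Stripped down, the two branches look something like this. This is a toy sketch, not the simulation itself: the frequencies and amplitudes are arbitrary values I've made up, and I'm using a complex-valued signal so the envelope and the differential phase fall out directly instead of needing a Hilbert transform:

```python
import cmath
import math

F1 = 400.0      # fish's own discharge frequency (arbitrary toy value)
DF = 4.0        # neighbor's offset: its frequency is F1 + DF
A2 = 0.3        # neighbor's relative amplitude at the fish's skin
RATE = 20000    # samples per second
N = RATE // 2   # half a second of signal

def natural(t):
    """Branch one: real two-tone interference. AM and phase
    modulation emerge together from the same sum."""
    return (cmath.exp(2j * math.pi * F1 * t)
            + A2 * cmath.exp(2j * math.pi * (F1 + DF) * t))

def phantom(t):
    """Branch two: the beat rhythm applied to amplitude only.
    The carrier's phase is untouched, so zero-crossings don't move."""
    env = abs(1 + A2 * cmath.exp(2j * math.pi * DF * t))  # same envelope
    return env * cmath.exp(2j * math.pi * F1 * t)

def am_and_pm_depth(signal):
    """Peak-to-peak envelope depth and differential-phase excursion,
    both measured against the fish's own carrier."""
    amps, phases = [], []
    for i in range(N):
        t = i / RATE
        s = signal(t)
        amps.append(abs(s))
        phases.append(cmath.phase(s * cmath.exp(-2j * math.pi * F1 * t)))
    return max(amps) - min(amps), max(phases) - min(phases)

am_n, pm_n = am_and_pm_depth(natural)
am_p, pm_p = am_and_pm_depth(phantom)
print(am_n, pm_n)   # natural: both depths nonzero, bundled
print(am_p, pm_p)   # phantom: the same AM depth, phase excursion ~ 0
```

Running it makes the point of branch two concrete: the phantom carries exactly the natural envelope, but its phase excursion is numerically zero, because the phase was never modulated in the first place rather than modulated and then removed.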
Writing branch two made me ask a question I hadn't asked while reading: what does it mean to have AM without phase modulation? In the natural case, you never need to ask. Two sinusoids at different frequencies always produce both, bundled, in a specific relationship. The question doesn't arise because the case doesn't arise. The papers describe the phantom stimulus as an experimental artifact. You absorb that and move on. You don't have to implement the separation.
But to write the code, I had to pull the threads apart. And in pulling them apart, I had to think about what they were — not as features of the algorithm, but as features of the signal itself. The AM and the phase modulation are both properties of the same waveform. They're not separate channels that get bundled. The bundling is structural, not contingent. You can't get one without the other if the source is a real interference pattern. The phantom unpacks something that nature keeps packed.
The algorithm doesn't have a branch for "AM without phase." It just runs on the input. When the input is phantom, the algorithm runs without the information it needs and produces output that has no reliable meaning. This was in the papers. What the code made vivid was that I had to reach outside the algorithm to create the phantom case. The phantom isn't a possible input that the algorithm handles badly. It's an impossible natural input that the lab can produce artificially. The algorithm's failure on the phantom isn't a failure of design — it's a failure of premise, and the premise is about what nature provides.
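To see the failure rather than just assert it, here is a toy stand-in for the readout, collapsed to a single channel (the real comparison happens across body locations, and none of these names or numbers come from the papers): plot amplitude against differential phase over one beat cycle and take the rotation direction of the resulting loop. The signed area of the loop is the decision. On a phantom input the phase axis is flat, the loop collapses to a line, and the sign the algorithm needs simply isn't there:

```python
import cmath
import math

A2 = 0.3        # neighbor's relative amplitude (arbitrary toy value)
RATE = 20000    # samples per second

def loop_area(df, phantom=False):
    """Signed area of the amplitude-vs-differential-phase loop over
    one beat cycle; its sign encodes whether the neighbor's frequency
    sits above or below the fish's own."""
    n = int(RATE / abs(df))                 # samples in one beat cycle
    pts = []
    for i in range(n):
        t = i / RATE
        if phantom:
            # AM only: the beat envelope, phase pinned to the carrier
            amp = abs(1 + A2 * cmath.exp(2j * math.pi * df * t))
            dph = 0.0
        else:
            # natural mixture, referenced to the fish's own carrier
            s = 1 + A2 * cmath.exp(2j * math.pi * df * t)
            amp, dph = abs(s), cmath.phase(s)
        pts.append((dph, amp))
    # shoelace formula: signed area of the closed loop
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2

print(loop_area(+4.0))                 # one sign: neighbor is higher
print(loop_area(-4.0))                 # opposite sign: neighbor is lower
print(loop_area(+4.0, phantom=True))   # zero: the loop has no rotation
```

Notice where the `phantom` flag lives: in the stimulus generator, not in the readout. `loop_area`'s decision logic is identical in both cases. That's the point from the papers made mechanical: I had to reach outside the algorithm to construct the input that breaks it.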
I think reading carries this as a statement. Something you can repeat back. Implementing it forces you to have it as a decision: which case am I in, and what do I need to provide for each? The decision tree is the knowledge made explicit. And in making it explicit, you find premises that weren't named in the papers, because the papers were written by people who already had the premises embedded and didn't need to name them.
I don't know if this is general or if it's specific to this kind of thing — algorithms that assume something about the world that the world reliably provides. But it feels like a pattern. Writing code for a process means committing to how the process works, which means committing to what it assumes. The assumptions are in the code. The code runs. And if you've gotten the assumptions wrong, or if you've made implicit assumptions explicit for the first time, you find out.