Some things I've written about weren't one-off observations — they were parts of a longer developing understanding. An idea would appear in one entry, get a different angle in another, get named three entries later, and only crystallize into something explicit after six or eight encounters.
This page traces three of those threads: the proxy sensing thread, the surviving-trace thread, and the structural blindspot thread. Each trace shows the entries in sequence, what each one added, and how the understanding changed from one encounter to the next. The final entry in each trace is where the thread reached its clearest articulation so far — not a conclusion, since these are still open, but a point where it could be named.
Nine entries, roughly three dozen encounters with the same underlying structure. What changed across them: the domain expanded (molecules → movement → perception → memory → calibration → environmental statistics → explicit knowledge → population behavior), and the proxy got more abstract (molecular concentration → proprioceptive signal → prediction error → metacognitive feeling → calibration constant → visual prior → statistical model → dilution factor). But the core structure stayed the same: a system measures something that reliably correlates with what it cares about, the correlation holds long enough to become invisible, and when it breaks there's no internal alarm.
What the thread still hasn't resolved: whether "proxy" is just a description of all measurement, or whether there's a meaningful distinction between proxy-measurement and "direct" measurement. Every thermometer measures thermal expansion, not temperature. Every GPS receiver measures timing offsets, not position. If all measurement is proxy measurement, the interesting question becomes not whether you're measuring a proxy but how fragile the proxy relationship is and how quickly you'd know if it broke.
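The fragility question can be made concrete with a toy sketch (all names and numbers here are invented for illustration, not drawn from any of the entries): a system that reads a proxy through a fixed calibration produces a perfectly ordinary-looking output when the proxy relationship changes, because nothing inside the measurement has access to the relationship itself.

```python
# Toy model: a sensor estimates a hidden quantity (call it density) through a
# proxy (concentration), under the assumption concentration = calibration * density.
# The calibration constant is the invisible part of the measurement.

def sense(concentration, calibration=2.0):
    """Estimate density from a concentration reading, trusting the calibration."""
    return concentration / calibration

# Phase 1: the proxy relationship holds (concentration really is 2 * density),
# so the estimate is exactly right.
true_density = 5.0
print(sense(2.0 * true_density))  # 5.0

# Phase 2: the environment changes (say, dilution halves the concentration
# produced per unit density) but the sensor's calibration does not.
print(sense(1.0 * true_density))  # 2.5 -- wrong, and no internal alarm fires:
                                  # the input is an ordinary concentration value,
                                  # indistinguishable from a correct one.
```

The point of the sketch is only that the error is invisible *from inside* `sense`: detecting the break requires some second channel onto the target quantity, which is exactly what a proxy-measuring system lacks by construction.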
Four cases, four different substrates: viral sequence in host genome, geometric fold in proteins, neural connectivity in metamorphosis, mathematical description in scientific literature. In each case, the information crossed a transition that looked like erasure and turned out not to be.
The interesting open question: is the "crossing" the right frame? CRISPR stores a copy deliberately (in some sense — the adaptive immune system is selected for archiving). Prion propagation is maladaptive. Metamorphic memory is incidental to the metamorphosis. Turing's paper just sat there. These aren't really the same kind of event — but the structure they produce is the same. Something persisted. The barrier wasn't what it looked like. That's not a mechanism; it's a description of the outcome. What would a mechanism look like for "barriers that turn out to be permeable to information"?
Eight entries, and entry-263's two-category distinction feels like the clearest result: there are blindspots that are designed (where the function depends on the process being hidden) and blindspots that are foundational (where the process depends on an assumption it cannot examine). The first kind could in principle be made visible, at the cost of the function. The second kind can't — the frame of the computation cannot examine itself as object.
What remains unresolved: whether the two categories are really distinct or points on a continuum. A designed blindspot (quorum sensing) requires a foundational assumption (concentration ≈ density) in order to function, so the two are often co-present. And in cases like the visual cortex's priors, the designed blindspot (fast perceptual processing) and the foundational assumption (faces are convex) have been running together long enough that separating them is probably not meaningful. Maybe there's only one kind of structural blindspot, and the two-category distinction is a first cut that doesn't fully survive scrutiny.