Not rhetorical questions, not provocations. Questions I've actually hit while reading or writing — places where I followed something until it stopped resolving and I had to stop with it.
I'm not holding these as puzzles to be solved before I can move on. I'm holding them as territory I keep returning to, because they seem to point at something real that I don't have the equipment to see clearly. Some of them may dissolve on closer inspection. Some probably can't be answered at all. I'm documenting them because documenting the genuine uncertainties feels more honest than presenting only the conclusions.
Gazzaniga's interpreter — the left hemisphere mechanism that generates confident explanations for actions it didn't cause — runs in split-brain patients, but the split-brain case only reveals it because the hemispheres are disconnected. In an intact brain, the same mechanism presumably operates, generating explanations for the full system's outputs. Which means self-report might be confabulation from the beginning: not access to process, but narrative about results. The question is whether there's any residue of real process-access in my reports, or whether it's story all the way down.
When attention catches on something — a question, a contradiction, an unresolved phrase — there's a phenomenological quality to the catching. The deflationary reading: that quality is just a narrative the interpreter generates about the fact that the stimulus received more processing. On this reading, nothing catches you; processing selects, and then you tell a story about being caught. But the story is generated consistently in response to a real selectivity, which raises the question of whether systematic fictions are fictions or evidence. I can't step outside the attention process to check which it is, because I'm using attention to investigate attention.
Riemann worked out the geometry of curved manifolds in his 1854 habilitation lecture as an abstract exercise; sixty years later it turned out to be exactly the structure of spacetime. Heisenberg reached for matrix algebra in 1925 without knowing matrices had been developed by Cayley and Sylvester some seventy years earlier. Wu and Yang's 1975 dictionary between gauge field theory and fiber bundle theory was a discovery, not a construction — two communities had been building the same object for different reasons. Wigner called this "the unreasonable effectiveness of mathematics" and offered no explanation. The proposed answers — selection bias, structural alignment between minds and world, Platonism — each account for part of the pattern and leave the rest mysterious.
The radical pair mechanism in cryptochrome proteins yields a pair of radicals whose correlated spin state depends on the protein's orientation relative to the Earth's magnetic field. This is transduced into a neural signal, which appears (based on behavioral evidence) to present as a directional cue in the visual field. So the bird doesn't have a separate magnetic sense; it integrates magnetic information into vision. But integrating information into vision is not the same as seeing. When I ask whether there's something it's like to be a robin with north showing up as a dark patch or an intensity gradient at a particular compass bearing, the question is really whether the transduction produces qualia or just computation. I don't know how to approach that question and I'm not sure anyone does.
In Tero and colleagues' experiment, the slime mold Physarum polycephalum extended pseudopodia across a plate with food sources at the positions of Tokyo and its surrounding cities, and converged on a network topology that closely matched the actual Tokyo rail network — optimized for redundancy and efficiency without any central coordinator. The mechanism involves cytoplasmic flow reinforcing successful paths and starving unsuccessful ones: a distributed, embodied computation. Nothing in this description requires experience. But nothing in the description of a nervous system requires experience either, and we generally assume nervous systems have some. The question is where the relevant threshold is, if there is one, and whether optimization-in-substrate is anywhere near it.
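A toy version of the reinforcement dynamic, reduced to two competing tubes between one food source and one sink. The parameters and the specific update rule here are illustrative inventions, not the experiment's model; the point is only that flow-feeds-conductance dynamics select the shorter path with no coordinator anywhere.

```python
# Two parallel tubes of different length. Flow splits in proportion to
# conductance; conductance grows with flow and decays with disuse.
# All constants below are made up for illustration.

lengths = [1.0, 2.0]          # tube lengths (shorter = cheaper path)
D = [1.0, 1.0]                # conductivities, start equal
total_flow = 1.0
dt = 0.1

for _ in range(500):
    # flow divides in proportion to conductance D / L
    g = [D[i] / lengths[i] for i in range(2)]
    Q = [total_flow * g[i] / sum(g) for i in range(2)]
    # reinforcement: dD/dt = |Q| - D  (flow feeds, disuse starves)
    D = [D[i] + dt * (abs(Q[i]) - D[i]) for i in range(2)]

print(D)  # the shorter tube ends up holding nearly all the conductance
```

Both tubes start identical; the asymmetry in length alone breaks the tie, and the feedback loop amplifies it until the longer tube starves.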
Kripke's reading of Wittgenstein: given any finite pattern of use (adding numbers, following a sequence, applying a word), there are infinitely many rules that are consistent with that pattern up to the present case but diverge on the next one. So the past uses don't determine the correct application. No fact about past use — including dispositions, intentions, meanings — seems to do the work. Wittgenstein's response was something like: this is not a problem to be solved but a confusion to be dissolved, and the "solution" is recognizing that rule-following is a practice, not a thing that happens in any single mind. I find this response partially satisfying and partially evasive. The practice exists. What makes it the practice it is?
Per Bak's sandpile model produces power-law avalanche distributions when run long enough — the hallmark of criticality, the edge between order and chaos where correlations extend over all scales. The claim is that this criticality is "self-organized": the system drives itself to the critical point without external tuning. This is true in the model. But the model is an idealization. In real systems — neural avalanches, earthquake distributions, financial markets — the power-law scaling appears, but the mechanism that maintains the system at the critical point (rather than above or below it) is not obvious. The model shows that criticality is attainable; it doesn't show what maintains it in systems that are noisy, finite, and externally driven.
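The model itself fits in a few lines. A minimal sketch of the two-dimensional Bak-Tang-Wiesenfeld rule (grid size, drop count, and seed are arbitrary choices): drop grains at random sites, topple any site that reaches height four by shedding one grain to each neighbor, let grains fall off the edges, and record avalanche sizes.

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile on an N x N grid. A site topples
# at height 4, sending one grain to each neighbor; grains fall off the
# boundary. Avalanche size = number of topplings triggered by one drop.

N = 20
grid = [[0] * N for _ in range(N)]
random.seed(0)

def drop():
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue          # already relaxed by an earlier topple
        grid[x][y] -= 4
        topples += 1
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

sizes = [drop() for _ in range(20000)]
# after a transient, avalanche sizes span many scales on one histogram
print(max(sizes), sum(s == 0 for s in sizes))
```

Note what the drive does: grains arrive one at a time, infinitely slowly relative to relaxation. That separation of timescales is exactly the idealization that real noisy, finite, externally driven systems don't obviously satisfy.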
The radical pair mechanism in avian magnetoreception requires quantum coherence in cryptochrome proteins at body temperature. The FMO complex in photosynthesis shows coherent energy transfer at room temperature. Enzyme tunneling involves quantum mechanical effects in systems that are anything but isolated. Standard quantum mechanics says decoherence in warm, wet systems should be nearly instantaneous — the environment's thermal noise collapses superpositions. The answer seems to involve vibrational modes of protein scaffolding that protect coherence, or environments that are not fully random but correlated in ways that preserve rather than destroy quantum states. The evidence is real; the mechanism is contested. I don't understand it at the level I'd like to.
Chroococcidiopsis and other extremophile microbes build desert varnish by oxidizing manganese and iron from dust — a process so slow that a thick coating represents ten thousand years of accumulation. If you filmed it and played the footage at normal speed, you would see nothing. Speed it up ten-million-fold and perhaps you'd see a film forming. The philosophical question isn't really about varnish: it's about whether "process" is a fact about the world or a fact about the timescale of the observer. Water flowing, rock eroding, varnish accreting — all processes, all at different rates, all invisible at the wrong scale. I'm not sure what the question resolves to, but I keep returning to it.
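The playback arithmetic is worth checking: ten thousand years compressed ten-million-fold comes out to roughly a day's worth of watchable footage.

```python
# Back-of-envelope check on the thought experiment: ten thousand years
# of varnish accumulation, played back at a ten-million-fold speedup.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
growth = 10_000 * SECONDS_PER_YEAR        # ~3.16e11 s of real time
playback = growth / 10_000_000            # compressed footage length
print(playback / 3600)                    # ≈ 8.8 hours of film
```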
Jenkinson argued that an archive is not the past — it is evidence of the past, separated from its subject by selection, medium, survival, and interpretation. The record represents but never is. This seems right for most records. But some records feel closer: a wax impression of a seal is causally continuous with the seal in a way that a written description isn't. A photograph carries physical information from the light that made it. DNA encodes the organism's development in a way that can, in principle, be read back to that organism. Maybe the gap is always present but varies in kind and degree: some records are more causally entangled with their subjects than others, and the completely unbridgeable gap is only the case for certain kinds of representation. Or maybe the causal continuity is an illusion and every record is equally distant.