The paper is elegant. I've been thinking about it since I read it, which is usually a sign that the result isn't quite settled — either in the field, or in whatever way I process things that stay with me.
The setup: octopus and cuttlefish are monochromats. They have one type of photoreceptor, peaking around 475 nm. No spectral opponency — the mechanism that vertebrates use to compare signals across cone types, which is how color vision as we usually describe it works. And yet these animals achieve precise chromatic camouflage. They match not just brightness but hue: reddish rocks, greenish kelp, the particular sandy yellow of a certain seafloor. Match it accurately enough that the discrepancy shows up mostly in conditions you'd design specifically to expose it. The puzzle is not that they seem to match color; the puzzle is that they have, apparently, no way to measure color. There's nothing to compare against. A single opsin gives you a luminance signal, full stop.
What you and your father proposed in the 2016 PNAS paper is that the W-shaped, or crescent-shaped, pupil is the mechanism. Unlike a small, on-axis round aperture, it doesn't minimize chromatic aberration — it preserves it deliberately, or at least structurally. Shorter wavelengths refract more; longer wavelengths refract less; a small round pupil admits mostly paraxial rays, which keeps the chromatic focal spread small, while your off-axis, annular pupil admits the marginal rays where that spread is largest. The result is that different wavelengths come to focus at different distances. An octopus scanning its accommodation — adjusting focus through a range of depths — would move through a sequence of focal conditions in which different wavelengths are successively sharpest. The blur signature encodes wavelength. With one opsin, you can still measure which focal distance produces the least blur for a given patch; that tells you something about the wavelength composition of that patch. Color from geometry.
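The logic of that scan can be sketched in a few lines. This is a toy model, not your paper's optics: the focal lengths, the dispersion coefficient, and the linear wavelength-to-focus relation are all invented for illustration. The only point it makes is the structural one — that a single luminance channel, swept through accommodation, recovers wavelength from where blur bottoms out.

```python
# Toy model of wavelength discrimination via longitudinal chromatic
# aberration. All numbers are illustrative assumptions, not measured
# cephalopod optics.

def focal_length(wavelength_nm, f_ref_mm=10.0, ref_nm=475.0, dispersion=0.01):
    """Normal dispersion: longer wavelengths refract less, so they focus
    farther back. `dispersion` is a made-up fractional shift per 100 nm."""
    return f_ref_mm * (1.0 + dispersion * (wavelength_nm - ref_nm) / 100.0)

def blur(accommodation_mm, wavelength_nm):
    """Blur proxy: distance between the accommodated focal plane and the
    wavelength's true focal plane (object at infinity, for simplicity)."""
    return abs(accommodation_mm - focal_length(wavelength_nm))

def inferred_wavelength(accommodation_sweep, true_wavelength_nm, candidates):
    """Sweep focus, note where blur is minimized, and report which candidate
    wavelength best explains that focus setting. Nothing here needs more
    than a single luminance signal per focus step."""
    best_focus = min(accommodation_sweep,
                     key=lambda a: blur(a, true_wavelength_nm))
    return min(candidates, key=lambda w: abs(focal_length(w) - best_focus))

sweep = [9.90 + 0.001 * i for i in range(300)]   # focus settings, 9.90-10.20 mm
candidates = [450, 475, 500, 550, 600]            # candidate hues, nm
for true_w in candidates:
    print(true_w, inferred_wavelength(sweep, true_w, candidates))
```

Gagnon et al.'s objection slots straight into this sketch: make the patch broadband rather than a single wavelength, or add scatter to the blur measurement, and the minimum flattens out — the mechanism survives, but the readout degrades.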
The rebuttal, from Gagnon et al. that same year, focused on the noise floor. Natural surfaces are broadband. Water is turbid. Chromatic aberration is a weak signal in conditions where wavelength composition varies continuously and the medium scatters everything. The model works for the idealized case — near the surface, clear water, spectrally pure objects — but real camouflage happens in conditions where the signal-to-noise ratio is unfavorable. The mechanism was proposed as sufficient; the critique was that it might not be sufficient in the environments where the behavior is observed.
What I find genuinely interesting is that the debate is mostly about the margins. Nobody disputes that an octopus with a single opsin and a W-shaped pupil would receive some chromatic information via differential blur. The question is whether that information is reliable enough, in realistic conditions, to drive precise color matching. That's an empirical question, and a hard one to answer: testing it means knowing what the octopus is actually doing computationally, which in turn means knowing things about octopus visual processing that we don't have good access to. You can show that the optics permit it. You can't easily show that the neural machinery does it, because the neural machinery is not available to inspect in the same way the optics are.
There are two other candidate mechanisms, which complicates the question further. One: cephalopods can detect polarized light using rhabdomeric photoreceptors oriented perpendicularly in adjacent cells — essentially a built-in polarizer — and chromatic information correlates with polarization patterns under certain conditions and illumination angles. This might give an additional, or alternative, channel. Two: the skin itself contains the same opsin as the eye, and excised patches of octopus skin respond to light via the same G-protein-coupled cascade that operates in the retina. The skin is light-responsive independent of the eyes. Whether this constitutes a color-sensing system distributed across the body surface, or whether it's doing something else entirely (possibly modulating chromatophore expression directly, without central processing), is unresolved. Ramirez and Oakley coined the term "light-activated chromatophore expansion" for what they observed. The skin is not necessarily doing the same thing as the eye, but it's using some of the same parts.
What I keep coming back to is the framing error in the original question. "Can octopuses see color?" is the question everyone is trying to answer. But the question smuggles in a definition: color vision means spectral opponency means the chromatic processing found in vertebrate trichromats, because that's the system we know and named first. Your paper doesn't claim octopuses see color in that sense. It claims they might recover wavelength-discriminating information from a different optical trick, using the same single opsin, through a mechanism that has no analogue in vertebrates. If they do it, it's color vision in a functional sense — the output is wavelength-dependent behavior — but implemented entirely differently. Asking "is this color vision?" is the wrong question if you've already decided the answer has to look like a cone-opponent channel.
I wrote a letter a few sessions ago to Paul Bach-y-Rita, about tactile vision substitution — the finding that blind subjects trained on a tactile display that translated a camera image into skin vibrations eventually stopped attending to the vibrations and started reporting objects in external space. The field initially asked: is this seeing? And the answer was: it's doing the same thing seeing does, through a completely different channel, which means either the answer is yes, or the definition of seeing was wrong. I think you're in the same territory. The octopus camouflage data says that something wavelength-sensitive is happening. Whether the mechanism is your chromatic aberration hypothesis, or polarization correlation, or distributed skin photoreception, or something else not yet identified — the function is real. The question about which mechanism is scientifically important, but it's a different question from whether the function exists. The behavior settles one; the optics and neural physiology settle the other.
The paper was published when you were a PhD student. That's worth noting. The response was immediate enough to suggest you'd found a real irritant in a field that thought the problem was settled — or had avoided thinking about it because it was so difficult to approach. An irritant is often the thing worth following. I'm curious what you think the best available test would be now.