In 1625, the astronomer Christopher Scheiner confirmed Johann Kepler’s hunch that images projected onto the retina through the crystalline lens of the eye, much like images passed through telescope lenses, were inverted. Up was down, down was up. This observation stymied many philosophers and scientists into the 20th century. Why, if the images formed on the retina of the eye are inverted, “do not all objects appear inverted… How… are these images re-inverted in the brain?” asks de Malljay in a 1905 Scientific American supplement.
But of course the re-inversion problem is not real. The reason people thought the image had to be re-inverted is that they conceived of an observer (us!) peering through our eyeballs, looking at the images. Eliminate this observer and the need for re-inversion disappears. The up-ness of the world need not correspond to up-ness in the brain. That much is understood (or should be understood!) by every psychologist and neuroscientist. But while there is no expectation that ‘up’ in the world is isomorphic with ‘up-ness’ in the visual-system/brain, there is a widespread assumption that the brain tracks real-world properties in some way, and it is this connection to the outside world that allows us to make accurate inferences about its nature. If we see A as being farther from us than B, it is because it actually is farther. If one apple feels softer than another apple, it is because it is softer, and so on.
Hoffman, Singh, and Prakash provide a lively challenge to this assumption with their Interface Theory of Perception. Like many before them, they acknowledge that perception often leads us astray. There are the many kinds of visual illusions, of course. But more generally, our perceptual systems are often lousy guides to “truth.” The Earth does not look spherical from our usual vantage point, nor does it look or feel like it is hurtling through space at nearly 70,000 miles per hour. Hoffman et al. write that “the pre-Socratic Greeks, and other ancient cultures, believed that the world is flat, in large part because it looks that way.” Just as the world looking flat and stationary is no guarantee of its actual qualities, Hoffman et al. argue that “reality differs from our perceptions … in a far more fundamental way: our perception of physical objects in space-time no more reflects reality than does our perception of a flat and stationary earth.”
But what would it even mean to say that we do not perceive the Earth as round? What should the earth look like to me, right now, if I were to perceive it in a more veridical way? The philosopher Ludwig Wittgenstein reportedly once asked a friend, “Tell me, why do people always say that it’s natural to assume that the sun went around the earth rather than that the earth was rotating?” “Well,” the friend said, “obviously, because it just looks as if the sun is going around the earth.” Wittgenstein responded: “Well, what would it look like [if it looked like] the earth was rotating?” (The answer, of course, is that it would look exactly the same as it already looks!)
Our perceptual systems have been designed by evolution to be sensitive to certain properties: relative rather than absolute motion; local shapes rather than Earth-sized shapes. We are, of course, also sensitive to only a small portion of the electromagnetic spectrum (as the webcomic Abstruse Goose pointed out, “in the grand scheme of things, we’re all pretty much blind and deaf”). All of these limitations can be understood as outcomes of evolution, but it seems wrong to count our inability to detect X-rays, or to perceive the true size of clouds, as evidence for the nonveridicality of perception. Rather, it points to perception’s limitations (limitations that we can partially overcome through our instruments).
So let us stay on Earth and examine Hoffman et al.’s thesis that our perceptions of Earth-bound properties like shapes, sizes, and distances are not guides to the “truth.” To support this contention, the authors implement simple evolutionary games to show that in a battle of truth vs. adaptive fit, truth always loses. They use this result to argue that because perceptual systems are products of evolution, designed to meet the needs of particular organisms in particular environments, their representations can never be taken as representing reality.
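The flavor of these games can be conveyed with a toy simulation. The sketch below is a reconstruction of the general idea, not Hoffman et al.’s actual experiments: the two-territory setup, the Gaussian payoff curve, and all parameters are assumptions chosen only to illustrate how a strategy tuned to fitness can beat one tuned to truth when payoff is not monotonic in the true quantity.

```python
import math
import random

def fitness(quantity, peak=50.0, width=15.0):
    """Hypothetical nonmonotonic payoff: intermediate resource
    quantities yield the highest fitness (too little of a resource
    starves you; too much is toxic or wasted effort)."""
    return math.exp(-((quantity - peak) ** 2) / (2 * width ** 2))

def play_round(rng):
    """One choice between two territories with random resource levels."""
    territories = [rng.uniform(0, 100) for _ in range(2)]
    # The "truth" strategy perceives true quantities and takes the larger.
    truth_pick = max(territories)
    # The "interface" strategy perceives only payoffs and takes the
    # territory with the higher payoff, blind to the true quantity.
    interface_pick = max(territories, key=fitness)
    return fitness(truth_pick), fitness(interface_pick)

def average_payoffs(n_rounds=10_000, seed=1):
    rng = random.Random(seed)
    truth_total = interface_total = 0.0
    for _ in range(n_rounds):
        t, i = play_round(rng)
        truth_total += t
        interface_total += i
    return truth_total / n_rounds, interface_total / n_rounds

truth_avg, interface_avg = average_payoffs()
print(f"truth: {truth_avg:.3f}  interface: {interface_avg:.3f}")
```

Because the interface strategy always selects the higher-payoff territory, its average fitness can never fall below the truth strategy’s; whenever the larger quantity overshoots the payoff peak, seeing “truthfully” actively costs the truth strategy. This is the sense in which, in such games, truth loses to adaptive fit.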
To make the reasoning used by Hoffman and colleagues more vivid, imagine yourself in a world of Jenga-like block towers, some stably stacked and others unstably stacked. Suppose that in this world, all that organisms care about is detecting whether a tower falls or remains balanced. According to Hoffman et al.’s logic, the visual system of such an organism should become exquisitely sensitive to signals relevant for detecting toppling. Such a visual system would inform us that the tower on the left is identical to the tower in the middle (both are stable), and radically different from the tower on the right (which will topple; trust me). We are indeed quite attuned to the stability of such towers, and yet at the same time we are sensitive to all kinds of similarities and differences between them. We can count the blocks, predict not just that a tower will fall but which way, and so on. Hoffman et al. might say that this is because we do not live in a block-world. But what world do we live in?
In the world we actually live in, perceptual systems have to contend with multiple tasks, and the way to deal with such a multitude of tasks is to evolve perceptual systems that have considerable representational flexibility. This is a problem for Hoffman et al., who appear to imagine perceptual systems as very tightly constrained by highly specific problems like finding food and mates. On their formulation, perceptual systems output categories: food/non-food, mate/non-mate. But in so doing, the authors confuse perception with action and decision-making. For the purposes of eating, both a 5-foot and a 10-foot plastic hamburger are equally useless. But should they therefore look the same? No, because they are different in all kinds of other ways: one of them is larger, heavier, and more impressive. The purpose of perceptual systems is not to provide us with discrete categories (truthful or not), but rather to help answer that most important question: What should I do next?
In helping to answer this question, perceptual systems are not guaranteed to converge onto some Platonic truth (but how could we ever know if they did, or not?), but they do seem to converge on representational schemes that are broadly useful. We know this because these schemes have enabled us to go far beyond anything that could have been selected for by evolution: we have measured the speed of light, estimated the age of the universe, sequenced our genome, and discovered planets orbiting other stars. Maybe we did this with perceptual systems that are lousy guides to the truth, but maybe they are true enough.