Laurinemily via Wikimedia Commons // CC BY-SA 2.5

How Our Eyes See Everything Upside Down


by Katie Oliver

Beliefs about the way visual perception works have undergone some fairly radical changes throughout history. In ancient Greece, for example, it was thought that beams of light emanate from our eyes and illuminate the objects we look at. This "emission theory" [PDF] of vision was endorsed by most of the great thinkers of the age, including Plato, Euclid, and Ptolemy. It gained so much credence that it dominated Western thought for the next thousand years. Of course, now we know better. (Or at least some of us do: There’s evidence that a worryingly large proportion of American college students think we do actually shoot beams of light from our eyes, possibly as a side effect of reading too many Superman comics.)

The model of vision as we now know it first appeared in the 16th century, when Felix Platter proposed that the eye functions as an optic and the retina as a receptor. Light from an external source enters through the cornea and is refracted by the lens, forming an image on the retina—the light-sensitive membrane located in the back of the eye. The retina detects photons of light and responds by firing neural impulses along the optic nerve to the brain.

There’s an unlikely sounding quirk to this set-up, which is that mechanically speaking, our eyes see everything upside down. That’s because the process of refraction through a convex lens causes the image to be flipped, so when the image hits your retina, it’s completely inverted. René Descartes proved this in the 17th century by setting a screen in place of the retina in a bull’s excised eyeball. The image that appeared on the screen was a smaller, inverted copy of the scene in front of the bull’s eye.

So why doesn’t the world look upside down to us? The answer lies in the power of the brain to adapt the sensory information it receives and make it fit with what it already knows. Essentially, your brain takes the raw, inverted data and turns it into a coherent, right-side-up image. If you’re in any doubt as to the truth of this, try gently pressing the bottom right side of your eyeball through your bottom eyelid—you should see a black spot appear at the top left side of your vision, proving the image has been flipped.

In the 1890s, psychologist George Stratton carried out a series of experiments [PDF] to test the mind’s ability to normalize sensory data. In one experiment he wore a set of reversing glasses that flipped his vision upside down for eight days. For the first four days of the experiment, his vision remained inverted, but by day five, it had spontaneously turned right side up, as his perception had adapted to the new information.

That’s not the only clever trick your brain has up its sleeve. The image that hits each of your retinas is a flat, 2D projection. Your brain has to overlay these two images to form one seamless 3D image in your mind—giving you depth perception that’s accurate enough to catch a ball, shoot baskets, or hit a distant target.

Your brain is also tasked with filling in the blanks where visual data is missing. The optic disc, or blind spot, is an area on the retina where the blood vessels and optic nerve are attached, so it has no visual receptor cells. But unless you use tricks to locate this blank hole in your vision, you’d never even notice it was there, simply because your brain is so good at joining the dots.

Another example is color perception; most of the 6 to 7 million cone photoreceptor cells in the eye that detect color are crowded within the fovea centralis at the center of the retina. At the periphery of your vision, you pretty much only see in black and white. Yet we perceive a continuous, full-color image from edge to edge because the brain is able to extrapolate from the information it already has.

This power of the mind to piece together incomplete data using assumptions based on previous experience has been labeled "unconscious inference" by scientists. As it draws on our past experiences, it’s not a skill we are born with; we have to learn it. It’s believed that for the first few days of life babies see the world upside down, as their brains just haven’t learned to flip the raw visual data yet. So don’t be alarmed if a newborn looks confused when you smile—they’re probably just trying to work out which way up your head is.

Dean Mouhtaropoulos/Getty Images
Essential Science
What Is a Scientific Theory?

In casual conversation, people often use the word theory to mean "hunch" or "guess": If you see the same man riding the northbound bus every morning, you might theorize that he has a job in the north end of the city; if you forget to put the bread in the breadbox and discover chunks have been taken out of it the next morning, you might theorize that you have mice in your kitchen.

In science, a theory is a stronger assertion. Typically, it's a claim about the relationship between various facts; a way of providing a concise explanation for what's been observed. The American Museum of Natural History puts it this way: "A theory is a well-substantiated explanation of an aspect of the natural world that can incorporate laws, hypotheses and facts."

For example, Newton's theory of gravity—also known as his law of universal gravitation—says that every object, anywhere in the universe, responds to the force of gravity in the same way. Observational data from the Moon's motion around the Earth, the motion of Jupiter's moons around Jupiter, and the downward fall of a dropped hammer are all consistent with Newton's theory. So Newton's theory provides a concise way of summarizing what we know about the motion of these objects—indeed, of any object responding to the force of gravity.

A scientific theory "organizes experience," James Robert Brown, a philosopher of science at the University of Toronto, tells Mental Floss. "It puts it into some kind of systematic form."


A theory's ability to account for already known facts lays a solid foundation for its acceptance. Let's take a closer look at Newton's theory of gravity as an example.

In the late 17th century, the planets were known to move in elliptical orbits around the Sun, but no one had a clear idea of why the orbits had to be shaped like ellipses. Similarly, the movement of falling objects had been well understood since the work of Galileo a half-century earlier; the Italian scientist had worked out a mathematical formula that describes how the speed of a falling object increases over time. Newton's great breakthrough was to tie all of this together. According to legend, his moment of insight came as he gazed upon a falling apple in his native Lincolnshire.

In Newton's theory, every object is attracted to every other object with a force that’s proportional to the masses of the objects, but inversely proportional to the square of the distance between them. This is known as an “inverse square” law. For example, if the distance between the Sun and the Earth were doubled, the gravitational attraction between the Earth and the Sun would be cut to one-quarter of its current strength. Newton, using his theory and a bit of calculus, was able to show that the gravitational force between the Sun and the planets as they move through space meant that orbits had to be elliptical.
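The doubling-the-distance arithmetic above can be checked directly. Here's a minimal sketch (not from the article; the function name and the Earth–Sun figures are illustrative round values) of the inverse-square law:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravitational_force(m1, m2, r):
    """Attractive force (newtons) between two point masses r meters apart."""
    return G * m1 * m2 / r**2

# Approximate Earth–Sun values: masses in kg, distance ~1 AU in meters
f_now = gravitational_force(5.97e24, 1.99e30, 1.496e11)
f_doubled = gravitational_force(5.97e24, 1.99e30, 2 * 1.496e11)

# Doubling the distance cuts the force to one-quarter
print(f_doubled / f_now)  # 0.25
```

Because distance enters the formula squared, doubling it divides the force by four, tripling it by nine, and so on.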

Newton's theory is powerful because it explains so much: the falling apple, the motion of the Moon around the Earth, and the motion of all of the planets—and even comets—around the Sun. All of it now made sense.


A theory gains even more support if it predicts new, observable phenomena. The English astronomer Edmond Halley used Newton's theory of gravity to calculate the orbit of the comet that now bears his name. In 1705, taking into account the gravitational pull of the Sun, Jupiter, and Saturn, he predicted that the comet, which had last been seen in 1682, would return in 1758. Sure enough, it did, reappearing in December of that year. (Unfortunately, Halley didn't live to see it; he died in 1742.) The predicted return of Halley's Comet, Brown says, was "a spectacular triumph" of Newton's theory.

In the early 20th century, Newton's theory of gravity would itself be superseded—as physicists put it—by Einstein's, known as general relativity. (Where Newton envisioned gravity as a force acting between objects, Einstein described gravity as the result of a curving or warping of space itself.) General relativity was able to explain certain phenomena that Newton's theory couldn't account for, such as an anomaly in the orbit of Mercury, which slowly rotates—the technical term for this is "precession"—so that while each loop the planet takes around the Sun is an ellipse, over the years Mercury traces out a spiral path similar to one you may have made as a kid on a Spirograph.

Significantly, Einstein’s theory also made predictions that differed from Newton's. One was the idea that gravity can bend starlight, which was spectacularly confirmed during a solar eclipse in 1919 (and made Einstein an overnight celebrity). Nearly 100 years later, in 2016, the discovery of gravitational waves confirmed yet another prediction. In the century between, at least eight predictions of Einstein's theory have been confirmed.


And yet physicists believe that Einstein's theory will one day give way to a new, more complete theory. It already seems to conflict with quantum mechanics, the theory that provides our best description of the subatomic world. The way the two theories describe the world is very different. General relativity describes the universe as containing particles with definite positions and speeds, moving about in response to gravitational fields that permeate all of space. Quantum mechanics, in contrast, yields only the probability that each particle will be found in some particular location at some particular time.

What would a "unified theory of physics"—one that combines quantum mechanics and Einstein's theory of gravity—look like? Presumably it would combine the explanatory power of both theories, allowing scientists to make sense of both the very large and the very small in the universe.


Let's shift from physics to biology for a moment. It is precisely because of its vast explanatory power that biologists hold Darwin's theory of evolution—which allows scientists to make sense of data from genetics, physiology, biochemistry, paleontology, biogeography, and many other fields—in such high esteem. As the biologist Theodosius Dobzhansky put it in an influential essay in 1973, "Nothing in biology makes sense except in the light of evolution."

Interestingly, the word evolution can be used to refer to both a theory and a fact—something Darwin himself realized. "Darwin, when he was talking about evolution, distinguished between the fact of evolution and the theory of evolution," Brown says. "The fact of evolution was that species had, in fact, evolved [i.e. changed over time]—and he had all sorts of evidence for this. The theory of evolution is an attempt to explain this evolutionary process." The explanation that Darwin eventually came up with was the idea of natural selection—roughly, the idea that an organism's offspring will vary, and that those offspring with more favorable traits will be more likely to survive, thus passing those traits on to the next generation.


Many theories are rock-solid: Scientists have just as much confidence in the theories of relativity, quantum mechanics, evolution, plate tectonics, and thermodynamics as they do in the statement that the Earth revolves around the Sun.

Other theories, closer to the cutting-edge of current research, are more tentative, like string theory (the idea that everything in the universe is made up of tiny, vibrating strings or loops of pure energy) or the various multiverse theories (the idea that our entire universe is just one of many). String theory and multiverse theories remain controversial because of the lack of direct experimental evidence for them, and some critics claim that multiverse theories aren't even testable in principle. They argue that there's no conceivable experiment that one could perform that would reveal the existence of these other universes.

Sometimes more than one theory is put forward to explain observations of natural phenomena; these theories might be said to "compete," with scientists judging which one provides the best explanation for the observations.

"That's how it should ideally work," Brown says. "You put forward your theory, I put forward my theory; we accumulate a lot of evidence. Eventually, one of our theories might prove to obviously be better than the other, over some period of time. At that point, the losing theory sort of falls away. And the winning theory will probably fight battles in the future."

More Evidence to Suggest That Your Insomnia Is Genetic

In 2016, a study on mice found that certain sleep traits, like insomnia, have genetic underpinnings. Several studies of human twins have also suggested that insomnia can be an inherited trait. Now, new research published in Molecular Psychiatry not only reinforces that finding, but also suggests that there may be a genetic link between insomnia and some other psychiatric and physical disorders, like depression and type 2 diabetes, as Psych Central alerts us.

Insomnia is particularly prevalent in populations of military veterans. For this study, researchers at VA San Diego Healthcare System analyzed questionnaire responses and blood samples from almost 33,000 new soldiers at the beginning of basic training, along with pre- and post-deployment surveys from nearly 8,000 soldiers deployed to Afghanistan starting in early 2012. They conducted genome-wide association tests to determine the heritability of insomnia and links between insomnia and other disorders. The results were adjusted for the presence of major depression (since insomnia is a common symptom of depression).

The genotype data showed that insomnia disorder was highly heritable and pinpointed potential genes that may be involved. The study indicated that there's a strong genetic correlation between insomnia and major depression. (The two were distinct, though, meaning that the insomnia couldn't be totally explained by the depression.) They also found a significant genetic correlation between insomnia and type 2 diabetes.

Because the study relied on data from the U.S. military, it doesn't have the most far-reaching sample—the participants were largely male, and the group wasn't as racially diverse as it could have been. (While the researchers analyzed responses from recruits of European, African, and Latino ancestry, there weren't enough Asian-American participants to analyze as a group.) The responses were also self-reported, which isn't always the most accurate data-collection method.

The genes indicated by this study could be used to develop new treatments for insomnia, but future studies will likely need to explore these questions within broader populations.

[h/t Psych Central]
