
How Our Eyes See Everything Upside Down

Laurinemily via Wikimedia Commons // CC BY-SA 2.5

by Katie Oliver

Beliefs about the way visual perception works have undergone some fairly radical changes throughout history. In ancient Greece, for example, it was thought that beams of light emanated from our eyes and illuminated the objects we looked at. This "emission theory" of vision [PDF: https://web.archive.org/web/20111008073354/http://conference.nie.edu.sg/paper/Converted%20Pdf/ab00368.pdf] was endorsed by most of the great thinkers of the age, including Plato, Euclid, and Ptolemy. It gained so much credence that it dominated Western thought for the next thousand years. Of course, now we know better. (Or at least some of us do: There’s evidence that a worryingly large proportion of American college students think we do actually shoot beams of light from our eyes, possibly as a side effect of reading too many Superman comics.)

The model of vision as we now know it first appeared in the 16th century, when Felix Platter proposed that the eye functions as an optic and the retina as a receptor. Light from an external source enters through the cornea and is refracted by the lens, forming an image on the retina—the light-sensitive membrane located in the back of the eye. The retina detects photons of light and responds by firing neural impulses along the optic nerve to the brain.

There’s an unlikely-sounding quirk to this setup, which is that, mechanically speaking, our eyes see everything upside down. That’s because refraction through a convex lens flips the image, so by the time it reaches your retina, it’s completely inverted. René Descartes demonstrated this in the 17th century by setting a screen in place of the retina in a bull’s excised eyeball. The image that appeared on the screen was a smaller, inverted copy of the scene in front of the bull’s eye.
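For a sense of the optics involved (a back-of-the-envelope illustration, not something from the original article), the thin-lens relation ties together the object distance, the image distance, and the focal length, and the sign of the magnification is what encodes the flip:

\[
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}, \qquad m = -\frac{d_i}{d_o}
\]

For any object farther away than the focal length, both distances are positive, so the magnification m is negative: the image is real, inverted, and (taking the roughly 17 mm focal length often quoted for the eye's optics as an approximate figure) far smaller than the scene it depicts, just like the picture on Descartes's screen.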

So why doesn’t the world look upside down to us? The answer lies in the power of the brain to adapt the sensory information it receives and make it fit with what it already knows. Essentially, your brain takes the raw, inverted data and turns it into a coherent, right-side-up image. If you’re in any doubt as to the truth of this, try gently pressing the bottom right side of your eyeball through your bottom eyelid—you should see a black spot appear at the top left side of your vision, proving the image has been flipped.

In the 1890s, psychologist George Stratton carried out a series of experiments [PDF] to test the mind’s ability to normalize sensory data. In one experiment he wore a set of reversing glasses that flipped his vision upside down for eight days. For the first four days of the experiment, his vision remained inverted, but by day five, it had spontaneously turned right side up, as his perception had adapted to the new information.

That’s not the only clever trick your brain has up its sleeve. The image that hits each of your retinas is a flat, 2D projection. Your brain has to overlay these two images to form one seamless 3D image in your mind—giving you depth perception that’s accurate enough to catch a ball, shoot baskets, or hit a distant target.
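As a rough sketch of the geometry at work (my own illustration with made-up numbers, not a model from the article), the horizontal offset between where a point lands in each eye's image, called the disparity, is enough to estimate how far away it is:

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo triangulation, Z = f * B / d: a larger disparity
    means a nearer object. All values here are illustrative, not
    physiological measurements."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: ~6.5 cm between the viewpoints, a focal length of
# 1000 "pixels," and a 20-pixel disparity put the object about 3.25 m away.
print(depth_from_disparity(focal_px=1000, baseline_m=0.065, disparity_px=20))

The brain doesn't literally run this formula, but the cue is the same: the smaller the difference between the two retinal images, the farther away the object must be.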

Your brain is also tasked with filling in the blanks where visual data is missing. The optic disc, or blind spot, is an area on the retina where the blood vessels and optic nerve are attached, so it has no visual receptor cells. But unless you use tricks to locate this blank hole in your vision, you’d never even notice it was there, simply because your brain is so good at joining the dots.

Another example is color perception; most of the 6 to 7 million cone photoreceptor cells in the eye that detect color are crowded within the fovea centralis at the center of the retina. At the periphery of your vision, you pretty much only see in black and white. Yet we perceive a continuous, full-color image from edge to edge because the brain is able to extrapolate from the information it already has.

This power of the mind to piece together incomplete data using assumptions based on previous experience has been labeled "unconscious inference" by scientists. As it draws on our past experiences, it’s not a skill we are born with; we have to learn it. It’s believed that for the first few days of life babies see the world upside down, as their brains just haven’t learned to flip the raw visual data yet. So don’t be alarmed if a newborn looks confused when you smile—they’re probably just trying to work out which way up your head is.

What Is Death?

The only thing you can be certain about in life is death. Or is it? Merriam-Webster defines death as "a permanent cessation of all vital functions." The Oxford English Dictionary refines that to "the permanent ending of vital processes in a cell or tissue." But determining when someone is dead is surprisingly complicated—the medical definition has changed over the centuries and, in many ways, is still evolving.

DEATH, DEFINED

For most of human history, doctors relied on basic observations to determine whether or not a person had died. (This may be why so many feared being buried alive and went to great lengths to ensure they wouldn't be.) According to Marion Leary, the director of innovation research for the Center for Resuscitation Science at the University of Pennsylvania, "If a person wasn't visibly breathing, if they were cold and bluish in color, for example, they would be considered dead."

As time went on, the markers for death changed. Before the mid-1700s, for example, people were declared dead when their hearts stopped beating—a conclusion drawn from watching traumatic deaths such as decapitations, where the heart seemed to be the last organ to give up. But as our understanding of the human body grew, other organs, like the lungs and brain, were considered metrics of life—or death.

Today, that remains true to some degree; you can still be declared dead when your heart and lungs cease activity. And yet you can also be declared dead if both organs are still working, but your brain is not.

In most countries, being brain dead—meaning the whole brain has stopped working and cannot return to functionality—is the standard for calling death, says neuroscientist James Bernat, of the Geisel School of Medicine at Dartmouth College in New Hampshire. "A doctor has to show that the loss of brain function is irreversible," he tells Mental Floss. In some cases, a person can appear to be brain dead if they have overdosed on certain drugs or have suffered from hypothermia, for example, but the lack of activity is only temporary—these people aren't truly brain dead.

In the U.S., all states follow some form of the Uniform Determination of Death Act, which in 1981 defined a dead person as "an individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem."

But that's not the end of the story. In two states, New York and New Jersey, families can reject the concept of brain death if it goes against their religious beliefs. This makes it possible for someone to be considered alive in some states and dead in others.

A BLURRED LINE

In the past, if one of a person's three vital systems—circulation, respiration, and brain function—failed, the rest would usually stop within minutes of each other, and there was no coming back from that. But today, thanks to technological advances and medical breakthroughs, that's no longer necessarily the case. CPR can be performed to restart a heartbeat; a person who has suffered cardiac arrest can often be resuscitated within a 20- to 30-minute window (in rare cases, people have been revived after several hours). And since the 1950s, machines have been used to take on the role of many of the body's vital functions. People who stop breathing naturally can be hooked up to ventilators to move air in and out of their lungs, for example.

While remarkable, this life-extending technology has blurred the line between life and death. "A person can now have certain characteristics of being alive and others of being dead," Bernat says.

People with severe, irreversible brain damage fall into this mixed category. Many lie in intensive care units where ventilators breathe for them, but because they have minimal reflexes or movements, they're considered alive, especially by their families. Medical professionals, however, may disagree, leading to painful and complex debates about whether someone is alive.

Take the case of Jahi McMath, whose tonsil surgery in 2013, at age 13, went terribly wrong, leaving her brain dead—or so doctors thought. Her family refused to believe she was dead and moved her from Oakland, California, to New Jersey, where she was provided with feeding tubes in addition to her ventilator. After several months, her mother began recording videos that she said were proof that Jahi could move different parts of her body when asked to. Additional brain scans revealed that although some parts of her brain, like her brain stem, were largely destroyed, the structure of large parts of her cerebrum, which is responsible for consciousness, language, and voluntary movements, was intact. Her heart rate also changed when her mother spoke, leading a neurologist to declare last year, after viewing many of her mother's videos, that she is technically alive—nearly four years after she was pronounced brain dead. By her mother's reckoning, Jahi turned 17 on October 24, 2017.

Organ donation adds another layer of complications. Since an organ needs to be transplanted as quickly as possible to avoid damage, doctors want to declare death as soon as they can after a person has been disconnected from a machine. The protocol is usually to wait for five minutes after a donor's heart and breathing have stopped. However, some believe that's not long enough, since the person could still be resuscitated at that point.

Bernat—whose research interests include brain death and the definition of death, consciousness disorders including coma and vegetative states, and ethical and philosophical issues in neurology—disagrees. "I would argue that breathing and circulation has permanently ceased even if it hasn't irreversibly ceased," he says. "It won't restart by itself."

THE FUTURE OF BRINGING PEOPLE BACK TO LIFE

As resuscitation technology improves, scientists may find new ways to reverse death. One promising approach is therapeutic hypothermia. Sometimes used on heart attack patients who have been revived, the therapy uses cooling devices to lower body temperature, usually for about 24 hours. "It improves a patient's chance of recovering from cardiac arrest and the brain injury [from a lack of oxygen] that can result from it," says Leary, who specializes in research and education relating to cardiac arrest, CPR quality, and therapeutic hypothermia.

One more out-there possibility—which had its heyday in the early 2000s but still has its proponents today—is cryonic freezing, in which dead bodies (and in some cases, just people's heads) are preserved in the hope that they can be brought back once technology advances. Just minutes after death, a cryonaut's body is chilled; a chest compression device called a thumper keeps blood flowing through the body, which is then shot up with anticoagulants to prevent blood clots from forming; and finally, the blood is flushed out and replaced with a kind of antifreeze to halt the cell damage that usually occurs from freezing.

The idea is highly controversial. "It makes a good story for a movie, but it seems crazy to me," Bernat says. "I don't think it's the answer." But even if cryonics is out, Bernat does believe that certain types of brain damage now thought to be permanent could one day be subject to medical intervention. "There is currently a huge effort in many medical centers to study brain resuscitation," he says.

Genetics provides another potential frontier. Scientists recently found that some genes in mice and fish live on after they die. And even more surprisingly, other genes regulating embryonic development, which switch off when an animal is born, turn on again after death. We don't yet know if the same thing happens in humans.

The 98.6℉ Myth: Why Everything You Think You Know About Body Temperature Is a Lie

When you were a kid, you probably knew that to score a magical sick day home from school, you needed to have a fever. When the thermometer came out of your mouth, it had to read higher than 98.6℉—the long-accepted "normal" human body temperature. (If you wanted to really seal the deal, you may have hoped to hit 100℉.) Since then, you may have used a temperature above 98.6℉ as a metric to work from home (or call out sick entirely).

But here's the thing: The average body temperature isn't actually 98.6℉—a fact that we've known for more than 25 years. The myth originated in the 19th century with a single doctor, and despite evidence to the contrary, it's persisted ever since.

THE GIANT—AND FAULTY—ARMPIT THERMOMETER

In 1851, Carl Wunderlich, the director of the hospital at Leipzig University, began going from room to room with a comically large thermometer in tow. He wanted to understand how body temperature is affected by different diseases, so in each room, he would hold the foot-long device in patients' armpits for a full 20 minutes, waiting for a temperature to register. Once it did, he'd note the temperature on the patient's chart (Wunderlich is thought to be the first physician to do so). He and his staff did this for years, repeatedly taking the temperatures of some 25,000 patients and logging them on their charts, until he had millions of readings. In 1868, he finally published this data in Das Verhalten der Eigenwärme in Krankheiten (On the Temperature in Diseases: A Manual of Medical Thermometry). He concluded that the average human body temperature was 98.6℉, underscoring the idea that fever is a symptom of illness, not a cause.

No one questioned Wunderlich's methods, or his average, for about 140 years. Then, in the early 1990s, internist Philip Mackowiak—a professor of medicine at the University of Maryland, a medical historian, and, apparently, a clinical thermometer junkie—saw one of the physician's instruments at the Mütter Museum in Philadelphia. He told the Freakonomics podcast that he'd always had doubts about the 98.6℉ standard. "I am by nature a skeptic," he said. "And it occurred to me very early in my career that this idea that 98.6 was normal, and then if you didn't have a temperature of 98.6, you were somehow abnormal, just didn't sit right."

Getting his hands on Wunderlich's thermometer—which the museum let him borrow—only deepened his doubts. The huge thermometer was unwieldy and non-registering, meaning, Mackowiak explained, "that it has to be read while it's in place." Not only that, but Wunderlich had used the device to measure temperatures in the armpit, which is less reliable than temperatures taken in the mouth or rectum. The instrument itself also wasn't terribly precise: It measured up to 2 degrees Centigrade higher than both ancient and modern instruments.

In 1992, Mackowiak decided to test Wunderlich's average. Using normal-sized oral thermometers and a group of volunteers, he determined that the average human body temperature actually hovers around 98.2℉. Mackowiak found that body temperature tends to vary over the course of the day, with its lowest point around 6 a.m. and its highest in the early evening. Body temperature can also fluctuate monthly (with the menstrual cycle) and over a lifetime (declining decade by decade with age), and may even vary with sex and race. He concluded that normal body temperature is so unique to each person that it's almost like a fingerprint and, given that wide variation, not actually a very reliable indicator of illness.

As a result of his study, Mackowiak proposed raising the threshold for fever to 98.9℉ for temperatures taken in the morning (and 99.9℉ at other times). While it's a relatively minor change in terms of actual degrees, this fever threshold is actually lower than the CDC's, which is a temperature of 100.4℉ or higher.
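To make that gap concrete, here's a minimal sketch (a hypothetical helper, not anything published by Mackowiak or the CDC) that checks a single reading against both sets of cutoffs cited above:

def is_fever(temp_f, morning=False):
    """Compare a reading in degrees Fahrenheit against two cutoffs:
    Mackowiak's proposed thresholds (98.9 in the morning, 99.9 otherwise)
    and the CDC's 100.4."""
    mackowiak_cutoff = 98.9 if morning else 99.9
    return {"mackowiak": temp_f >= mackowiak_cutoff, "cdc": temp_f >= 100.4}

# An evening reading of 100.0 counts as a fever by Mackowiak's standard
# but not by the CDC's -- exactly the gap discussed below.
print(is_fever(100.0))               # {'mackowiak': True, 'cdc': False}
print(is_fever(99.0, morning=True))  # {'mackowiak': True, 'cdc': False}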

There are potential real-life consequences in this gap, for everyone from students (who'd have to attend school with what would be considered a low-grade fever by Wunderlich's 98.6℉ standard) to employers and daycares (who use temperature to set attendance policies). What's more, anyone who is actually sick but ignores a low-grade fever—one that meets Mackowiak's threshold but still falls under the CDC's—could pose a risk to people with compromised immune systems trying to avoid unnecessary exposure to illness in public places.

THE BALANCING POINT

There's a reason the average trends near 98℉ instead of 92℉ or 106℉. As endotherms, mammals expend a great deal of energy maintaining body temperature when compared with cold-blooded creatures. To find and conserve a just-right body temperature, central nervous system sensors gather data (too warm? too cold? just right, Goldilocks?) and send that information to the pebble-sized hypothalamus near the base of the brain. There, the data is converted into action: releasing sweat and widening the blood vessels if too warm; raising metabolism, constricting the blood vessels, and inducing shivering if too cold.
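As a very loose analogy (my own toy example, not the article's), you can think of the hypothalamus as a thermostat running a negative-feedback loop: compare the sensed temperature with a set point and trigger whichever corrective responses close the gap. The set point and responses below simply restate the paragraph above; real thermoregulation is, of course, continuous and far more nuanced.

def thermoregulate(sensed_temp_f, set_point_f=98.2):
    """Toy negative-feedback controller: choose corrective responses based
    on how the sensed temperature compares with a set point (98.2 degrees F,
    the average Mackowiak reported)."""
    if sensed_temp_f > set_point_f:
        return ["release sweat", "widen blood vessels"]  # shed heat
    if sensed_temp_f < set_point_f:
        return ["raise metabolism", "constrict blood vessels", "shiver"]  # generate and retain heat
    return ["no action needed"]

print(thermoregulate(99.5))  # too warm -> cooling responses
print(thermoregulate(96.8))  # too cold -> warming responses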

According to a study by Aviv Bergman and Arturo Casadevall in the journal mBio, the precise balancing point for ideal body temperature is the sweet spot where the metabolic cost for all this thermoregulation balances with the evolutionary advantage of warding off fungal disease. (While warm-blooded animals are prone to bacterial or viral infections, they rarely experience fungal infections because most fungi can't withstand temperatures above 86℉. Cold-blooded animals, on the other hand, are prone to all three.) For Bergman and Casadevall, this benefit even explains what tipped Darwin's scales in favor of mammals, allowing them to edge out other vertebrates for dominance after the Cretaceous-Tertiary mass extinction wiped out the dinosaurs.

Of course, rules call for exceptions, and the one place where human body temperature demonstrates sustained elevation is outer space. Astronauts on prolonged missions clock significantly higher average body temperatures than they do when terrestrial—even up to 104℉. This so-called "space fever" is probably a product of some combination of radiation exposure, psychological stress, and immune response to weightlessness. Researchers believe this phenomenon could yield crucial information about thermoregulation—and may even offer insight into how humans might adapt to climate change.

WHY THE MYTH PERSISTS

It's been 26 years since Mackowiak's study, yet the newer data has not taken hold among medical professionals or the public. What gives?

Mackowiak tells Mental Floss that he finds it a bit mystifying that the myth persists, especially since many people, when pressed, know that the so-called "average" temperature varies. Part of the problem may be psychological: We cling to beliefs despite evidence to the contrary—a phenomenon called belief perseverance [PDF]. It's a significant force upholding a surprising number of medical myths. The idea that humans should drink eight glasses of water a day? Not science. Sugar causes hyperactive behavior? Nope. Reading in dim light harms eyesight? Not really.

Unlearning persistent myths—especially ones loaded with the weight of medical authority—is difficult. "Deep down, under it all," Mackowiak says, "people want simple answers for things."
