What Is a Calorie?


The word calorie carries a lot of weight. We know we're supposed to avoid too many of them, but things get more complicated after that. What, exactly, are calories, and how do we burn them?


A calorie is a unit of heat energy that fuels your body, making it possible to move, breathe, think, sleep—and even digest food to make more energy.

While there is some disagreement about who first coined the term calorie, we know the French chemist Antoine Lavoisier used it in experiments he conducted during the winter of 1782–1783. He used a device called a calorimeter to measure how much ice melted in a metal container due to the heat emitted by guinea pigs housed inside it. Over time, that measurement was refined by other scientists to mean the amount of energy needed to raise the temperature of a kilogram of water by 1°C—what's known as a kilocalorie.

The food calorie and a kilocalorie (kcal) are technically the same thing, but we use the term calorie rather than kilocalorie because of an American chemist named Wilbur Olin Atwater. In the late 1880s, Atwater traveled to Germany to study at physiologist Carl Voit's laboratory, where Voit was researching the nutritional value of food and animal feed. Inspired by that research, Atwater took measurements of different foods with a bomb calorimeter—a device that essentially measures the heat in food when burned—by having study participants eat, and then measuring and subtracting the amount of heat leaving their bodies through respiration and waste. He used a respiration calorimeter to measure their breath and a bomb calorimeter to burn their poop, and from that calculated just how many calories were left in their bodies to be used. When writing about his research, Atwater used the word calorie (kcal wouldn't be used in America until 1894, when it was published in a physiology textbook).

Based on his experiments, Atwater created a system for calculating the calories that human bodies can get from food. There are three types of food nutrients that deliver caloric energy—fats, proteins, and carbohydrates—and Atwater arrived at a caloric measurement for each: a gram of fat has nine calories, while a gram of protein and a gram of carbohydrates each have four. That system was modified by USDA scientists in 1973, but it's otherwise still the basis for how calories are calculated today.
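With those per-gram factors, estimating a food's calories is just a weighted sum. Here's a minimal sketch in Python (the function name and example food values are illustrative, not from any real label):

```python
# Atwater general factors: kilocalories per gram of each macronutrient
ATWATER = {"fat": 9, "protein": 4, "carbohydrate": 4}

def estimate_calories(fat_g, protein_g, carb_g):
    """Estimate food energy (kcal) from macronutrient grams."""
    return (ATWATER["fat"] * fat_g
            + ATWATER["protein"] * protein_g
            + ATWATER["carbohydrate"] * carb_g)

# A hypothetical snack with 5 g fat, 3 g protein, and 20 g carbohydrate:
print(estimate_calories(5, 3, 20))  # 45 + 12 + 80 = 137
```

As the later discussion of nuts and gut bacteria shows, this is an estimate of available energy, not a precise measurement.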


When you eat, enzymes in the mouth, stomach, and intestine break down nutrients by turning fats into fatty acids, complex sugars into simple sugars, and proteins into amino acids. Then, using oxygen, cells throughout your body break these components down into energy—a process known as metabolism.

Most of the calories we burn each and every day are used just to keep our body functioning, with about half going toward powering our major organs—the brain, liver, kidneys, and heart. We use the rest for physical activity and the process of converting food to energy. Anything not used by the body is then stored, first in the liver and eventually as fat cells.

Some foods, like honey (carbohydrates), are easily digestible, whereas nuts (a mix of carbohydrates, fat, and protein) can't be fully digested. There are also digestibility differences within the same type of food. For example, in plants, older leaves tend to be sturdier (and therefore harder to digest) and less caloric than younger ones. Most significantly, especially in terms of human evolution, whenever we cook or process food, the body can get more calories from it than from the same food eaten raw. All of this has an impact on the amount of calories we can actually use.

There's no food you can eat to speed up the rate at which you burn calories (changes from foods like spicy peppers are fleeting), but factors like age and rapid, drastic weight loss can slow it down.

Building more muscle can increase your metabolic rate (although how much is debatable), since muscle requires more energy to function than fat does. And while cardiovascular exercise might not permanently boost your metabolism, it does burn calories; just how much depends on your weight and how vigorously you exercise.

Examples of higher calorie burning exercises include cycling and running, but almost every activity burns something, so you could potentially burn more calories throughout the day by consistently doing low-energy activities like gardening or pacing during a conference call than you would during 30 minutes of fast cycling.


We still use the Atwater system for calculating food calories, but it's far from perfect. For one thing, a USDA study found that people absorbed fewer calories from nuts than had been estimated under Atwater's system—a serving of almonds, for example, provided not 170 calories, but 129. There's some evidence that people tend to digest food at all sorts of different rates too, depending on the individual makeup of our gut bacteria, meaning that the absorption of calories may differ from person to person.

Scientists now believe the numbers on food labels are more of an estimate than a precise measurement. While companies are required to provide caloric information on food labels, the FDA doesn't specify exactly how those calories should be calculated. Some companies, like McDonald's, send their food to a lab for measurement, while others estimate the total by adding up the calorie count for each food component from the USDA's massive food composition database. As scientists continue to refine how we calculate calories, we'll come to have a better idea of the energy we can actually get from these different foods.

Essential Science
What Is a Scientific Theory?

In casual conversation, people often use the word theory to mean "hunch" or "guess": If you see the same man riding the northbound bus every morning, you might theorize that he has a job in the north end of the city; if you forget to put the bread in the breadbox and discover chunks have been taken out of it the next morning, you might theorize that you have mice in your kitchen.

In science, a theory is a stronger assertion. Typically, it's a claim about the relationship between various facts; a way of providing a concise explanation for what's been observed. The American Museum of Natural History puts it this way: "A theory is a well-substantiated explanation of an aspect of the natural world that can incorporate laws, hypotheses and facts."

For example, Newton's theory of gravity—also known as his law of universal gravitation—says that every object, anywhere in the universe, responds to the force of gravity in the same way. Observational data from the Moon's motion around the Earth, the motion of Jupiter's moons around Jupiter, and the downward fall of a dropped hammer are all consistent with Newton's theory. So Newton's theory provides a concise way of summarizing what we know about the motion of these objects—indeed, of any object responding to the force of gravity.

A scientific theory "organizes experience," James Robert Brown, a philosopher of science at the University of Toronto, tells Mental Floss. "It puts it into some kind of systematic form."


A theory's ability to account for already known facts lays a solid foundation for its acceptance. Let's take a closer look at Newton's theory of gravity as an example.

In the late 17th century, the planets were known to move in elliptical orbits around the Sun, but no one had a clear idea of why the orbits had to be shaped like ellipses. Similarly, the movement of falling objects had been well understood since the work of Galileo a half-century earlier; the Italian scientist had worked out a mathematical formula that describes how the speed of a falling object increases over time. Newton's great breakthrough was to tie all of this together. According to legend, his moment of insight came as he gazed upon a falling apple in his native Lincolnshire.

In Newton's theory, every object is attracted to every other object with a force that's proportional to the masses of the objects, but inversely proportional to the square of the distance between them. This is known as an "inverse square" law. For example, if the distance between the Sun and the Earth were doubled, the gravitational attraction between the Earth and the Sun would be cut to one-quarter of its current strength. Newton, using his theory and a bit of calculus, was able to show that the gravitational force between the Sun and the planets as they move through space meant that orbits had to be elliptical.
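The inverse-square relationship described above can be checked with a few lines of Python (the masses here are arbitrary placeholders; only the ratio of forces matters):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # gravitational constant, in N·m²/kg²

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between masses m1 and m2 (kg), r meters apart."""
    return G * m1 * m2 / r**2

# Doubling the distance cuts the force to one-quarter of its original strength:
f_near = gravitational_force(1.0, 1.0, 1.0)
f_far = gravitational_force(1.0, 1.0, 2.0)
print(f_far / f_near)  # 0.25
```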

Newton's theory is powerful because it explains so much: the falling apple, the motion of the Moon around the Earth, and the motion of all of the planets—and even comets—around the Sun. All of it now made sense.


A theory gains even more support if it predicts new, observable phenomena. The English astronomer Edmond Halley used Newton's theory of gravity to calculate the orbit of the comet that now bears his name. Taking into account the gravitational pull of the Sun, Jupiter, and Saturn, in 1705, he predicted that the comet, which had last been seen in 1682, would return in 1758. Sure enough, it did, reappearing in December of that year. (Unfortunately, Halley didn't live to see it; he died in 1742.) The predicted return of Halley's Comet, Brown says, was "a spectacular triumph" of Newton's theory.

In the early 20th century, Newton's theory of gravity would itself be superseded—as physicists put it—by Einstein's, known as general relativity. (Where Newton envisioned gravity as a force acting between objects, Einstein described gravity as the result of a curving or warping of space itself.) General relativity was able to explain certain phenomena that Newton's theory couldn't account for, such as an anomaly in the orbit of Mercury. The orbit slowly rotates—the technical term for this is "precession"—so that while each loop the planet takes around the Sun is an ellipse, over the years Mercury traces out a spiral path, similar to one you may have made as a kid with a Spirograph.

Significantly, Einstein's theory also made predictions that differed from Newton's. One was the idea that gravity can bend starlight, which was spectacularly confirmed during a solar eclipse in 1919 (and made Einstein an overnight celebrity). Nearly 100 years later, in 2016, the discovery of gravitational waves confirmed yet another prediction. In the century between, at least eight predictions of Einstein's theory have been confirmed.


And yet physicists believe that Einstein's theory will one day give way to a new, more complete theory. It already seems to conflict with quantum mechanics, the theory that provides our best description of the subatomic world. The way the two theories describe the world is very different. General relativity describes the universe as containing particles with definite positions and speeds, moving about in response to gravitational fields that permeate all of space. Quantum mechanics, in contrast, yields only the probability that each particle will be found in some particular location at some particular time.

What would a "unified theory of physics"—one that combines quantum mechanics and Einstein's theory of gravity—look like? Presumably it would combine the explanatory power of both theories, allowing scientists to make sense of both the very large and the very small in the universe.


Let's shift from physics to biology for a moment. It is precisely because of its vast explanatory power that biologists hold Darwin's theory of evolution—which allows scientists to make sense of data from genetics, physiology, biochemistry, paleontology, biogeography, and many other fields—in such high esteem. As the biologist Theodosius Dobzhansky put it in an influential essay in 1973, "Nothing in biology makes sense except in the light of evolution."

Interestingly, the word evolution can be used to refer to both a theory and a fact—something Darwin himself realized. "Darwin, when he was talking about evolution, distinguished between the fact of evolution and the theory of evolution," Brown says. "The fact of evolution was that species had, in fact, evolved [i.e. changed over time]—and he had all sorts of evidence for this. The theory of evolution is an attempt to explain this evolutionary process." The explanation that Darwin eventually came up with was the idea of natural selection—roughly, the idea that an organism's offspring will vary, and that those offspring with more favorable traits will be more likely to survive, thus passing those traits on to the next generation.


Many theories are rock-solid: Scientists have just as much confidence in the theories of relativity, quantum mechanics, evolution, plate tectonics, and thermodynamics as they do in the statement that the Earth revolves around the Sun.

Other theories, closer to the cutting-edge of current research, are more tentative, like string theory (the idea that everything in the universe is made up of tiny, vibrating strings or loops of pure energy) or the various multiverse theories (the idea that our entire universe is just one of many). String theory and multiverse theories remain controversial because of the lack of direct experimental evidence for them, and some critics claim that multiverse theories aren't even testable in principle. They argue that there's no conceivable experiment that one could perform that would reveal the existence of these other universes.

Sometimes more than one theory is put forward to explain observations of natural phenomena; these theories might be said to "compete," with scientists judging which one provides the best explanation for the observations.

"That's how it should ideally work," Brown says. "You put forward your theory, I put forward my theory; we accumulate a lot of evidence. Eventually, one of our theories might prove to obviously be better than the other, over some period of time. At that point, the losing theory sort of falls away. And the winning theory will probably fight battles in the future."

Essential Science
How Are Vaccines Made?
Quality checks on the Salk polio vaccine at Glaxo's virus research laboratory in Buckinghamshire, UK, in January 1956. Photo by Fox Photos/Getty Images

Vaccines have long been hailed as one of our greatest public health achievements. They can be made to protect us from infections with either viral or bacterial microbes. Measles and smallpox, for example, are viruses; Streptococcus pneumoniae is a bacterium that causes a range of diseases, including pneumonia, ear and sinus infections, and meningitis. Hundreds of millions of illnesses and deaths have been prevented due to vaccines that eradicated smallpox and significantly reduced polio and measles infections. However, some misunderstanding remains regarding how vaccines are made, and why some scary-sounding ingredients are included in the manufacturing process.

The production of vaccines has greatly evolved since the early days, when vaccination was potentially dangerous. Inoculating an individual with ground-up smallpox scabs—a practice called "variolation"—usually led to a mild infection that protected them from acquiring the disease the "regular" way (via the air). But there was always a chance the infection could be severe. When Edward Jenner introduced the first true vaccination with cowpox, protection from smallpox became safer, but there were still issues: The cowpox material could be contaminated with other germs, and sometimes was transmitted from one vaccinated person to another, leading to the inadvertent spread of blood-borne pathogens. We've come far in the last 200 years.

There are different kinds of vaccines, and each requires different processes to move from the laboratory to your physician's office. The key to all of them is production of one or more antigens—the portion of the microbe that triggers a host immune response.


There are several methods to produce antigens. One common technique is to grow a virus in what's called a cell culture. Typically grown in large vats called bioreactors, living cells are inoculated with a virus and placed in a liquid growth medium that contains nutrients—proteins, amino acids, carbohydrates, essential minerals—that help the virus grow in the cells, producing thousands of copies of itself in each infected cell. At this stage the virus is also getting its own dose of protective medicine: antibiotics like neomycin or polymyxin B, which prevent bacterial and fungal contamination that could kill the cells serving as hosts for the virus.

Once the viruses complete their life cycle in the host cells, they are purified by separating them from the host cells and growth media, which are discarded. This is often done using several types of filters; the viruses are small enough to pass through holes in the filter that trap the larger host cells and cell debris.

This is how "live attenuated vaccines" are created. These vaccines contain viruses that have been weakened so that they are no longer harmful to humans. Some are grown for many generations in nonhuman cells, such as chicken cells, until they accumulate mutations that keep them from causing disease in people. Others, like the influenza nasal mist, were grown at low temperatures until they lost the ability to replicate in the warmer environment of the lungs. You were probably given many of these vaccines as a child: measles, mumps, rubella ("German measles"), and chickenpox.

Live attenuated vaccines replicate briefly in the body, triggering a strong—and long-lasting—response from your immune system. Because your immune system kicks into high gear at what it perceives to be a major threat, you need fewer doses of the vaccine for protection against these diseases. And because these weakened viruses replicate only at low levels, it is extremely unlikely that a vaccinated person will develop the actual disease or spread it to other contacts. One exception was the live polio vaccine, which could spread to others and, extremely rarely, caused polio (approximately one case per 3 million doses). For this reason, the live polio vaccine was discontinued in the United States in 2000.

Scientists use the same growth technique for what are known as "killed" or "inactivated" vaccines, but they add an extra step: killing the virus. This is typically done via heat treatment or a chemical such as formaldehyde, which modifies the virus's proteins and nucleic acids and renders it unable to replicate. Inactivated vaccines include those for hepatitis A, injected polio, and the seasonal flu shot.

A dead virus can't replicate in your body, obviously. This means that the immune response to inactivated vaccines isn't as robust as it is with live attenuated vaccines; replication by the live viruses alerts many different types of your immune cells of a potential invader, while killed vaccines primarily alert only one part of your immune system (your B cells, which produce antibodies). That's why you need more doses to achieve and maintain immunity.

While live attenuated vaccines were the primary way to make vaccines until the 1960s, concerns about potential safety issues, and the difficulty of making them, mean that few are attempting to develop new live attenuated vaccines today.


Other vaccines aren't made of whole organisms at all, but rather bits and pieces of a microbe. The combination vaccine that protects against diphtheria, pertussis, and tetanus—all at once—is one example. This vaccine is called the DTaP for children, and Tdap for adults. It contains toxins (the proteins that cause disease) from diphtheria, pertussis, and tetanus bacteria that have been inactivated by chemicals. (The toxins are called "toxoids" once inactivated.) This protects the host—a.k.a. you, potentially—from developing clinical diphtheria and tetanus disease, even if you are exposed to the microorganisms. (Some viruses have toxins—Ebola appears to, for example—but they're not the key antigens, so they're not used for our current vaccines.)

As they do when developing live attenuated or inactivated vaccines, scientists who create these bacterial vaccines need some target bacteria to culture. But because the bacteria don't need a host cell to grow, they can be produced in simple nutrient broths by vaccine manufacturers. The toxins are then separated from the rest of the bacteria and growth media and inactivated for use as vaccines.

Similarly, some vaccines contain just a few antigens from a bacterial species. Vaccines for Streptococcus pneumoniae, Haemophilus influenzae type B, and Neisseria meningitidis all use sugars that are found on the outer part of the bacteria as antigens. These sugars are purified from the bacteria and then bound to another protein to enhance the immune response. The protein helps to recruit T cells in addition to B cells and create a more robust reaction.

Finally, we can also use genetic engineering to produce vaccines. We do this for Hepatitis B, a virus that can cause severe liver disease and liver cancer. The vaccine for it consists of a single antigen: the hepatitis B surface antigen, which is a protein on the outside of the virus. The gene that makes this antigen is inserted into yeast cells; these cells can then be grown in a medium similar to bacteria and without the need for cell culture. The hepatitis B surface antigen is then separated from the yeast and serves as the primary vaccine component.


Once you have the live or killed viruses, or purified antigens, sometimes chemicals need to be added to protect the vaccine or to make it work better. Adjuvants, such as aluminum salts, are a common additive; they help enhance the immune response to some antigens by keeping the antigen in contact with the cells of the immune system for a longer period of time. Vaccines for DTaP/Tdap, meningitis, pneumococcus, and hepatitis B all use aluminum salts as an adjuvant.

Other chemicals may be added as stabilizers, to help keep the vaccine working effectively even in extreme conditions (such as hot temperatures). Stabilizers can include sugars or monosodium glutamate (MSG). Preservatives can be added to prevent microbial growth in the finished product.

For many years, the most common preservative was a compound called thimerosal, which is 50 percent ethylmercury by weight. Ethylmercury doesn't stick around; your body quickly eliminates it via the gut and feces. (This is different from methylmercury, which accumulates in fish and can, at high doses, cause long-lasting damage in humans.) In 2001, thimerosal was removed from the vaccines given in childhood due to consumer concerns, but many studies have demonstrated its safety.

Finally, the vaccine is divided into vials for shipping to physicians, hospitals, public health departments, and some pharmacies. These can be single-dose or multi-dose vials, which can be used for multiple patients as long as they're prepared and stored away from patient treatment areas. Preservatives are important for multi-dose vials: bacteria and fungi are very opportunistic, and multiple uses increase the potential for contamination of the vaccine. This is why thimerosal is still used in some multi-dose influenza vaccines.

Though some of the vaccine ingredients sound worrisome, most of these chemicals are removed during multiple purification steps, and those that remain (such as adjuvants) are necessary for the vaccine's effectiveness, are present in very low levels, and have an excellent track record of safety.

