15 of the Longest-Running Scientific Studies in History

Most experiments are designed to be done quickly. Get data, analyze data, publish data, move on. But the universe doesn’t work on nice brief timescales. For some things you need time. Lots of time.


In 1842, John Bennet Lawes patented his method for making superphosphate (a common, synthetic plant nutrient) and opened what is believed to be the first artificial fertilizer factory in the world. The following year, Lawes and chemist Joseph Henry Gilbert began a series of experiments comparing the effects of organic and inorganic fertilizers, which are now the oldest agricultural studies on Earth. For over 150 years, parts of a field of winter wheat have received either manure, artificial fertilizer, or no fertilizer at all. The results are about what you'd expect: the artificially and naturally fertilized plots produce around six to seven tons of grain per hectare, while the unfertilized plot produces around one ton. But there's more: researchers can also use these plots to test everything from herbicides to soil microbes, and even to measure oxygen isotope ratios for better reconstructions of paleoclimates.


Lawes and Gilbert started several more experiments at around the same time. In one of these experiments with hay, Lawes observed that each plot was so distinct that it looked like he was experimenting with different seed mixes as opposed to different fertilizers. The nitrogen fertilizers being applied benefited the grasses over any other plant species, but if phosphorus and potassium were the main components of the fertilizer, the peas took over the plot. Since then, this field has been one of the most important biodiversity experiments on Earth.


Yet another of Lawes' experiments: In 1882 he abandoned part of the Broadbalk field to see what would happen. What happened was that within a few years the wheat plants were completely outcompeted by weeds, and then trees moved in. Since 1900, half of the area has been allowed to continue as normal while the other half has had its trees removed every year, in one of the longest-running studies of how plants recolonize farmland.


In 1879, William Beal of Michigan State University buried 20 bottles of seeds on campus. The purpose of this experiment was to see how long the seeds would remain viable buried underground. Originally, one bottle was dug up every five years, but that soon changed to once every 10 years, and is now once every 20 years. In the last recovery in 2000, 26 plants were germinated, meaning slightly more than half survived over 100 years in the ground. The next will be dug up in 2020, and (assuming no more extensions) the experiment will end in 2100.

Even if it is extended for a while, there will probably still be viable seeds. In 2008, scientists were able to successfully germinate a roughly 2000-year-old date palm seed, and four years later, Russian scientists were able to grow a plant from a 32,000-year-old seed that had been buried by an ancient squirrel.


If you hit a mass of pitch (the leftovers from distilling crude oil) with a hammer, it shatters like a solid. In 1927, Thomas Parnell of the University of Queensland in Australia decided to demonstrate to his students that it was actually a liquid. They just needed to watch it for a while. Some pitch was heated and poured into a glass funnel with a sealed stem. Three years later, the stem of the funnel was cut and the pitch began to flow. Very slowly. Eight years later, the first drop fell. Soon the experiment was relegated to a cupboard to collect dust, until 1961, when John Mainstone learned of its existence and restored the test to its rightful glory. Sadly, he never saw a pitch drop: in 1979 it dropped on a weekend, in 1988 he was away getting a drink, in 2000 the webcam failed, and he died before the most recent drop in April 2014.

As it turns out, the Parnell-initiated pitch drop experiment isn't even the oldest. After it gathered international headlines, other pitch drop experiments came to light. Aberystwyth University in Wales found one started 13 years before the Australian experiment that has yet to produce a single drop (and isn't expected to for another 1300 years), while the Royal Scottish Museum in Edinburgh found a pitch drop experiment dating from 1902. All of them demonstrate the same thing, though: given enough time, a substance you can shatter with a hammer may still be a liquid.


Around 1840, Oxford physics professor Robert Walker bought a curious little contraption from a pair of London instrument makers: two dry piles (a type of battery) connected to bells, with a metal sphere hanging between them. When the ball hits one of the bells, it picks up that bell's charge and is repelled toward the oppositely charged bell, where the process repeats itself. Because each swing uses only a minuscule amount of energy, the operation has occurred ten billion times and counting. It's entirely possible that the ball or bells will wear out before the batteries fully discharge.
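For a sense of just how little each ring demands, here is a minimal back-of-the-envelope sketch in Python. The cutoff year and the assumption of continuous operation are mine, not from the article, and "ten billion" is taken literally:

```python
# Rough estimate of the Oxford Electric Bell's average ring rate.
# Assumptions (illustrative only): continuous operation from 1840 to
# roughly 2015, and exactly 10 billion rings over that span.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

rings = 10_000_000_000           # "ten billion times and counting"
years_running = 2015 - 1840      # assumed span of operation

rate_hz = rings / (years_running * SECONDS_PER_YEAR)
print(f"Average rate: {rate_hz:.1f} rings per second")
# ~1.8 rings per second, i.e. the ball swings back and forth roughly
# twice a second, drawing only a vanishingly small current from the piles.
```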

Although we don't know the composition of the battery itself (and likely won't until it winds down in a few hundred years), it has led to scientific advancements. During WWII, the British Admiralty developed an infrared telescope that needed a battery capable of producing a high voltage at a very low current and of lasting more or less indefinitely. One of the scientists remembered seeing the Clarendon Dry Pile, also referred to as the Oxford Electric Bell, and was able to work out how to make his own dry pile for the telescope.


Sitting in the foyer of the University of Otago in New Zealand is the Beverly Clock. Built in 1864 by Arthur Beverly, it is a phenomenal example of a self-winding clock. Beverly realized that, while most clocks get the energy to run their mechanism from a slowly falling weight, he could harvest the same energy from one cubic foot of air expanding and contracting as the temperature varies over a six-degree Celsius range. It hasn't always worked: it has stopped for cleanings, it stopped when the Physics department moved, and it can stall if the temperature stays too stable. But it's still going more than 150 years later.
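To see why a six-degree swing in a box of air can power a clock at all, here is a minimal order-of-magnitude sketch using the ideal-gas law. The mean room temperature and the constant-pressure treatment are simplifying assumptions on my part, not details of the clock's actual mechanism:

```python
# Rough upper bound on the energy available to an air-driven clock per day,
# modelling the sealed box as an ideal gas that expands at constant pressure
# when the room warms by 6 degrees C. All numbers are illustrative.

P = 101_325          # atmospheric pressure, Pa
V = 0.0283           # one cubic foot, in cubic meters
T = 290.0            # assumed mean room temperature, K
dT = 6.0             # daily temperature swing, K

work_per_cycle = P * V * dT / T          # P * dV for an isobaric expansion, J
avg_power = work_per_cycle / 86_400      # spread over one day, W

print(f"Work per daily cycle: ~{work_per_cycle:.0f} J")
print(f"Average power:        ~{avg_power * 1e3:.1f} mW")
# Roughly 60 J per day, or well under a milliwatt on average -- tiny, but a
# clock escapement typically needs only a small fraction of that to keep going.
```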


Since 1900, folks from across the continent have spent time counting birds. What began as an activity to keep people from hunting our feathered friends on Christmas Day has turned into one of the world's most massive and long-lasting citizen science projects. Although the 2015 results aren't ready yet, we know that in 2014, 72,653 observers counted 68,753,007 birds of 2106 species.


In one of the longest-running development studies, Harvard began following a group of 268 sophomores (including one John F. Kennedy) in 1938, and a companion study soon added 456 inner-city Bostonians. They've been followed ever since, from World War II through the Cold War and into the present day, with surveys every two years and physical examinations every five. Because of the sheer wealth of data, researchers have been able to learn all kinds of interesting and unexpected things. One such example: the quality of a person's vacations in youth often predicts greater happiness later in life.


In 1921, 1470 California children who scored over 135 on an IQ test were enrolled in what would become one of the world's most famous longitudinal studies, the Terman Life Cycle Study of Children with High Ability. Over the years, in an effort to show that early promise didn't lead to later disappointment, participants filled out questionnaires about everything from early development, interests, and health to relationships and personality. One of the most interesting findings is that, even among these smart folk, character traits like perseverance made the most difference in career success.


Starting in 1940, the UK's National Food Survey tracked household food consumption and expenditure, making it the longest-lasting program of its kind in the world. In 2000 it was replaced by the Expenditure and Food Survey, and in 2008 by the Living Costs and Food Survey. It has provided plenty of interesting results: earlier this year it was revealed that tea consumption has fallen from around 23 cups per person per week to only eight, and while the survey recorded no pizza consumption at all in 1974, the average Brit now eats 75 grams (2.5 ounces) of it a week.


In 1948, the National Heart, Lung, and Blood Institute teamed up with Boston University to recruit 5209 people from the town of Framingham, Massachusetts, for a long-term study of how cardiovascular disease develops. Twenty-three years later they also recruited the adult children of the original participants, and in 2002 a third generation. Over the decades, Framingham Heart Study researchers have helped establish that cigarette smoking increases the risk of heart disease, identified potential risk factors for Alzheimer's, and documented the dangers of high blood pressure.


While this one might not seem that impressive in terms of length, it has to hold the record for the number of generations that have come and gone over the course of the study: well over 50,000. Richard Lenski was curious whether flasks of identical bacteria would change in the same way over time, or whether the populations would diverge from each other. Eventually he got bored with the experiment, but his colleagues convinced him to keep going, and it's a good thing they did. In 2003, Lenski noticed that one of the flasks had gone cloudy, and some research led him to discover that the E. coli in it had gained the ability to metabolize citrate. Because he had been freezing earlier generations from the experiment, he was able to precisely track how this evolution occurred.


Sadly, sometimes things can go terribly wrong during long-term experiments. Between 1990 and 1992, British scientists collected thousands of sheep brains. Then, for over four years, material from those prepared sheep brains was injected into hundreds of mice to determine whether the sheep had been infected with BSE (mad cow disease). Preliminary findings suggested that they were, and plans were drawn up to slaughter every sheep in England. Except those sheep brains? They were actually mislabeled cow brains. And thus ended the longest-running experiment on sheep and BSE.


Attention to glacier retreat and the effects of global warming on the world's ice fields has increased rapidly over the last few decades, but the Juneau Icefield Research Program has been monitoring the situation up north since 1948. In its nearly 70 years of existence, the project has become the longest-running study of its kind, as well as an educational and exploratory experience. The monitoring of the many glaciers of the Juneau Icefield in Alaska and British Columbia has a rapidly approaching end date, though, at least in geological terms: a recent study published in the Journal of Glaciology predicts that the field will be gone by 2200.

What Is a Scientific Theory?

In casual conversation, people often use the word theory to mean "hunch" or "guess": If you see the same man riding the northbound bus every morning, you might theorize that he has a job in the north end of the city; if you forget to put the bread in the breadbox and discover chunks have been taken out of it the next morning, you might theorize that you have mice in your kitchen.

In science, a theory is a stronger assertion. Typically, it's a claim about the relationship between various facts; a way of providing a concise explanation for what's been observed. The American Museum of Natural History puts it this way: "A theory is a well-substantiated explanation of an aspect of the natural world that can incorporate laws, hypotheses and facts."

For example, Newton's theory of gravity—also known as his law of universal gravitation—says that every object, anywhere in the universe, responds to the force of gravity in the same way. Observational data from the Moon's motion around the Earth, the motion of Jupiter's moons around Jupiter, and the downward fall of a dropped hammer are all consistent with Newton's theory. So Newton's theory provides a concise way of summarizing what we know about the motion of these objects—indeed, of any object responding to the force of gravity.

A scientific theory "organizes experience," James Robert Brown, a philosopher of science at the University of Toronto, tells Mental Floss. "It puts it into some kind of systematic form."


A theory's ability to account for already known facts lays a solid foundation for its acceptance. Let's take a closer look at Newton's theory of gravity as an example.

In the late 17th century, the planets were known to move in elliptical orbits around the Sun, but no one had a clear idea of why the orbits had to be shaped like ellipses. Similarly, the movement of falling objects had been well understood since the work of Galileo a half-century earlier; the Italian scientist had worked out a mathematical formula that describes how the speed of a falling object increases over time. Newton's great breakthrough was to tie all of this together. According to legend, his moment of insight came as he gazed upon a falling apple in his native Lincolnshire.

In Newton's theory, every object is attracted to every other object with a force that’s proportional to the masses of the objects, but inversely proportional to the square of the distance between them. This is known as an “inverse square” law. For example, if the distance between the Sun and the Earth were doubled, the gravitational attraction between the Earth and the Sun would be cut to one-quarter of its current strength. Newton, using his theories and a bit of calculus, was able to show that the gravitational force between the Sun and the planets as they move through space meant that orbits had to be elliptical.
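That "doubled distance, quartered force" example is easy to verify numerically. Here is a minimal Python sketch; the Sun and Earth figures are standard textbook values rather than numbers from the article:

```python
# Illustration of the inverse-square law: double the separation and the
# gravitational pull drops to a quarter of its original strength.

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
m_sun = 1.989e30           # mass of the Sun, kg
m_earth = 5.972e24         # mass of the Earth, kg
r = 1.496e11               # mean Sun-Earth distance, m

def gravity(m1, m2, d):
    """Newton's law of universal gravitation, F = G * m1 * m2 / d**2."""
    return G * m1 * m2 / d**2

f1 = gravity(m_sun, m_earth, r)
f2 = gravity(m_sun, m_earth, 2 * r)
print(f"Force at 1x distance: {f1:.2e} N")
print(f"Force at 2x distance: {f2:.2e} N")
print(f"Ratio: {f2 / f1:.2f}")   # 0.25 -- doubling the distance quarters the force
```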

Newton's theory is powerful because it explains so much: the falling apple, the motion of the Moon around the Earth, and the motion of all of the planets—and even comets—around the Sun. All of it now made sense.


A theory gains even more support if it predicts new, observable phenomena. The English astronomer Edmond Halley used Newton's theory of gravity to calculate the orbit of the comet that now bears his name. Taking into account the gravitational pull of the Sun, Jupiter, and Saturn, in 1705, he predicted that the comet, which had last been seen in 1682, would return in 1758. Sure enough, it did, reappearing in December of that year. (Unfortunately, Halley didn't live to see it; he died in 1742.) The predicted return of Halley's Comet, Brown says, was "a spectacular triumph" of Newton's theory.

In the early 20th century, Newton's theory of gravity would itself be superseded—as physicists put it—by Einstein's, known as general relativity. (Where Newton envisioned gravity as a force acting between objects, Einstein described gravity as the result of a curving or warping of space itself.) General relativity was able to explain certain phenomena that Newton's theory couldn't account for, such as an anomaly in the orbit of Mercury, which slowly rotates—the technical term for this is "precession"—so that while each loop the planet takes around the Sun is an ellipse, over the years Mercury traces out a spiral path similar to one you may have made as a kid on a Spirograph.

Significantly, Einstein’s theory also made predictions that differed from Newton's. One was the idea that gravity can bend starlight, which was spectacularly confirmed during a solar eclipse in 1919 (and made Einstein an overnight celebrity). Nearly 100 years later, in 2016, the discovery of gravitational waves confirmed yet another prediction. In the century between, at least eight predictions of Einstein's theory have been confirmed.


And yet physicists believe that Einstein's theory will one day give way to a new, more complete theory. It already seems to conflict with quantum mechanics, the theory that provides our best description of the subatomic world. The way the two theories describe the world is very different. General relativity describes the universe as containing particles with definite positions and speeds, moving about in response to gravitational fields that permeate all of space. Quantum mechanics, in contrast, yields only the probability that each particle will be found in some particular location at some particular time.

What would a "unified theory of physics"—one that combines quantum mechanics and Einstein's theory of gravity—look like? Presumably it would combine the explanatory power of both theories, allowing scientists to make sense of both the very large and the very small in the universe.


Let's shift from physics to biology for a moment. It is precisely because of its vast explanatory power that biologists hold Darwin's theory of evolution—which allows scientists to make sense of data from genetics, physiology, biochemistry, paleontology, biogeography, and many other fields—in such high esteem. As the biologist Theodosius Dobzhansky put it in an influential essay in 1973, "Nothing in biology makes sense except in the light of evolution."

Interestingly, the word evolution can be used to refer to both a theory and a fact—something Darwin himself realized. "Darwin, when he was talking about evolution, distinguished between the fact of evolution and the theory of evolution," Brown says. "The fact of evolution was that species had, in fact, evolved [i.e. changed over time]—and he had all sorts of evidence for this. The theory of evolution is an attempt to explain this evolutionary process." The explanation that Darwin eventually came up with was the idea of natural selection—roughly, the idea that an organism's offspring will vary, and that those offspring with more favorable traits will be more likely to survive, thus passing those traits on to the next generation.


Many theories are rock-solid: Scientists have just as much confidence in the theories of relativity, quantum mechanics, evolution, plate tectonics, and thermodynamics as they do in the statement that the Earth revolves around the Sun.

Other theories, closer to the cutting-edge of current research, are more tentative, like string theory (the idea that everything in the universe is made up of tiny, vibrating strings or loops of pure energy) or the various multiverse theories (the idea that our entire universe is just one of many). String theory and multiverse theories remain controversial because of the lack of direct experimental evidence for them, and some critics claim that multiverse theories aren't even testable in principle. They argue that there's no conceivable experiment that one could perform that would reveal the existence of these other universes.

Sometimes more than one theory is put forward to explain observations of natural phenomena; these theories might be said to "compete," with scientists judging which one provides the best explanation for the observations.

"That's how it should ideally work," Brown says. "You put forward your theory, I put forward my theory; we accumulate a lot of evidence. Eventually, one of our theories might prove to obviously be better than the other, over some period of time. At that point, the losing theory sort of falls away. And the winning theory will probably fight battles in the future."

Yes, Parents Do Play Favorites—And Often Love Their Youngest Kid Best

If you have brothers or sisters, there was probably a point in your youth when you spent significant time bickering over—or at least privately obsessing over—whom Mom and Dad loved best. Was it the oldest sibling? The baby of the family? The seemingly forgotten middle kid?

As much as we'd like to believe that parents love all of their children equally, some parents do, apparently, love their youngest best, according to The Independent. A recent survey from the parenting website Mumsnet and its sister site, the grandparent-focused Gransnet, found that favoritism affects both parents and grandparents.

Out of 1185 parents and 1111 grandparents, 23 percent of parents and 42 percent of grandparents admitted to having a favorite among their children or grandchildren. For parents, that tended to be the youngest: 56 percent of the parents with a favorite said they preferred the baby of the family. Almost 40 percent of the grandparents with a favorite, meanwhile, preferred the oldest. Despite these numbers, half of the respondents said having a favorite among their children and grandchildren is "awful," and the majority thought it's damaging for that child's siblings.

Now, this isn't to say that youngest children experience blatant favoritism across all families. This wasn't a scientific study, and with only a few thousand users, the number of people with favorites is actually not as high as it might seem—23 percent is only around 272 parents, for instance. But other studies with a bit more scientific rigor have indicated that parents do usually have favorites among their children. In one study, 70 percent of fathers and 74 percent of mothers admitted to showing favoritism in their parenting. "Parents need to know that favoritism is normal," psychologist Ellen Weber Libby, who specializes in family dynamics, told The Wall Street Journal in 2017.

But youngest kids don't always feel the most loved. A 2005 study found that oldest children tended to feel like the preferred ones, while youngest children felt their parents were biased toward their older siblings. Another study, released in 2017, found that when youngest kids did sense preferential treatment in their family, their relationships with their parents were more strongly affected than their older siblings' were, either for better (if they sensed they were the favorite) or for worse (if they sensed their siblings were). Feeling like the favorite or the lesser sibling didn't tend to affect older siblings' relationships with their parents.

However, the author of that study, Brigham Young University professor Alex Jensen, noted in a press release at the time that whether or not favoritism affects children tends to depend on how that favoritism is shown. "When parents are more loving and they're more supportive and consistent with all of the kids, the favoritism tends to not matter as much," he said, advising that “you need to treat them fairly, but not equally.” Sadly for those who don't feel like the golden child, a different study in 2016 suggests that there's not much you can do about it—mothers, at least, rarely change which child they favor most, even over the course of a lifetime.

[h/t The Independent]