15 of the Longest-Running Scientific Studies in History

Most experiments are designed to be done quickly. Get data, analyze data, publish data, move on. But the universe doesn’t work on nice brief timescales. For some things you need time. Lots of time.

1. THE BROADBALK EXPERIMENT // 173 YEARS

In 1842, John Bennet Lawes patented his method for making superphosphate (a common synthetic plant nutrient) and opened what is believed to be the first artificial fertilizer factory in the world. The following year, Lawes and chemist Joseph Henry Gilbert began a series of experiments comparing the effects of organic and inorganic fertilizers—now the oldest agricultural studies on Earth. For over 150 years, parts of a field of winter wheat have received either manure, artificial fertilizer, or no fertilizer at all. The results are about what you’d expect: plots given artificial or natural fertilizer produce around six to seven tons of grain per hectare, while the unfertilized plots produce around one ton. But there’s more: Researchers can use these long-running plots to test everything from herbicides to soil microbes, and even to study oxygen isotope ratios for better reconstruction of paleoclimates.

2. THE PARK GRASS EXPERIMENT // 160 YEARS

Lawes and Gilbert started several more experiments at around the same time. In one of them, on hay, Lawes observed that each plot looked so distinct that he might have been testing different seed mixes rather than different fertilizers. Nitrogen fertilizers benefited the grasses over every other plant species, but where phosphorus and potassium were the main components, plants of the pea family took over. Since then, this field has been one of the most important biodiversity experiments on Earth.

3. THE BROADBALK AND GEESCROFT WILDERNESSES // 134 YEARS

Yet another one of Lawes’ experiments: In 1882 he abandoned part of the Broadbalk field to see what would happen. What happened was that within a few years, the wheat plants were completely outcompeted by weeds—and then trees moved in. Since 1900, half of the area has been left to grow undisturbed, while the other half has had its trees removed every year, in one of the longest-running studies of how plants recolonize farmland.

4. DR. BEAL’S SEED VIABILITY EXPERIMENT // 137 YEARS

In 1879, William Beal of Michigan State University buried 20 bottles of seeds on campus to see how long the seeds would remain viable underground. Originally, one bottle was dug up every five years, but that soon changed to once every 10 years, and is now once every 20 years. In the most recent recovery, in 2000, 26 seeds germinated—slightly more than half of the 50 seeds of one species in the bottle, still viable after more than a century in the ground. The next bottle will be dug up in 2020, and (assuming no more extensions) the experiment will end in 2100.

Even if it is extended for a while, there will probably still be viable seeds. In 2008, scientists successfully germinated a roughly 2000-year-old date palm seed, and four years later, Russian scientists were able to grow a plant from a 32,000-year-old seed that had been buried by an ancient squirrel.

5. THE PITCH DROP EXPERIMENT // 86 YEARS

If you hit a mass of pitch (the leftovers from distilling crude oil) with a hammer, it shatters like a solid. In 1927, Thomas Parnell of the University of Queensland in Australia decided to demonstrate to his students that it was actually a liquid—they just needed to watch it for a while. Some pitch was heated and poured into a glass funnel with a sealed stem. Three years later, the stem was cut and the pitch began to flow. Very slowly. Eight years after that, the first drop fell. The experiment was eventually relegated to a cupboard to collect dust, until 1961, when John Mainstone learned of its existence and restored it to its rightful glory. Sadly, he never saw a pitch drop: In 1979 one fell on a weekend, in 1988 he was away getting a drink, in 2000 the webcam failed, and he died before the most recent drop in April 2014.

As it turns out, the Parnell pitch drop isn’t even the oldest. After it gathered international headlines, other pitch drop experiments came to light. Aberystwyth University in Wales found one started 13 years before the Australian experiment that has yet to produce a single drop (and indeed isn’t expected to for another 1300 years), while the Royal Scottish Museum in Edinburgh found a pitch drop experiment from 1902. All of them prove one thing, though: Given enough time, a substance you can shatter with a hammer might still be a liquid.

6. THE CLARENDON DRY PILE // 176-191 YEARS

Around 1840, Oxford physics professor Robert Walker bought a curious little contraption from a pair of London instrument makers: two dry piles (a type of battery) connected to two bells, with a metal sphere hanging between them. When the ball touches one bell, it picks up that bell’s charge and is repelled toward the oppositely charged bell, where the process repeats. Because each swing uses only a minuscule amount of energy, the bells have rung an estimated ten billion times and counting. It’s entirely possible that the ball or bells will wear out before the batteries fully discharge.
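
Is ten billion rings plausible? A back-of-envelope check in Python, using only the figures above (the 2016 endpoint and the idea that the ball oscillates at a steady average rate are assumptions for this estimate):

```python
# Back-of-envelope: does "ten billion rings" square with ~175 years
# of continuous operation? (Start date of 1840 per the article;
# endpoint of 2016 is an assumption for this estimate.)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years_running = 2016 - 1840          # ~176 years
total_rings = 10e9                   # figure quoted above

rate_hz = total_rings / (years_running * SECONDS_PER_YEAR)
print(f"Implied average ring rate: {rate_hz:.1f} per second")
# Prints ~1.8 per second -- a plausible pace for a small ball
# shuttling between two closely spaced bells.
```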

Although we don’t know the composition of the battery itself (and likely won’t until it winds down in a few hundred years), it has already led to scientific advances. During World War II, the British Admiralty developed an infrared telescope that needed a battery capable of producing high voltage at low current and lasting more or less forever. One of the scientists remembered seeing the Clarendon Dry Pile—also known as the Oxford Electric Bell—and worked out how to make his own dry piles for the telescope.

7. THE BEVERLY (ATMOSPHERIC) CLOCK // 152 YEARS

Sitting in the foyer of the University of Otago in New Zealand is the Beverly Clock. Built in 1864 by Arthur Beverly, it is a phenomenal example of a self-winding clock. Beverly realized that, while most clocks run on the energy of a falling weight, he could get the same energy from one cubic foot of air expanding and contracting over a six-degree Celsius daily temperature range. It hasn’t run continuously: it has stopped for cleanings, it stopped when the physics department moved, and it can stall if the temperature stays too stable. But it’s still going more than 150 years later.
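
To see why such a small temperature swing suffices, here is a rough ideal-gas estimate using the article’s figures (one cubic foot of air, a six-degree swing); the room temperature is an assumed round number, and treating all of the expansion as useful work makes this an upper bound:

```python
# Rough upper bound on the work a sealed cubic foot of air can do
# as the ambient temperature swings 6 degrees C (figures from the
# article; 290 K ambient temperature is an assumed round number).
P = 101_325            # atmospheric pressure, Pa
V = 0.0283             # one cubic foot in m^3
T = 290.0              # assumed ambient temperature, K
dT = 6.0               # daily temperature swing, K

dV = V * dT / T        # isobaric ideal gas: dV/V = dT/T
work_joules = P * dV   # work done pushing against the atmosphere
print(f"Volume change: {dV * 1e6:.0f} cm^3; work: {work_joules:.0f} J")
# ~590 cm^3 and ~59 J per daily swing. Even if the mechanism captures
# only a few percent of that, it dwarfs the few joules a day a small
# weight-driven clock movement typically needs.
```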

8. THE AUDUBON CHRISTMAS BIRD COUNT // 116 YEARS

Since 1900, folks from across the continent have spent time counting birds. What began as an activity to keep people from hunting our feathered friends on Christmas Day has turned into one of the world’s largest and longest-running citizen science projects. Although the 2015 results aren’t ready yet, we know that in 2014, 72,653 observers counted 68,753,007 birds of 2106 species.

9. THE HARVARD STUDY OF ADULT DEVELOPMENT // 78 YEARS

In 1938, Harvard began studying a group of 268 sophomores (including one John F. Kennedy) in what has become one of the longest-running development studies in the world; a companion study soon added 456 inner-city Bostonians. They’ve been followed ever since, from World War II through the Cold War and into the present day, with surveys every two years and physical examinations every five. Because of the sheer wealth of data, researchers have been able to learn all kinds of interesting and unexpected things. One example: People who took good vacations in their youth tended to be happier later in life.

10. THE TERMAN LIFE CYCLE STUDY // 95 YEARS

In 1921, 1470 California children who scored over 135 on an IQ test became the subjects of what would turn into one of the world’s most famous longitudinal studies—the Terman Life Cycle Study of Children with High Ability. Over the years, in order to show that early promise didn’t lead to later disappointment, participants filled out questionnaires about everything from early development, interests, and health to relationships and personality. One of the most interesting findings: Even among these smart folks, character traits like perseverance made the most difference in career success.

11. THE NATIONAL FOOD SURVEY // 76 YEARS

Starting in 1940, the UK’s National Food Survey tracked household food consumption and expenditure, making it the longest-lasting program of its kind in the world. In 2000 it was replaced by the Expenditure and Food Survey, and in 2008 by the Living Costs and Food Survey. It has provided interesting results: Earlier this year it was revealed that tea consumption has fallen from around 23 cups per person per week to only eight, and that while pizza didn’t register in the survey at all in 1974, the average Brit now eats 75 grams (2.5 ounces) of it a week.

12. THE FRAMINGHAM HEART STUDY // 68 YEARS

In 1948, the National Heart, Lung, and Blood Institute teamed up with Boston University and recruited 5209 people from the town of Framingham, Massachusetts, for a long-term study of how cardiovascular disease develops. Twenty-three years later, the researchers also recruited the adult children of the original participants, and in 2002, a third generation. Over the decades, the Framingham Heart Study’s researchers have discovered that cigarette smoking raises the risk of heart disease, identified potential risk factors for Alzheimer’s, and documented the dangers of high blood pressure.

13. THE E. COLI LONG TERM EVOLUTION EXPERIMENT // 26 YEARS

While this one might not seem that impressive in terms of length, it must hold the record for number of generations that have come and gone over the course of a study: well over 50,000. Richard Lenski was curious whether flasks of identical bacteria would change in the same way over time, or whether the populations would diverge. Eventually he got bored with the experiment, but his colleagues convinced him to keep going—and it’s a good thing they did. In 2003, Lenski noticed that one of the flasks had gone cloudy, and some digging revealed that the E. coli in it had gained the ability to metabolize citrate. Because samples of previous generations had been frozen along the way, he was able to track precisely how this ability evolved.
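
That generation count follows directly from the experiment’s daily routine. A minimal sketch, assuming the LTEE’s standard protocol of transferring 1 percent of each culture to fresh medium every day, so that the population regrows 100-fold:

```python
# Estimate total generations in the E. coli long-term evolution
# experiment, assuming the standard daily 1:100 transfer (the
# population must double log2(100) ~ 6.64 times to regrow).
import math

doublings_per_day = math.log2(100)    # ~6.64 generations per day
years = 26                            # duration cited in the article

generations = doublings_per_day * 365 * years
print(f"~{generations:,.0f} generations")
# ~63,000 -- comfortably "well over 50,000," as the article says.
```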

14. THE BSE EXPERIMENT // 11 YEARS

Sadly, sometimes things go terribly wrong during long-term experiments. Between 1990 and 1992, British scientists collected thousands of sheep brains. Then, for over four years, material from those brains was injected into hundreds of mice to determine whether the sheep had been infected with BSE (mad cow disease). Preliminary findings suggested that they were, and plans were drawn up to slaughter every sheep in England. Except those sheep brains? They were actually mislabeled cow brains. And thus ended the longest-running experiment on sheep and BSE.

15. THE JUNEAU ICEFIELD RESEARCH PROGRAM // 68 YEARS

Attention to glacier retreat and the effects of global warming on the world’s ice fields has increased rapidly over the last few decades, but the Juneau Icefield Research Program has been monitoring the situation up north since 1948. In its nearly 70 years of existence, the project has become the longest-running study of its kind, as well as an educational and exploratory experience. The monitoring of the many glaciers of the Juneau Icefield in Alaska and British Columbia has a rapidly approaching end date, though—at least in geological terms. A recent study published in the Journal of Glaciology predicts that the icefield will be gone by 2200.

That Sugar Rush Is All In Your Head


We've all heard of the "sugar rush." It's a vision that prompts parents and even teachers to snatch candy away from kids, fearing they'll soon be bouncing off the walls, wired and hyperactive. It’s a myth American culture has clung to for decades—and these days, it’s not just a kid thing. Adults are wary of sugar, too. Some of this fear is warranted—diabetes, the obesity epidemic—but the truth is, sugar doesn't cause hyperactivity. Its impact on the body isn’t an up-and-down thing. The science is clear: There is no "sugar rush."

To find out how and why the myth started, we need to go back to well before the First World War—then pay a visit to the 1970s.

Our Complicated Relationship With Sugar

According to cultural historian Samira Kawash, America has had a long, complex, love-hate relationship with sugar. In Candy: A Century of Panic and Pleasure, Kawash traces the turn from candy-as-treat to candy-as-food in the early 20th century. At that time, the dietary recommendations from scientists included a mix of carbohydrates, proteins, and fats, with sugar as essential for energy.

Not everyone was on board: The temperance movement, for example, pushed the idea that sugar caused an intoxication similar to alcohol’s, making candy-eaters sluggish, loopy, and overstimulated. In 1907, the chief of the Philadelphia Bureau of Health declared that the "appetite" for candy and alcohol was "one and the same," Kawash writes. On the flip side, other scientists suggested that sugar from candy could stave off cravings for alcohol—a suggestion that candymakers then used in their advertisements.

While the debate about sugar as an energy source raged in America, militaries around the world were also exploring sugar as energy for soldiers. In 1898, the Prussian war office became the first to commission a study on the sweet stuff—with promising results: "Sugar in small doses is well-adapted to help men to perform extraordinary muscular labor," early researchers wrote. German military experiments introduced candy and chocolate cakes as fortification for the troops, and the U.S. military added sugary foods to soldiers' diets soon after. When American soldiers returned from World War I, they craved sweets, which "propelled an enormous boom" of candy sales that has lasted to this day, Kawash wrote on her blog, The Candy Professor. American advertisers framed candy as a quick, easy source of energy for busy adults during their workday.

As artificial sweeteners moved into kitchens in the 1950s, candymakers struggled to make their products appeal to women who were watching their waistlines. One industry group, Sugar Information Inc., produced a tiny "Memo to Dieters" pamphlet in 1954 designed to fit inside chocolate boxes. "Sugar before meals raises your blood sugar level and reduces your appetite," it claimed. But by the 1970s, the sugar-positivity heyday had started to wane.

The Origins of the Sugar Rush Myth

The idea that sugar causes hyperactivity gained traction in the early 1970s, when more attention was being paid to how diet might affect behavior. One of the major figures studying the possible connection was an allergist named Benjamin Feingold, who hypothesized that certain food additives, including dyes and artificial flavorings, might lead to hyperactivity. He formalized this into a popular—yet controversial—elimination diet program. Though certain sugary foods were banned from the program for containing dyes and flavorings, sugar itself was never formally prohibited. Still, thanks in part to the Feingold diet, sugar became the poster child for the supposed link between diet and hyperactivity.

It wasn't until the late 1980s that serious doubts about sugar's connection to hyperactivity began to be raised by scientists. As FDA historian Suzanne White Junod wrote in 2003 [PDF], the 1988 Surgeon General's Report on Nutrition and Health concluded that "alleged links between sugar consumption and hyperactivity/attention deficit disorders in children had not been scientifically supported." Despite "mothers' mantra of no sweets before dinner," she noted, "more serious allegations of adverse pediatric consequences … have not withstood scientific scrutiny."

A 1994 paper found that aspartame—an artificial sweetener that had also been accused of inducing hyperactivity in children—had no effect on 15 children with ADHD, even though they had consumed 10 times more than the typical amount.

A year later, the Journal of the American Medical Association published a meta-analysis of the effect of sugar on children's behavior and cognition. It examined data from 23 studies that were conducted under controlled conditions: In every study, some children were given sugar, and others were given an artificial sweetener placebo like aspartame. Neither researchers nor children knew who received the real thing. The studies recruited neurotypical children, kids with ADHD, and a group who were "sensitive" to sugar, according to their parents.

The analysis found that "sugar does not affect the behavior or cognitive performance of children." (The authors did note that “a small effect of sugar or effects on subsets of children cannot be ruled out.”)

"So far, all the well-controlled scientific studies examining the relationship between sugar and behavior in children have not been able to demonstrate it," Mark Wolraich, an emeritus professor of pediatrics at the University of Oklahoma Health Sciences Center who has worked with children with ADHD for more than 30 years and the co-author of that 1995 paper, tells Mental Floss.

Yet the myth that consuming sugar causes hyperactivity hasn’t really gone away. One major reason is the placebo effect, which can have powerful results. The idea that you or your children might feel a "sugar rush" from too much candy isn't unlike the boost you hope to feel from an energy drink or a meal replacement shake or bar (which can contain several teaspoons of sugar). The same is true for parents who claim that their kids seem hyperactive at a party. Peer pressure and excitement seem to be to blame—not sugar.

"The strong belief of parents [in sugar's effects on children's behavior] may be due to expectancy and common association," Wolraich wrote in the JAMA paper.

It works the other way, too: Some parents say they've noticed a difference in their kids' behavior once they cut most sugar from their diets. This strategy, like the Feingold diet, continues to attract interest and followers because believing it works has an impact on whether it seems to work.

Correlation, Causation, and Caffeine

Which isn't to say there are absolutely no links between sugar consumption and poor health outcomes. A 2006 paper found that drinking a lot of sugary soft drinks was associated with mental health issues, including hyperactivity, but the study's design relied on self-reported questionnaires that were filled out by more than 5000 10th-graders in Oslo, Norway. The authors also noted that caffeine is common in colas, which might have a confounding effect.

In another study, conducted by University of Vermont professor of economics Sara Solnick and Harvard health policy professor David Hemenway, the researchers investigated the so-called "Twinkie defense," in which sugar is said to contribute to an "altered state of mind." (The phrase comes from the 1979 trial of Dan White for killing San Francisco city supervisor Harvey Milk and Mayor George Moscone. His lawyers argued that White had "diminished capacity and was unable to premeditate his crime," as evidenced in part by his sudden adoption of a junk-food diet in the months before the murders. White was convicted of voluntary manslaughter.)

In their survey of nearly 1900 Boston public high schoolers, Solnick and Hemenway found "a significant and strong association between soft drinks and violence." Adolescents who drank more than five cans of soft drinks per week—nearly 30 percent of the group—were significantly more likely to have carried a weapon.

But Solnick tells Mental Floss the study isn't evidence of a "sugar rush."

"Even if sugar did cause aggression—which we did not prove—we have no way of knowing whether the effect is immediate (and perhaps short-lived) as the phrase 'sugar rush' implies, or whether it’s a longer-term process," she says. Sugar could, for example, increase irritability, which might sometimes flare up into aggression—but not as an immediate reaction to consuming sugar.

Harvard researchers are looking into the long-term effects of sugar using data from Project Viva, a large observational study of pregnant women, mothers, and their children. A 2018 paper in the American Journal of Preventive Medicine studied more than 1200 mother-child pairs from Project Viva, assessing mothers' self-reported diets during pregnancy as well as their children's health during early childhood.

"Sugar consumption, especially from [sugar-sweetened beverages], during pregnancy and childhood, and maternal diet soda consumption may adversely impact child cognition,” the authors concluded, though they noted that other factors could explain the association.

“This study design can look at relationships, but it cannot determine cause and effect,” says Wolraich, who was not involved in the study. "It is equally possible that parents of children with lower cognition are likely to cause a greater consumption of sugar or diet drinks, or that there is a third factor that influences cognition and consumption.”

The Science of the Sugar Crash

Though the evidence against the sugar rush is strong, a "sugar crash" is real—but typically it only affects people with diabetes.

According to the National Institute of Diabetes and Digestive and Kidney Diseases, low blood sugar—or hypoglycemia—is a serious medical condition. When a lot of sugar enters the bloodstream, it can spike the blood sugar level, causing fluctuation, instability, and eventually a crash—which is called reactive hypoglycemia. If a diabetic's blood sugar levels drop too low, a number of symptoms—shakiness, fatigue, weakness, and others—can follow. Severe hypoglycemia can lead to seizures and even coma.

For most of us, though, it's rare. Endocrinologist Dr. Natasa Janicic-Kahric told The Washington Post that "about 5 percent of Americans experience sugar crash."

You're more likely to experience it if you do a tough workout on an empty stomach. "If one exercises vigorously and doesn't have sufficient intake to supplement their use of calories, they can get lightheaded," Wolraich says. "But in most cases, the body is good at regulating a person's needs."

So what you're attributing to sugar—the highs and the lows—is probably all in your head.

Yes, There Is Such a Thing as Getting Too Much Sleep


Regularly getting a good night's rest is incredibly important. While you’re sleeping, your body is sorting memories, cleaning out your brain, boosting your immune system, and otherwise recovering from the day. But there is such a thing as too much of a good thing: According to Popular Science, it's possible to sleep too much.

It's hard to say exactly how much sleep you should be getting each night, but a new observational study of more than 116,000 people across 21 countries finds that sleeping nine or more hours a night is correlated with a higher mortality risk. The sweet spot for healthy sleep, according to this data, seems to be six to eight hours in every 24—even if part of that time comes from daytime naps.

The new paper, published in the European Heart Journal, examined data from the Prospective Urban Rural Epidemiology study, which followed individuals between the ages of 35 and 70 around the world: some lived in high-income countries like Canada and Sweden, others in middle-income countries like Argentina and Turkey, and others in low-income countries like Bangladesh and Pakistan.

Over the course of an average 7.8 years, study participants answered follow-up questions about what time they went to bed and got up, and whether they napped and for how long. They also answered general health questions about things like exercise rates, dietary patterns, and weight. The researchers then collected medical records and death certificates to track whether the subjects had major cardiac events (like heart attacks) or died during the study period.

The researchers found both sleeping too much and sleeping too little to be associated with a higher likelihood of dying before the study was through. Across the world, participants who got less than six hours a day or more than eight hours a day were more likely to experience major cardiac events than participants who slept between six and eight hours a night. When the researchers adjusted the results for age and sex, they still found sleep duration to be a significant predictor of heart issues and all-cause mortality.

While adjusting for factors like physical activity, BMI, and diet did change the results a bit, the basic pattern—a J-shaped curve showing higher risk for short sleepers, low risk for moderate sleepers, and even higher risk for very long sleepers—was the same. While previous research has suggested that naps can be good for your health, this study found that napping was associated with worse outcomes if it put someone over the eight-hours-of-sleep mark in that 24-hour period.

The results may feel like vindication to people who feel terrible whenever they stay in bed too long, but there are some caveats. Sleeping nine hours a day might be a sign of an underlying health condition that itself raises mortality risk, rather than the cause of that risk. The researchers tried to account for this by analyzing the data only for people known to have no prevalent diseases and not at risk for conditions like sleep apnea and insomnia, and later by excluding people who had a cardiac event or died during the first two years of the study.

"This suggests that sleep duration per se may be associated with increased risks," they write (emphasis in the original), "but causality cannot be definitively proven from this or other observational studies (and randomized studies of different sleep durations may be difficult to conduct)." So we may never know for sure just how much risk we take upon ourselves when we settle in for a long nap.

Considering that plenty of other research suggests that around seven hours of sleep total is an ideal target, you should probably aim for that number while setting your alarm. And if getting too much shut-eye isn't your problem, check out our tips for getting back to sleep after you've woken up in the middle of the night.

[h/t Popular Science]
