
Irish Teeth Reveal the Chemical Signature of the Great Famine

The Custom House Famine Memorial in Dublin. Image credit: William Murphy via Flickr // CC BY-SA 2.0

 
The Great Famine in Ireland, one of the worst famines in human history, lasted from 1845 to 1852. Sometimes called the “Irish Potato Famine” because of the blight that destroyed the crop on which many Irish diets depended, this period saw the population of Ireland fall by about one-quarter. Around 1 million people died from starvation and disease, while another million or so left Ireland for new lives elsewhere in Europe and the U.S. While the famine is well documented historically, research into its physical effects is a comparatively new topic in archaeology.

A novel study by Julia Beaumont of the University of Bradford and Janet Montgomery of Durham University, published recently in PLOS One, tackles the question of how to identify famine and other chronic stress in individual skeletons. They focus their analysis on human remains from the Kilkenny Union Workhouse in southeast Ireland, just one of the many workhouses that sprang up after 1838, when a law was passed to help “remedy” poverty by institutionalizing the poor and making them work long hours. Individuals and entire families would enter the workhouse, which was segregated by age and sex, overcrowded, and full of sick people.

At least 970 people were buried in mass graves at Kilkenny, in unconsecrated ground. The researchers focused on the teeth of 20 of them, representing a cross-section of age and sex. Six had died before age 9.

The failure of the potato crop shortly after the emergence of Irish workhouses meant less food for the poor and, as a consequence, widespread sickness and death among this vulnerable population. Although the government was slow to respond to the food crisis, it eventually began to import corn from America to feed the poor. This introduction of corn is particularly helpful archaeologically, because corn is a so-called C4 plant whose carbon isotope composition is very different from that of potatoes and Old World grains, which are C3 plants. Archaeologists who analyze human bones and teeth can therefore see the dramatic difference between corn-based and wheat-based diets by measuring the ratio of two stable carbon isotopes, carbon-13 and carbon-12, in the skeleton.
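
For readers who want to see the arithmetic, isotope measurements are conventionally reported in "delta" notation: the ratio of heavy to light isotope in a sample is compared with an international standard and expressed in parts per thousand (per mil). The short Python sketch below is purely illustrative; the sample ratios and the typical diet values in the comments are approximate, and none of the numbers come from the Kilkenny study itself.

# Illustrative sketch of stable-isotope "delta" notation.
# delta = (R_sample / R_standard - 1) * 1000, where R is the ratio of the
# heavy isotope to the light one (here carbon-13 to carbon-12).
# The numbers below are hypothetical, not measurements from the study.

VPDB_RATIO = 0.0112372  # approximate 13C/12C ratio of the VPDB reference standard

def delta_13c(sample_ratio, standard_ratio=VPDB_RATIO):
    """Convert a measured 13C/12C ratio into a delta-13C value in per mil."""
    return (sample_ratio / standard_ratio - 1) * 1000

# Collagen from a diet of C3 plants (wheat, potatoes) tends to sit around -20
# per mil, while a heavily C4 (corn) diet pushes values up toward -10 per mil,
# which is why a shift to corn shows up so clearly in teeth.
print(round(delta_13c(0.0110124), 1))  # about -20.0 (hypothetical C3-style diet)
print(round(delta_13c(0.0111248), 1))  # about -10.0 (hypothetical corn-heavy diet)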

The first important finding from Beaumont and Montgomery’s study is that, for many of the 20 people they analyzed, carbon isotope values rise after the start of the Great Famine. By micro-sampling the dentine of teeth at successive stages of formation, they show an increase in corn consumption over time that correlates well with historical records of diet.

But their second finding is even more intriguing: Even as carbon isotope values increased, nitrogen isotope values decreased. Archaeologists use nitrogen isotopes to gauge the amount of animal protein in a diet. If you are a carnivore and eat food high on the food chain, you have a higher nitrogen isotope signature than if you are a vegetarian. The drop in nitrogen values that the researchers found in the teeth after the introduction of corn does not track with historical records; there is no known change in the amount of protein the poor were eating at this time.
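
A rough rule of thumb helps here: each step up the food chain enriches nitrogen-15 by roughly 3 to 4 per mil, so researchers can estimate trophic position relative to a plant baseline. Below is a minimal sketch of that calculation, assuming a commonly cited average enrichment of about 3.4 per mil per trophic level; the baseline and sample values are hypothetical, not figures from the paper.

# Rough trophic-level estimate from a delta-15N value, measured relative to a
# plant baseline. The ~3.4 per mil enrichment per trophic step is a commonly
# cited average; all input values here are hypothetical.

def trophic_level(d15n_consumer, d15n_plant_baseline=5.0, enrichment=3.4):
    """Return an estimated trophic level, where 1.0 is the plant baseline."""
    return 1.0 + (d15n_consumer - d15n_plant_baseline) / enrichment

print(round(trophic_level(8.4), 1))   # roughly 2.0: a largely plant-based diet
print(round(trophic_level(11.8), 1))  # roughly 3.0: a signal that looks like heavy meat eating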

Beaumont and Montgomery argue that the change in isotopes reflects a cycle of starvation. The high nitrogen values prior to the introduction of corn don't suggest these people had a lot of meat protein to eat. Instead, these isotopes most likely indicate that their starving bodies were, in a sense, eating themselves, recycling their own protein and fat. When the people in the Kilkenny workhouse started eating corn, their nitrogen values dropped because their bodies could once again draw on food rather than on their own tissues.

The researchers say the “famine pattern” in this historic Irish population is therefore one of average carbon values paired with high nitrogen values, followed by higher carbon and lower nitrogen values when corn is introduced to stave off starvation.
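
To make that pattern concrete, here is a minimal sketch, in Python, of how one might flag it in a sequence of dentine micro-samples ordered from earliest-forming to latest-forming dentine. The thresholds and example values are hypothetical illustrations; the study itself works from detailed per-individual isotope profiles rather than a simple rule like this.

# Hypothetical sketch: flag the "famine pattern" described above in a series of
# dentine micro-samples ordered from earliest to latest formation.
# Pattern: carbon values rise while nitrogen values fall once corn is introduced.
# Thresholds and data are illustrative, not values from the paper.

def shows_famine_pattern(samples, min_carbon_rise=2.0, min_nitrogen_drop=1.0):
    """samples: list of (delta_13C, delta_15N) pairs in order of dentine formation."""
    first_c, first_n = samples[0]
    last_c, last_n = samples[-1]
    return (last_c - first_c) >= min_carbon_rise and (first_n - last_n) >= min_nitrogen_drop

# A hypothetical individual: earlier dentine reflects a potato-based diet with a
# high nitrogen signal; later dentine records more corn and falling nitrogen.
tooth_profile = [(-20.1, 12.5), (-18.9, 11.8), (-16.4, 10.7), (-14.8, 10.2)]
print(shows_famine_pattern(tooth_profile))  # True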

Beaumont and Montgomery see this pattern in the teeth of children who died in the workhouse during the famine, but also in the teeth of some of the adults. Since teeth form during childhood, this finding suggests that the adults suffered from—and overcame—one or more periods of chronic stress prior to the Great Famine. These stresses might have been caused by famine, but prolonged disease can leave similar isotopic traces, so they can't say for sure the adults experienced multiple periods of starvation.

This research comes at a time when micro-sampling of teeth is becoming a popular technique in archaeology. A recent study by researchers at McMaster University, for example, micro-sampled tooth dentine to look for cases of rickets, a bone disease caused by vitamin D deficiency.

Beaumont has plans to expand this research and to correlate this new methodology with other techniques useful for finding evidence of famine. “I have some teeth from other populations with nutritional deficiencies which I am micro-sampling to try to achieve a resolution that matches the physical signs, such as enamel hypoplasias,” Beaumont tells Mental Floss. (An enamel hypoplasia is a defect in tooth enamel.) “I want to work with others in the field to investigate the histology.”

Studies of ancient diets are not just useful for archaeologists; sadly, starvation and famine are not things of the past. Findings like these can also be used by forensic anthropologists investigating recent deaths, especially, as the researchers write, “of populations and individuals for whom nutritional stress may have contributed to their death.” The work may prove critically important in the future for solving forensic cases of fatally malnourished children.

As for the skeletal remains of the 20 people studied—they were all re-interred at the Famine Memorial Garden in Kilkenny.

10 Quick Facts About Cobalt

How well do you know the periodic table? Our series The Elements explores the fundamental building blocks of the observable universe—and their relevance to your life—one by one.

Cobalt hides out in everyday objects and happenings around us, from batteries and blue paint to medical procedures. We've used it for millennia, even before the common era, but it didn't get proper credit until the 18th century. With its 27 protons, cobalt is sandwiched between iron and nickel in the middle portion of the periodic table with the other "transition" metals, which bridge the main group elements located on either side. Here are ten curious facts about this element.

1. PURE COBALT DOES NOT NATURALLY EXIST ON EARTH.

Though you can find cobalt just about everywhere—in the soil, in mineral deposits, and even in crusts on the seafloor—it's always combined with other elements like nickel, copper, iron, or arsenic, such as in the bright crimson arsenate mineral erythrite. It's usually collected as a byproduct of mining for other metals—especially nickel and copper—and, once purified, is a burnished gray color.

2. COBALT MAY NOT BE RARE, BUT IT IS VALUABLE.

Despite being relatively common, cobalt is considered a critical raw material by the European Union because there are few places where it's abundant enough to be mined in large quantities. The only mine in the world where cobalt is the primary product is in Morocco.

3. COBALT WAS NAMED AFTER SUBTERRANEAN GERMAN GOBLINS.

Centuries ago, miners in the mountains of Germany had a great deal of trouble trying to melt down certain ores for useful metals like silver and copper, and even dealt with poisonous fumes released from the rock, which could make them very sick or even kill them. They blamed the kobolds—pesky, underground sprites of local folklore (and more recently, the name of a Dungeons & Dragons species). Though the vapors actually arose from the arsenic also contained in the ores, when chemists later extracted cobalt from these minerals, the name stuck.

4. COBALT WAS FINALLY ISOLATED IN THE 18TH CENTURY.

It was not until the 1730s that Swedish chemist Georg Brandt purified and identified cobalt from arsenic-containing ores, and another 50 years until Torbern Bergman, another Swede, verified Brandt's new element. It is worth noting, though, that at the time the known elements amounted to little more than an incomplete list; they had not yet been organized into a meaningful periodic table.

5. COBALT IS BEST KNOWN FOR CREATING A RICH BLUE HUE…

People have been using cobalt-containing pigments to get that rich blue hue as far back as the 3rd millennium BCE, when Persians colored their necklace beads with it. From Egypt to China, artisans created blue glass from cobalt compounds for thousands of years. For a long time, though, the color was attributed to the element bismuth, depriving cobalt of pigment fame.

6. … BUT COBALT MAKES OTHER COLORS TOO.

The famed "cobalt blue" is actually the result of the compound cobalt aluminate. Cobalt in other chemical combinations can also make a variety of other colors. Cobalt phosphate is used to make a violet pigment, and cobalt green is achieved by combining cobalt oxides with zinc oxides.

7. TODAY WE USE COBALT TO MAKE POWERFUL MAGNETS AND "SUPERALLOYS."

Cobalt is one of the few elements that are ferromagnetic, meaning it can become magnetized in an external magnetic field and stay magnetized once the field is removed. Cobalt remains magnetic at extremely high temperatures, making it very useful for the specialized magnets in generators and hard drives. When mixed with the right metals, cobalt can also help create materials called "superalloys" that keep their strength under huge stress and high temperatures—advantageous, for instance, in a jet engine. Most people, however, can find cobalt hiding closer to home, inside some rechargeable batteries.

8. COBALT COULD ONE DAY REPLACE PRECIOUS METALS IN INDUSTRY.

Scientists such as chemist Patrick Holland at Yale University are looking at ways to use cobalt in place of the rarer, more expensive metals often used in industrial catalysts. These catalysts—chemical "helpers" that speed up reactions—are used in making adhesives, lubricants, and pharmaceutical precursors, for instance. Precious metals like platinum and iridium often make good catalysts, but they are also pricey, can be toxic to humans, and, as precious implies, are not abundant. There is a "big upswing in people looking at iron, nickel, and cobalt because of their price," Holland tells Mental Floss.

All three could be viable options in the future. The challenge, Holland says, is "walking the tightrope" between creating an effective, reactive catalyst and one that is too reactive or overly sensitive to impurities.

9. COBALT HAS MULTIPLE ROLES IN MODERN MEDICINE.

The metal perches in the middle of the impressively complex molecule vitamin B12—a.k.a. cobalamin—which is involved in making red blood cells and DNA, and helps keep your nervous system healthy. Cobalt also lends an extra distinction to B12: It's the only vitamin that contains a metal atom.

To measure how well patients absorb B12, doctors have used a "labeled" version of the vitamin in which the cobalt atom is replaced with a radioactive cobalt isotope. Oncologists and technicians also use the radiation from cobalt isotopes in some cancer therapies, as well as to sterilize medical and surgical tools. These days, cobalt alloys are even found in artificial hip and knee joints.

10. COBALT WAS ONCE ADDED TO BEER—WITH DEADLY CONSEQUENCES.

In the 1960s, some breweries added cobalt chloride to their beers because it helped maintain the appealing foam that builds when beer is poured. By 1967, more than 100 heavy beer drinkers in Quebec City, Minneapolis, Omaha, and Belgium had suffered heart failure, and nearly half of them died. At the time, doctors were also administering cobalt to patients for medical reasons without causing this severe effect, so the blame couldn't lie with the metal alone. After studying the remains of the deceased, scientists proposed that the so-called "cobalt-beer cardiomyopathy" had been caused by an unhealthy mélange of cobalt, high alcohol intake, and poor diet. The FDA banned the use of cobalt chloride as a food additive shortly after. 

Scientists Are Using an Ancient Egyptian Pigment to Create New Technologies 

The ancient Egyptians famously built massive pyramids and developed hieroglyphics, but they're also credited with inventing the world's earliest-known artificial pigment. Today referred to as Egyptian blue, the deep azure shade was first created about 4600 years ago by heating together sand (containing quartz), copper-bearing minerals, and natron, a salty mixture of sodium compounds also used for embalming mummies. Once ground into a powder, this chemical concoction was a dead ringer for a pricier pigment made from a semi-precious blue stone called lapis lazuli, SciShow's Michael Aranda explains in the video below.

Egyptian blue was a popular color, and the hue appeared on coffins, pottery, and murals. But like most trends, the color fell out of fashion once red and yellow became big in Roman art. The pigment's recipe was lost until the 19th century, when scientists first began analyzing the chemical composition of artifacts bearing the ancient hue.

Scientists today know how to recreate Egyptian blue—but instead of using it to paint their labs, they're researching ways to incorporate the pigment into dyes for medical imaging, new types of security ink, and even dusting powder for fingerprint detection.

Learn how the newly discovered scientific properties of Egyptian blue can make these inventions possible by watching SciShow's video below.
