8 Essential Facts About Uranium

Uranium glass vessels.
Attila Kisbenedek/AFP/Getty Images

How well do you know the periodic table? Our series The Elements explores the fundamental building blocks of the observable universe—and their relevance to your life—one by one.

Uranium took some time asserting itself. For centuries, heaps of it languished in waste rock piles near European mines. After formal discovery of the element in the late 18th century, it found a useful niche coloring glass and dinner plates. In the first half of the 20th century, scientists began investigating uranium's innate potential as an energy source, and it has earned its place among the substances that define the "Atomic Age," the era in which we still live. Here are some essential facts about element 92.


With a nucleus packed with 92 protons, uranium is the heaviest of the naturally occurring elements. That density once led shipbuilders to use depleted uranium as ballast in ship keels. Were it employed that way now, sailing into port could set off radiation-detection systems.

Uranium was first found in silver mines in the 1500s in what's now the Czech Republic. It generally appeared where the silver vein ran out, earning it the nickname Pechblende, meaning "bad luck rock." In 1789, Martin Klaproth, a German chemist analyzing mineral samples from the mines, heated the ore and isolated a "strange kind of half-metal"—uranium dioxide. He named it after the recently discovered planet Uranus.

French physicist Henri Becquerel discovered uranium's radioactive properties—and radioactivity itself—in 1896. He left uranyl potassium sulfate, a type of salt, on a photographic plate in a drawer, and found the uranium had fogged the plate as exposure to sunlight would have. It had emitted its own rays.


Uranium decays into other elements, shedding protons and neutrons to become thorium, protactinium, radium, radon, polonium, and on through a total of 14 transitions—every intermediate itself radioactive—until it finds a stable resting point as lead. Before Ernest Rutherford and Frederick Soddy discovered this trait around 1901, the notion of transforming one element into another was thought to be solely the territory of alchemists.
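The 14-step tally checks out with simple bookkeeping. In uranium-238's chain there are 8 alpha decays (the nucleus loses 2 protons and 2 neutrons) and 6 beta decays (a neutron becomes a proton), though in reality the two kinds interleave; the sketch below ignores the ordering and just sums the changes:

```python
# Walk uranium-238's decay chain as raw bookkeeping: 8 alpha
# decays (Z -2, A -4) and 6 beta decays (Z +1, A unchanged),
# 14 transitions in all, ending at stable lead-206.
ALPHA = (-2, -4)   # change in (proton count, mass number)
BETA = (1, 0)

chain = [ALPHA] * 8 + [BETA] * 6

Z, A = 92, 238     # start: uranium-238
for dZ, dA in chain:
    Z, A = Z + dZ, A + dA

print(Z, A)        # 82 206 -> lead-206
```

Lead's atomic number is 82, so the arithmetic lands exactly on the chain's known endpoint.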


Uranium's size creates instability. As Tom Zoellner writes in Uranium: War, Energy, and the Rock That Shaped the World, "A uranium atom is so overloaded that it has begun to cast off pieces of itself, as a deluded man might tear off his clothes. In a frenzy to achieve a state of rest, it slings off a missile of two protons and two neutrons at a velocity fast enough to whip around the circumference of the earth in roughly two seconds."


Traces of uranium appear in rock, soil, and water, and can be ingested in root vegetables and seafood. Kidneys take the burden of removing it from the bloodstream, and at high enough levels, that process can damage cells, according to the Argonne National Laboratory. But here's the good news: After short-term, low-level exposures, kidneys can repair themselves.


Before we recognized uranium's potential for energy—and bombs—most of its uses revolved around color. Photographers washed platinotype prints in uranium salts to tone otherwise black and white images reddish-brown. Added to glass, uranium gave beads and goblets a canary hue. Perhaps most disconcertingly, uranium makes Fiesta Ware's red-orange glaze—a.k.a. "radioactive red"—as hot as it looks; plates made before 1973 still send Geiger counters into a frenzy.


Uranium occurs naturally in three isotopes (forms with different masses): 234, 235, and 238. Only uranium-235—which constitutes a mere 0.72 percent of natural uranium—can trigger a nuclear chain reaction. In that process, a neutron bombards a uranium nucleus, causing it to split, shedding neutrons that go on to divide more nuclei.

In the 1940s, a team of scientists began experimenting in the then-secret city of Los Alamos, New Mexico, with how to harness that power. They called it "tickling the dragon's tail." The uranium bomb their work built, Little Boy, detonated over the Japanese city of Hiroshima on August 6, 1945. Estimates vary, but the detonation is thought to have killed 70,000 people in the initial blast and at least 130,000 more from radiation poisoning over the following five years.

The same property that powered bombs is what now makes uranium useful for electricity. "It's very energy dense, so the amount of energy you can get out of one gram of uranium is exponentially more than you can get out of a gram of coal or a gram of oil," Denise Lee, research and development staff member at Oak Ridge National Laboratory, tells Mental Floss. A uranium fuel pellet the size of a fingertip boasts the same energy potential as 17,000 cubic feet of natural gas, 1780 pounds of coal, or 149 gallons of oil, according to the Nuclear Energy Institute, an industry group.
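The Nuclear Energy Institute's three equivalences can be cross-checked against typical fuel heating values (the energy densities below are standard reference figures, not from the article). Each comparison should land near the same total, on the order of 20 gigajoules per pellet:

```python
# Cross-check the pellet equivalences with typical heating values
# (assumed standard figures): 17,000 ft^3 of natural gas, 1780 lb
# of coal, and 149 gal of oil should all carry similar energy.
MJ_PER_FT3_GAS = 1.09    # natural gas, ~1030 BTU per cubic foot
MJ_PER_KG_COAL = 24.0    # bituminous coal
MJ_PER_GAL_OIL = 146.0   # crude oil, ~138,000 BTU per gallon
LB_TO_KG = 0.4536

gas = 17_000 * MJ_PER_FT3_GAS          # megajoules
coal = 1_780 * LB_TO_KG * MJ_PER_KG_COAL
oil = 149 * MJ_PER_GAL_OIL

for name, mj in [("gas", gas), ("coal", coal), ("oil", oil)]:
    print(f"{name}: {mj / 1000:.1f} GJ")   # each roughly 18-22 GJ
```

The three figures agree within about 15 percent, which is consistent with them all describing one fuel pellet's energy content.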


In the 1970s, ore samples from a mine in what is now Gabon came up short on uranium-235, measuring 0.717 percent instead of the expected 0.72 percent. In part of the mine, about 200 kilograms were mysteriously absent—enough to have fueled a half-dozen nuclear bombs. At the time, the possibility of nuclear fission reactors occurring spontaneously in nature was just a theory. The conditions required a certain deposit size, a higher concentration of uranium-235, and a surrounding environment that encouraged nuclei to continue splitting. Based on uranium-235's half-life, researchers determined that about 2 billion years ago, uranium-235 made up about 3 percent of the deposit's uranium. That was enough to set off nuclear fission reactions in at least 16 places, which flickered on and off for hundreds of thousands of years. As impressive as that sounds, the average output was likely less than 100 kilowatts—enough to run a few dozen toasters, as physicist Alex Meshik explained in Scientific American.
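The half-life reasoning is short enough to reproduce. Uranium-235 decays much faster than uranium-238 (half-lives of roughly 0.704 and 4.468 billion years, standard published values assumed here), so running the clock backward makes uranium-235 a steadily larger share of the mix:

```python
# Rewind natural uranium's isotope mix using the two half-lives
# (standard published values, an assumption here, in Gyr).
F235_NOW = 0.0072        # uranium-235 is 0.72 percent today
HALF_235 = 0.704         # uranium-235 half-life, billion years
HALF_238 = 4.468         # uranium-238 half-life, billion years

def u235_fraction(gyr_ago):
    """Uranium-235's share of natural uranium `gyr_ago` billion years ago."""
    n235 = F235_NOW * 2 ** (gyr_ago / HALF_235)
    n238 = (1 - F235_NOW) * 2 ** (gyr_ago / HALF_238)
    return n235 / (n235 + n238)

print(round(u235_fraction(2.0) * 100, 1))  # roughly 3-4 percent
```

Two billion years back, the calculation gives an enrichment in the neighborhood of the 3 percent figure—comparable to modern reactor fuel, which is why the Oklo deposit could sustain fission on its own.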


A 2010 study from MIT found the world had enough uranium reserves to supply power for decades to come. At present, all commercial nuclear power plants use at least some uranium, though plutonium is in the mix as well. One run through the reactors consumes only about 3 percent of the enriched uranium. "If you could reprocess it multiple times, it can be practically infinite," Stephanie Bruffey, a research and development staff member for Oak Ridge National Laboratory, tells Mental Floss. Tons of depleted uranium or its radioactive waste byproducts sit on concrete platforms at nuclear power plants and in vaults at historic weapons facilities around the country; these once-temporary storage systems have become a permanent home.

8 Facts About the Element Neon

Most of us are familiar with neon as a term for bright colors and vibrant signs, but you may not know as much about the element underlying the name, which scientists first isolated in 1898. Here are eight facts about neon—abbreviated Ne and number 10 on the periodic table—that might surprise you.

1. The element neon wasn’t William Ramsay’s first big discovery.

Sir William Ramsay already had a few elements under his belt by the time he and fellow British chemist Morris Travers became the first scientists to isolate neon. In 1894, he and physicist Lord Rayleigh (John William Strutt) had isolated argon from air for the first time. Then, in 1895, he became the first person to isolate helium on Earth. But he had a hunch that more noble gases might exist, and he and Travers isolated neon, krypton, and xenon for the first time in 1898. As a result of his discoveries, Ramsay won the Nobel Prize in Chemistry in 1904.

2. It’s one of the noble gases.

There are seven noble gases: helium, neon, argon, krypton, xenon, radon, and oganesson (a synthetic element). Like the other noble gases, neon is colorless, odorless, tasteless, and under standard conditions, nonflammable. Neon is highly unreactive—the least reactive of any of the noble gases, in fact—and doesn’t form chemical bonds with other elements, so no stable neon compounds are known. That non-reactivity is what makes neon so useful in light bulbs.

3. The name means new.

With the exception of helium, all of the noble gases have names ending in -on. The word neon comes from the Greek word for new, νέος.

4. It's pulled out of the air.

Neon is one of the most abundant elements in the universe. Stars produce it, and it’s one of the components of solar wind. It's also found in the lunar atmosphere. But it’s difficult to find on Earth. Neon is found in Earth’s mantle as well as in tiny amounts in air, which is where we get commercial neon. Dry air contains just 0.0018 percent neon, compared to 20.95 percent oxygen and 78.09 percent nitrogen, plus trace amounts of other gases. Using a process of alternately compressing and expanding air, scientists can turn most of these gases into liquids, separating them for industrial and commercial use. (Liquid nitrogen, for instance, is used to freeze warts and make cold brew coffee, among other applications.) In the case of neon, it’s not a simple or efficient process. It takes 88,000 pounds of liquid air to produce 1 pound of neon.
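The 88,000-to-1 figure can be sanity-checked from the 0.0018 percent volume fraction. Converting that share by volume into a share by mass via molar masses (standard values, assumed here) gives a ratio in the same ballpark:

```python
# Sanity-check the 88,000 lb of air per lb of neon figure:
# convert neon's 0.0018 percent share of air by volume (= by
# mole) into a mass fraction using molar masses (standard
# values, assumed here).
VOL_FRACTION_NE = 0.000018   # 0.0018 percent by volume
M_NE = 20.18                 # g/mol, neon
M_AIR = 28.97                # g/mol, mean molar mass of dry air

mass_fraction = VOL_FRACTION_NE * M_NE / M_AIR
print(round(1 / mass_fraction))  # roughly 80,000 lb air per lb neon
```

The idealized answer comes out near 80,000 pounds; the article's 88,000 is plausibly the practical figure once process losses are included.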

5. It glows red.

Although we associate neon with a whole spectrum of bright, colorful lights, neon itself only glows reddish-orange. The signs we think of as just “neon” often actually contain argon, helium, xenon, or mercury vapor in some combination. On their own, these gases produce different colors—mercury glows blue, while helium glows pinkish-red and xenon glows purple. So to create a range of warm and cool colors, engineers combine the different gases or add coatings to the inside of the lighting tubes. For instance, deep blue light might be a mixture of argon and mercury, while a red sign probably has a neon-argon mixture. Depending on the color, some of the signs we call neon may not contain any neon at all. (These days, though, many bright signs are made with LEDs, rather than any of these inert gases.)

6. It quickly became a lighting element.

From the start, Ramsay and Travers knew that neon glowed if it came into contact with a high voltage of electric current. In fact, Ramsay referred to its "brilliant flame-coloured light, consisting of many red, orange, and yellow lines” in his Nobel Prize lecture. Soon enough, French engineer Georges Claude began trying to harness it for use in commercial lighting. He had developed a new process to liquefy air and separate its different components on an industrial scale. His company, L’Air Liquide, started out selling liquid oxygen, but Claude also figured out a way to make money off one of the byproducts of the process, neon. Inspired by the design of Moore lamps, he put neon into long glass tubes bookended with electrodes. He debuted his first glowing neon tubes in Paris in 1910, and sold his first neon sign in 1912. He was granted a U.S. patent for neon lighting in 1915, and went on to make a fortune.

7. It made it to California before Las Vegas.

Neon signage didn’t immediately come to Las Vegas, though it would later become an integral part of that city’s architectural aesthetic. (Vegas is now home to the Neon Museum, a collection of classic neon signs.) It’s unclear where neon signs first appeared in the U.S.—legend has it that Los Angeles became the first U.S. city to boast a neon sign thanks to the luxury car company Packard (which caused traffic jams when it debuted its brightly colored billboard)—but academics and historians have had trouble verifying that claim. The earliest neon sign researchers Dydia DeLyser and Paul Greenstein were able to track down in the U.S. was indeed a Packard sign in California dating back to 1923. But it hung outside a showroom in San Francisco, not Los Angeles.

8. It’s for more than just signs.

Neon is also used in lasers, electronic equipment, diving gear, and more. It’s a highly effective refrigerant, and is used to cool motors, power equipment, and superconductors, among other things.

8 Facts About Silver

Peter Macdiarmid/Getty Images

Subtle silver gets pushed aside next to gold, but in many ways it outranks its lustrous competition. The cool-toned element is more conductive and more reflective, and boasts properties absent in other metals, like a reaction with light that put the “silver” in “silver screen.” Read on for more.


Archeological records show humans have mined and used silver (or Ag, number 47 on the periodic table) for at least 5000 years. Silver shows up in slag heaps at ancient mines in Turkey and Greece, as well as in deposits in China, Korea, Japan, and South America. Its visible shine made it popular in jewelry, decorative objects, and practical tools like the aptly named silverware. Its rarity gave it high value. Silver coins are credited with fueling the rise of classical Athens, and Vikings used “hacksilver”—chunks of silver bullion chopped off a larger block of the metal—as money.


As a soft, pliable metal, silver is easily smelted, but the process still requires moderate heat. Metal workers in the precolonial Americas didn’t have bellows to pump oxygen to a fire; instead, several people would encircle the fire and blow on it through tubes to increase its intensity. The Inca of the Andes became expert silversmiths. They believed gold was the sweat of the sun, and silver came from the tears of the moon.


Of all metals, silver is the best conductor of heat and electricity, so it can be used in a wide variety of applications. Metal solder, electrical parts, printed circuit boards, and batteries have all been made with silver. But silver is expensive, so in electrical wiring, copper is often used instead.


In the 1720s, German physicist Johann Heinrich Schulze produced the first images with silver. Having discovered that a piece of chalk dipped in silver nitrate would turn black when exposed to sunlight, Schulze affixed stencils to a glass jar filled with a mix of chalk and silver nitrate. When he brought the jar into the sun, the light “printed” the stencil letters onto the chalk. A century later, Louis-Jacques-Mandé Daguerre created photographic prints on silver-coated copper plates. At the same time, British chemist William Henry Fox Talbot devised a method for developing an exposed image on silver iodide-coated paper with gallic acid.

“The effect was seen as magical, a devilish art. But this mystical development of an invisible picture was a simple reduction reaction,” science reporter Victoria Gill explains on the Royal Society of Chemistry’s podcast Chemistry in its Element. “Hollywood could never have existed without the chemical reaction that gave celluloid film its ability to capture the stars and bring them to the aptly dubbed silver screen.” Silver salts are still used in rendering high-quality images.


Silver reacts with sulfur in the air, which forms a layer of tarnish that can darken or change the color of a silver object. The tarnish interferes with how silver reflects light, often turning the object black, gray, or a mix of purple, orange, and red. An at-home experiment can demonstrate the process: Put a shelled and quartered hard-boiled egg (preferably still warm) in the same container as a silver object, like a spoon, and seal the container closed. The tarnish should appear within an hour, thanks to the egg’s release of hydrogen sulfide gas, and grow darker as time goes on.


According to a 2009 review, silver was one of the most important anti-microbial tools in use before the discovery of modern antibiotics in the 1940s. The ancient Macedonians were likely the first to apply silver plates to surgical wounds, while doctors in World War I used silver to prevent infections when suturing battlefield injuries. Silver is toxic to bacteria, but not to humans—unless it’s consumed in large quantities. Ingesting too much silver can cause argyria, a condition where the skin permanently turns gray or blue due to silver’s reactivity with light.

A 2013 study in Science Translational Medicine looked into the mechanisms behind silver’s anti-microbial powers. The findings suggested that silver makes bacterial cells more permeable and interferes with their metabolism. When antibiotics were administered with a small amount of silver, the drugs killed between 10 and 1000 times more bacteria than without it. “It’s not so much a silver bullet; more a silver spoon to help [bacteria] take their medicine,” lead researcher James Collins, a biomedical engineer at Boston University, told Nature.


When regions need rain after a prolonged drought, scientists can “seed” clouds by spraying silver iodide particles into the atmosphere. In the 1940s, Bernard Vonnegut (brother of the author Kurt Vonnegut) demonstrated in a lab that silver iodide provides a scaffold on which water molecules can freeze, which (theoretically) would lead to precipitation in the form of snowflakes. In a 2018 study, researchers from the University of Colorado Boulder and other institutions demonstrated the process in actual clouds. The team sent out two planes: one to spray silver iodide, the other to track its course and measure how the water responded. The second plane recorded a zigzagged line of water particles freezing in the same flight path as the plane spraying silver, confirming silver iodide’s role in cloud seeding.

Bernard Vonnegut had made his discovery while he and his brother both worked for General Electric in Schenectady, New York. The two discussed the idea of water stabilized as ice at room temperature—a concept that Kurt Vonnegut went on to explore as ice-nine in his novel Cat’s Cradle.


The United States established a “bimetallic” currency during George Washington’s presidency. The policy required the federal government to purchase millions of ounces of silver each year to mint coins or set the value of paper currency. Government demand for silver contributed to the boom of Western mining towns in the second half of the 19th century, and the 1890 Sherman Silver Purchase Act further increased the federal purchase of silver.

But falling values in relation to gold eventually led to the repeal of the Sherman Act, and the price of silver crashed. The mining settlements shrank from hundreds of residents to just a handful—and some were completely abandoned. Ghost towns (or minimally populated near-ghost towns) with names like Bullionville, El Dorado, Potosi, and Midas can still be explored in Nevada, the Silver State.