Crocker Land: The Legendary Arctic Island That Didn't Actually Exist

Luke Spencer

In the archives of the American Geographical Society in Milwaukee lies a century-old map with a peculiar secret. Just north of Greenland, the map shows a small, hook-shaped island labeled “Crocker Land” with the words “Seen By Peary, 1906” printed just below.

The Peary in question is Robert Peary, one of the most famous polar explorers of the late 19th and early 20th centuries, and the man who claimed to have been the first to set foot on the North Pole. But what makes this map remarkable is that Crocker Land was nothing but a phantom. It wasn't “seen by Peary”—as later expeditions would prove, the explorer had invented it out of the thin Arctic air.

Robert Peary aboard the Roosevelt.

Hulton Archive/Getty Images

By 1906, Peary was the hardened veteran of five expeditions to the Arctic Circle. Desperate to be the first to the North Pole, he left New York in the summer of 1905 in a state-of-the-art ice-breaking vessel, the Roosevelt—named in honor of one of the principal backers of the expedition, President Theodore Roosevelt. The mission to set foot on the top of the world ended in failure, however: Peary said he sledged to within 175 miles of the pole (a claim others would later question), but was forced to turn back by storms and dwindling supplies.

Peary immediately began planning another attempt, but found himself short of cash. He apparently tried to coax funds from one of his previous backers, San Francisco financier George Crocker—who had donated $50,000 to the 1905-'06 mission—by naming a previously undiscovered landmass after him. In his 1907 book Nearest the Pole, Peary claimed that during his 1906 mission he'd spotted “the faint white summits” of previously undiscovered land 130 miles northwest of Cape Thomas Hubbard, one of the most northerly parts of Canada. Peary named this newfound island “Crocker Land” in his benefactor’s honor, hoping to secure another $50,000 for the next expedition.

His efforts were for naught: Crocker diverted much of his resources to helping San Francisco rebuild after the 1906 earthquake, with little apparently free for funding Arctic exploration. But Peary did make another attempt at the North Pole after securing backing from the National Geographic Society, and on April 6, 1909, he stood on the roof of the planet—at least by his own account. “The Pole at last!!!” the explorer wrote in his journal. “The prize of 3 centuries, my dream and ambition for 23 years. Mine at last.”

Peary wouldn't celebrate his achievement for long, though: When the explorer returned home, he discovered that Frederick Cook—who had served under Peary on his 1891 North Greenland expedition—was claiming he'd been the first to reach the pole a full year earlier. For a time, a debate over the two men's claims raged—and Crocker Land became part of the fight. Cook claimed that on his way to the North Pole he’d traveled to the area where the island was supposed to be, but had seen nothing there. Crocker Land, he said, didn't exist.

Peary’s supporters began to counter-attack, and one of his assistants on the 1909 trip, Donald MacMillan, announced that he would lead an expedition to prove the existence of Crocker Land, vindicating Peary and forever ruining the reputation of Cook.

There was also, of course, the glory of being the first to set foot on the previously unexplored island. Historian David Welky, author of A Wretched and Precarious Situation: In Search of the Last Arctic Frontier, recently explained to National Geographic that with both poles conquered, Crocker Land was “the last great unknown place in the world.”

A report from the Crocker Land expedition.
American Geographical Society Library. University of Wisconsin-Milwaukee Libraries.

After receiving backing from the American Museum of Natural History, the University of Illinois, and the American Geographical Society, the MacMillan expedition departed from the Brooklyn Navy Yard in July 1913. MacMillan and his team took provisions, dogs, a cook, “a moving picture machine,” and wireless equipment, with the grand plan of making a radio broadcast live to the United States from the island.

But almost immediately, the expedition was met with misfortune: MacMillan’s ship, the Diana, was wrecked on the voyage to Greenland by her allegedly drunken captain, so MacMillan transferred to another ship, the Erik, to continue his journey. By early 1914, with the seas frozen, MacMillan set out on a 1200-mile sled journey from Etah, Greenland, through one of the harshest, most inhospitable landscapes on Earth, in search of Peary’s phantom island.

Though initially inspired by their mission to find Crocker Land, MacMillan’s team grew disheartened as they sledged through the Arctic landscape without finding it. “You can imagine how earnestly we scanned every foot of that horizon—not a thing in sight,” MacMillan wrote in his 1918 book, Four Years In The White North.

But a discovery one April day by Fitzhugh Green, a 25-year-old ensign in the US Navy, gave them hope. As MacMillan later recounted, Green was “no sooner out of the igloo than he came running back, calling in through the door, ‘We have it!’ Following Green, we ran to the top of the highest mound. There could be no doubt about it. Great heavens! What a land! Hills, valleys, snow-capped peaks extending through at least one hundred and twenty degrees of the horizon.”

But visions of the fame brought by being the first to set foot on Crocker Land quickly evaporated. “I turned to Pee-a-wah-to,” wrote MacMillan of his Inuit guide (also referred to by some explorers as Piugaattog). “After critically examining the supposed landfall for a few minutes, he astounded me by replying that he thought it was a ‘poo-jok’ (mist).”

Indeed, MacMillan recorded that “the landscape gradually changed its appearance and varied in extent with the swinging around of the Sun; finally at night it disappeared altogether.” For five more days, the explorers pressed on, until it became clear that what Green had seen was a mirage, a polar fata morgana. Named for the sorceress Morgana le Fay in the legends of King Arthur, these powerful illusions are produced when light bends as it passes through the freezing air, leading to mysterious images of apparent mountains, islands, and sometimes even floating ships.

Fata morganas are a common occurrence in polar regions, but would a man like Peary have been fooled? “As we drank our hot tea and gnawed the pemmican, we did a good deal of thinking,” MacMillan wrote. “Could Peary with all his experience have been mistaken? Was this mirage which had deceived us the very thing which had deceived him eight years before? If he did see Crocker Land, then it was considerably more than 120 miles away, for we were now at least 100 miles from shore, with nothing in sight.”

MacMillan’s mission was forced to accept the unthinkable and turn back. “My dreams of the last four years were merely dreams; my hopes had ended in bitter disappointment,” MacMillan wrote. But the despair at realizing that Crocker Land didn’t exist was merely the beginning of the ordeal.

Donald MacMillan in seal skin coat on the Crocker Land Expedition.
American Geographical Society Library. University of Wisconsin-Milwaukee Libraries.

MacMillan sent Fitzhugh Green and the Inuit guide Piugaattog west to explore a possible route back to their base camp in Etah. The two became trapped in the ice, and one of their dog teams died. Fighting over the remaining dogs, Green—with an alarming lack of remorse—explained in his diary what happened next: “I shot once in the air ... I then killed [Piugaattog] with a shot through the shoulder and another through the head.” Green returned to the main party and confessed to MacMillan. Rather than reveal the murder, the expedition leader told the Inuit members of the mission that Piugaattog had perished in the blizzard.

Several members of the MacMillan mission would remain trapped in the ice for another three years, victims of the Arctic weather. Two attempts by the American Museum of Natural History to rescue them met with failure, and it wasn’t until 1917 that MacMillan and his party were finally saved by the steamer Neptune, captained by seasoned Arctic sailor Robert Bartlett.

While stranded in the ice, the men put their time to good use; they studied glaciers, astronomy, the tides, Inuit culture, and anything else that attracted their curiosity. They eventually returned with over 5000 photographs, thousands of specimens, and some of the earliest film taken of the Arctic (much of which can be seen today in the repositories of the American Geographical Society at the University of Wisconsin-Milwaukee).

It’s unclear whether MacMillan ever confronted Peary about Crocker Land—about what exactly the explorer had seen in 1906, and perhaps what his motives were. When MacMillan’s news about not having found Crocker Land reached the United States, Peary defended himself to the press by noting how difficult spotting land in the Arctic could be, telling reporters, “Seen from a distance ... an iceberg with earth and stones may be taken for a rock, a cliff-walled valley filled with fog for a fjord, and the dense low clouds above a patch of open water for land.” (He maintained, however, that "physical indications and theory" still pointed to land somewhere in the area.) Yet later researchers have noted that Peary’s notes from his 1905-'06 expedition don’t mention Crocker Land at all. As Welky told National Geographic, “He talks about a hunting trip that day, climbing the hills to get this view, but says absolutely nothing about seeing Crocker Land. Several crewmembers also kept diaries, and according to those he never mentioned anything about seeing a new continent.”

There’s no mention of Crocker Land in early drafts of Nearest the Pole, either—it's only mentioned in the final manuscript. That suggests Peary had a deliberate reason for the inclusion of the island.

Crocker, meanwhile, wouldn’t live to learn whether he had been immortalized by this mysterious new landmass: He died of stomach cancer in December 1909, a year after Peary had set out in the Roosevelt again in search of the Pole, and before MacMillan’s expedition.

Any remnants of the legend of Crocker Land were put to bed in 1938, when Isaac Schlossbach flew over where the mysterious island was supposed to be, looked down from his cockpit, and saw nothing.

Why the Filet-O-Fish Sandwich Has Been on the McDonald's Menu for Nearly 60 Years

McDonald's has introduced and quietly killed many dishes over the years (remember McDonald's pizza?), but there's a core group of items that have held their spot on the menu for decades. Listed alongside the Big Mac and McNuggets is the Filet-O-Fish—a McDonald's staple you may have forgotten about if you're not the type of person who orders seafood from fast food restaurants. But the classic sandwich, consisting of a fried fish filet, tartar sauce, and American cheese on a bun, didn't get on the menu by mistake—and thanks to its popularity around Lent, it's likely to stick around.

According to Taste of Home, the inception of the Filet-O-Fish can be traced back to a McDonald's franchise that opened near Cincinnati, Ohio, in 1959. Back then the restaurant offered beef burgers as its only main dish, and for most of the year, diners couldn't get enough of them. Things changed during Lent: Many Catholics abstain from eating meat and poultry on Fridays during the holy season as a form of fasting, and in the early 1960s, Cincinnati was more than 85 percent Catholic. Friday is supposed to be one of the busiest days of the week for restaurants, but sales at the Ohio McDonald's took a nosedive every Friday leading up to Easter.

Franchise owner Lou Groen went to McDonald's founder Ray Kroc with the plan of adding a meat alternative to the menu to lure back Catholic customers. He proposed a fried halibut sandwich with tartar sauce (though meat is off-limits for Catholics on Fridays during Lent, seafood doesn't count as meat). Kroc didn't love the idea, citing his fears of stores smelling like fish, and suggested a "Hula Burger" made from a pineapple slice with cheese instead. To decide which item would earn a permanent place on the menu, they put the two sandwiches head to head at Groen's McDonald's one Friday during Lent.

The restaurant sold 350 Filet-O-Fish sandwiches that day—clearly beating the Hula Burger (though exactly how many pineapple burgers sold, Kroc wouldn't say). The basic recipe has received a few tweaks, switching from halibut to the cheaper cod and from cod to the more sustainable Alaskan pollock, but the Filet-O-Fish has remained part of the McDonald's lineup in some form ever since. Today 300 million of the sandwiches are sold annually, and about a quarter of those sales are made during Lent.

Other seafood products McDonald's has introduced haven't had the same staying power as the Filet-O-Fish. In 2013, the chain rolled out Fish McBites, a chickenless take on McNuggets, only to pull them from menus that same year.

[h/t Taste of Home]

The Disturbing Reason Schools Tattooed Their Students in the 1950s

Kurt Hutton, Hulton Archive/Getty Images

When Paul Bailey was born at Beaver County Hospital in Milford, Utah on May 9, 1955, it took less than two hours for the staff to give him a tattoo. Located on his torso under his left arm, the tiny marking was rendered in indelible ink with a needle gun and indicated Bailey’s blood type: O-Positive.

“It is believed to be the youngest baby ever to have his blood type tattooed on his chest,” reported the Beaver County News, coolly referring to the infant as an “it.” A hospital employee was quick to note parental consent had been obtained first.

The permanent tattooing of a child who was only hours old was not met with any hysteria. Just the opposite: In parts of Utah and Indiana, local health officials had long been hard at work instituting a program that would facilitate potentially life-saving blood transfusions in the event of a nuclear attack. By branding children and adults alike with their blood type, donors could be immediately identified and used as “walking blood banks” for the critically injured.

Taken out of context, it seems unimaginable. But in the 1950s, when the Cold War was at its apex and atomic warfare appeared not only possible but likely, children willingly lined up at schools to perform their civic duty. They raised their arms, gritted their teeth, and held still while the tattoo needle pierced their flesh.

 

The practice of subjecting children to tattoos for blood-typing has appropriately morbid roots. Testifying at the Nuremberg Tribunal on War Crimes in the 1940s, American Medical Association physician Andrew Ivy observed that members of the Nazi Waffen-SS carried body markings indicating their blood type [PDF]. When he returned to his hometown of Chicago, Ivy carried with him a solution for quickly identifying blood donors—a growing concern due to the outbreak of the Korean War in 1950. The conflict was depleting blood banks of inventory, and it was clear that reserves would be necessary.

School children sit next to one another circa the 1950s
Reg Speller, Fox Photos/Getty Images

If the Soviet Union targeted areas of the United States for destruction, it would be vital to have a protocol for blood transfusions to treat radiation poisoning. Matches would need to be found quickly. (Transfusions depend on matching blood to avoid the adverse reactions that come from mixing different types. When a person receives blood different from their own, the body will create antibodies to destroy the red blood cells.)

In 1950, the Department of Defense placed the American Red Cross in charge of blood donor banks for the armed forces. In 1952, the Red Cross was the coordinating agency [PDF] for obtaining blood from civilians for the National Blood Program, which was meant to replenish donor supply during wartime. Those were both measures for soldiers. Meanwhile, local medical societies were left to determine how best to prepare their civilian communities for a nuclear event and its aftermath.

As part of the Chicago Medical Civil Defense Committee, Ivy promoted the use of the tattoos, declaring them as painless as a vaccination. Residents would get blood-typed by having their finger pricked and a tiny droplet smeared on a card. From there, they would be tattooed with the ABO blood group and Rhesus factor (or Rh factor), which denotes whether or not a person has a certain type of blood protein present.

The Chicago Medical Society and the Board of Health endorsed the program and citizens voiced a measure of support for it. One letter to the editor of The Plainfield Courier-News in New Jersey speculated it might even be a good idea to tattoo Social Security numbers on people's bodies to make identification easier.

Despite such marked enthusiasm, the project never entered into a pilot testing stage in Chicago.

Officials with the Lake County Medical Society in nearby Lake County, Indiana were more receptive to the idea. In the spring of 1951, 5000 residents were blood-typed using the card method. But, officials cautioned, the cards could be lost in the chaos of war or even the relative quiet of everyday life. Tattoos and dog tags were encouraged instead. When 1000 people lined up for blood-typing at a county fair, two-thirds agreed to be tattooed as part of what the county had dubbed "Operation Tat-Type." By December 1951, 15,000 Lake County residents had been blood-typed. Roughly 60 percent opted for a permanent marking.

The program was so well-received that the Lake County Medical Society quickly moved toward making children into mobile blood bags. In January 1952, five elementary schools in Hobart, Indiana enrolled in the pilot testing stage. Children were sent home with permission slips explaining the effort. If parents consented, students would line up on appointed tattoo days to get their blood typed with a finger prick. From there, they’d file into a room—often the school library—set up with makeshift curtains behind which they could hear a curious buzzing noise.

When a child stepped inside, they were greeted by a school administrator armed with indelible ink and wielding a Burgess Vibrotool, a medical tattoo gun featuring 30 to 50 needles. The child would raise their left arm to expose their torso (since arms and legs might be blown off in an attack) and were told the process would only take seconds.

A child raises his hand in class circa the 1950s
Vecchio/Three Lions/Getty Images

Some children were stoic. Some cried before, during, or after. One 11-year-old recounting her experience with the program said a classmate emerged from the session and promptly fainted. All were left with a tattoo less than an inch in diameter on their left side, intentionally pale so it would be as unobtrusive as possible.

At the same time that grade schoolers—and subsequently high school students—were being imprinted in Indiana, kids in Cache and Rich counties in Utah were also submitting to the program, despite potential religious obstacles for the region's substantial Mormon population. In fact, Bruce McConkie, a representative of the Church of Jesus Christ of Latter-day Saints, declared that blood-type tattoos were exempt from the typical prohibitions on Mormons defacing their bodies, giving the program a boost among the devout. The experiment would not last much longer, though.

 

By 1955, 60,000 adults and children had gotten tattooed with their blood types in Lake County. In Milford, health officials persisted in promoting the program widely, offering the tattoos for free during routine vaccination appointments. But despite the cooperation exhibited by communities in Indiana and Utah, the programs never spread beyond their borders.

The Korean conflict had come to an end in 1953, reducing the strain put on blood supplies and along with it, the need for citizens to double as walking blood banks. More importantly, outside of the program's avid boosters, most physicians were extremely reluctant to rely solely on a tattoo for blood-typing. They preferred to do their own testing to make certain a donor was a match with a patient.

There were other logistical challenges that made the program less than useful. The climate of a post-nuclear landscape meant that bodies might be charred, burning off tattoos and rendering the entire operation largely pointless. With the Soviet Union’s growing nuclear arsenal—1600 warheads were ready to take to the skies by 1960—the idea of civil defense became outmoded. Ducking and covering under desks, which might have shielded some from the immediate effects of a nuclear blast, would be meaningless in the face of such mass destruction.

Programs like tat-typing eventually fell out of favor, yet tens of thousands of adults consented to participate even after the flaws in the program were publicized, and a portion allowed their young children to be marked, too. Their motivation? According to Carol Fischler, who spoke with the podcast 99% Invisible about being tattooed as a young girl in Indiana, the paranoia over the Cold War in the 1950s drowned out any thought of the practice being outrageous or harmful. Kids wanted to do their part. Many nervously bit their lip but still lined up with the attitude that the tattoo was part of being a proud American.

Perhaps equally important, children who complained of the tattoo leaving them particularly sore received another benefit: They got the rest of the afternoon off.
