The Strange, Short-Lived British Trend of Hiring Ornamental Hermits

An 1830s print of visitors arriving at a hermitage.
Flickr // Public Domain

If you were a grand gentleman of the Georgian era, having a huge country house with lavishly landscaped grounds wasn’t enough to impress your visitors. No, you needed a little something extra. You needed an ornamental hermit.

True hermits, those who shun society and live in isolation to pursue higher spiritual enlightenment, had been a part of the religious landscape of Britain for centuries. The trend of adding hermits to estate grounds for aesthetic purposes arose in the 18th century out of a naturalistic influence in British gardens. Famed landscape gardener Lancelot “Capability” Brown (1715-1783) was a leading proponent of this naturalistic approach, which shunned the French-style formal gardens of old (think neatly trimmed lawns, elaborately shaped box hedges, and geometric gravel paths) in favor of serpentine paths that meandered past romantic-looking lakes, rustic clumps of trees, and artfully crumbling follies. This new style of garden frequently also featured a picturesque hermitage constructed of brick or stone, or even gnarled tree roots and branches. Many were decorated inside with shells or bones to create a suitably atmospheric retreat.

The hermitage at Waterstown, County Westmeath, Ireland.

The Hermit in the Garden: From Imperial Rome to Ornamental Gnome by Gordon Campbell, Oxford University Press, reprinted with permission.

With the new fashion for building hermitages in country estates, the next logical step was to populate them with an actual hermit. It’s not clear who first started the trend, but at some point in the early 18th century, having a resident hermit quietly contemplating existence—and occasionally sharing some golden nugget of wisdom with visitors—came to be seen as a must-have accessory for the perfect garden idyll.

Real hermits were hard to find, so wealthy landowners had to get creative. Some put advertisements in the press, offering food, lodging, and a stipend for those willing to adopt a life of solitude. The Honorable Charles Hamilton placed one such ad after buying Painshill Park (an estate in Cobham, Surrey) and extensively remodeling the grounds. Hamilton created a lake, grottoes, Chinese bridge, temple, and a hermitage on his estate, then placed an ad for a hermit to live there for seven years in exchange for £700 (roughly $900, or $77,000 in today’s money). The hermit was not allowed to speak to anyone, cut their hair, or leave the estate. Unfortunately, the successful applicant was discovered in the local pub just three weeks after being appointed. He was relieved of his role and not replaced, perhaps demonstrating the difficulty of attracting a serious hermit.

One of the more famous Georgian hermits was Father Francis, who lived at Hawkstone Park in Shropshire in a summer hermitage made with stone walls, a heather-thatched roof, and a stable door. Inside, he would sit at a table strewn with symbolic items, such as a skull, an hourglass, and a globe, while conversing with visitors, offering spiritual guidance and ponderings on the nature of solitude. So popular was the attraction of a meeting with a real-life hermit that the Hill family, who owned the park, were obliged to build their own pub, The Hawkstone Arms, to cater to all the guests.

A 1787 etching of "eccentric hermit" John Bigg.

But while some estate owners struggled to find a good hermit, taking on the role did have some appeal, as evidenced by this 1810 ad in the Courier:

“A young man, who wishes to retire from the world and live as a hermit, in some convenient spot in England, is willing to engage with any nobleman or gentleman who may be desirous of having one. Any letter addressed to S. Laurence (post paid), to be left at Mr. Otton's No. 6 Coleman Lane, Plymouth, mentioning what gratuity will be given, and all other particulars, will be duly attended.”

Sadly, it is not known whether or not the would-be hermit received any replies.

When a nobleman was unable to attract a real hermit to reside in his hermitage, a number of novel solutions were employed. In 1763, the botanist Gilbert White managed to persuade his brother, the Reverend Henry White, to temporarily put aside his cassock in order to pose as a wizened sage at Gilbert’s Selborne estate for the amusement of his guests. Miss Catharine Battie was one such guest, who later wrote in her diary (with a frustrating lack of punctuation) that “in the middle of tea we had a visit from the old Hermit his appearance made me start he sat some with us & then went away after tea we went in to the Woods return’d to the Hermitage to see it by Lamp light it look’d sweetly indeed. Never shall I forget the happiness of this day ...”

If an obliging brother was not available to pose as a hermit, garden owners instead might furnish the hermitage with traditional hermit accessories, such as an hourglass, book, and glasses, so that visitors might presume the resident hermit had just popped out for a moment. Some took this to even greater extremes, putting a dummy or automaton in the hermit’s place. One such example was found at the Wodehouse in Wombourne, Staffordshire, England [PDF], where in the mid-18th century Samuel Hellier added a mechanical hermit that was said to move and give a lifelike impression.

Another mechanical hermit was apparently used at Hawkstone Park to replace Father Francis after his death, although it received a critical review from one 18th-century tourist: “The face is natural enough, the figure stiff and not well managed. The effect would be infinitely better if the door were placed at the angle of the wall and not opposite you. The passenger would then come upon St. [sic] Francis by surprise, whereas the ringing of the bell and door opening into a building quite dark within renders the effect less natural.”

The fashion for employing an ornamental hermit was fairly fleeting, perhaps due to the trouble of recruiting a reliable one. However, the phenomenon does provide some insight into the growth of tourism in the Georgian period—the leisured classes were beginning to explore country estates, and a hermit was seen as another attraction alongside the temples, fountains, and sweeping vistas provided in the newly landscaped grounds.

Today, the fascination with hermits still exists. At the end of April 2017, a new hermit, 58-year-old Stan Vanuytrecht, moved into a hermitage in Saalfelden, Austria, high up in the mountains. Fifty people applied for his position, despite the lack of internet, running water, or heating. The hermitage, which has been continuously inhabited for the last 350 years, welcomes visitors to come and enjoy spiritual conversation with their resident hermit, and expects plenty of guests.

Why the Filet-O-Fish Sandwich Has Been on the McDonald's Menu for Nearly 60 Years

McDonald's has introduced and quietly killed many dishes over the years (remember McDonald's pizza?), but there's a core group of items that have held their spot on the menu for decades. Listed alongside the Big Mac and McNuggets is the Filet-O-Fish—a McDonald's staple you may have forgotten about if you're not the type of person who orders seafood from fast food restaurants. But the classic sandwich, consisting of a fried fish filet, tartar sauce, and American cheese on a bun, didn't get on the menu by mistake—and thanks to its popularity around Lent, it's likely to stick around.

According to Taste of Home, the inception of the Filet-O-Fish can be traced back to a McDonald's franchise that opened near Cincinnati, Ohio in 1959. Back then the restaurant offered beef burgers as its only main dish, and for most of the year, diners couldn't get enough of them. Things changed during Lent: Many Catholics abstain from eating meat and poultry on Fridays during the holy season as a form of fasting, and in the early 1960s, Cincinnati was more than 85 percent Catholic. Fridays are supposed to be one of the busiest days of the week for restaurants, but sales at the Ohio McDonald's took a nosedive every Friday leading up to Easter.

Franchise owner Lou Groen went to McDonald's founder Ray Kroc with the plan of adding a meat alternative to the menu to lure back Catholic customers. He proposed a fried halibut sandwich with tartar sauce (though meat is off-limits for Catholics on Fridays during Lent, seafood doesn't count as meat). Kroc didn't love the idea, citing his fears of stores smelling like fish, and suggested a "Hula Burger" made from a pineapple slice with cheese instead. To decide which item would earn a permanent place on the menu, they put the two sandwiches head to head at Groen's McDonald's one Friday during Lent.

The restaurant sold 350 Filet-O-Fish sandwiches that day—clearly beating the Hula Burger (though exactly how many pineapple burgers sold, Kroc wouldn't say). The basic recipe has received a few tweaks, switching from halibut to the cheaper cod and from cod to the more sustainable Alaskan pollock, but the Filet-O-Fish has remained part of the McDonald's lineup in some form ever since. Today 300 million of the sandwiches are sold annually, and about a quarter of those sales are made during Lent.

Other seafood products McDonald's has introduced haven't had the same staying power as the Filet-O-Fish. In 2013, the chain rolled out Fish McBites, a chickenless take on McNuggets, only to pull them from menus that same year.

[h/t Taste of Home]

The Disturbing Reason Schools Tattooed Their Students in the 1950s

Kurt Hutton, Hulton Archive/Getty Images

When Paul Bailey was born at Beaver County Hospital in Milford, Utah on May 9, 1955, it took less than two hours for the staff to give him a tattoo. Located on his torso under his left arm, the tiny marking was rendered in indelible ink with a needle gun and indicated Bailey’s blood type: O-Positive.

“It is believed to be the youngest baby ever to have his blood type tattooed on his chest,” reported the Beaver County News, coolly referring to the infant as an “it.” A hospital employee was quick to note that parental consent had been obtained first.

The permanent tattooing of a child who was only hours old was not met with any hysteria. Just the opposite: In parts of Utah and Indiana, local health officials had long been hard at work instituting a program that would facilitate potentially life-saving blood transfusions in the event of a nuclear attack. By branding children and adults alike with their blood type, donors could be immediately identified and used as “walking blood banks” for the critically injured.

Taken out of context, it seems unimaginable. But in the 1950s, when the Cold War was at its apex and atomic warfare appeared not only possible but likely, children willingly lined up at schools to perform their civic duty. They raised their arm, gritted their teeth, and held still while the tattoo needle began piercing their flesh.

 

The practice of subjecting children to tattoos for blood-typing has appropriately morbid roots. Testifying at the Nuremberg Tribunal on War Crimes in the 1940s, American Medical Association physician Andrew Ivy observed that members of the Nazi Waffen-SS carried body markings indicating their blood type [PDF]. When he returned to his hometown of Chicago, Ivy carried with him a solution for quickly identifying blood donors—a growing concern due to the outbreak of the Korean War in 1950. The conflict was depleting blood banks of inventory, and it was clear that reserves would be necessary.

School children sit next to one another circa the 1950s
Reg Speller, Fox Photos/Getty Images

If the Soviet Union targeted areas of the United States for destruction, it would be vital to have a protocol for blood transfusions to treat radiation poisoning. Matches would need to be found quickly. (Transfusions depend on matching blood to avoid the adverse reactions that come from mixing different types. When a person receives blood different from their own, the body will create antibodies to destroy the red blood cells.)

In 1950, the Department of Defense placed the American Red Cross in charge of blood donor banks for the armed forces. In 1952, the Red Cross was the coordinating agency [PDF] for obtaining blood from civilians for the National Blood Program, which was meant to replenish donor supply during wartime. Those were both measures for soldiers. Meanwhile, local medical societies were left to determine how best to prepare their civilian communities for a nuclear event and its aftermath.

As part of the Chicago Medical Civil Defense Committee, Ivy promoted the use of the tattoos, declaring them as painless as a vaccination. Residents would get blood-typed by having their finger pricked and a tiny droplet smeared on a card. From there, they would be tattooed with the ABO blood group and Rhesus factor (or Rh factor), which denotes whether or not a person has a certain type of blood protein present.

The Chicago Medical Society and the Board of Health endorsed the program and citizens voiced a measure of support for it. One letter to the editor of The Plainfield Courier-News in New Jersey speculated it might even be a good idea to tattoo Social Security numbers on people's bodies to make identification easier.

Despite such marked enthusiasm, the project never entered into a pilot testing stage in Chicago.

Officials with the Lake County Medical Society in nearby Lake County, Indiana were more receptive to the idea. In the spring of 1951, 5000 residents were blood-typed using the card method. But, officials cautioned, the cards could be lost in the chaos of war or even the relative quiet of everyday life. Tattoos and dog tags were encouraged instead. When 1000 people lined up for blood-typing at a county fair, two-thirds agreed to be tattooed as part of what the county had dubbed "Operation Tat-Type." By December 1951, 15,000 Lake County residents had been blood-typed. Roughly 60 percent opted for a permanent marking.

The program was so well-received that the Lake County Medical Society quickly moved toward making children into mobile blood bags. In January 1952, five elementary schools in Hobart, Indiana enrolled in the pilot testing stage. Children were sent home with permission slips explaining the effort. If parents consented, students would line up on appointed tattoo days to get their blood typed with a finger prick. From there, they’d file into a room—often the school library—set up with makeshift curtains behind which they could hear a curious buzzing noise.

When a child stepped inside, they were greeted by a school administrator armed with indelible ink and wielding a Burgess Vibrotool, a medical tattoo gun featuring 30 to 50 needles. The child would raise their left arm to expose their torso (since arms and legs might be blown off in an attack) and were told the process would only take seconds.

A child raises his hand in class circa the 1950s
Vecchio/Three Lions/Getty Images

Some children were stoic. Some cried before, during, or after. One 11-year-old recounting her experience with the program said a classmate emerged from the session and promptly fainted. All were left with a tattoo less than an inch in diameter on their left side, intentionally pale so it would be as unobtrusive as possible.

At the same time that grade schoolers—and subsequently high school students—were being imprinted in Indiana, kids in Cache and Rich counties in Utah were also submitting to the program, despite potential religious obstacles for the region's substantial Mormon population. In fact, Bruce McConkie, a representative of the Church of Jesus Christ of Latter-Day Saints, declared that blood-type tattoos were exempt from the typical prohibitions on Mormons defacing their bodies, giving the program a boost among the devout. The experiment would not last much longer, though.

 

By 1955, 60,000 adults and children had gotten tattooed with their blood types in Lake County. In Milford, health officials persisted in promoting the program widely, offering the tattoos for free during routine vaccination appointments. But despite the cooperation exhibited by communities in Indiana and Utah, the programs never spread beyond their borders.

The Korean conflict had come to an end in 1953, reducing the strain on blood supplies and, along with it, the need for citizens to double as walking blood banks. More importantly, outside of the program's avid boosters, most physicians were extremely reluctant to rely solely on a tattoo for blood-typing. They preferred to do their own testing to make certain a donor was a match with a patient.

There were other logistical challenges that made the program less than useful. The climate of a post-nuclear landscape meant that bodies might be charred, burning off tattoos and rendering the entire operation largely pointless. With the Soviet Union’s growing nuclear arsenal—1600 warheads were ready to take to the skies by 1960—the idea of civic defense became outmoded. Ducking and covering under desks, which might have shielded some from the immediate effects of a nuclear blast, would be meaningless in the face of such mass destruction.

Programs like tat-typing eventually fell out of favor, yet tens of thousands of adults consented to participate even after the flaws in the program were publicized, and a portion allowed their young children to be marked, too. Their motivation? According to Carol Fischler, who spoke with the podcast 99% Invisible about being tattooed as a young girl in Indiana, the paranoia over the Cold War in the 1950s drowned out any thought of the practice being outrageous or harmful. Kids wanted to do their part. Many nervously bit their lip but still lined up with the attitude that the tattoo was part of being a proud American.

Perhaps equally important, children who complained of the tattoo leaving them particularly sore received another benefit: They got the rest of the afternoon off.
