Antisocial Media: The Rise and Fall of Friendster

iStock

When software engineer Jonathan Abrams arrived in Silicon Valley in 1996, the internet was known for three things: vast amounts of information, pornography, and anonymity. If users weren't investigating the first two, they were exploiting the third to argue about movies or politics, their unfiltered opinions unencumbered by concerns over embarrassment. People were known only by their screen handles.

Abrams, who came to California to program for the web browser Netscape, had an idea. What if people could use their real names, faces, and locations online? Instead of having an avatar, they'd simply upload their existing personality in the form of photos, profiles, and interests. They could socialize with others in a transparent fashion, mingling within their existing circles to find new friends or even dates. Strangers would be introduced through a mutual contact. If executed properly, the network would have real-world effects on relationships, something the internet rarely facilitated at the time.

Abrams called his concept Friendster. Launched in March 2003, it quickly grew to host millions of users. Google began talks of a lucrative buyout. Abrams showed up on Jimmy Kimmel Live, anticipating the dot-com-engineer-as-rock-star template. His investors believed Friendster could generate billions.

Instead, Friendster's momentum stalled. Myspace became the dominant social platform, with Facebook quickly gaining ground. Abrams, who once appeared poised to collect a fortune from his creation, watched as copycat sites poached his user base and his influence waned. What should've been a case study in internet success became one of the highest-profile casualties of the web's unrestricted growth. It became too big not to fail.

 

Many businesses rely on a creation myth, the idea that a single inciting incident provides the spark of inspiration that turns a company from a small concern into a revenue-generating powerhouse. For publicity purposes, these stories are just that—fictions devised to excite the press and charm consumers. Pierre Omidyar, who programmed AuctionWeb and later renamed it eBay, was said to have conceived of the project to help his wife, Pamela, find Pez dispensers for her collection. In fact, there were no Pez dispensers. It was a fable concocted by an eBay marketing employee who wanted to romanticize the site's origins.

In early press coverage of Friendster, there was little mention of Abrams looking to monetize the burgeoning opportunities available online. Instead, he was portrayed as a single man with a recently broken heart who wanted to make dating easier. Abrams later said there was no truth to this origin story, though he did derive inspiration from Match.com, a successful dating site launched in 1995. Abrams's idea was to develop something like Match.com, only with the ability to meet people through friends. Instead of messaging someone out of the blue, you could connect via a social referral.

Human-shaped icons represent the concept of social networking
iStock

Following stints at Netscape and an aggregation site called HotLinks, Abrams wrote and developed Friendster for a spring 2003 launch. He sent invites to 20 friends and family members in the hope that interest would multiply. It did, and quickly. By June, Friendster had 835,000 users. By fall, there were 3 million. Facebook's launch in February 2004 was months away, and so low-key that Abrams met with Mark Zuckerberg to see if Zuckerberg would consider selling. If an internet user wanted to socialize in a transparent manner, Friendster was the go-to destination.

When users signed up for the site, they were only allowed to message people who fell within six degrees of separation. To help vouch for unfamiliar faces, Friendster also permitted users to leave "testimonials" on profiles, endorsements that could extol a person's virtues and possibly persuade a connection to meet up in the real world.
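Friendster's actual code isn't public, but the mechanic it relied on is a standard graph problem: treat users as nodes, friendships as edges, and count the hops between two people. Below is a minimal, purely illustrative Python sketch (the graph, the names, and the six-degree cutoff are assumptions for demonstration, not Friendster's implementation) of that kind of check:

```python
from collections import deque

def degrees_of_separation(graph, start, target, max_degrees=6):
    """Breadth-first search over a friend graph. Returns the number of hops
    between start and target, or None if they aren't connected within
    max_degrees."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        user, depth = queue.popleft()
        if depth >= max_degrees:
            continue
        for friend in graph.get(user, ()):
            if friend == target:
                return depth + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return None

# A toy friend graph as adjacency lists (entirely made up).
friends = {
    "ana": ["bo"],
    "bo": ["ana", "cy"],
    "cy": ["bo", "dee"],
    "dee": ["cy"],
}

# ana -> bo -> cy -> dee is three degrees, so messaging would be allowed.
print(degrees_of_separation(friends, "ana", "dee"))  # 3
```

Running a traversal like this for every profile view, across millions of users, hints at why the site's pages would later labor under the load.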

Naturally, not all mutual connections were necessarily good friends: They might have been acquaintances at best, and the resulting casual atmosphere was more of a precursor to Tinder than Facebook. One user told New York Magazine that Friendster was less a singles mixer and more "six degrees of how I got Chlamydia."

Still, it worked. The site's immediate success did not go unnoticed by venture capitalists, who had been circling popular platforms—America Online, Yahoo!, and, later, YouTube—and injecting start-ups with millions in operating funds. At the time, the promise of savvy business minds flipping URLs for hundreds of millions or even billions was a tangible concept, and one that Abrams kept in mind as he fielded an offer from Google in 2003 to buy Friendster for $30 million. It would be a windfall.

Abrams declined.

 

Investors—including PayPal co-founder Peter Thiel and Google investor K. Ram Shriram—advised Abrams that selling would mean trading too much long-term upside for a short-term gain. Abrams opted instead to accept $13 million in funding toward building out the site. He sat on the board of directors and watched as backers began to strategize the best path forward.

Quickly, Abrams noticed a paradigm shift taking place. As a programmer, Abrams solved problems, and Friendster was facing a big one. Buoyed by press attention (including the Kimmel appearance, where Abrams handed out condoms to audience members, presumably in anticipation of all the relationships Friendster could help facilitate), the site was attracting more traffic than it could absorb, and it was slowing down. Servers struggled to generate a customized network view for each user, which depended on who that user was already connected to. A page sometimes took 40 seconds to load.

The investors considered lag time a mundane concern. Adding new features was even less attractive, as that might slow the pages down further. They wanted to focus on partnerships and on positioning Friendster as a behemoth that could attract a nine- or 10-figure purchase price. This is what venture capitalists did, scooping up 10 or 20 opportunities and hoping a handful might explode into something enormous.

But entrepreneurs like Abrams didn't have a portfolio to fall back on. They were concerned only with their creation, and its failure would be all-encompassing; there weren't 19 other ventures to turn to if things didn't work out.

Two word balloons represent the concept of social networking
iStock

Abrams saw the need for a site reconfiguration. The board was indifferent. Eventually he was removed and assigned a role as chairman, an empty title that was taken away from him in 2005. As the board squabbled over macro issues, Abrams watched as micro issues—specifically, the site itself—deteriorated. Frustrated with wait times, users began migrating to Myspace, which offered more customizable features and let voyeurs browse profiles without "friending" others. Myspace attracted 22.1 million unique users monthly in 2005. Friendster was getting just 1.1 million.

 

By 2006, Friendster was mired in software kinks and something less tangible: a loss of cachet among users who were gravitating toward other social platforms. Though Abrams was out, investors continued to pour money into Friendster in the hopes that they could recoup costs. In 2009, they sold the site to MOL Global for $40 million; the new owners later converted it into a social gaming destination. But it was too late. Though the site still had an immense number of users—115 million, with 75 million coming from Asia—they were passive, barely interacting with one another. By 2011, user data—photos, profiles, messages—was being purged.

In ignoring the quality of the end-user experience, the decision-makers at Friendster had effectively buried the promise of Abrams's concept. They sold off his patents to Facebook in 2010 for $40 million. Coupled with the MOL sale, it may have been a tidy sum, but one that paled in comparison to Friendster's potential. A 2006 article in The New York Times reported with some degree of morbid fascination that if Abrams had accepted the Google offer of $30 million in 2003 in the form of stock, it would've quickly been worth $1 billion.

In the years since, Abrams has tinkered with other sites—including an online invitation platform called Socializr and a news monitoring app called Nuzzel, which is still in operation—and tends to Founders Den, a club and workspace in San Francisco. He's normally reluctant to discuss Friendster, believing there's little point in dwelling on a missed opportunity.

The site did, ultimately, become a case study for Harvard Business School—though perhaps not in the way investors had intended. Friendster was taught as a cautionary tale, an example that not every good idea will find its way to success.

When Y2K Sent Us Into a Digital Depression

iStock.com/Laspi

It's hard to pinpoint the exact moment when the paranoia first began to creep in. Sometime during the late 1990s, consumers noticed that their credit cards with expiration dates in the year 2000 were being declined by merchants. Shortly thereafter, people began stocking up on shelf-stable food and water, potentially condemning themselves to months of all-SPAM diets. A number of concerned citizens outside of Toronto, Canada, flocked to the Ark Two Survival Community, a nuclear fallout shelter-turned-bunker made up of dozens of decommissioned school buses buried several feet underground and protected by a layer of reinforced concrete.

In the months leading into New Year's Day 2000, millions of people steeled themselves for a worst-case scenario of computers succumbing to a programming glitch that would render them useless. Banking institutions might collapse; power grids could shut down. Anarchy would take over. The media had the perfect shorthand for the potential catastrophe: Y2K, for Year 2000. The term was used exhaustively in their coverage of a situation some believed had the potential to become one of the worst man-made disasters in history—if not the collapse of modern civilization as we knew it.

In the end, it was neither. But that doesn't mean it didn't have some far-reaching consequences.

John Koskinen of the President's Council on Y2K Conversion makes a public address
Michael Smith, Getty Images

The anticipatory anxiety of Y2K was rooted in the programs that had been written for the hulking computers of the late 1960s. In an effort to conserve memory and speed up software, programmers truncated the date system to use two digits for the year instead of four. When the calendar rolled over to the year 2000, the worry was that "00" would be a proverbial wrench in the works, with computers unable to tell 2000 from 1900. Their date calculations would be thrown off: subtracting "98" from "99" produced a sensible positive number, while subtracting "98" from "00" would produce a negative one. How computers would actually react was based mostly on theories.
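The arithmetic behind the fear is easy to reproduce. Here is a hypothetical Python sketch (not drawn from any real legacy system) of a date calculation that stores only the last two digits of the year:

```python
def years_elapsed(start_yy, current_yy):
    """Elapsed years computed from two-digit years, the space-saving
    convention many legacy programs relied on."""
    return current_yy - start_yy

# An account opened in 1998 and checked in 1999 looks one year old.
print(years_elapsed(98, 99))   # 1

# The same account checked in 2000: the year rolls over to "00" and the
# account suddenly appears to be -98 years old.
print(years_elapsed(98, 0))    # -98
```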

That ambiguity was quickly seized upon by two factions: third-party software consultants and doomsday preppers. For the former, rewriting code became a cottage industry, with corporations large and small racing to revise antiquated systems and spending significant amounts of money and manpower in doing so. General Motors estimated the cost of upgrading their systems would be about $626 million. The federal government, which began preparing for possible doom in 1995, ended up with an $8.4 billion bill.

Some of that cost was eaten up by soliciting analyses of the potential problems. The U.S. Department of Energy commissioned a study looking at the potential for problems with the nation's energy supply if computers went haywire. The North American Electric Reliability Council thought the risks were manageable, but cautioned that a single outage could have a domino effect on connected power grids.

As a result, many newspaper stories were a mixture of practical thinking with a disclaimer: More than likely nothing will happen … but if something does happen, we're all screwed.

"Figuring out how seriously to take the Y2K problem is a problem in itself," wrote Leslie Nicholson in the January 17, 1999 edition of the Philadelphia Inquirer. "There is simply no precedent."

The prospect of economic and societal collapse fueled the second pop-up industry: survivalist suppliers. As people stocked up on canned goods, bottled water, flashlights, and generators, miniature societies like Ark Two began to spring up.

While the panic surrounding Y2K was dismissed by some as unwarranted, there was always fuel to add to the fire. The United States and Russia convened to monitor ballistic missile activity in the event a glitch inadvertently launched a devastating weapon. People were warned checks might bounce and banking institutions could freeze. The Federal Reserve printed $70 billion in cash in case people began hoarding currency. Even the Red Cross chimed in, advising Americans to stock up on supplies. Y2K was being treated like a moderate-category storm.

Adding to the concern was the fact that credible sources were sounding alarms. Edward E. Yardeni, then-chief economist at Deutsche Morgan Grenfell/C.J. Lawrence, predicted that there was a 60 percent chance of a major worldwide recession.

As New Year's Eve 1999 approached, it became clear that Y2K had evolved beyond a software hiccup. Outside of war and natural disasters, it represented one of the few times society seemed poised for a dystopian future. People watched their televisions as clocks hovered close to midnight, waiting to see if their lights would flicker or their landline phones would continue to ring.

A software program is represented by a series of ones and zeroes
iStock.com/alengo

Of course, nothing happened. So many resources had been extended toward the problem that the majority of software-reliant businesses and infrastructures were prepared. There were no power outages, no looting, and no hazards. The only notable event of January 1, 2000 was the reporting of the resignation of Boris Yeltsin and the arrival of Vladimir Putin as Russia's new president.

With the benefit of hindsight, pundits would later observe that much of the Y2K concern was an expression of a more deeply rooted fear of technology. Subconsciously, we may have been primed to recoil at the thought of computers dominating our society to the extent that their failure could have catastrophic consequences.

All told, it's estimated that approximately $100 billion was spent making upgrades to offset any potential issues. To put that into context: South Florida spent $15.5 billion rebuilding after the mass destruction caused by Hurricane Andrew in 1992.

Was it all worth it? Experts seem to think so, citing the expedited upgrades of old software and hardware in federal and corporate environments.

That may be some small comfort to Japan, which could be facing its own version of Y2K in April 2019. That's when Emperor Akihito is expected to abdicate the throne to his son, Naruhito, the first such transition since the dawn of the information age. (Akihito has been in power since January 1989, following the death of his father.) That's significant because the Japanese calendar counts up from the coronation of a new emperor and uses the name of each emperor's era. Akihito's is known as the Heisei era. Naruhito's is not yet named, which means that things could get tricky as the change in leadership—and the need for a calendar update—comes closer.
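The calendar logic itself is simple, which is exactly why it tends to be hard-coded. A minimal Python sketch, with an era table assumed for illustration (and the post-Heisei era left out, since its name had not yet been announced), shows the kind of lookup that would need updating:

```python
from datetime import date

# Start dates of recent eras. The era that begins after Akihito's expected
# abdication is missing here, which is precisely the software problem.
ERAS = [
    (date(1989, 1, 8), "Heisei"),
    (date(1926, 12, 25), "Showa"),
]

def to_era_year(d):
    """Convert a Gregorian date to (era name, era year); year 1 is the
    calendar year in which the era begins."""
    for start, name in ERAS:
        if d >= start:
            return name, d.year - start.year + 1
    raise ValueError("date precedes the era table")

print(to_era_year(date(2018, 7, 1)))  # ('Heisei', 30)
```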

It's hard to predict what the extent of the country's problems will be as Akihito steps down. If history is any guide, though, it's likely to mean a lot of software upgrades, and possibly some SPAM.

When Mr. Rogers Taught Kids About Mutually Assured Nuclear Destruction

Focus Features

After months of hype, the ABC television network premiered a made-for-TV film titled The Day After on November 20, 1983. Presented with minimal commercial interruption, the two-hour feature illustrated a world in which both the United States and Russia made the cataclysmic decision to launch nuclear missiles. The blasts wiped a small town off the face of the Earth; the few who did survive writhed in pain, with their skin hanging off in clumps.

The imagery was graphic and unsettling, and it was supposed to be. Director Nicholas Meyer wanted to portray the fallout in sober detail. The Day After drew a sizable viewership and was hailed as a responsible use of television in order to educate audiences about the reality of the tension between the world’s superpowers.

In the weeks before the film premiered, though, another prominent broadcast was exploring the same themes. It was intended for young audiences and explored—via the use of puppets—the consequences of international aggression. For five episodes across one week, the threat of nuclear annihilation was looming in Mister Rogers’ Neighborhood.

A nuclear explosion creates a mushroom cloud
iStock.com/RomoloTavani

Since its inception on Pittsburgh's WQED in 1968, Mister Rogers’ Neighborhood had informed its young audience about topical issues in subversive and disarming ways. When civil rights were discussed, host Fred Rogers didn’t deliver a lecture about tolerance. Instead, he invited a black friend, Officer Clemmons, to cool off in his inflatable pool, a subtle nod to desegregation. In 1981, Rogers (the subject of this year's critically acclaimed documentary Won't You Be My Neighbor?) explored the topic of divorce with puppet Patty Barcadi, whose parents had separated, and comforted Prince Tuesday, who fretted that his own parents might split. Famously, Rogers also explored the subject of individuals with disabilities with the introduction of Jeff Erlanger, who became a quadriplegic at a young age after undergoing spinal surgery to remove a tumor. (Decades later, the two were reunited when Erlanger made a surprise appearance as Rogers was being inducted into the Television Academy Hall of Fame.)

Despite Rogers's history of tackling tough topics, there was perhaps no more sensitive an issue for a children’s show to take on than nuclear war. Rogers wanted to address what he felt was a growing concern among schoolchildren who processed Cold War headlines and interpreted tensions between Russia and the U.S. as potentially disastrous. (In one survey of classrooms across several major cities, students labeled the possibility of nuclear war “likely.”)

Rogers conceived and taped a five-episode storyline on the subject in the summer of 1983, which wound up being prescient. In October 1983, President Ronald Reagan ordered the invasion of Grenada to topple a Marxist regime.

“Little did I know we would be involved in a worldwide conflict now,” Rogers told the Associated Press. “But that’s all the better because our shows give families an opportunity for communication. If children should hear the news of war, at least they have a handle here, to assist in family communications.”

In the five-part series titled “Conflict,” Rogers again turned to the puppets that populated his Neighborhood of Make-Believe. Provincial ruler King Friday (voiced by Rogers) is handed a “computer read-out” that tips him off to some counterintelligence: Cornflake S. Pecially, ruler of the neighboring land of Southwood, is allegedly making bombs. In a panic, King Friday orders his underlings to do the same, mobilizing efforts to make certain they can match Southwood’s fiery super weapons—even if it means not having the financial resources to care for his people in other ways.

Lady Elaine Fairchilde and Lady Aberlin aren’t quite convinced. Rather than succumb to paranoia, they decide to travel to Southwood to see for themselves. They find its citizens building a bridge, not a bomb. A misunderstanding had almost led to unnecessary violence.

Of course, no mushroom clouds envelop the Neighborhood of Make-Believe, and none of the puppets suffer the devastating effects of radiation poisoning. Rogers wasn’t claiming the story was about war so much as about preventing it.

“This show gives us a chance to talk about war, and about how it’s essential that people learn to deal with their feelings and to talk about things and resolve conflicts,” he said.

A publicity photo of Fred Rogers for 'Mr Rogers' Neighborhood'
Getty Images

The episodes sparked conversation in classrooms, where some teachers used the footage to broach the subject. At an elementary school in Venetia, Pennsylvania, students in a third-grade social studies class discussed the consequences of war. “No water” was one response. “Injuries” was another.

Unlike The Day After, which one psychiatrist declared inappropriate for children under 12, Rogers proved it was possible to provoke conversation without rattling any nerves.

Following their initial run in 1983, the five-part “Conflict” episodes have never been repeated. The close of the 1980s saw a reduction in concerns over nuclear attacks, and it’s possible producers of Mister Rogers’ Neighborhood regarded the shows as dated.

They resurfaced briefly on YouTube in 2017 before vanishing. The series was subsequently uploaded to a Dailymotion video account in 2018. Like The Day After, the shows are an interesting time capsule of an era when the fear of devastating conflict was palpable. For a number of kids who experienced that concern, Mr. Rogers helped frame it in a way they could understand.

“I don’t want this to be a frightening thing,” Rogers said. “I want children to know that war is something we can talk about. Whatever is mentionable is manageable.”
