A Brief History of Closed Captioning

Whether you've encountered its unmistakable white text on black background at the gym, in a bar, or on the couch, you're familiar with closed captioning. Here's a brief history of the technology that has provided a (mostly accurate) transcript of television programming for more than 40 years, and made its network debut 35 years ago.

TELEVISION CAPTIONING BEGINS WITH JULIA CHILD

The nation's first captioning agency, the Caption Center, was founded in 1972 at the Boston public television station WGBH. The station introduced open television captioning to rebroadcasts of The French Chef with Julia Child and began captioning rebroadcasts of ABC News programs as well, in an effort to make television more accessible to the millions of Americans who are deaf or hard of hearing.

CLOSED CAPTIONING MAKES ITS DEBUT

Captions on The French Chef were viewable to everyone who watched, which was great for members of the deaf and hard of hearing community, but somewhat distracting for other viewers. So the Caption Center and its partners began developing technology that would display captions only for viewers with a certain device.

"The system, called 'closed captioning,' uses a decoder that enables viewers to see the written dialogue or narration at the bottom of the screens," reported The New York Times in 1974. "On sets without the decoder, the written matter is invisible."

The technology, which converts human-generated captions into electronic code that is inserted into a part of the television signal not normally seen, was refined through demonstrations and experiments funded in part by the Department of Health, Education and Welfare. In 1979, the Federal Communications Commission formed the National Captioning Institute (NCI), a nonprofit organization dedicated to promoting and providing access to closed captioning. The first closed-captioned programs were broadcast on March 16, 1980, by ABC, NBC, and PBS. CBS, which wanted to use its own captioning system called teletext, was the target of protests before agreeing to join its network brethren in using closed captioning a few years later.
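That invisible part of the signal is line 21 of the vertical blanking interval, where each video field carries a pair of caption bytes, each made up of seven data bits plus an odd-parity bit (the scheme later standardized as EIA-608). For the technically curious, here is a minimal Python sketch of that byte-level idea, checking parity and recovering the basic printable characters; the sample byte pair is made up, and a real decoder would also have to interpret control codes for caption placement and timing.

```python
def strip_parity(byte: int):
    """Return the 7 data bits of a line-21 caption byte, or None if the
    odd-parity check fails (each byte must have an odd number of set bits)."""
    if bin(byte).count("1") % 2 != 1:
        return None
    return byte & 0x7F

def decode_pair(b1: int, b2: int) -> str:
    """Decode one caption byte pair into basic printable characters.
    Control codes and extended characters are ignored in this sketch."""
    chars = []
    for b in (b1, b2):
        data = strip_parity(b)
        if data is None or data < 0x20:   # parity error, null padding, or control code
            continue
        chars.append(chr(data))           # the basic character set is close to ASCII
    return "".join(chars)

# Hypothetical byte pair spelling "HI" (both bytes already satisfy odd parity)
print(decode_pair(0x48, 0x49))            # -> "HI"
```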

CC AND THE LAW

In 1990, the Television Decoder Circuitry Act was passed, mandating that all televisions 13 inches or larger manufactured for sale in the U.S. contain caption decoders. Sixteen years later, the FCC ruled that all broadcast and cable television programs must include captioning, with some exceptions, including ads that run less than five minutes and programs that air between 2 a.m. and 6 a.m. According to captions.com, nearly all of the commercials that aired during this year's Super Bowl XLIX were captioned (captioning a 30-second spot costs about $200, a tiny fraction of the roughly $4 million it costs to buy the ad space).

PRERECORDED VS. REAL-TIME CAPTIONING

Prerecorded captioning is applied to prerecorded programming, such as sitcoms, movies, commercials, and game shows. It can take up to 16 hours to caption a one-hour prerecorded program, as the process involves more than transcribing a program's script. Using special software, the captioner must set the placement of the caption on the screen, as well as set when the caption appears and disappears. In the early days of captioning, scripts were edited for understanding and ease of reading. Today, captions generally provide verbatim accounts of what is said on the screen, as well as descriptions of sounds in the background.
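Under the hood, an offline caption file is just a list of timed cues: the text, the moment it pops on, the moment it clears, and where on the screen it sits. The sketch below is not any broadcaster's actual format; it is an illustrative Python structure that writes cues out using the WebVTT conventions (a web caption format that carries both timing and placement), with invented text and timings.

```python
from dataclasses import dataclass

def _timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

@dataclass
class CaptionCue:
    start: float        # seconds into the program when the caption appears
    end: float          # seconds when it disappears
    text: str
    line: int = 90      # vertical placement (percentage from the top)
    position: int = 50  # horizontal placement (percentage from the left)

    def to_vtt(self) -> str:
        settings = f"line:{self.line}% position:{self.position}% align:center"
        return f"{_timestamp(self.start)} --> {_timestamp(self.end)} {settings}\n{self.text}"

# Hypothetical cues for a few seconds of a prerecorded program
cues = [
    CaptionCue(12.0, 14.5, "[door slams]"),
    CaptionCue(15.0, 18.2, "I told you not to come back here."),
]
print("WEBVTT\n")
print("\n\n".join(cue.to_vtt() for cue in cues))
```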

Real-time captioning, which was introduced in 1982, provides a means for the deaf and hard of hearing community to enjoy live press conferences, local news, and sporting events on television as they happen. Real-time captioning is typically done by court reporters or similarly trained professionals who can type accurately at speeds of up to 250 words per minute. While captioners for prerecorded programs typically use standard keyboards, a real-time captioner requires a steno machine.

HOW A STENO MACHINE WORKS

A steno machine contains 22 keys and uses a code based on phonetics for every word, enabling skilled stenographers to occasionally reach typing speeds of more than 300 words per minute. Words and phrases may be captured by pressing multiple keys at the same time, and with varying force, a process known as chording. Real-time captioners, or stenocaptioners, regularly update their phonetic dictionaries, which translate their phonetic codes into words that are then encoded into the video signal to form closed captions.
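In software terms, that phonetic dictionary is essentially a lookup table from chorded strokes to English words, and anything missing from the table comes out mangled on the air. The strokes and entries below are invented stand-ins rather than real steno theory, but this small Python sketch shows the basic translation step and why an unfamiliar name needs a dictionary entry before the broadcast.

```python
# Hypothetical stroke-to-word dictionary; a working stenocaptioner's dictionary
# holds tens of thousands of entries and is updated before every broadcast.
steno_dict = {
    "KAT": "cat",
    "TKOG": "dog",
    "PHAEUR": "mayor",
    "PWOS/TOPB": "Boston",   # multi-stroke entry, strokes joined by "/"
}

def translate(strokes: list[str]) -> str:
    """Translate a stream of strokes, preferring the longest dictionary match."""
    words, i = [], 0
    while i < len(strokes):
        for span in (2, 1):                    # try two-stroke entries first
            key = "/".join(strokes[i:i + span])
            if key in steno_dict:
                words.append(steno_dict[key])
                i += span
                break
        else:
            words.append(f"<{strokes[i]}>")    # untranslated strokes leak through
            i += 1
    return " ".join(words)

print(translate(["PHAEUR", "PWOS", "TOPB"]))   # -> "mayor Boston"
print(translate(["PHAEUR", "TPHAOU", "TOPB"])) # unknown name comes out as raw strokes
```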

REAL-TIME CAPTIONING ISN'T EASY

For live newscasts, closed captioners often receive the script that appears on the teleprompter in advance, but not every anchor follows this script as religiously as Ron Burgundy. Whereas court reporters generally aren't concerned with context and can clean up the first draft of their transcript at a later time, context matters for real-time captioners, who have one shot to accurately record what is being said. Given the speed at which they work, homonyms can prove especially difficult for stenocaptioners, as can unfamiliar or unusual names.

According to Jeff Hutchins, a co-founder of VITAC, one of the nation's leading captioning companies, there's more to being a closed captioner than knowing how to type. "There's a certain pathology to the process that we recognize," he told The New York Times in 2000. "A young lady will come in here, pretty good court reporter, very confident about her abilities, excited that she's going to get into captioning, and she will begin the training process very fired up, excited. Generally we know that in two to four weeks that she is going to be walking around with stooped shoulders, totally dejected, feeling like, 'I'll never get this.'"

Stenocaptioners can make more than $100,000 a year, but the work is stressful. In 2007, Kathy DiLorenzo, former president of the National Court Reporters Association, told the Pittsburgh Post-Gazette that the job is akin to "writing naked, because a million people are reading your words. You can't make a mistake."

MISTAKES HAPPEN

While a faulty decoder or poor signal can produce captioning errors, more often than not they are the result of human error, particularly during live programming. Though stenocaptioners prepare for broadcasts by updating their phonetic dictionaries with phonetic symbols for names and places that they expect to hear, even the most prepared and accurate stenocaptioner can make a mistake from time to time. For instance, all it takes is a single incorrect keystroke to type the phonetic codes for two completely different words. Mistakes aren't limited to words, either. In 2005, American Idol displayed the wrong phone number to vote for contestants in the closed captioning of its broadcast. Media companies are experimenting with automatic error-correcting features, voice-to-text technology, and innovative ways to provide captions for multimedia on the Internet. Though captioning will keep getting cheaper, faster, and more prevalent, the occasional mistake will likely always remain.

This post originally appeared in 2009.

Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning
May 21, 2017

Jacques Mattheij made a small, but awesome, mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many, and was now the proud owner of two tons of LEGO bricks. (This is about 4400 pounds.) He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex system of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.
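Mattheij hasn't published the software yet, so the snippet below is only a generic sketch of the loop he describes: grab a camera frame, run it through a trained image classifier, and trigger the air jet when a confident match comes back. The model file, class labels, and fire_air_jet() helper are all hypothetical placeholders.

```python
import cv2                      # pip install opencv-python
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical artifacts: a model trained on brick images and its label list.
model = load_model("brick_classifier.h5")
class_names = ["2x4_brick", "2x2_plate", "1x1_round", "unknown"]  # illustrative

def classify_frame(frame: np.ndarray) -> tuple[str, float]:
    """Resize a camera frame and return the top predicted class and its score."""
    img = cv2.resize(frame, (128, 128)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])

def fire_air_jet(label: str) -> None:
    """Placeholder for the hardware side: open the valve over the right bin."""
    print(f"pop -> {label}")

cap = cv2.VideoCapture(0)       # webcam watching the conveyer belt
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    label, score = classify_frame(frame)
    if score > 0.9 and label != "unknown":   # only act on confident matches
        fire_air_jet(label)
```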

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
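The details of that training pipeline aren't public either, so here is just a bare-bones sketch of the human-in-the-loop idea: the machine photographs a piece and guesses a label, and the guess only joins the training corpus once a person confirms or corrects it. The folder layout and function name are invented for illustration.

```python
import shutil
from pathlib import Path

TRAINING_DIR = Path("training_data")   # hypothetical corpus layout: one folder per class

def add_training_example(image_path: Path, predicted: str) -> None:
    """Show the machine's guess to a human, then file the image under the corrected label."""
    answer = input(f"{image_path.name}: guess is '{predicted}'. "
                   "Press Enter to accept or type the correct label: ").strip()
    label = answer or predicted            # empty input means the guess was right
    dest = TRAINING_DIR / label
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy(image_path, dest / image_path.name)

# Each pass of the belt yields (photo, guess) pairs from the classifier;
# corrected labels accumulate into an ever-larger, self-reinforcing corpus
# that the next round of training runs on.
```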

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

Scientists Think They Know How Whales Got So Big
May 24, 2017

It can be difficult to understand how enormous the blue whale—the largest animal to ever exist—really is. The mammal can measure up to 105 feet long, have a tongue that can weigh as much as an elephant, and have a massive, golf cart–sized heart powering a 200-ton frame. But while the blue whale might currently be the Andre the Giant of the sea, it wasn’t always so imposing.

For the majority of the 30 million years that baleen whales (the blue whale is one) have occupied the Earth, the mammals usually topped out at roughly 30 feet in length. It wasn’t until about 3 million years ago that the clade of whales experienced an evolutionary growth spurt, tripling in size. And scientists haven’t had any concrete idea why, Wired reports.

A study published in the journal Proceedings of the Royal Society B might help change that. Researchers examined fossil records and studied phylogenetic models (evolutionary relationships) among baleen whales, and found some evidence that climate change may have been the catalyst for turning the large animals into behemoths.

As the ice ages wore on and nutrient-rich runoff poured into the oceans, upwelling waters brought the whales an ever-larger supply of krill, the small, shrimp-like creatures they feed on. The more they ate, the more they grew, and their bodies adapted over time. Their mouths grew larger and their fat stores increased, helping them fuel longer migrations to additional food-enriched areas. Today blue whales eat up to four tons of krill every day.

If climate change set the ancestors of the blue whale on the path to its enormous size today, the study invites the question of what it might do to them in the future. Changes in ocean currents or temperature could alter the amount of available nutrients to whales, cutting off their food supply. With demand for whale oil in the 1900s having already dented their numbers, scientists are hoping that further shifts in their oceanic ecosystem won’t relegate them to history.

[h/t Wired]
