A Guide to Scoring Figure Skating at the Olympics

Have you been watching the figure skating competition at the Olympics and wondering what the heck is going on? Why isn't anyone getting 6.0 scores any more? And why did the guy who fell on a quadruple salchow still win over guys that didn't fall at all?

First used during the 2004 competitive season, the International Judging System (IJS) is the modus operandi for the competitive sport of figure skating. It's far more complex than the previous 6.0 system, and understandably creates a lot of questions about competition results from figure skating fans and insiders alike.

Here is a brief primer to help make better sense of it in time to watch the ladies competition in Sochi this week.

A Brief History

At the Salt Lake City Winter Olympics in 2002, a French judge confessed to being pressured to take part in a vote-swapping scandal after a questionable result in the pairs competition that rocked the skating world. It forced the International Skating Union to dump the long-esteemed (and infamously subjective) 6.0 judging system and build a more objective system from scratch. The result was the IJS; to say it's complicated is like saying rocket science is basic arithmetic.

The (Im)Perfect 6.0

While the new system is complicated, the old system wasn't a cakewalk either. In the 6.0 system, a panel of judges (anywhere from three judges at small competitions to nine at major elite-level events) would assign skaters two marks for their performances, rating them on a scale of 0.0 (horrible) to 6.0 (perfection). The “technical merit” mark measured the level of difficulty and quality of execution of jumps and spins, and the “presentation” or “artistic merit” mark went for quality of overall performance, including footwork, artistry and interpretation of music. Those two scores were then added together and translated to “ordinals”—that is, if the top skater receives two 5.9s (a total of 11.8), and the next best receives two 5.8s (11.6), 11.8 becomes a “1,” while the 11.6 becomes a “2.” From there, the majority rules. If the top skater got a majority of first place ordinals, they win. To come in second, the next skater would need to receive a majority of second place ordinals or higher. Third place needs a majority of third or higher, and so on. 
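The ordinal math described above can be sketched in a few lines of Python. This is a simplified illustration only: the skater names and panel size are invented, and the real 6.0-era tabulation included further tie-breaking rules.

```python
def judge_ordinals(marks):
    """Convert one judge's combined marks {skater: total} into ordinals."""
    ranked = sorted(marks, key=marks.get, reverse=True)
    return {skater: place for place, skater in enumerate(ranked, start=1)}

# Three judges each give skater A two 5.9s (11.8) and skater B two 5.8s (11.6).
panel = [judge_ordinals({"A": 11.8, "B": 11.6}) for _ in range(3)]

# A wins if a majority of the judges gave A a first-place ordinal.
firsts_for_A = sum(j["A"] == 1 for j in panel)
print(firsts_for_A >= 2)  # True: all three judges ranked A first
```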

After the 2002 Olympic pairs competition, it became evident that the 6.0 system was too easy to scam. The new system is designed to force the judges to dissect a skater's performance down to its individual elements.

In with the new

The new system is points-based. Skaters receive two marks for each performance—a “technical” score and a “program components” score—which are added together to form a composite score. The skater with the highest composite score wins.

But it's not as simple as it sounds. There are two sets of officials evaluating the competitors. The first is a “technical panel,” made up of five specialists (including an instant replay video operator) who watch each performance, identify each point-worthy element attempted by skaters, and assign it a base value in points. (For example, attempting a triple axel is worth 8.5 points, per the ISU's preordained rules.) Their evaluation provides one part of the overall technical score for the performance.

The second set of officials is a nine-member judging panel that evaluates the quality of execution of those identified elements, based on a scale of -3 to +3. (Falling while attempting a triple axel could earn a -3 score for that element, for example.) The judging panel's assessment provides the rest of the technical score.

The judging panel also assesses each skater's footwork, flow, skating quality, musical interpretation, and other movements that link the technical elements together to come up with the “program components” score.

Finally, there's an official referee, who oversees everything, to make sure there are no shenanigans afoot.
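In code, the bookkeeping those two panels produce reduces to simple sums. This is a minimal sketch with purely illustrative numbers; real ISU protocols also subtract deductions, such as one point per fall, which are omitted here.

```python
def technical_score(element_scores):
    """Sum of every element's points (base value plus grade of execution)."""
    return sum(element_scores)

def segment_score(element_scores, program_components):
    """One program's total: technical score plus program components score."""
    return technical_score(element_scores) + program_components

# Illustrative numbers only: two scored elements and a components mark.
print(segment_score([10.0, 8.5], 45.0))  # 63.5
```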

Racking up the points

To use the recent men's event as an illustration of IJS scoring: Japan's Yuzuru Hanyu won the gold medal in Sochi last week, despite two falls and some major bobbles. But Hanyu really knew how to work the system, throwing enough high-scoring elements into his program and doing (most) of them with style. One could almost hear the cha-chinging of points in the bank as he completed each element, the way Super Mario collects coins on his way to save the princess.

Here's how it played out: In Hanyu's gold medal-winning freeskate, his first technical element was a quadruple salchow, and he fell. So the technical panel looked at it and determined that yes, it was a quadruple salchow—a jump that takes off from the back inside edge of the blade and completes four full rotations in the air—and thus it has a base value of 10.5 points. Boom! Points in the bank for Hanyu.

The judging panel then looked at it, saw that he fell, and gave him the lowest score for execution: -3. (All judges give individual scores, but the top and bottom scores are thrown out and the rest are averaged.) Add them together and Hanyu now has a total of 7.5 points. His next technical element was a quadruple toe loop, which he landed. Again, the technical panel determined that it was indeed a quad toe, so he got a base score of 10.3 points. The judging panel then awarded him 2.14 points for execution (it was a great jump), so he got 12.44 total for the quad toe. Add that to the quad salchow attempt, and the points racked up fast.
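That per-element arithmetic looks like this as a short Python sketch. It follows the simplified math above; under the full ISU rules, each grade of execution actually maps to an element-specific point value, but the trimming and averaging of judges' grades works as described.

```python
def trimmed_mean(grades):
    """Drop the single highest and lowest judge's grade, average the rest."""
    kept = sorted(grades)[1:-1]
    return sum(kept) / len(kept)

def element_score(base_value, judge_grades):
    """Base value from the technical panel plus the averaged execution grade."""
    return base_value + trimmed_mean(judge_grades)

# Hanyu's quad salchow: base value 10.5, all nine judges give -3 for the fall.
print(element_score(10.5, [-3] * 9))  # 7.5
```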

Just for comparison's sake, let's look at the first two elements of silver medalist Patrick Chan's freeskate.

Chan landed a quadruple-toe-loop-triple-toe-loop combination right off the bat. Because it was a combination of jumps, the technical panel said it was worth 14.40 points. The judging panel gave him the highest possible execution score of 3. That gave him 17.40 points. (At that point, Hanyu only had 7.5.) He then tried another quad toe, but touched his hand down on the ice during his landing. The technical panel gave him the base of 10.3 just like Hanyu, but the judging panel gave him -1.57 points because of the slight misstep. So he got a total of 8.73 points for the second quad toe, while Hanyu got a 12.44 for his.

In the end, Hanyu attempted more elements with higher base scores, and got higher grades of execution on most of them. The pair ended up with an almost four-point difference in their technical scores, and even though Chan got a higher program component score than Hanyu (by 1.72 points), it was not enough to make up the difference.

Note, too, that the final score in an Olympic- or World-level competition is actually a combination of the marks from the short program and the long program—so it's possible to do badly in one program or the other and still win a medal, mathematically speaking.

Oh, and if you have several hours and want to know what every technical element is worth, feel free to comb through the exhaustive ISU rules.

Try out your understanding of the judging rules this week as the top ladies of figure skating take the ice to battle for gold in Sochi.

technology
Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning

Jacques Mattheij made a small, but awesome, mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many, and was now the proud owner of two tons of LEGO bricks. (This is about 4400 pounds.) He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex network of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
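The guided self-training loop can be sketched roughly like this. Every name here is hypothetical: in the real system, the classify step is Mattheij's neural network running on camera frames, and the correction step is him fixing the wrong guesses by hand.

```python
labeled_data = []  # (image, confirmed_label) pairs: the growing training corpus

def classify(image):
    """Stand-in for the neural net's guess about one brick image."""
    return "2x4-brick"

def operator_confirms(image, guess):
    """The human-in-the-loop step: correct the guess only when it's wrong."""
    return guess  # pretend the guess was right this time

def process_brick(image):
    guess = classify(image)
    label = operator_confirms(image, guess)
    labeled_data.append((image, label))  # every pass adds training data
    return label

process_brick("frame_0001")
print(len(labeled_data))  # 1
```

Each brick that passes the camera therefore makes the next guess a little better, which is what makes the corpus self-reinforcing.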

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

Health
One Bite From This Tick Can Make You Allergic to Meat

We like to believe that there’s no such thing as a bad organism, that every creature must have its place in the world. But ticks are really making that difficult. As if Lyme disease wasn't bad enough, scientists say some ticks carry a pathogen that causes a sudden and dangerous allergy to meat. Yes, meat.

The Lone Star tick (Amblyomma americanum) mostly looks like your average tick, with a tiny head and a big fat behind, except the adult female has a Texas-shaped spot on its back—thus the name.

Unlike other American ticks, the Lone Star feeds on humans at every stage of its life cycle. Even the larvae want our blood. You can’t get Lyme disease from the Lone Star tick, but you can get something even more mysterious: the inability to safely consume a bacon cheeseburger.

"The weird thing about [this reaction] is it can occur within three to 10 or 12 hours, so patients have no idea what prompted their allergic reactions," allergist Ronald Saff, of the Florida State University College of Medicine, told Business Insider.

What prompted them was STARI, or southern tick-associated rash illness. People with STARI may develop a circular rash like the one commonly seen in Lyme disease. They may feel achy, fatigued, and fevered. And their next meal could make them very, very sick.

Saff now sees at least one patient per week with STARI and a sensitivity to galactose-alpha-1,3-galactose—more commonly known as alpha-gal—a sugar molecule found in mammal tissue like pork, beef, and lamb. Several hours after eating, patients’ immune systems overreact to alpha-gal, with symptoms ranging from an itchy rash to throat swelling.

Even worse, the more times a person is bitten, the more likely it becomes that they will develop this dangerous allergy.

The tick’s range currently covers the southern, eastern, and south-central U.S., but even that is changing. "We expect with warming temperatures, the tick is going to slowly make its way northward and westward and cause more problems than they're already causing," Saff said. We've already seen that occur with the deer ticks that cause Lyme disease, and 2017 is projected to be an especially bad year.

There’s so much we don’t understand about alpha-gal sensitivity. Scientists don’t know why it happens, how to treat it, or if it's permanent. All they can do is advise us to be vigilant and follow basic tick-avoidance practices.

[h/t Business Insider]
