
Large Study Shows Brain-Training Apps Don't Improve Everyday Cognitive Ability


The big claims of brain-training games are going down in flames. A large meta-analysis of research into the apps’ efficacy, published in the journal Psychological Science in the Public Interest, found insufficient evidence to back up the assertion that they improve cognitive function.

Apps like Lumosity and games like Nintendo's Brain Age made a pretty big splash when they debuted a few years ago; heck, even mental_floss got on board. The games’ makers promised users that they could stay clever, improve their memories, and even stave off neurodegenerative diseases just by tapping their screens for a few minutes every day. The brain-game connection seemed sound enough; after all, everybody knows crosswords keep you sharp, right? (Except they don't—sorry.)

But “seeming sound” is not quite good enough. You see, there’s this little thing called the Federal Trade Commission (FTC), and they like corporations to back up their claims with, you know, facts. And the facts about these apps have not been looking too good.

They looked so bad, in fact, that in October of 2014, more than 70 scientists published an open letter criticizing app-makers for their “exaggerated and misleading claims,” which, they said, “exploit the anxiety of older adults about impending cognitive decline.”

Not to be stifled, another group of more than 100 scientists and psychologists affiliated with a site called “Cognitive Training Data” countered with a rebuttal. The letter-writers argued that “a substantial and growing body of evidence shows that certain cognitive training regimens can significantly improve cognitive function, including in ways that generalize to everyday life. This includes some exercises now available commercially.”

The FTC did not agree, and in 2015 levied fines against both Lumosity and LearningRx. “Lumosity preyed on consumers’ fears about age-related cognitive decline, suggesting their games could stave off memory loss, dementia, and even Alzheimer’s disease,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, in a statement. “But Lumosity simply did not have the science to back up its ads.”

As both sides argued their cases, the balance began to tip in favor of the naysayers. Earlier this year, psychologists testing the apps suggested that the benefits to dedicated users might well be caused by the placebo effect.

A larger investigation was overdue. So a team of scientists undertook a large meta-analysis, looking at the experiments cited by both sides. In addition to each study’s results, they also looked at its design. The best human studies are large. They include control groups and account for the possibility of a placebo effect. The absence of any of these elements can throw a study’s results into question.

Unfortunately, many of the studies the researchers looked at were indeed missing some important pieces. Daniel Simons, a co-author of the article and a psychologist at the University of Illinois at Urbana-Champaign, told NPR's "Shots" blog that "many of the studies did not really adhere to what we think of as the best practices."

Some of the studies were worthwhile, he said, and some of the results showed promise, although even this was limited. "You can practice, for example, scanning baggage at an airport and looking for a knife," he said. "And you get really, really good at spotting that knife." But that doesn’t mean you get better at spotting, say, a hairbrush, or a sword. And how often are you actually going to need to scan baggage for knives?

The team also found no solid evidence that the games could improve memory or cognitive skills. "It would be really nice if you could play some games and have it radically change your cognitive abilities," Simons says. "But the studies don't show that on objectively measured real-world outcomes."

The meta-analysis itself was so thorough that even signatories of the pro-brain-game rebuttal letter have extended their compliments. George Rebok is a Johns Hopkins University psychologist who has devoted the last two decades to cognitive training research. "The evaluation was very even-handed and raised many excellent points," he told NPR. "It really helped raise the bar in terms of the level of science that we must aspire to."

[h/t Shots]


iStock // Ekaterina Minaeva
Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning
May 21, 2017

Jacques Mattheij made a small but awesome mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many of them and was now the proud owner of two metric tons (about 4400 pounds) of LEGO bricks. He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:
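Taken at face value, those per-kilogram figures make the incentive obvious. A back-of-envelope sketch, using the rough prices Mattheij quotes (the actual resale value of his particular haul is unknown):

```python
# Rough value of sorting, using the per-kilogram prices Mattheij cites:
# ~EUR 10/kg for unsorted bulk, ~EUR 40/kg once sorted into sets.
BULK_PRICE = 10     # EUR per kg, unsorted bulk bricks
SORTED_PRICE = 40   # EUR per kg, bricks sorted into sets
HAUL_KG = 2000      # two metric tons

cost = HAUL_KG * BULK_PRICE
value_sorted = HAUL_KG * SORTED_PRICE
margin = value_sorted - cost
print(f"~EUR {cost} in, up to ~EUR {value_sorted} out (margin ~EUR {margin})")
```

In other words, the sorting itself is where most of the money is, which is why automating it is worth the trouble.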

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex system of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.
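The recognize-then-eject step described above can be sketched in a few lines. Everything here is a hypothetical stand-in, not Mattheij's actual code: the function names, the frame format, and the confidence threshold are all assumptions, and in the real machine `classify` is a trained neural net while "appending to a bin" is a burst of compressed air.

```python
def classify(frame):
    """Stand-in for the neural-net classifier. The real system runs a
    webcam frame through a trained network; here the frame is just a
    dict carrying a label and confidence so the control flow is visible."""
    return frame.get("label", "unknown"), frame.get("confidence", 0.0)

def sort_piece(frame, bins, threshold=0.8):
    """Eject a piece into its bin only when the classifier is confident."""
    label, confidence = classify(frame)
    if confidence >= threshold and label in bins:
        bins[label].append(frame)  # hardware equivalent: fire this bin's air jet
        return label
    return None  # low confidence: the piece rides the belt to a catch-all bin
```

The confidence check matters because pieces arrive dirty, rotated, or stuck together; an uncertain guess is better routed to a reject bin than blasted into the wrong one.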

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
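That guess-then-correct loop is simple to express. This is a schematic of the idea only, with invented names; Mattheij's actual pipeline photographs real pieces and periodically retrains the network on the accumulated corpus.

```python
def collect_labels(pieces, guess, correct):
    """Build a training corpus the lazy way: let the current model guess,
    and only ask the human (via correct()) to fix the wrong guesses.
    correct(piece, predicted) returns predicted when the guess was right."""
    corpus = []
    for piece in pieces:
        predicted = guess(piece)
        label = correct(piece, predicted)  # human touches only the mistakes
        corpus.append((piece, label))
    return corpus
```

The payoff is that human effort shrinks as the model improves: the better the guesses get, the fewer corrections each batch needs, which is exactly the self-reinforcing effect described above.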

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.
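The two-pass scheme amounts to grouping twice: once by shape category, then rerunning each shape bin in color mode. A minimal sketch, where the category functions are placeholders (the real machine distinguishes 50+ shape categories and 12 color groups):

```python
def two_pass_sort(pieces, shape_of, color_of):
    """Pass 1: bin pieces by shape. Pass 2: rerun each shape bin by color."""
    by_shape = {}
    for piece in pieces:
        by_shape.setdefault(shape_of(piece), []).append(piece)
    result = {}
    for shape, batch in by_shape.items():
        by_color = {}
        for piece in batch:
            by_color.setdefault(color_of(piece), []).append(piece)
        result[shape] = by_color
    return result
```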

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

Nick Briggs/Comic Relief
What Happened to Jamie and Aurelia From Love Actually?
May 26, 2017

Fans of the romantic comedy Love Actually recently got a bonus reunion in the form of Red Nose Day Actually, a short charity special that gave audiences a peek at where their favorite characters ended up almost 15 years later.

One of the most improbable pairings from the original film was between Jamie (Colin Firth) and Aurelia (Lúcia Moniz), who fell in love despite almost no shared vocabulary. Jamie is English, and Aurelia is Portuguese, and they know just enough of each other’s native tongues for Jamie to propose and Aurelia to accept.

A decade and a half on, they have both improved their knowledge of each other's languages, though not perfectly in Jamie's case. But apparently their love is much stronger than his grasp of Portuguese grammar, because they've got three bilingual kids and another on the way. (And they still enjoy having important romantic moments in the car.)

In 2015, Love Actually script editor Emma Freud revealed via Twitter what happened between Karen and Harry (Emma Thompson and Alan Rickman, who passed away last year). Most of the other couples get happy endings in the short—even if Hugh Grant's character hasn't gotten any better at dancing.

[h/t TV Guide]