New Study Suggests the ‘Uncanny Valley’ Is Real

In 1970, Japanese roboticist Masahiro Mori argued that humans find humanoid robots appealing only up to a point. As robots start to look more and more human, there’s a moment at which they reach a strange middle ground—they appear mostly human but are still identifiably “other.” Mori called this moment the “Uncanny Valley.” 

New York Magazine explains, “Whereas a robot like Wall-E can be easily parsed by our brain as being robotic, those in the uncanny valley often … elicit feelings of unease because they’re close to being human, but not.”

Though the theory has become increasingly popular over the last few decades, there has been little empirical evidence to back it up. One 2011 study of subjects' responses to lifelike robots suggests the effect may come from the brain's inability to reconcile a convincingly human appearance with robotic motion. A systematic review conducted this year concluded that “empirical evidence for the uncanny valley hypothesis is still ambiguous if not non-existent,” but suggested that a perceptual mismatch between artificial and human features might be to blame.

Though the jury is still out, interest in the subject continues. Recently, two researchers, Maya B. Mathur and David B. Reichling, ran a new study to determine how humans respond to robots that have varying levels of human appearance. 

They started by pulling photographs of the faces of 80 real robots. Their first test simply asked volunteers to rate the robots on how human or mechanical they looked, and on whether they seemed to be expressing a positive or negative emotion. Their second and third tests, meanwhile, got to the heart of the uncanny valley question, asking volunteers to rate how “friendly” or “creepy” each robot seemed. They found that as faces started to look more human, volunteers at first rated them as more likable. But just before the robots became nearly indistinguishable from humans, likability ratings dipped, showing that subjects were having an uncanny valley reaction to the humanoid robots.
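
To make the shape of that result concrete, here is a minimal sketch of how one might fit a curve to likability-versus-human-likeness ratings and locate the dip. The numbers below are made up for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical ratings on 0-100 scales, shaped like the pattern Mathur and
# Reichling report: likability rises with human-likeness, dips near the
# human end of the scale, then recovers for fully human-looking faces.
humanness  = np.array([ 5, 15, 25, 35, 45, 55, 65, 75, 85, 95], dtype=float)
likability = np.array([30, 38, 46, 55, 62, 66, 58, 42, 60, 80], dtype=float)

# A cubic is the simplest polynomial that can hold a rise, a dip, and a
# recovery in one curve, so fit that and look for its interior minimum.
curve = np.poly1d(np.polyfit(humanness, likability, deg=3))

# Critical points are roots of the first derivative; a positive second
# derivative marks a local minimum, i.e. the "valley."
crit = curve.deriv().roots
crit = crit[np.isreal(crit)].real
valleys = [x for x in crit
           if humanness.min() < x < humanness.max() and curve.deriv(2)(x) > 0]
for x in valleys:
    print(f"Valley bottom near human-likeness = {x:.0f}, likability = {curve(x):.0f}")
```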

Next, Mathur and Reichling ran experiments to determine how people perceive robots they actually interact with. Testing for perceived “likability” and “trust,” the researchers found that, once again, likability dipped significantly when robot visages entered the uncanny valley. Trust, meanwhile, dipped slightly, but not nearly as much as likability. 

While more research is needed to interpret these preliminary findings, Mathur and Reichling’s study found significant support for Mori’s original hypothesis. So if you get creeped out by humanoid robots like Bina48 or the baby robots used in a recent psychology study, there's now more evidence to explain that feeling. 

[h/t New York Magazine]

Google's AI Can Make Its Own AI Now

Artificial intelligence is advanced enough to do some pretty complicated things: read lips, mimic sounds, analyze photographs of food, and even design beer. Unfortunately, even people who have plenty of coding knowledge might not know how to create the kind of algorithm that can perform these tasks. Google wants to bring the ability to harness artificial intelligence to more people, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software.

The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in.
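
As a rough illustration of those hand-tuned steps, here is a minimal sketch, assuming the scikit-learn library and its bundled digits dataset, of a pipeline where a person preprocesses the data, sets the parameters, and analyzes the results. None of this is Google's code:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)

# Preprocess: scale features so no single pixel dominates the gradient updates.
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Set parameters: layer size and learning rate are chosen by hand here --
# exactly the kind of expert judgment AutoML tries to automate.
model = MLPClassifier(hidden_layer_sizes=(64,), learning_rate_init=0.001,
                      max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Analyze the results: held-out accuracy tells us whether the choices worked.
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```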

The idea behind AutoML is that people who aren't hyper-specialists in the machine-learning field will be able to use it to create their own machine-learning algorithms without having to do as much legwork. It could also cut down on the menial labor developers do, since the software handles the work of training the resulting neural networks, which often involves a lot of trial and error, as WIRED writes.
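
For a flavor of the idea, only loosely analogous to Google's far more sophisticated system, here is a toy sketch in which a program, rather than a person, proposes model configurations, trains each one, and keeps the best performer:

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = random.Random(0)
best_score, best_config = 0.0, None
for trial in range(10):
    # The "outer" program picks the architecture and learning rate...
    config = {
        "hidden_layer_sizes": (rng.choice([16, 32, 64, 128]),),
        "learning_rate_init": rng.choice([0.0001, 0.001, 0.01]),
    }
    # ...and the trial-and-error of training and evaluating runs automatically.
    # (A real system would score candidates on a separate validation split.)
    model = MLPClassifier(max_iter=200, random_state=0, **config)
    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)
    if score > best_score:
        best_score, best_config = score, config

print(f"Best of 10 trials: {best_score:.3f} with {best_config}")
```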

Aside from giving robots the ability to turn around and make new robots—somewhere, a novelist is plotting out a dystopian sci-fi story around that idea—it could make machine learning more accessible for people who don't work at Google, too. Companies and academic researchers are already trying to deploy AI to calculate calories based on food photos, find the best way to teach kids, and identify health risks in medical patients. Making it easier to create sophisticated machine-learning programs could lead to even more uses.

[h/t WIRED]

Aibo, Sony’s Failed Robot Dog, Is Returning as a Smart Home Device

When Sony released its robotic dog Aibo in 1999, marketing it as “Man’s Best Friend for the 21st Century,” sales were impressive. But the public fascination didn’t last forever. Even though it was low-maintenance and allergy-free, most dog-lovers still preferred the pets they had to clean up after and feed. Aibo was discontinued seven years later.

Now, Mashable reports that Aibo is making a comeback, and it’s been given a few updates to make it a better fit for the current decade. When the robot companion returns to shelves in spring of 2018, it will double as a smart home device. That’s a big step up from the early Aibos, which couldn’t do much beyond playing fetch, wagging their tails, and singing the occasional song.

Sony’s original Aibo team, which was redistributed throughout the company in 2006, has reformed to tackle the project. Instead of trying to replace your flesh-and-blood Fido at home, they’ve designed a robot that can compete with AI home speakers like Amazon Echo and Google Home. The new dog can connect to the internet, so owners will be able to command it to do things like look up the weather as well as sit and fetch. Aibo will run on open-source software, which means that third-party developers will be able to program new features that Sony doesn’t include in the initial release.
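
Sony hasn’t published the details of those developer tools, so purely as a hypothetical sketch, with every name below invented for illustration, a third-party extension point for new voice commands might look something like this:

```python
# Hypothetical toy registry mapping spoken commands to behaviors.
# None of these names come from Sony's actual software.
from typing import Callable, Dict

class RobotSkills:
    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[], str]] = {}

    def register(self, command: str, behavior: Callable[[], str]) -> None:
        self._skills[command] = behavior

    def handle(self, command: str) -> str:
        skill = self._skills.get(command)
        return skill() if skill else "tilts head, confused"

skills = RobotSkills()
skills.register("sit", lambda: "sits")
# A third-party developer drops in a new internet-connected behavior:
skills.register("weather", lambda: "fetches today's forecast and barks twice")

print(skills.handle("weather"))
```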

While Aibo is often remembered as a turn-of-the-millennium failure, it's still beloved in some communities. In 2015, The New York Times published a short documentary profiling owners in Japan who struggle to care for their robots as parts become scarce. When the pets break down for good, some owners even hold Aibo funerals. It will soon become clear whether the 2018 models inspire a cult following of their own.

[h/t Mashable]
