New Study Suggests the ‘Uncanny Valley’ Is Real

In 1970, Japanese roboticist Masahiro Mori argued that humans find humanoid robots appealing only up to a point. As robots start to look more and more human, they reach a strange middle ground: they appear mostly human but are still identifiably “other,” and our affinity for them abruptly drops. Mori called this dip the “uncanny valley.”

New York Magazine explains, “Whereas a robot like Wall-E can be easily parsed by our brain as being robotic, those in the uncanny valley often … elicit feelings of unease because they’re close to being human, but not.”

Though the theory has become increasingly popular over the last few decades, there has been little empirical evidence to back it up. One 2011 study of subjects’ responses to lifelike robots suggested the effect may stem from the brain’s inability to reconcile a convincingly human appearance with robotic motion. A systematic review of the research, conducted this year, concluded that “empirical evidence for the uncanny valley hypothesis is still ambiguous if not non-existent,” though it proposed that a perceptual mismatch between artificial and human features might be to blame.

Though the jury is still out, interest in the subject continues. Recently, researchers Maya B. Mathur and David B. Reichling ran a new study to determine how humans respond to robot faces with varying degrees of human likeness.

They started by pulling photographs of the faces of 80 real robots. Their first test simply asked volunteers to rate the robots on how human or mechanical they seemed, and on whether they appeared to be expressing a positive or negative emotion. Their second and third tests got to the heart of the uncanny valley question, asking volunteers to rate how “friendly” or “creepy” each robot seemed. They found that as faces started to look more human, volunteers at first described them as more likable. But just before the robots became nearly indistinguishable from humans, likability ratings dipped, suggesting that subjects were having an uncanny valley reaction to the humanoid robots.
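
To picture what those ratings are testing for, imagine plotting each face's average likability against how human it looks: an uncanny valley shows up as a dip in an otherwise rising curve. Here is a minimal sketch of that idea in Python, using made-up numbers rather than the study's actual data or statistical methods.

```python
import numpy as np

# Hypothetical data: each robot face gets a "humanness" score
# (0 = fully mechanical, 100 = fully human) and a mean likability rating.
humanness = np.array([5, 15, 25, 35, 45, 55, 65, 75, 85, 95], dtype=float)
likability = np.array([20, 28, 35, 41, 44, 38, 22, 15, 40, 70], dtype=float)

# Fit a cubic curve: an uncanny-valley pattern appears as a local
# minimum between the two ends of the humanness scale.
curve = np.poly1d(np.polyfit(humanness, likability, deg=3))

# Find the fitted curve's interior critical points and classify each one.
for x in sorted(r.real for r in curve.deriv().roots
                if np.isreal(r) and 0 < r.real < 100):
    kind = "valley" if curve.deriv(2)(x) > 0 else "peak"
    print(f"{kind} at humanness {x:.1f} (likability {curve(x):.1f})")
```

Mathur and Reichling's actual analysis was far more rigorous statistically, but the shape it looks for is the same: likability rising, dipping near the human end of the scale, then recovering.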

Next, Mathur and Reichling ran experiments to determine how people perceive robots they actually interact with. Testing for perceived “likability” and “trust,” the researchers found that, once again, likability dipped significantly when robot visages entered the uncanny valley. Trust, meanwhile, dipped slightly, but not nearly as much as likability. 

While more research is needed to interpret these preliminary findings, Mathur and Reichling’s study found significant support for Mori’s original hypothesis. So if you get creeped out by humanoid robots like Bina48 or the baby robots used in a recent psychology study, there's now more evidence to explain that feeling. 

[h/t New York Magazine]

Researchers in Singapore Deploy Robot Swans to Test Water Quality

There's something peculiar about the new swans floating around reservoirs in Singapore. They drift across the water like normal birds, but upon closer inspection, onlookers will find they're not birds at all: They're cleverly disguised robots designed to test the quality of the city's water.

As Dezeen reports, the high-tech waterfowl, dubbed NUSwan (New Smart Water Assessment Network), are the work of researchers at the National University of Singapore [PDF]. The team invented the devices as a way to tackle the challenges of maintaining an urban water source. "Water bodies are exposed to varying sources of pollutants from urban run-offs and industries," they write in a statement. "Several methods and protocols in monitoring pollutants are already in place. However, the boundaries of extensive assessment for the water bodies are limited by labor intensive and resource exhaustive methods."

By building water assessment technology into a plastic swan, they're able to analyze the quality of the reservoirs cheaply and discreetly. Sensors on the robots' undersides measure factors like dissolved oxygen and chlorophyll levels. The swans wirelessly transmit whatever data they collect to the command center on land, and based on what they send, human pilots can remotely tweak the robots' performance in real time. The hope is that the simple, adaptable technology will allow researchers to take smarter samples and better understand the impact of the reservoir's micro-ecosystem on water quality.
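
That sense, transmit, and adjust loop is simple to sketch in code. The following is a hypothetical illustration of the kind of telemetry a NUSwan-style robot might stream to shore; the field names, units, and functions are assumptions made for illustration, not the NUS team's actual software.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class WaterSample:
    """One reading from the swan's underside sensors (units assumed)."""
    timestamp: float
    dissolved_oxygen_mg_l: float  # dissolved oxygen in mg/L
    chlorophyll_ug_l: float       # chlorophyll in micrograms/L
    latitude: float
    longitude: float

def read_sensors() -> WaterSample:
    # Stand-in for the onboard sensor readout; the values are fabricated.
    return WaterSample(
        timestamp=time.time(),
        dissolved_oxygen_mg_l=7.8,
        chlorophyll_ug_l=3.2,
        latitude=1.3521,
        longitude=103.8198,
    )

def send_to_command_center(sample: WaterSample) -> None:
    # Stand-in for the wireless uplink to the shore station.
    print(json.dumps(asdict(sample)))

# Sample on a fixed schedule; in the real system, operators on shore can
# retune the robot (route, sampling rate) based on the incoming data.
for _ in range(3):  # a real robot would loop indefinitely
    send_to_command_center(read_sensors())
    time.sleep(1)  # hypothetical interval, shortened for the demo
```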

This isn't the first time robots disguised as animals have been used to study nature. Check out this clip from the BBC series Spy in the Wild for an idea of just how realistic these robots can get.

[h/t Dezeen]

Google's AI Can Make Its Own AI Now

Artificial intelligence is advanced enough to do some pretty complicated things: read lips, mimic sounds, analyze photographs of food, and even design beer. Unfortunately, even people with plenty of coding knowledge might not know how to create the kind of algorithm that performs these tasks. Google wants to put artificial intelligence in more people's hands, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software.

The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in.

The idea behind AutoML is that people who aren't hyper-specialists in the machine-learning field will be able to use AutoML to create their own machine-learning algorithms, without having to do as much legwork. It can also limit the amount of menial labor developers have to do, since the software can do the work of training the resulting neural networks, which often involves a lot of trial and error, as WIRED writes.
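
The article doesn't describe how AutoML works internally, but the core idea, a program that searches over model configurations instead of a human picking them, can be sketched as a simple random search. Everything below (the search space, the train_and_score stand-in) is illustrative, not Google's actual system.

```python
import random

# Hypothetical search space: settings a human expert would otherwise
# pick by hand.
SEARCH_SPACE = {
    "learning_rate": [0.1, 0.01, 0.001],
    "hidden_layers": [1, 2, 3],
    "units_per_layer": [32, 64, 128],
}

def train_and_score(config: dict) -> float:
    # Stand-in for training a model with this configuration and
    # measuring validation accuracy; a real system spends nearly all
    # of its compute here.
    rng = random.Random(str(sorted(config.items())))  # fake, repeatable score
    return rng.uniform(0.5, 0.99)

def random_search(trials: int = 20):
    """Try random configurations and keep the best-scoring one."""
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {name: random.choice(opts) for name, opts in SEARCH_SPACE.items()}
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = random_search()
print(f"best config: {config} (validation accuracy {score:.2f})")
```

Google's actual system is reportedly far more sophisticated, using machine learning itself to propose candidate models, but the division of labor is the same: the outer program proposes, trains, and evaluates candidates so a human expert doesn't have to.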

Aside from giving robots the ability to turn around and make new robots—somewhere, a novelist is plotting out a dystopian sci-fi story around that idea—it could make machine learning more accessible for people who don't work at Google, too. Companies and academic researchers are already trying to deploy AI to calculate calories based on food photos, find the best way to teach kids, and identify health risks in medical patients. Making it easier to create sophisticated machine-learning programs could lead to even more uses.

[h/t WIRED]
