MIT Wants to Teach Robots to Do Your Chores

iStock

Teaching a robot basic human tasks is more of a challenge than it seems. To teach a robot to pour you a glass of orange juice, for instance, the 'bot not only has to recognize the command to take the juice out of the fridge and pour it into a glass, it also has to understand the many tiny steps the human brain fills in automatically, like walking into the kitchen, opening the cupboard, and grabbing an empty glass.

VirtualHome, a 3D virtual environment created by MIT's Computer Science and Artificial Intelligence Laboratory with researchers at the University of Toronto, is designed to teach robots exactly how to accomplish household tasks like pouring juice. The simulator acts as a training ground for artificial intelligence, turning a large set of household tasks into robot-friendly, step-by-step programs.

First, researchers created a knowledge base that the AI would use to perform tasks [PDF]. They asked participants on Amazon's Mechanical Turk to describe household activities, like making coffee or turning on the television, step by step. Because the participants wrote as if instructing another human, their descriptions naturally left out steps a robot would need spelled out: the "watch TV" instructions, for example, skipped over "walk over to the TV" and "sit on the sofa and watch." The researchers then had the same participants turn these descriptions into programs using a simple system designed to teach young kids how to code. All told, they created more than 2800 programs for household tasks.
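To get a feel for what such a program might look like, here is a toy sketch in Python. The step names, the (action, target) representation, and the run_program helper are illustrative assumptions, not VirtualHome's actual syntax; the point is that every "obvious" step a human would skip gets written out explicitly.

```python
# A crowd-sourced household-task program, sketched as a flat list of
# (action, target) steps. Hypothetical names, not VirtualHome's format.
WATCH_TV = [
    ("walk", "living room"),
    ("walk", "television"),    # implicit for a human, explicit for a robot
    ("switch_on", "television"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "television"),
]

def run_program(program):
    """Feed each atomic step, in order, to a (simulated) agent."""
    for action, target in program:
        print(f"agent: {action} -> {target}")

run_program(WATCH_TV)
```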

An avatar sets the table in a simulated dining room.
MIT CSAIL

Then, the researchers tested these programs in a Sims-inspired virtual home to see whether the crowd-sourced instructions could actually be used to train robots. They turned the programs into videos in which a virtual agent executed the household task described by the code.

The researchers were focused on creating a virtual environment that could serve as a dataset for future AI training, rather than training any actual robots right now. But their model is designed so that one day, artificial intelligence could be trained by someone who isn't a robotics expert, converting natural language commands into robot-friendly code.
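As a toy illustration of that last idea, converting a natural-language command into robot-friendly code could be pictured as looking the command up in a library of crowd-sourced programs. Everything below (the library and the program_for function) is a hypothetical sketch, not the researchers' model, which would rely on learned components rather than a dictionary lookup.

```python
# Hypothetical sketch: resolving a spoken command to a stored
# step-by-step program. A real system would use a trained model;
# the dictionary lookup here just illustrates the idea.
PROGRAM_LIBRARY = {
    "watch tv": [
        ("walk", "television"), ("switch_on", "television"),
        ("walk", "sofa"), ("sit", "sofa"),
    ],
    "make coffee": [
        ("walk", "kitchen"), ("grab", "mug"),
        ("switch_on", "coffee maker"), ("pour", "coffee"),
    ],
}

def program_for(command):
    """Return the stored program for a command, or None if unknown."""
    return PROGRAM_LIBRARY.get(command.strip().lower())

print(program_for("Watch TV"))
```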

In the future, they hope to be able to turn videos from real life into similar programs, so that a robot could learn to do simple tasks by watching a YouTube video. An artificial intelligence system like Amazon's Alexa wouldn't need to be programmed by its manufacturer to do every single task—it could learn on the fly, without waiting for a developer to create a new skill.

Google Translate Now Lets Your Smartphone's Camera Read 13 More Languages in Real Time

iStock.com/nazar_ab

Your days of lugging around foreign-language dictionaries while traveling are behind you. As VentureBeat reports, Google Translate's in-app camera now recognizes 13 new languages, including Arabic, Hindi, and Vietnamese.

In 2015, the Google Translate app launched a feature that translates written text in real time. To use it, tap the app's camera icon and point your phone at the words you want to decode, whether they're on a menu, billboard, or road sign. Almost immediately, the app replaces the text displayed on your camera with a translation in your preferred language.

The tool initially worked with 27 languages, and Google has added more over the past few years. With the latest additions, the feature now recognizes about 50 languages.

Many of the new languages now compatible with Google Translate, including Bengali, Gujarati, Kannada, Malayalam, Marathi, Nepali, Punjabi, Tamil, Telugu, and Thai, are widely spoken in South and Southeast Asia. Arabic, Bengali, Hindi, and Punjabi are four of the 10 most common languages on Earth.

Google Translate users can download the new update now for iOS and Android phones.

[h/t VentureBeat]

Mountable Laserlight Projector Creates a Personal Bike Lane for Cyclists

Beryl, Kickstarter

All the blinking lights and reflectors in the world aren't enough to prevent your bike from disappearing into a truck's blind spot. But what if you could extend the length of your bike by an extra 20 feet with the click of a button? That's the concept behind the Laserlight Core, a product currently raising funds on Kickstarter, Fast Company reports.

The Laserlight resembles a small flashlight and attaches easily to the front of your handlebars. When biking, you can switch it on to project a laser image of a green bike symbol onto the street several yards ahead of you. If the driver of a van, truck, or bus can't see your actual bike in their mirror, the idea is that the projection will make them aware of your presence. The projection is about the width of a bike lane, so it may also encourage drivers to give cyclists more road space than they otherwise would. According to an independent study of the light from Transport for London, bikers using the Laserlight are about 97 percent visible at night to drivers in vans, compared to 65 percent visibility with a standard LED light.

Emily Brooke came up with the concept seven years ago as a design student at England's University of Brighton. After a frighteningly close encounter with a van while biking, she wondered if she could invent a way to get drivers' attention even when she was stuck squarely in their blind spots.

Her product, originally dubbed Blaze, launched on Kickstarter in 2012. The campaign was a success, and now she's returning to the crowdfunding platform with a new-and-improved version. The Laserlight Core is easier to mount than its predecessor, and it projects a clearer image. You can reserve yours with a pledge of $75 or more, with shipping estimated for December of this year. (It makes a great gift for the dedicated cyclist in your life, too.)

[h/t Fast Company]
