This Tiny Robot Can Pull 2000 Times Its Weight

Stanford’s µTug robot doesn’t look like much, but the quarter-sized machine could put you to shame at the gym.

Using adhesives inspired by geckos and ants, this little 'bot can pull up to 2000 times its own weight, which for a 12-gram machine works out to roughly 24 kilograms. The µTug, developed by Stanford's Biomimetics and Dexterous Manipulation Lab, weighs less than a third of a bar of Hershey's chocolate, yet it can drag a full cup of coffee across a desk.

Just watch: 

Finally, a pocket-sized automaton that can clean your desk for you!

[h/t: Digg]

MIT’s New AI Can Sense Your Movements Through Walls Using Radio Signals

New artificial intelligence technology developed at MIT can see through walls, and it knows what you’re doing.

RF-Pose, created by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), uses wireless signals to estimate a person's pose through a wall. It can only produce a 2D stick figure of your movements, but that stick figure is enough to reveal what you're doing.

The system, described in a new paper [PDF], uses a neural network to piece together radio signals bouncing off the human body. It takes advantage of the fact that radio-frequency signals in the Wi-Fi range pass through walls but reflect off people.

Using data from low-power radio signals (about 1000 times weaker than what a home Wi-Fi router puts out), the algorithm builds a reasonably accurate picture of what the person behind the wall is doing by stitching together the reflections coming off the moving body.
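To make that idea a bit more concrete, here is a toy sketch in Python (using PyTorch). It is not CSAIL's actual RF-Pose model; the frame count, layer sizes, and number of body keypoints are all made-up stand-ins. The point is simply the shape of the problem: a neural network takes a short window of radio-reflection "heatmap" frames and outputs a confidence map for each body keypoint, which can then be drawn as a stick figure.

import torch
import torch.nn as nn

class ToyRFPoseNet(nn.Module):
    # Toy illustration only: maps a stack of radio "heatmap" frames to
    # per-keypoint confidence maps. All sizes below are invented.
    def __init__(self, n_frames=12, n_keypoints=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, n_keypoints, kernel_size=1),
        )

    def forward(self, rf_frames):
        # rf_frames: (batch, n_frames, height, width) radio heatmaps
        # returns:   (batch, n_keypoints, height, width) confidence maps
        return self.net(rf_frames)

model = ToyRFPoseNet()
fake_rf_window = torch.randn(1, 12, 64, 64)  # stand-in for real antenna data
keypoint_maps = model(fake_rf_window)
print(keypoint_maps.shape)  # torch.Size([1, 14, 64, 64])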

The system can recognize movement in poor lighting and distinguish multiple individuals in a scene. The technology is still in development, and it's not hard to imagine the military using it for surveillance, but the researchers also suggest it could be useful for video game design and search-and-rescue missions. It might also help doctors monitor and analyze the movements of patients with disorders like Parkinson's disease and multiple sclerosis.

This is just the latest in a series of projects using radio signals to mimic X-ray vision. CSAIL has been working on similar Wi-Fi-based technology for several years, creating algorithms that recognize human forms and detect motion through obstructions. In the future, the researchers hope to expand the system so it can capture movement as 3D figures rather than the current 2D stick figures.

MIT Wants to Teach Robots to Do Your Chores

Teaching a robot basic human tasks is more of a challenge than it seems. To teach a robot to pour you a glass of orange juice, for instance, the 'bot doesn't just have to recognize the command to take the juice out of the fridge and pour it into a glass; it also has to understand all the tiny sub-steps the human brain fills in automatically, like walking into the kitchen, opening the cupboard, and grabbing an empty glass.

VirtualHome, a 3D virtual environment created by MIT's Computer Science and Artificial Intelligence Laboratory with researchers at the University of Toronto, is designed to teach robots exactly how to accomplish household tasks like pouring juice. The simulator acts as a training ground for artificial intelligence, turning a large set of household tasks into robot-friendly, step-by-step programs.

First, researchers created a knowledge base that the AI would use to perform tasks [PDF]. They asked participants on Amazon's Mechanical Turk to describe household activities, like making coffee or turning on the television, step by step. Because the participants wrote as if instructing another human, their descriptions naturally left out steps a robot would need: the "watch TV" instructions skipped obvious moves like "walk over to the TV" or "sit on the sofa and watch." The researchers then had the same participants turn these tasks into explicit programs using a simple system designed to teach young kids how to code. All told, they created more than 2800 programs for household tasks.
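For a rough sense of what such a program looks like, here is a minimal, hypothetical sketch in Python. The action names, object labels, and overall format are illustrative stand-ins, not the exact notation used in the VirtualHome dataset.

# A hypothetical "watch TV" program, written as the explicit atomic steps
# a robot (or VirtualHome's virtual agent) would need. Every name below
# is made up for illustration.
watch_tv = [
    ("walk", "living_room"),
    ("walk", "television"),
    ("switch_on", "television"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "television"),
]

def run(program):
    # Pretend-execute a program by printing each atomic step in order.
    for i, (action, target) in enumerate(program, start=1):
        print(f"{i}. {action} -> {target}")

run(watch_tv)

Spelling the task out at this level of granularity is the whole point: it makes explicit the steps humans never bother to say out loud.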

An avatar sets the table in a simulated dining room.
MIT CSAIL

Then the researchers tested these programs in a Sims-inspired virtual home to see whether the crowdsourced instructions could work to train robots. They turned the programs into videos in which a virtual agent executed each household task based on the code.

The researchers were focused on creating a virtual environment that could serve as a dataset for future AI training, rather than training any actual robots right now. But their model is designed so that one day, artificial intelligence could be trained by someone who isn't a robotics expert, converting natural language commands into robot-friendly code.

In the future, they hope to be able to turn videos from real life into similar programs, so that a robot could learn to do simple tasks by watching a YouTube video. An artificial intelligence system like Amazon's Alexa wouldn't need to be programmed by its manufacturer to do every single task—it could learn on the fly, without waiting for a developer to create a new skill.
