
High School Students in Japan Can Now Take Drone Classes

iStock

Forget the old staples like wood shop and home ec. Students at one private high school in Japan are getting a more high-tech education through a new drone and robotics course, Engadget reports. The class, offered at Vantan High School, is described by RocketNews24 as a full-time, three-year program in which students are expected to learn everything there is to know about how drones work, including maintaining and inspecting them, the art of piloting, and computer programming.

RocketNews24 reports that the nontraditional high school—which already offers courses in modeling, barista training, and nail art—will begin offering the drone and robotics program next spring to current high school and graduating junior high school students. There are also plans to offer a six-month drone pilot and aerial course to adults in the near future.

With the market for both defense and civilian drones forecast to grow significantly over the next five to 10 years, the class may be a good opportunity for students eager to break into the trendy tech field.

[h/t Engadget]

Know of something you think we should cover? Email us at tips@mentalfloss.com.

MIT’s New AI Can Sense Your Movements Through Walls Using Radio Signals
Jason Dorfman, MIT CSAIL

New artificial intelligence technology developed at MIT can see through walls, and it knows what you’re doing.

RF-Pose, created by researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL), uses wireless signals to estimate a person’s pose through a wall. It can only come up with a 2D stick figure of your movements, but it can nonetheless see your actions.

The system, described in a new paper [PDF], uses a neural network to piece together radio signals bouncing off the human body. It takes advantage of the fact that the body reflects radio frequency signals in the Wi-Fi range. These Wi-Fi signals can move through walls, but not through people.

Using data from low-power radio signals—1000 times lower than the power your home Wi-Fi router puts out—this algorithm can generate a relatively accurate picture of what the person behind the wall is doing by piecing together the signals reflected by the moving body.
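The final step of a pipeline like the one described above can be sketched in a few lines. RF-Pose itself is a trained neural network, but conceptually the network emits one confidence heatmap per body keypoint, and the peak of each heatmap gives that keypoint's 2D position in the stick figure. The grid size and keypoint names below are illustrative assumptions, not details from the CSAIL paper:

```python
import numpy as np

# Hypothetical keypoint list for a minimal stick figure.
KEYPOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

def heatmaps_to_skeleton(heatmaps):
    """Take the argmax of each per-keypoint confidence heatmap
    as that keypoint's 2D location."""
    skeleton = {}
    for name, hmap in zip(KEYPOINTS, heatmaps):
        y, x = np.unravel_index(np.argmax(hmap), hmap.shape)
        skeleton[name] = (int(x), int(y))
    return skeleton

# Stand-in for the network's output: 5 random heatmaps on a 64x64 grid.
rng = np.random.default_rng(0)
fake_heatmaps = rng.random((len(KEYPOINTS), 64, 64))
skeleton = heatmaps_to_skeleton(fake_heatmaps)
print(skeleton)
```

The hard part, of course, is the network that turns raw reflected radio signals into those heatmaps; this sketch only shows how 2D joint positions fall out once the heatmaps exist.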

The system can recognize movement in poor lighting and identify multiple different individuals in a scene. Though the technology is still in development, it’s not hard to imagine that the military might use it in surveillance, but the researchers also suggest that it may be useful for video game design and search-and-rescue missions. It might also help doctors monitor and analyze the movements of patients with disorders like Parkinson’s disease and multiple sclerosis.

This is just the latest in a series of projects using radio signals to mimic X-ray vision. CSAIL has been working on similar technology using Wi-Fi signals for several years, creating algorithms to recognize human forms and see motion through obstructions. In the future, they hope to expand the system to be able to recognize movement with 3D images rather than the current 2D stick figures.

MIT Wants to Teach Robots to Do Your Chores
iStock

Teaching a robot basic human tasks is more of a challenge than it seems. To teach a robot to pour you a glass of orange juice, for instance, the 'bot has to not only recognize the command to take the juice out of the fridge and pour it into a glass, but also understand the many tiny aspects of the task that the human brain infers—like, say, the steps where you have to walk into the kitchen, open the cupboard, and grab an empty glass.

VirtualHome, a 3D virtual environment created by MIT's Computer Science and Artificial Intelligence Laboratory with researchers at the University of Toronto, is designed to teach robots exactly how to accomplish household tasks like pouring juice. The simulator acts as a training ground for artificial intelligence, turning a large set of household tasks into robot-friendly, sequence-by-sequence programs.

First, researchers created a knowledge base that the AI would use to perform tasks [PDF]. The researchers asked participants on Amazon's Mechanical Turk to describe household activities, like making coffee or turning on the television, step by step. Because the participants wrote as if instructing another human, their descriptions naturally left out steps a robot would need—the "watch TV" command omitted obvious ones like "walk over to the TV" or "sit on the sofa and watch." The researchers then had the same participants turn these tasks into programs using a simple system designed to teach young kids how to code. All told, they created more than 2800 programs for household tasks.
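To make the idea concrete, here is a hedged sketch of what a "watch TV" task might look like once expanded into explicit, robot-friendly steps. The "[ACTION] <object>" format is modeled loosely on the step syntax in the VirtualHome paper; the exact action names and objects below are illustrative assumptions:

```python
# Hypothetical expanded program for "watch TV", including the steps a
# human instruction would leave implicit (walking over, sitting down).
WATCH_TV = [
    "[WALK] <living_room>",
    "[WALK] <television>",
    "[SWITCHON] <television>",
    "[WALK] <sofa>",
    "[SIT] <sofa>",
    "[WATCH] <television>",
]

def parse_step(step):
    """Split one step into an (action, object) pair that a virtual
    agent could execute."""
    action, obj = step.split(maxsplit=1)
    return action.strip("[]"), obj.strip("<>")

program = [parse_step(s) for s in WATCH_TV]
print(program)
```

A sequence like this is what the simulator can actually execute: each (action, object) pair maps to one animation step for the virtual agent, which is exactly the granularity the crowd-sourced human descriptions were missing.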

An avatar sets the table in a simulated dining room.
MIT CSAIL

Then, the researchers tested these programs in a Sims-inspired virtual home to see if the crowd-sourced instructions could work to train robots. They turned the programs into videos in which a virtual agent would execute the household task based on the code.

The researchers were focused on creating a virtual environment that could serve as a dataset for future AI training, rather than training any actual robots right now. But their model is designed so that one day, artificial intelligence could be trained by someone who isn't a robotics expert, converting natural language commands into robot-friendly code.

In the future, they hope to be able to turn videos from real life into similar programs, so that a robot could learn to do simple tasks by watching a YouTube video. An artificial intelligence system like Amazon's Alexa wouldn't need to be programmed by its manufacturer to do every single task—it could learn on the fly, without waiting for a developer to create a new skill.
