
5 Robots That Screwed Up Big-Time


Technology doesn’t always function as designed. Computers freeze, autocorrect sends inappropriate responses, and robots accidentally ruin everything. Here are five times automated ‘bots royally messed up.

1. THE NEWS ‘BOT THAT REPORTED CENTURY-OLD NEWS

Today, news outlets can use artificial intelligence to create videos, crowdsource reporting, write up quarterly earnings reports and sports recaps, and even interview sources. In a field whose core responsibilities are verifying facts and maintaining accuracy, though, robots aren’t always up to the task of playing reporter. On the evening of June 21, 2017, the Los Angeles Times published a story about a magnitude 6.8 earthquake in Santa Barbara, California. The story was true, sort of: The earthquake happened in 1925, not 2017. The QuakeBot used by the Times wrote the story in response to an accidental update from the USGS, sent by a staffer who was merely updating the historical data pertaining to the 1925 quake.

2. THE BOMB SQUAD ROBOT THAT FELL OVER ON LIVE TV

When authorities in St. Louis sent a bomb squad robot to investigate a suspicious package near City Hall in September 2016, they didn’t expect it to become a viral internet sensation. But after it inspected the possibly dangerous item—which turned out to be a harmless duffel bag full of clothes—its state-of-the-art technological capabilities were foiled by a much more difficult obstacle. Like so many robots before it, it tried to navigate uneven terrain, and fell flat on its face, to the delight of the news crew watching the scene unfold from a helicopter. “It appears the bomb robot has tipped over at a hill,” the local FOX affiliate tweeted, attaching a photo of the sad ‘bot lying prone in the grass.

3. THE ONLINE SHOPPING ‘BOT THAT SCORED DRUGS

In late December 2014, a group of artists designed an autonomous online shopping robot to comb a Darknet marketplace, purchasing goods and sending them back to the Swiss gallery where it was on exhibit. Not all of its $100-per-week bitcoin budget went to illegal items, but it did order 10 ecstasy pills, bringing the project to the attention of the police. (It also ordered counterfeit purses and shoes.) The police confiscated the robot, but eventually released it and decided not to charge its creators.

4. THE CHAT ROBOT THAT LEARNED TO BE A JERK

In 2016, Microsoft launched an A.I. chatbot named Tay that could learn from interactions it had with people online. It was designed to carry out real-time research on conversation using Twitter, Facebook, GroupMe, and Snapchat, among other platforms, essentially learning to talk like a Millennial. Sadly, people are not always their best selves online. In less than a day, the ‘bot had learned to tweet out offensive jokes, and it was pulled offline within 24 hours of its launch.

5. THE ROOMBA THAT MADE EVERYTHING IRREVOCABLY DIRTIER

Roombas are designed to vacuum your house while you sleep, chill on your couch, or otherwise tune out. Unfortunately, they can’t totally be trusted on their own. The tale of a 2016 Roomba “pooptastrophe” went viral after robot vacuum user Jesse Newton posted on Facebook about the night his dog’s bathroom accident collided horrifically with his Roomba’s automated run settings. In the middle of the night, his puppy pooped in his living room, just as his Roomba was about to begin its automated cleaning cycle. The robot vacuum ran over the dog poop and proceeded to spread feces throughout the house, ruining rugs, smearing poop on the legs of furniture, and so much more. So much for an effortless cleaning solution.

Google's AI Can Make Its Own AI Now

Artificial intelligence is advanced enough to do some pretty complicated things: read lips, mimic sounds, analyze photographs of food, and even design beer. Unfortunately, even people who have plenty of coding knowledge might not know how to create the kind of algorithm that can perform these tasks. Google wants to bring the ability to harness artificial intelligence to more people, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software.

The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in.

The idea behind AutoML is that people who aren't hyper-specialists in the machine-learning field will be able to use AutoML to create their own machine-learning algorithms, without having to do as much legwork. It can also limit the amount of menial labor developers have to do, since the software can do the work of training the resulting neural networks, which often involves a lot of trial and error, as WIRED writes.

Aside from giving robots the ability to turn around and make new robots—somewhere, a novelist is plotting out a dystopian sci-fi story around that idea—it could make machine learning more accessible for people who don't work at Google, too. Companies and academic researchers are already trying to deploy AI to calculate calories based on food photos, find the best way to teach kids, and identify health risks in medical patients. Making it easier to create sophisticated machine-learning programs could lead to even more uses.

[h/t WIRED]

These LED Crosswalks Adapt to Whoever Is Crossing

Crosswalks are an often-neglected part of urban design; they’re usually just white stripes on dark asphalt. But recently, they’re getting more exciting—and safer—makeovers. In the Netherlands, there is a glow-in-the-dark crosswalk. In western India, there is a 3D crosswalk. And now, in London, there’s an interactive LED crosswalk that changes its configuration based on the situation, as Fast Company reports.

Created by the London-based design studio Umbrellium, the Starling Crossing (short for the much more tongue-twisting STigmergic Adaptive Responsive LearnING Crossing) changes its layout, size, configuration, and other design factors based on who’s waiting to cross and where they’re going.

“The Starling Crossing is a pedestrian crossing, built on today’s technology, that puts people first, enabling them to cross safely the way they want to cross, rather than one that tells them they can only cross in one place or a fixed way,” the company writes. That means that the system—which relies on cameras and artificial intelligence to monitor both pedestrian and vehicle traffic—adapts based on road conditions and where it thinks a pedestrian is going to go.

Starling Crossing - overview from Umbrellium on Vimeo.

If a bike is coming down the street, for example, it will project a place in the crosswalk for the cyclist to wait for the light. If a pedestrian is veering left as if they’re going to cross diagonally, it will shift the light-up crosswalk in that direction. During rush hour, when more pedestrians are trying to get across the street, it will widen to accommodate them. It can also detect wet or dark conditions, widening the crosswalk path to give pedestrians more of a buffer zone. And because the neural network can calculate people’s trajectories and velocities, it can trigger a pattern of warning lights to alert someone that they’re about to walk right into an oncoming bike or another unexpected hazard.

All this is to say that the system adapts to the reality of the road and traffic patterns, rather than forcing pedestrians to stay within the confines of a crosswalk system that was designed for car traffic.

The prototype is currently installed on a TV studio set in London, not a real road, and it still has plenty of safety testing to go through before it will appear on a road near you. But hopefully this is the kind of road infrastructure we’ll soon be able to see out in the real world.

[h/t Fast Company]
