10 Things You Might Not Know About Self-Driving Cars

Getty Images

Cars have long been a symbol of freedom in American culture, but advances in technology promise to reshape exactly what that means. In the coming years, cars could very well offer their "drivers" freedom from actually having to drive. Self-driving cars—also referred to as autonomous or driverless—can navigate without human input and could redefine transportation, cities, and countless tangential industries. While you’ve likely heard some chatter or watched a few YouTube videos about the technology, here are 10 things you may not know about driverless vehicles.


The buzz surrounding self-driving cars has been growing lately, but the idea is far from novel. At the 1939 World's Fair in New York, GM’s Futurama exhibit included driverless technology—and experts were sure it would be a reality by the 1960s. Clearly, we’re a little behind.

In 2004, for example, a driverless car challenge made headlines because no vehicles were able to complete it. Tech and auto companies alike are optimistic that the time is near, though. Google is aiming to commercialize its self-driving cars by 2020, Elon Musk says Tesla should have a fully autonomous vehicle complete within two years, and experts expect the technology to be commonplace (and actually affordable to the average American) by 2040.


Auto supplier Delphi, which flies largely under the radar compared to companies like Google and Tesla, showed off its driverless Audi last year. The Roadrunner drove from San Francisco to New York City, navigating 15 states and 3400 miles over the course of nine days. While a driver was behind the wheel just in case, the car reportedly tackled 99 percent of the trip on its own.


While fully driverless technology isn't yet a reality for most of us, the line between standard cars and self-driving ones is blurring. More automakers are equipping models with the sensors, GPS, radar and laser technologies that enable automation; 10 million cars with self-driving features are expected to be on the road by 2020.

The National Highway Traffic Safety Administration defines several levels of autonomous vehicles. In a nutshell, level zero has no automation, while level four would turn the driver into a totally passive passenger.

On the spectrum are function-specific features like automatic braking, lane keeping, and cruise control, considered level one. General Motors is set to offer a level two feature—meaning at least two function-specific automations work together—in its 2017 models. And level three means the driver can cede full control of the car in certain conditions, but must be occasionally available to take the wheel.
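The levels above can be summarized in a short sketch. This is an illustrative toy, not an official classification tool; the function and its parameters are hypothetical, and the descriptions simply restate the NHTSA levels described in this article.

```python
# Hypothetical sketch of the NHTSA automation levels described above.
# The classification logic is illustrative only, not an official rubric.
NHTSA_LEVELS = {
    0: "No automation: the driver controls everything",
    1: "Function-specific: one automated feature (e.g., automatic braking)",
    2: "Combined functions: at least two automated features work together",
    3: "Limited self-driving: driver can cede control but must stay available",
    4: "Full self-driving: occupants are totally passive passengers",
}

def classify(num_automated_features, can_cede_control, needs_driver):
    """Very rough level guess from a car's capabilities (illustrative)."""
    if num_automated_features == 0:
        return 0
    if not can_cede_control:
        return 1 if num_automated_features == 1 else 2
    return 3 if needs_driver else 4

# A 2017 GM model with two cooperating features, no ceding of control:
print(NHTSA_LEVELS[classify(2, False, True)])  # level 2 description
```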


GPS and sensor technology is being applied to tractors, mining trucks, cargo trucks, and more. Autonomous agriculture systems, which include self-driving tractors, have been in use since 2011, while two mines in Australia have been transporting all their goods with self-driving trucks since late last year.

Cargo trucks, which are aiming for level three automation, are at the forefront of vehicle-to-vehicle (V2V) technology, in which sensors on separate trucks communicate directly with one another. The most prominent application is platooning, in which several trucks follow one another at a safe but close distance, dramatically improving fuel efficiency.


Google's fleet [PDF] includes 22 Lexus SUVs fully equipped with autonomous technology and 33 smaller self-driving prototypes. These self-driving vehicles can be spotted on public streets in Mountain View and Austin, and have driven 1.4 million automated miles.


When Google first announced its plans, project director Chris Urmson said the cars wouldn’t "have a steering wheel, accelerator pedal or brake pedal ... because they don't need them."

It seemed like those plans would have to change when the California DMV released draft rules requiring a steering wheel, brake pedal, and licensed operator. But some progress was made just this week.

Earlier this month, the National Highway Traffic Safety Administration approved Google’s proposal for a car with “no need for a human driver.” While it said many regulations will have to be rewritten to address specific requirements—such as the need for and placement of a steering wheel and other controls—the ruling is seen as a huge step forward for the fully autonomous vehicles Urmson and his team are working toward.


Still, regulatory hurdles are expected to be one of the biggest bottlenecks to the adoption of driverless cars, especially given the variation between states. But the federal government is trying to get ahead of that problem: Besides the NHTSA’s promising response to Google, the most recent federal budget proposal included $4 billion in spending over the next decade to test the technology and fast-track the creation of a regulatory framework.

The government is likely champing at the bit to iron out the kinks, since driverless technology promises to reduce carbon emissions, traffic congestion, and car accidents.


All accidents involving Google’s driverless cars have been the result of human error; the first reported accident took place when a human-driven car rear-ended the driverless one. In fact, advanced driver assistance systems and autonomous vehicles are expected to reduce crashes by 90 percent. But they're not always successful: In March 2018, one of Uber's driverless cars struck and killed a pedestrian in Tempe, Arizona.


Based on Google’s no-fault history, insurance company MetroMile calculated that annual car insurance for a self-driving car would cost just $250. Director Jason Foucher added that, in a future where all vehicles on the road were fully autonomous, the car manufacturer would likely offer blanket product liability coverage, with the cost of insurance, repairs and warranty included in the purchase or lease price.


The programming of autonomous vehicles is raising philosophical questions, the most popular of which is called “The Trolley Problem.” The debate is centered on worst-case scenarios: Should a self-driving car be programmed to protect the driver at all costs, or to do the least amount of damage possible?

Dr. Gregory Pence, a university philosophy chair, believes it’s unlikely a car can actually be programmed to handle all scenarios and make such a decision even if the debate were settled. But he stressed that such ethical questions still need to be considered early on in the creation and adoption of new technologies like autonomous vehicles.

Additional Sources: Digital Destiny: How the New Age of Data Will Transform the Way We Work, Live, and Communicate

This Smart Mug Alerts You When You've Had Too Much Caffeine


Since 2010, Ember has been giving perfectionists ultimate control over their morning coffee. Their travel mug lets you set the preferred temperature of your drink down to the degree when you're on the go, and their ceramic cup allows you to do the same in the office or at home. Now, in addition to telling you how hot your beverage is at all times, Ember lets you know how much caffeine you're consuming through Apple's Health app, CNET reports.

Ember's new feature takes advantage of the same Bluetooth technology that lets you control the temperature of your drink from your smartphone. Beginning October 17, you can connect your Ember vessel to your Apple device to keep track of what you're drinking. If you drink all your tea and coffee from an Ember mug, the Health app should be able to give you a rough estimate of your daily caffeine intake.

Ember wasn't originally designed to measure caffeine content, but its built-in sensors allow it to do so. In order to maintain a constant temperature, the mug needs to know whether it's full or empty, and exactly how much liquid it's holding at any given time. The feature also gives you the option to preset your serving size within the app if you drink the same amount of coffee every day. And if you like to drink specific beverages at their recommended temperatures, the mug can guess what type of drink it's holding based on how hot it is.
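The estimate described above boils down to simple arithmetic: volume consumed times a per-drink caffeine concentration. Here is a minimal sketch of that idea; the function, drink names, and concentration figures are all illustrative assumptions, not Ember's actual algorithm or values.

```python
# Hypothetical sketch of a smart mug's caffeine estimate: the mug knows
# how much liquid was consumed, and a per-drink concentration (guessed
# from the drink's temperature) converts ounces to milligrams.

# Rough caffeine concentrations in mg per fluid ounce (illustrative values).
CAFFEINE_MG_PER_OZ = {
    "coffee": 12.0,
    "black_tea": 6.0,
    "green_tea": 3.5,
}

def estimate_caffeine(drinks):
    """Sum caffeine (mg) over (drink_type, ounces_consumed) records."""
    total = 0.0
    for drink_type, ounces in drinks:
        total += CAFFEINE_MG_PER_OZ[drink_type] * ounces
    return total

# Two 10 oz coffees and one 8 oz black tea over a day:
intake = estimate_caffeine([("coffee", 10), ("coffee", 10), ("black_tea", 8)])
print(intake)  # 12*20 + 6*8 = 288.0 mg
```

A real device would feed a running total like this into Apple's Health app rather than printing it.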

The new caffeine-calculating feature from Ember is especially useful for coffee addicts: If the mug senses you've exceeded your recommended caffeine intake for the day, it will alert you on your phone. Here are some energizing caffeine alternatives to keep that from happening.

[h/t CNET]

How Polygraphs Work—And Why They Aren't Admissible in Most Courts


The truth about lie detectors is that we all really want them to work. It would be much easier if, when police were faced with two contradictory versions of a single event, there was a machine that could identify which party was telling the truth. That’s what the innovators behind the modern-day polygraph set out to do—but the scientific community has its doubts about the polygraph, and all over the world, it remains controversial. Even its inventor was worried about calling it a "lie detector."


In 1921, John Larson was working as a part-time cop in Berkeley, California. A budding criminologist with a Ph.D. in physiology, Larson wanted to make police investigations more scientific and less reliant on gut instinct and information obtained from "third degree" interrogations.

Building on the work of William Moulton Marston, Larson believed that the act of deception was accompanied by physical tells. Lying, he thought, makes people nervous, and this could be identified by changes in breathing and blood pressure. Measuring these changes in real-time might serve as a reliable proxy for spotting lies.

Improving upon previously developed technologies, Larson created a device that simultaneously recorded changes in breathing patterns, blood pressure, and pulse. The device was further refined by his younger colleague, Leonarde Keeler, who made it faster, more reliable, and portable and added a perspiration test.

Within a few months, a local newspaper ​convinced Larson to publicly test his invention on a man suspected of killing a priest. Larson's machine, which he called a cardio-pneumo psychogram, indicated the suspect’s guilt; the press dubbed the invention a lie detector.

Despite the plaudits, Larson would become skeptical about his machine’s ability to reliably detect deception—especially in regard to Keeler’s methods, which amounted to “a psychological third degree." He was concerned that the polygraph had never matured into anything beyond a glorified stress-detector, and believed that American society had put too much faith in his device. Toward the end of his life, he would refer to it as “a Frankenstein’s monster, which I have spent over 40 years in combating.”

But Keeler, who patented the machine, was much more committed to the lie-detection project, and was eager to see the machine implemented widely to fight crime. In 1935, results of Keeler’s polygraph test were admitted for the first time as evidence in a jury trial—and secured a conviction.


In its current form, the polygraph test measures changes in respiration, perspiration, and heart rate. Sensors are strapped to the subject's fingers, arm, and chest to report on real-time reactions during interrogation. A spike in these readings indicates nervousness, and potentially points to lying.

To try to eliminate false-positives, the test ​relies on "control questions."

In a murder investigation, for instance, a suspect may be asked relevant questions such as, "Did you know the victim?" or "Did you see her on the night of the murder?" But the suspect will also be asked broad, stress-inducing control questions about general wrongdoing: "Did you ever take something that doesn't belong to you?" or "Did you ever lie to a friend?" The purpose of the control questions is to be vague enough to make every innocent subject anxious (who hasn't ever lied to a friend?). Meanwhile, a guilty subject is likely to be more worried about answering the relevant questions.

This difference is the basis of the polygraph test. According to the American Psychological Association, “A pattern of greater physiological response to relevant questions than to control questions leads to a diagnosis of ‘deception.’” The organization adds that "most psychologists agree that there is little evidence that polygraph tests can accurately detect lies."
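The comparison the APA describes can be sketched in a few lines. This is a deliberately simplified illustration of the relevant-vs-control logic, not a real scoring protocol; the function name, margin, and stress values are all hypothetical.

```python
# Illustrative sketch (not an actual scoring protocol): flag "deception"
# when the average physiological response to relevant questions exceeds
# the average response to control questions by some margin.

def score_polygraph(relevant_responses, control_responses, margin=0.1):
    """Compare mean responses; inputs are arbitrary-unit stress readings."""
    relevant_avg = sum(relevant_responses) / len(relevant_responses)
    control_avg = sum(control_responses) / len(control_responses)
    if relevant_avg > control_avg + margin:
        return "deception indicated"
    return "no deception indicated"

# A subject more stressed by relevant questions than by control questions:
print(score_polygraph([0.9, 0.8, 0.85], [0.5, 0.6, 0.55]))
# → deception indicated
```

The weakness the article goes on to describe is visible even in this toy: the score only compares stress levels, so an innocent but anxious subject can easily cross the threshold.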

But a diagnosis of deception doesn’t necessarily mean that someone has actually lied. A polygraph test doesn’t detect deception directly; it only shows stress, which is why Larson fought so hard against it being categorized as a "lie detector." Testers have a variety of ways to infer deception (like by using control questions), but, according to the American Psychological Association, the inference process is “structured, but unstandardized” and should not be referred to as “lie detection.”

And so, the validity of the results remains a subject of debate. Depending on whom you ask, the reliability of the test ranges from near-certainty to a coin toss. The American Polygraph Association claims the test has an almost 90 percent accuracy rate. But many psychologists—and even some ​police officers—contend that the test is ​biased toward finding liars and has a 50 percent chance of hitting a false-positive for honest people.


Most countries have traditionally been skeptical about the polygraph test and only a handful have incorporated it into their legal system. The test remains most popular in the United States, where many police departments rely on it to extract confessions from suspects. (In 1978, former CIA director Richard Helms argued that that's because "Americans are not very good at" lying.)

Over the years, the U.S. Supreme Court has issued numerous rulings on the question of whether polygraph tests should be admitted as evidence in criminal trials. Before Larson’s invention, courts treated lie-detection tests with suspicion. In a 1922 case, a judge prohibited the results of a pre-polygraph lie detector from being presented at trial, worrying that the test, despite its unreliability, could have an unwarranted sway on a jury’s opinion.

Then, after his polygraph results secured a conviction in a 1935 murder trial (through prior agreement between the defense and prosecution), Keeler—Larson’s protégé—asserted that “the findings of the lie detector are as acceptable in court as fingerprint testimony.”

But numerous court rulings have ensured that this won’t be the case. Though the technology of the polygraph has continued to improve and the questioning process has become more systematic and standardized, scientists and legal experts remain divided on the device's efficacy.

In 1998, the Supreme Court ​ruled that as long as experts remain divided, the risk of false positives is too high. The polygraph test, the court noted, enjoys a scientific “aura of infallibility” despite the fact that “there is simply no consensus that polygraph evidence is reliable,” and passing the test cannot be seen as proof of innocence. Accordingly, taking the test must remain voluntary, and its results must never be presented as conclusive.

Most importantly: The court left it up to the states to decide whether the test can be presented in court at all. Today, 23 states allow polygraph tests to be admitted as evidence in a trial, and many of those states require the agreement of both parties.

Critics of the polygraph test claim that even in states where the test can't be used as evidence, law enforcers often use it as a tool to ​bully suspects into giving confessions that then can be admitted.

“It does tend to make people frightened, and it does make people confess, even though it cannot detect a lie,” Geoff Bunn, a psychology professor at Manchester Metropolitan University, told The Daily Beast.

But despite criticism—and despite an entire ​industry of former investigators offering to teach individuals how to beat the test—the polygraph is still used ​widely in the United States, mostly in the process of job applications and security checks.