Amazon Just Spent $1 Billion on Ring, a Smart Doorbell Shark Tank Rejected

Frederick M. Brown, Getty Images

The investors on Shark Tank have backed a number of major products since the show premiered in the U.S. in 2009, but for every hit they've spotted, there have been plenty of missed opportunities. The biggest one that got away was a smart doorbell called Doorbot. After the idea was passed up by five sharks in a row, its valuation climbed higher than that of any other product to appear on the show. Now, Quartz reports that Amazon has bought the business, since renamed Ring, for over $1 billion.

Entrepreneur Jamie Siminoff pitched his concept to the sharks in November 2013. He framed the product as a "caller I.D. for your front door," explaining that every time someone rang the doorbell, the homeowner would receive a video feed on their smartphone showing the person standing outside.

Siminoff asked for a $700,000 investment for a 10 percent stake, an offer each shark ultimately declined. "This company, instead of being worth $7 million, can be worth $80 million, $90 million. I just don't see the progression," Mark Cuban said on the show.

But Siminoff didn't let the rejection slow him down. He kept building the company, and within four years it was valued at $1 billion and had attracted big-name investors like Virgin's Richard Branson. Now, Amazon has seen the product's potential, not just as a caller I.D. for the home, as Siminoff originally put it, but as a home security tool. Ring alerts users every time someone walks up to their door, whether they ring the bell or not, and users can speak to intruders through their phones to scare them away.

The smart doorbell is now available to purchase on Amazon.

[h/t Quartz]

How Polygraphs Work—And Why They Aren't Admissible in Most Courts

iStock/Sproetniek

The truth about lie detectors is that we all really want them to work. It would be much easier if, when police were faced with two contradictory versions of a single event, there was a machine that could identify which party was telling the truth. That’s what the innovators behind the modern-day polygraph set out to do—but the scientific community has its doubts about the polygraph, and all over the world, it remains controversial. Even its inventor was worried about calling it a "lie detector."

AN OFF-DUTY INVENTION

In 1921, John Larson was working as a part-time cop in Berkeley, California. A budding criminologist with a Ph.D. in physiology, Larson wanted to make police investigations more scientific and less reliant on gut instinct and information obtained from "third degree" interrogations.

Building on the work of William Moulton Marston, Larson believed that the act of deception was accompanied by physical tells. Lying, he thought, makes people nervous, and that nervousness could be identified through changes in breathing and blood pressure. Measuring those changes in real time might serve as a reliable proxy for spotting lies.

Improving upon previously developed technologies, Larson created a device that simultaneously recorded changes in breathing patterns, blood pressure, and pulse. The device was further refined by his younger colleague, Leonarde Keeler, who made it faster, more reliable, and portable and added a perspiration test.

Within a few months, a local newspaper convinced Larson to publicly test his invention on a man suspected of killing a priest. Larson's machine, which he called a cardio-pneumo psychogram, indicated the suspect’s guilt; the press dubbed the invention a lie detector.

Despite the plaudits, Larson would become skeptical about his machine’s ability to reliably detect deception, especially in regard to Keeler’s methods, which amounted to “a psychological third-degree.” He was concerned that the polygraph had never matured into anything beyond a glorified stress detector, and believed that American society had put too much faith in his device. Toward the end of his life, he would refer to it as “a Frankenstein’s monster, which I have spent over 40 years in combating.”

But Keeler, who patented the machine, was much more committed to the lie-detection project, and was eager to see the machine implemented widely to fight crime. In 1935, results of Keeler’s polygraph test were admitted for the first time as evidence in a jury trial—and secured a conviction.

HOW IT WORKS

In its current form, the polygraph test measures changes in respiration, perspiration, and heart rate. Sensors are strapped to the subject's fingers, arm, and chest to record reactions in real time during interrogation. A spike in any of these measurements indicates nervousness, and potentially points to lying.

To try to eliminate false positives, the test relies on "control questions."

In a murder investigation, for instance, a suspect may be asked relevant questions such as, "Did you know the victim?" or "Did you see her on the night of the murder?" But the suspect will also be asked broad, stress-inducing control questions about general wrongdoing: "Did you ever take something that doesn't belong to you?" or "Did you ever lie to a friend?" The control questions are deliberately vague, designed to make every innocent subject at least a little anxious (who hasn't ever lied to a friend?). A guilty subject, meanwhile, is likely to be more worried about answering the relevant questions.

That difference in response is what the polygraph test actually measures. According to the American Psychological Association, “A pattern of greater physiological response to relevant questions than to control questions leads to a diagnosis of ‘deception.’” The APA is quick to add, however, that "Most psychologists agree that there is little evidence that polygraph tests can accurately detect lies."
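To make that comparison concrete, here is a minimal, purely illustrative sketch of the scoring idea in Python. It is not a real examiner's protocol: the score_chart function, the readings, and the 10 percent margin are all hypothetical, invented only to show the relevant-versus-control logic in code form.

```python
# Toy illustration of control-question scoring. NOT a real polygraph protocol:
# the readings, the 10 percent margin, and the labels below are hypothetical.
from statistics import mean

def score_chart(relevant_responses, control_responses, margin=0.1):
    """Compare average physiological response to relevant vs. control questions."""
    relevant_avg = mean(relevant_responses)
    control_avg = mean(control_responses)

    # A noticeably stronger reaction to relevant questions is read as deceptive,
    # a stronger reaction to control questions as truthful; anything close is unclear.
    if relevant_avg > control_avg * (1 + margin):
        return "deception indicated"
    if control_avg > relevant_avg * (1 + margin):
        return "no deception indicated"
    return "inconclusive"

# Hypothetical normalized readings (e.g., combined blood-pressure/perspiration spikes)
relevant = [0.82, 0.91, 0.88]   # "Did you see her on the night of the murder?"
control = [0.60, 0.74, 0.65]    # "Did you ever lie to a friend?"

print(score_chart(relevant, control))  # -> deception indicated
```

Even in this toy form, the limitation is visible: the function can only report that the relevant questions produced a stronger reaction than the control questions, not why.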

But a diagnosis of deception doesn’t necessarily mean that someone has actually lied. A polygraph test doesn’t detect deception directly; it only registers stress, which is why Larson fought so hard against it being categorized as a "lie detector." Testers have a variety of ways to infer deception (such as the control questions described above), but, according to the American Psychological Association, the inference process is “structured, but unstandardized” and should not be referred to as “lie detection.”

And so the validity of the results remains a subject of debate. Depending on whom you ask, the reliability of the test ranges from near-certainty to a coin toss. The American Polygraph Association claims the test has an almost 90 percent accuracy rate, but many psychologists, and even some police officers, contend that the test is biased toward finding liars and has a 50 percent chance of returning a false positive for honest people.

NOT QUITE THE SAME AS FINGERPRINTS

Most countries have traditionally been skeptical of the polygraph, and only a handful have incorporated it into their legal systems. The test remains most popular in the United States, where many police departments rely on it to extract confessions from suspects. (In 1978, former CIA director Richard Helms argued that this is because "Americans are not very good at" lying.)

Over the years, U.S. courts have issued numerous rulings on whether polygraph tests should be admitted as evidence in criminal trials. Even before Larson’s machine caught on, courts treated lie-detection tests with suspicion: in a 1922 case, a judge prohibited the results of a pre-polygraph lie detector from being presented at trial, worrying that the test, despite its unreliability, could hold unwarranted sway over a jury’s opinion.

Then, after his polygraph results secured a conviction in a 1935 murder trial (through prior agreement between the defense and prosecution), Keeler—Larson’s protégé—asserted that “the findings of the lie detector are as acceptable in court as fingerprint testimony.”

But numerous court rulings have ensured that this won’t be the case. Though polygraph technology has continued to improve and the questioning process has become more systematic and standardized, scientists and legal experts remain divided on the device's efficacy.

A 1998 Supreme Court ruling concluded that as long as that disagreement persists, the risk of false positives is too high. The polygraph, the court found, enjoys a scientific “aura of infallibility” despite the fact that “there is simply no consensus that polygraph evidence is reliable,” and it ruled that passing the test cannot be seen as proof of innocence. Accordingly, taking the test must remain voluntary, and its results must never be presented as conclusive.

Most importantly: The court left it up to the states to decide whether the test can be presented in court at all. Today, 23 states allow polygraph tests to be admitted as evidence in a trial, and many of those states require the agreement of both parties.

Critics of the polygraph claim that even in states where the test can't be used as evidence, law enforcement officers often use it as a tool to bully suspects into giving confessions, which can then be admitted.

“It does tend to make people frightened, and it does make people confess, even though it cannot detect a lie,” Geoff Bunn, a psychology professor at Manchester Metropolitan University, told The Daily Beast.

But despite the criticism, and despite an entire industry of former investigators offering to teach people how to beat the test, the polygraph is still widely used in the United States, mostly in job screenings and security checks.

8 Emojis That Caused a Public Backlash

iStock.com/Rawpixel

With technology improving daily and the potential to colonize Mars or cure diseases looking more promising, it’s surprising we still can’t cobble together a decent bagel emoji. Earlier this month, Apple drew blowback from carb lovers for its rendering of the popular baked good as part of its iOS 12.1 beta 2 rollout. The bagel was too smoothly rendered, critics charged, and lacked cream cheese.

Apple has since fixed the bagel for its beta 4 release, but this is hardly the first time a company has been criticized for a poorly designed emoji. Here’s what else got the thumbs down from users.

1. BURGER

Everyone loves a good burger. Virtually no one enjoys a burger with the cheese located below the patty. This gastronomic offense was committed by Google during its Android Oreo 8.0 release in 2017 and fixed in 8.1.

2. BEER

In that same 8.0 update, Google took a curious approach to a glass of beer, placing a layer of froth on top of a glass that was only half full.

3. PAELLA

Apple added this shallow-pan dish to iOS 10.2 in 2016 and immediately drew fire for using unconventional ingredients like shrimp, peas, and something resembling slugs. The revised version replaced them with chicken, lima beans, and green beans.

4. LOBSTER

The Unicode Consortium, the nonprofit that approves new emojis and leaves the final designs to individual tech companies, got people boiling mad in early 2018 when its rendering of a lobster was missing a pair of legs and sported a misshapen tail. (Strangely, the logo for seafood dining establishment Red Lobster makes a similar mistake: its lobster has only eight legs instead of 10.)

5. SALAD

Salads are often populated with a hard-boiled egg for a little protein, so it’s understandable Google opted to include one in its salad emoji for Android P earlier this year. But vegans took issue with the egg, prompting Google to revise the bowl of greens so it contained just lettuce and tomatoes.

6. FEELING FAT

Facebook didn’t get too many “Likes” from users in 2015, when it introduced an emoji that depicted a bulbous face to signal someone was “feeling fat.” Body-positive activists argued it could constitute body-shaming. The site switched the description to “feeling stuffed.”

7. SKATEBOARD

Skateboard enthusiasts were happy when Unicode introduced a four-wheeled emoji in 2018. They were not happy that the board looked like a ’70s relic, with divided grip tape and an overly curved body. Skateboarding legend Tony Hawk helped Unicode refine the design into something more palatable to skaters.

8. PEACH BUTT

Owing to the relative simplicity of their designs, emojis can often take on alternative meanings. The best example may be the peach, which in iOS resembles a plump little butt, complete with a crack. Apple foolishly tried fixing this in 2016, rounding off the edges to look more like the fruit. Users complained, and Apple backed off. Emojipedia ran the data and discovered the emoji was most frequently used in tweets containing the words “ass,” “badgirl,” and “booty.”
