Why Are Bots Unable to Check "I Am Not a Robot" Checkboxes?

Oliver Emberton:

How complicated can one little checkbox be? You can't even imagine!

For starters, Google invented an entire virtual machine—essentially a simulated computer inside a computer—just to run that checkbox.
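To make "a simulated computer inside a computer" concrete, here is a minimal toy bytecode interpreter in TypeScript. It is not Google's virtual machine (that design is proprietary and far more elaborate); it only shows the idea of one program executing other programs that are shipped to it as data:

```typescript
// A toy virtual machine: a tiny stack-based interpreter.
// Illustrative only; Google's reCAPTCHA VM is proprietary.
type Instruction =
  | { op: "push"; value: number }
  | { op: "add" }
  | { op: "xor" }
  | { op: "print" };

function run(program: Instruction[]): void {
  const stack: number[] = [];
  for (const ins of program) {
    switch (ins.op) {
      case "push":
        stack.push(ins.value);
        break;
      case "add":
        stack.push(stack.pop()! + stack.pop()!);
        break;
      case "xor":
        stack.push(stack.pop()! ^ stack.pop()!);
        break;
      case "print":
        console.log(stack[stack.length - 1]);
        break;
    }
  }
}

// The "program" is just data, which is what makes it easy to encrypt and mutate.
run([{ op: "push", value: 2 }, { op: "push", value: 3 }, { op: "add" }, { op: "print" }]); // 5
```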

That virtual machine uses Google's own language, which they then encrypt. Twice.

But this is no simple encryption. Normally, when you password-protect something, you might use a key to decode it. Google's invented language is decoded with a key that changes as the language is read, and the language itself also changes as it is read.
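Here is a toy TypeScript sketch of that idea: a cipher whose key mutates with every byte processed, so byte N cannot be decoded without first decoding everything before it. The mixing formula below is an arbitrary illustration, not Google's actual scheme:

```typescript
// Toy "self-modifying" cipher: the act of reading the data changes the key.
function encode(plain: Uint8Array, initialKey: number): Uint8Array {
  let key = initialKey;
  const cipher = new Uint8Array(plain.length);
  for (let i = 0; i < plain.length; i++) {
    cipher[i] = plain[i] ^ key;
    key = (key + plain[i] * 31 + i) & 0xff; // key evolves with each byte read
  }
  return cipher;
}

function decode(cipher: Uint8Array, initialKey: number): Uint8Array {
  let key = initialKey;
  const plain = new Uint8Array(cipher.length);
  for (let i = 0; i < cipher.length; i++) {
    plain[i] = cipher[i] ^ key;
    key = (key + plain[i] * 31 + i) & 0xff; // same evolution, driven by the decoded data
  }
  return plain;
}

const secret = new TextEncoder().encode("check the box");
console.log(new TextDecoder().decode(decode(encode(secret, 0x5a), 0x5a))); // "check the box"
```

Because the key schedule depends on the decoded data itself, you cannot jump into the middle of the stream; an analyst has to unwind it from the start, every time.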

Google combines (or hashes) that key with the web address you’re visiting, so you can’t use a CAPTCHA from one website to bypass another. It further combines that with “fingerprints” from your browser, catching microscopic variations in your computer that a bot would struggle to replicate (such as CSS rules).
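A minimal sketch of the domain-binding idea, assuming a plain SHA-256 hash over the token and the site's origin (the function name and the exact combination here are assumptions; Google's real construction is not public):

```typescript
// Hypothetical sketch: tie a CAPTCHA token to the site it was issued for.
async function bindToSite(token: string, origin: string): Promise<string> {
  const bytes = new TextEncoder().encode(`${token}|${origin}`);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join("");
}

// The same token hashes to different values on different sites, so a
// challenge solved on one site is useless on another:
// bindToSite("token-abc", "https://site-a.example").then(console.log);
// bindToSite("token-abc", "https://site-b.example").then(console.log);
```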

All of this is done just to make it hard for you to understand what Google is even doing. You have to write tools simply to analyze it. (Fortunately, people did just that.)

It turns out that these checkboxes record and analyze a lot of data, including:

- your computer's timezone and local time;
- your IP address and rough location;
- your screen size and resolution;
- the browser you're using;
- the plugins you're using;
- how long the page took to display;
- how many key presses, mouse clicks, and taps/scrolls were made;
- and ... some other stuff we don't quite understand.
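Most of these signals are readable through ordinary browser APIs. A hedged sketch of what that collection might look like (which signals reCAPTCHA actually reads, and how, is not public; the IP address is observed server-side rather than by a script):

```typescript
// Sketch of reading browser-visible signals with standard APIs.
function collectSignals() {
  return {
    timezoneOffsetMinutes: new Date().getTimezoneOffset(),
    localTime: new Date().toString(),
    screenSize: { width: screen.width, height: screen.height },
    colorDepth: screen.colorDepth,
    browser: navigator.userAgent,
    plugins: Array.from(navigator.plugins, (p) => p.name),
    pageLoadMs: performance.now(), // rough proxy for "how long the page took"
  };
}

// Interaction counts are gathered by listening for events as they happen:
let keyPresses = 0, clicks = 0, scrolls = 0;
document.addEventListener("keydown", () => keyPresses++);
document.addEventListener("click", () => clicks++);
document.addEventListener("scroll", () => scrolls++, { passive: true });
```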

We also know that these boxes ask your browser to draw an invisible image [PDF] and send it to Google for verification. The image contains things like a nonsense font, which (depending on your computer) will fall back to a system font and be drawn very differently. They then add to this a 3D image with a special texture, which is drawn in such a way that the result varies between computers.
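This technique is generally known as canvas fingerprinting. A sketch under illustrative assumptions (the font name and the text drawn are invented for the example):

```typescript
// Sketch of canvas fingerprinting: render text in a font that doesn't exist,
// forcing a fallback to a system font, so the pixels differ between machines;
// hashing the result yields a stable per-machine identifier.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas"); // never attached to the page: invisible
  canvas.width = 300;
  canvas.height = 60;
  const ctx = canvas.getContext("2d")!;
  ctx.font = "16pt 'no-such-font-123', sans-serif"; // nonsense font name
  ctx.fillText("Cwm fjordbank glyphs vext quiz", 10, 40);
  const bytes = new TextEncoder().encode(canvas.toDataURL()); // serialized pixels
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join("");
}
// The 3D variant works the same way, but renders a textured scene with WebGL,
// whose output varies with the GPU and graphics drivers.
```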

Finally, these seemingly simple little checkboxes combine all of this data with their knowledge of the person using the computer. Almost everyone on the Internet uses something owned by Google—search, mail, ads, maps—and as you know, Google Tracks All Of Your Things™️. When you click that checkbox, Google reviews your browser history to see if it looks convincingly human.

This is easy for them, because they’re constantly observing the behavior of billions of real people.

How exactly they check all this information is impossible to know, but they’re almost certainly using machine learning (or AI) on their private servers, which is impossible for an outsider to replicate. I wouldn’t be surprised if they also built an adversarial AI to try to beat their own AI, and have both learn from each other.

So why is all this hard for a bot to beat? Because now you've got a ridiculous number of messy human behaviors to simulate, and they're almost unknowable, and they keep changing, and you can't tell when. Your bot might have to sign up for a Google service and use it convincingly on a single computer, which should look different from the computers of other bots in ways you don't understand. It might need convincing delays and stumbles between key presses, scrolls, and mouse movements. All of this is incredibly difficult to reverse engineer and teach to a computer, and the complexity comes at a financial cost for the spammer. They might break the protection for a while, but if it costs them (say) $1 per successful attempt, it's usually not worth the bother.
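To see why "convincing delays" alone are nontrivial, consider just the timing between key presses. Real intervals are skewed rather than uniform; a log-normal distribution is a common model. A sketch with made-up parameters:

```typescript
// Sample a human-like inter-keystroke delay from a log-normal distribution.
// The median and spread below are illustrative, not measured values.
function humanlikeDelayMs(medianMs = 120, sigma = 0.5): number {
  // Box-Muller transform: two uniform samples become one standard Gaussian sample.
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  const gaussian = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return Math.exp(Math.log(medianMs) + sigma * gaussian); // log-normal sample
}
```

And that is one signal out of dozens, all of which have to look right at the same time, on every attempt.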

Still, people do break Google’s protection [PDF]. CAPTCHAs are an ongoing arms race that neither side will ever win. The AI technology that makes Google’s approach so hard to fool is the same technology that is adapted to fool it.

Just wait until that AI is convincing enough to fool you.

Sweet dreams, human.

This post originally appeared on Quora.