How Math and Lasers Can Make You a Better Golfer

A few years ago, number crunchers and stat geeks infiltrated the MLB front offices and changed the way America’s pastime is played. Now they’re doing it to Scotland’s.

In 1999, the PGA launched the stat-gathering, laser-wielding ShotLink system. Now volunteers strategically surround each hole at each tournament, surveying the landscape with camera-like lasers. Those lasers track and measure every shot with insane accuracy—the margin of error is just a few centimeters.

They’ve sparked golf’s own “Moneyball” era. New information is flooding the scene, and TV broadcasters don’t have to fill airtime with stuffy stories about their grandchildren as they wait for measurements anymore. Cameramen don’t have to guess where the best spots to film are. And number junkies don’t have to settle for the same stale stats that have been staples of the game for decades—they can cook up new ones.

Nearly 600 of them, actually.

The flurry of information is changing how the pros play. Players can track every nuance of their game with unseen specificity: If Tiger wants to know how he does in bunkers 30 feet from the hole, there’s a stat for that. If Lefty wants to evaluate his approach shots, there’s a stat for that.

Players can pinpoint problems in their game like never before, giving them a better idea of what to practice—and what kind of shots to avoid. A player who once had a fuzzy idea of what caused his putting woes can finally say, “Ah! Those 16-footers going downhill are to blame.” And then he can attack the problem.

No wonder number crunchers are becoming as common as caddies. Duffers like Luke Donald and Stuart Appleby are becoming vocal data junkies, and by hiring a few number nerds, they’re increasing their chances of earning more dough.

Here’s how. Two professors at Penn found that players were more likely to miss a birdie putt than a putt for par—even if the distance was the same. Why? Because of a phenomenon called “loss aversion.” People prefer avoiding losses to making gains, and golfers are no different. They’re more hesitant when shooting for birdie—a habit that increases their chance of leaving a shot short. According to Sean Martin at Golfweek, “If a top-20 player in 2008 was able to overcome this bias, he could increase his earnings by more than $1 million.”

Although ShotLink has made players more aware of their mental roadblocks, the game hasn’t gotten easier. More golf course designers are using ShotLink data to make the pin harder to find. By studying shot patterns, designers can manipulate the tee box, mowing patterns, and hazards to give pros more gray hairs.

A Brief History of Deep Blue, IBM's Chess Computer

On July 29, 1997, IBM researchers were awarded a $100,000 prize that had gone unclaimed for 17 years. It was the Fredkin Prize, created by Carnegie Mellon University (CMU) professor Edward Fredkin in 1980. An artificial intelligence pioneer, Fredkin challenged fellow computer scientists to create a computer that could beat the best human chess player in the world. That's exactly what Deep Blue did in May 1997.

It was an extremely long road to victory. After Fredkin's initial challenge in 1980, a team from Bell Labs created a chess computer in 1981 that beat a chess master. In 1985, Feng-hsiung Hsu created ChipTest, a chess computer that set the stage for later efforts.

By 1988, a CMU team including Hsu created a system that beat an international master. That one was called "Deep Thought," named for the computer in The Hitchhiker's Guide to the Galaxy, a fictional machine that spent 7.5 million years calculating "the Answer to The Ultimate Question of Life, the Universe, and Everything." (That answer, of course, was 42.)

Deep Thought underwent additional development at IBM, and in 1989 it went head-to-head with Garry Kasparov, who is widely considered the best chess player of all time. Kasparov destroyed the machine in a two-game match.

Deep Thought eventually led to Deep Blue, an IBM project led by Hsu, along with his former Deep Thought collaborator Murray Campbell, among others.

The computer science problem of chess is deep. First the machine needs to understand the state of the board—that's relatively easy—but then it needs to predict future moves. Given that the 32 pieces on the board are capable of moving to a variety of other positions, the "possibility space" for the next move (and all subsequent moves) is very large.
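
To get a feel for that scale, Claude Shannon's classic estimate puts the average chess position at roughly 35 legal moves, so the number of lines to examine multiplies by about 35 with every half-move of lookahead. A quick back-of-the-envelope sketch (the specific depths shown are arbitrary choices for illustration):

```python
# Shannon's classic estimate: ~35 legal moves in an average chess position.
BRANCHING = 35

for depth in (2, 4, 6, 8):
    # Positions an exhaustive search `depth` half-moves ahead must examine.
    print(f"depth {depth}: {BRANCHING ** depth:,} positions")
```

Even eight half-moves deep, that is already over two trillion positions, which is why brute force alone was never going to be enough.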

In theory, a sufficiently beefy computer could simulate every possible move (and counter-move) in its memory, rank which moves end up doing best in each simulated game, and then perform the optimal move on each turn. But to actually implement a computer that powerful—and fast enough to compete in a time-limited tournament—was a matter of extreme effort. It took Hsu more than a decade to master it.
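
The simulate-and-rank idea described above is the classic minimax algorithm: search the game tree, score the end positions, and assume each side picks its best reply. A minimal sketch using the toy game of Nim (each player removes 1 to 3 stones; whoever takes the last stone wins) rather than chess — the game, function names, and scoring here are illustrative assumptions, not Deep Blue's actual code, which also relied on heavy pruning and custom hardware:

```python
def legal_moves(stones):
    # From a pile of `stones`, a player may remove 1, 2, or 3 stones.
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    # Score from the maximizing player's view: +1 for a win, -1 for a loss.
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Pick the move whose resulting position scores best for us.
    return max(legal_moves(stones),
               key=lambda m: minimax(stones - m, maximizing=False))

if __name__ == "__main__":
    # With 7 stones, taking 3 leaves 4, a losing position for the opponent.
    print(best_move(7))
```

Real chess engines cannot search to the end of the game, so they cut the search off at a fixed depth and score the frontier positions with an evaluation function, plus tricks like alpha-beta pruning to discard branches that cannot change the result.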

Six men pose with a chess board and timer. On one side of the board, a sign reads Garry Kasparov. On the other side, a computer keyboard and monitor represent Deep Blue.
The IBM Deep Blue chess computer team poses in May 1997. From left: Chung-Jen Tan (team manager), Gerry Brody, Joel Benjamin, Murray Campbell, Joseph Hoane and Feng-hsiung Hsu (seated).
Stan Honda // AFP // Getty Images

On February 10, 1996 in Philadelphia, Deep Blue went head-to-head with Kasparov, and Kasparov beat the computer handily. Though Deep Blue scored one winning game and two draws, it lost three games to Kasparov outright. That single win made history as the first tournament game a computer had ever taken from a reigning world champion, but claiming the Fredkin Prize required winning the whole match.

By this point, Kasparov was used to destroying chess computers, and the media lapped it up—this was a man-versus-machine story for the ages. By May 1997, IBM had heavily upgraded Deep Blue (some called it "Deeper Blue") with vastly improved computing resources, preparing for a rematch. When that rematch came, Kasparov would face a worthy opponent.

On May 11, 1997 in New York City, the upgraded Deep Blue entered the match with a large, excited audience. Kasparov won the first game, but Deep Blue took the second, tying the players. Then came three games that ended in draws. In the sixth game, Kasparov made a mistake in the opening. Deep Blue won that sixth game quickly, winning the match, much to the astonishment of the crowd. Kasparov asked for a rematch. The Deep Blue team declined.

Kasparov claimed to have perceived a human hand in Deep Blue's play. Kasparov wondered whether a human chess player was somehow feeding the computer moves, much like the infamous Mechanical Turk of yore. Various conspiracy theories flourished, but came to nothing.

When the Fredkin Prize was awarded to Hsu, Campbell, and IBM researcher A. Joseph Hoane Jr., Fredkin told reporters, "There has never been any doubt in my mind that a computer would ultimately beat a reigning world chess champion. The question has always been when." Hsu told The New York Times, "Some people are apprehensive about what the future can bring. But it's important to remember that a computer is a tool. The fact that a computer won is not a bad thing."

What Tennis Shoes Looked Like in the Early 1900s

Mental_floss co-founder Mangesh Hattikudur is at the US Open today. Between matches, he'll be serving up some tennis history and random knowledge.

In the pre-Swoosh era, the best shoes for lawn tennis had giant treads and looked like they could be worn to church.

Follow ibmsports on Instagram for scenes from the U.S. Open.

