
Edsger Dijkstra, Computer Scientist


In our Retrobituaries series, we highlight interesting people who are no longer with us. Today let's explore the life of Edsger Dijkstra, who died at 72 in 2002. 

If you’ve used a computer or smartphone in the last few decades, you’ve come into contact with the work of Edsger Dijkstra. Since his death in 2002, his research in the field of computer science has in many ways only grown more important. Here are a few things you didn’t know about his life and his science.

If you took his computer science class, you probably didn’t touch a computer.

Professor Dijkstra once said, “Computer science is no more about computers than astronomy is about telescopes,” and he taught his courses accordingly. He was a proponent of elegance in mathematical proofs, whereby puzzles are solved with efficiency and aesthetic sensitivity.

Grades were determined by the final exam, which was neither written on a piece of paper nor typed on a computer. Rather, students were given individual oral examinations in his office or at his home. The conversational exams lasted hours at a time, and students were asked how they might prove various mathematical propositions. They were then challenged to write out their proofs on a chalkboard. After the exam, students were offered a beer if they were of age, or a cup of tea if they were not.

He didn’t use email. Or a word processor.

Dijkstra was famous for his general rejection of personal computers. Instead of typing papers out using a word processor, he wrote everything out in longhand. He wrote well over a thousand essays of significant length this way, and for most of his academic career, they circulated by ditto machine and fax. Each essay was given a number and prefixed with his initials, EWD.

Students who emailed Dijkstra were asked to include a physical mailing address in the letter. His secretary would print the message, and he would respond by hand.

Computers weren’t the only technology he shunned. He refused to use overhead projectors, calling them “the poison of the educational process.”


Use Google Maps? You can thank Dijkstra.

Among his profound contributions to computer science is a solution to the “single-source shortest-path problem.” The solution, generally known as Dijkstra’s algorithm, finds the shortest path from a starting node to every other node in a weighted graph. The upshot is that if you’ve ever used Google Maps, you’re using a derivation of Dijkstra’s algorithm. Similarly, the algorithm is used for routing in communications networks and for airline flight plans.
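In modern terms, the algorithm repeatedly expands the nearest not-yet-settled node, usually with the help of a priority queue. Here is a minimal Python sketch; the toy road network and its travel times are invented for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, weight) pairs,
    with non-negative weights (a requirement of the algorithm).
    """
    dist = {source: 0}
    pq = [(0, source)]  # min-heap of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter route
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

# A toy road network: edge weights are travel times in minutes.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 7}
```

Note that the fastest route from A to B goes through C (2 + 1 = 3 minutes), which is exactly the kind of detour a naive "take the direct edge" approach would miss.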

He “owned” a nonexistent company.

In many of his more humorous essays, he described a fictional company of which he served as chairman. The company was called Mathematics, Inc., and sold mathematical theorems and their maintenance. Among the company’s greatest triumphs was proving the Riemann hypothesis (which it renamed the Mathematics, Inc. Theorem), after which it unsuccessfully attempted to collect royalties on all real-world uses of the conjecture. The proof itself was never shown, of course, because it was a trade secret. Mathematics, Inc. claimed to have a global market share of 75 percent.

He was the first programmer in the Netherlands.

In the 1950s, his father suggested that he attend a Cambridge course on programming the Electronic Delay Storage Automatic Calculator, or EDSAC. Dijkstra did, believing that theoretical physics (which he was studying at the time at Leiden University) might one day rely upon computers. The following year, he was offered a job at the Mathematisch Centrum in Amsterdam, making him the first person in the Netherlands to be employed as something called a “programmer.” (“A programmer?” he recalled of the moment he was offered the position. “But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline?” He was then challenged by his eventual employer to make it a respectable discipline.)

This would later cause problems. On his marriage application in 1957, he was required to list his profession. Officials rejected his answer—”Programmer”—stating that there was no such job.

Previously on Retrobituaries: Albert Ellis, Pioneering Psychologist.

Essential Science
What Is Infinity?

Albert Einstein is often quoted as saying: “Two things are infinite: the universe and human stupidity. And I'm not sure about the universe.”

The notion of infinity has been pondered by the greatest minds over the ages, from Aristotle to German mathematician Georg Cantor. To most people today, it is something that is never-ending or has no limit. But if you really start to think about what that means, it might blow your mind. Is infinity just an abstract concept? Or can it exist in the real world?


Infinity is firmly rooted in mathematics. But according to Justin Moore, a math researcher at Cornell University in Ithaca, New York, even within the field there are slightly different uses of the word. “It's often referred to as a sort of virtual number at the end of the real number line,” he tells Mental Floss. “Or it can mean something too big to be counted by a whole number.”

There isn't just one type of infinity, either. Counting, for example, represents a type of infinity that is unbounded—what's known as a potential infinity. In theory, you can go on counting forever without ever reaching a largest number. Infinity can also be bounded, though. Take the infinity symbol: you can loop around it an unlimited number of times, but you must always follow its contour, or boundary.

All infinities may not be equal, either. At the end of the 19th century, Cantor controversially proved that the collection of all sets of counting numbers is bigger than the collection of counting numbers itself. Since the counting numbers are already infinite, this means that some infinities are larger than others. He also showed that some infinite sets, such as the real numbers, are uncountable: their members cannot be listed one by one the way the counting numbers can.
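Cantor's trick, the diagonal argument, can be illustrated on a finite snapshot: given any attempted list of 0/1 sequences, flip the i-th digit of the i-th sequence, and the result is guaranteed to differ from every sequence on the list. The particular listing below is invented for illustration:

```python
def diagonal_flip(rows):
    """Build a sequence that differs from the i-th row in position i."""
    return [1 - rows[i][i] for i in range(len(rows))]

# A finite snapshot of some attempted enumeration of 0/1 sequences.
listing = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
new_seq = diagonal_flip(listing)
print(new_seq)  # [1, 0, 1, 0], which differs from row i at position i
```

Since the same construction works no matter how long the list is, no list, even an infinite one, can contain every sequence. That is the sense in which such collections are "uncountable."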

"At the time, it was shocking—a real surprise," Oystein Linnebo, who researches philosophies of logic and mathematics at the University of Oslo, tells Mental Floss. "But over the course of a few decades, it got absorbed into mathematics."

Without infinity, many mathematical concepts would fall apart. The famous mathematical constant pi, for example, which is essential to many formulas involving the geometry of circles, spheres, and ellipses, is intrinsically linked to infinity. As an irrational number—a number that can't be expressed as a ratio of two whole numbers—it's made up of an endless, non-repeating string of decimals.

And if infinity didn't exist, it would mean that there is a biggest number. "That would be a complete no-no," says Linnebo. Any number can be used to find an even bigger number, so it just wouldn't work, he says.


In the real world, though, infinity has yet to be pinned down. Perhaps you've seen infinite reflections in a pair of parallel mirrors on opposite sides of a room. But that's an optical effect—the objects themselves are not infinite, of course. "It's highly controversial and dubious whether you have infinities in the real world," says Linnebo. "Infinity has never been measured."

Trying to measure infinity to prove it exists might in itself be a futile task. Measurement implies a finite quantity, so the result would be the absence of a concrete amount. "The reading would be off the scale, and that's all you would be able to tell," says Linnebo.

The hunt for infinity in the real world has often turned to the universe—the biggest real thing that we know of. Yet there is no proof as to whether it is infinite or just very large. Einstein proposed that the universe is finite but unbounded—some sort of cross between the two. He described it as a variation of a sphere that is impossible to imagine.

We tend to think of infinity as being large, but some mathematicians have tried to seek out the infinitely small. In theory, if you take a segment between two points on a line, you should be able to divide it in two over and over again indefinitely. (This is Zeno's paradox known as the dichotomy.) But if you try to apply the same logic to matter, you hit a roadblock. You can break down real-world objects into smaller and smaller pieces until you reach atoms and their elementary particles, such as electrons and the components of protons and neutrons. According to current knowledge, subatomic particles can't be broken down any further.
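The halving in the dichotomy paradox is easy to check numerically: after n halvings of a unit segment, the pieces add up to 1 - 1/2^n, which creeps toward 1 but never reaches it in finitely many steps. A short sketch (the choice of 10 steps is arbitrary):

```python
# Halving a unit segment repeatedly: the pieces' total length
# approaches 1 but never reaches it after any finite number of steps.
total, piece = 0.0, 1.0
for step in range(10):
    piece /= 2          # next piece is half the previous one
    total += piece      # after n steps, total == 1 - 1/2**n
print(total)  # 0.9990234375 after 10 halvings
```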


Black holes may be the closest we've come to detecting infinity in the real world. At the center of a black hole is a point called a singularity, which is thought to cram a huge mass into effectively zero volume. Physicists theorize that at this bizarre location, some of the singularity's properties become infinite, such as its density and the curvature of spacetime.

At the singularity, most of the laws of physics no longer work because these infinite quantities "break" many equations. Space and time, for example, are no longer two separate entities, and seem to merge.

According to Linnebo, though, black holes are far from being an example of a tangible infinity. "My impression is that the majority of physicists would say that is where our theory breaks down," he says. "When you get infinite curvature or density, you are beyond the area where the theory applies."

New theories may therefore be needed to describe this location, which seems to transcend what is possible in the physical world.

For now, infinity remains in the realm of the abstract. The human mind seems to have created the concept, yet can we even really picture what it looks like? Perhaps to truly envision it, our minds would need to be infinite as well.

Math Symbols Might Look Complicated, But They Were Invented to Make Life Easier

Numbers can be intimidating, especially for those of us who never quite mastered multiplication or tackled high-school trig. But the squiggly, straight, and angular symbols used in math have surprisingly basic origins.

For example, Robert Recorde, the 16th-century Welsh mathematician who invented the “equal” sign, simply grew tired of constantly writing out the words “equal to.” To save time (and perhaps ease his writer’s cramp), he drew two parallel horizontal line segments, which he considered a pictorial representation of equality. Meanwhile, plenty of other symbols used in math are just Greek or Latin letters (rather than some kind of secret code designed to torture students).

These symbols—and more—were all invented or adopted by academics who wanted to avoid redundancy or take a shortcut while tackling a math problem. Learn more about their history by watching TED-Ed’s video below.

