
6 Weird Theories on Early Human Intelligence

Through thousands of years of knowledge and learning, we’ve developed extremely advanced intelligence as a species, especially when compared to other animals. But what made us unique? What evolutionary paths did we take that others did not?

That, of course, is one of the million-dollar questions of early human development. There’s no concrete way for us to know for sure (at least until we build time machines), but we can make some educated guesses, and they get pretty weird...

1. It All Came From One Human

In evolution, there are two separate paths that changes can take. One is microevolution: small changes accumulating over a long time. The other is macroevolution: large, abrupt changes that transform a species.

To date, scientists have multiple theories on how the two interact, but one of the older ideas that’s starting to make a comeback is what's known as macromutation, aka the "hopeful monster": a genetic aberration so different from its relatives that it's essentially a whole new species. (Think of the mutants in X-Men.)

A neurobiologist from Oxford University, Colin Blakemore, believes this very thing happened to humans. Some ancestor, somewhere (he posits it might even be Mitochondrial Eve) was born with a severe genetic defect that made him or her way smarter than other early humans. It was a total accident that just happened to be highly beneficial from a survival perspective, and this person (who could presumably still mate with other humans) passed on this mutation to his or her offspring.

2. It’s Because of a DNA Glitch

Scientists going over the results of the Human Genome Project found that humans have something completely unique: a duplicated gene named SRGAP2. Don’t worry about the weird name; just know that it’s responsible for brain development. No other primate (or any other animal, for that matter) carries the duplicate, which could pretty much only have arisen as a copying “glitch” at some point in human history rather than through gradual evolution. Duplicate genes crop up all the time, but they’re almost always benign.

As a matter of fact, we have a few benign copies of SRGAP2 ourselves. They’re called SRGAP2B and SRGAP2D, and they’re just some of the random genetic junk that makes up a large portion of our DNA. SRGAP2C, however, is a fully functional (and enhanced) copy of SRGAP2.

That doesn't just mean we have double the brain development power, though, because SRGAP2C actually supersedes the original gene. When implanted in mice, SRGAP2C turns off the original gene and effectively supercharges their brains. If you think of it like computer software, SRGAP2C is brain development version 2.0, and it has to uninstall version 1.0 to work properly.

3. It’s an Accident Caused by Walking Upright

One of the unique things about humans is that our skulls aren't fused when we're born; a baby's skull doesn't fully solidify until around age two, because otherwise it would be far harder to push through the birth canal. No other primate has this, but no other primate needs it: they aren't bipedal, so their birth canals are wider and a rigid skull isn’t an issue for them.

Recently, scientists studying the well-preserved cranium of an Australopithecus child discovered that the genus, one of our first ancestors to walk bipedally, had larger brains than expected and also started out with the soft skulls that we have today. It was originally thought that we didn't develop non-fused skulls until much later in human development. 

Scientists had always assumed that we developed bipedal locomotion as a result of our intelligence, since it's more efficient. Now it looks like the exact opposite may be true: bipedalism came first, which forced a reconfiguration of the birth canal, which led to the evolution of soft skulls in babies, and that accidentally let us grow bigger brains, since the brain could now keep growing until age two.

4. Our Human Ancestors Used a Lot of Drugs

One highly controversial (and definitely strange) theory about early human brains comes from Terence McKenna, an American philosopher, ecologist, and drug advocate. In the early 1990s, McKenna developed a theory popularly referred to as the “Stoned Ape” theory.

According to McKenna, early man, upon leaving the jungle and moving into the grasslands of North Africa, saw mushrooms growing on cow dung (something they hadn’t seen in the jungle) and decided to give them a try. He points out that modern apes will frequently eat dung beetles, so it’s not completely unheard of for primates to eat things typically found on or around excrement.

McKenna believes that those mushrooms, ancestors of today's “magic” mushrooms, probably increased visual ability at low doses (much like modern mushrooms), making them biologically useful. Further, at moderate doses, those same mushrooms are sexual stimulants, also handy for a burgeoning species. Lastly, large doses would promote conscious thinking and possibly assist brain growth. Thus, it was evolutionarily beneficial for humans to consume these mushrooms.

Don’t get too excited, though. McKenna’s theory has never been taken seriously by scientists or heavily studied, so there’s currently no real evidence to support it.  

5. Meat and Fire Made Our Brains Grow

While it’s obvious that fire and meat-eating were a large part of everyday life for our ancestors, it appears likely that cooked meat also played a huge role in our brain development. Harvard University biological anthropologist Richard Wrangham has developed a theory that he thinks explains exactly how it worked.

Because brains like ours use up as much as 20 percent of our caloric intake, they require high-calorie foods to keep working. Since Twinkies weren’t around yet, cooked meat was the next best thing for early man. Cooking makes more of meat’s calories available, making it even better than raw meat, which we were probably already eating (judging from our appendixes).

Cooking also makes meat faster to eat and easier to digest. Our primate cousins, meanwhile, spent significantly more time eating fewer calories by consuming fruits and veggies. Those extra calories helped grow our brains.

But even an argument as straightforward as this one is contentious—science has yet to discover evidence that humans were capable of controlling fire at the time period specified by Wrangham’s theory.

6. Early Humans Were Schizophrenic

Back in the 1970s, psychologist Julian Jaynes was fascinated by the idea of consciousness and how it came to exist, and why human beings seem to have a much more advanced self-awareness than other animals.

The theory he developed in his 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, was, to put it mildly, controversial. Jaynes’ Bicameral Mind Theory (as it came to be known) claimed that ancient humans actually weren’t self-aware at all. Instead, man’s brain operated sort of like two separate organs. The left brain was responsible for everyday actions, while the right brain supplied memories and problem-solving derived from experience.

The only problem with this system, Jaynes thought, was that unlike in modern humans, there was no direct link between the two hemispheres, and thus no consciousness or reflection was available to our ancestors. Instead, the right half communicated with the left through a now-vestigial portion of the brain’s language center, which expressed itself as auditory hallucinations.

Jaynes believed that early humans may have treated these hallucinations as the voices of their ancestors or even the gods. He used two famous ancient books as examples: The Iliad and the Old Testament of the Bible. Both refer frequently to hearing voices (of the Muses and God, respectively), while their follow-ups, The Odyssey (which was probably not actually written by the same person as The Iliad) and the New Testament, reference far fewer instances of this. This led Jaynes to believe that the change in our brains must have occurred very recently in human history, probably a few centuries after we formed complex societies, when consciousness became more beneficial.

Jaynes didn’t just pull this theory out of thin air, either. His specialty as a psychologist was working with schizophrenic patients, and he based Bicameralism on the way a schizophrenic’s mind works. That aforementioned vestigial language center in the brain appears to be fully functional in people with schizophrenia. Most interesting of all, recent advances in neuroimaging seem to support Jaynes’ theory.

Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning

Jacques Mattheij made a small, but awesome, mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many, and was now the proud owner of two tons of LEGO bricks. (This is about 4400 pounds.) He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).
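
Those per-kilogram figures make the economics easy to sketch. A quick back-of-the-envelope in Python, assuming for illustration that the lot was bought near the unsorted bulk rate:

```python
# Rough value-of-sorting arithmetic using the figures quoted above.
# The purchase price is an assumption for illustration only.
bulk_kg = 2000                        # two metric tons of bricks
unsorted_rate = 10                    # EUR/kg, unsorted bulk
set_rate = 40                         # EUR/kg, sorted into sets

cost = bulk_kg * unsorted_rate        # ~20,000 EUR in
sorted_value = bulk_kg * set_rate     # ~80,000 EUR out, if fully sorted
print(f"cost ~{cost:,} EUR; sorted value ~{sorted_value:,} EUR")
```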

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex contraption of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.
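
Mattheij hasn't released this software (he plans to open source it down the line; see below), but the shape of that camera-and-classifier loop is easy to sketch. Here's a minimal, purely illustrative version in Python, assuming a pretrained PyTorch classifier; the model file, label list, confidence threshold, and fire_air_jet actuator are all hypothetical stand-ins:

```python
# Purely illustrative sketch of a belt-camera classification loop.
# Model file, labels, threshold, and actuator are stand-ins; this is
# not Mattheij's actual code.
import cv2
import torch
import torchvision.transforms as T

model = torch.jit.load("brick_classifier.pt")   # hypothetical trained model
model.eval()
labels = [line.strip() for line in open("brick_labels.txt")]

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def classify_frame(frame):
    """Return (label, confidence) for one BGR frame from the belt camera."""
    x = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return labels[idx.item()], conf.item()

def fire_air_jet(label):
    """Stand-in for the compressed-air hardware that pops a brick off."""
    print(f"pop -> {label}")

cap = cv2.VideoCapture(0)                       # the belt camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    label, conf = classify_frame(frame)
    if conf > 0.9:                              # act only on confident matches
        fire_air_jet(label)
```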

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
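
That correct-the-guess workflow is simple to express, too. Another illustrative sketch (it reuses the hypothetical classify_frame from above and invents a training_data/ directory layout):

```python
# Illustrative human-in-the-loop data collection -- not the real code.
import os
import uuid
import cv2

def collect_training_data(cap, classify_frame):
    """Let the machine photograph and guess; let a human keep or fix it."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        guess, _ = classify_frame(frame)
        answer = input(f"guess is {guess!r}; correct label? [enter=keep] ")
        label = answer.strip() or guess
        out_dir = os.path.join("training_data", label)
        os.makedirs(out_dir, exist_ok=True)
        cv2.imwrite(os.path.join(out_dir, f"{uuid.uuid4().hex}.png"), frame)
        # Retraining the network on training_data/ every so often feeds
        # the corrections back in, so guesses keep improving.
```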

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.
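
That two-pass sort is just nested binning: first bin everything by shape, then run each shape bin through again in color mode. In miniature, with made-up categories:

```python
# Miniature model of the two-pass sort (categories are made up).
from collections import defaultdict

def bin_by(bricks, key):
    bins = defaultdict(list)
    for brick in bricks:
        bins[key(brick)].append(brick)
    return bins

bricks = [("2x4 brick", "red"), ("2x4 brick", "blue"), ("1x2 plate", "red")]
by_shape = bin_by(bricks, key=lambda b: b[0])        # pass 1: 50+ shape bins
fully_sorted = {
    shape: bin_by(batch, key=lambda b: b[1])         # pass 2: 12 color groups
    for shape, batch in by_shape.items()
}
print(fully_sorted["2x4 brick"]["red"])  # [('2x4 brick', 'red')]
```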

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

Why Your iPhone Doesn't Always Show You the 'Decline Call' Button

When you get an incoming call to your iPhone, the options that light up your screen aren't always the same. Sometimes you have the option to decline a call, and sometimes you only see a slider that allows you to answer, without an option to send the caller straight to voicemail. Why the difference?

A while back, Business Insider tracked down the answer to this conundrum of modern communication, and it turns out to be fairly simple.

If you get a call while your phone is locked, you’ll see the "slide to answer" button. In order to decline the call, you have to double-tap the power button on the top of the phone.

If your phone is unlocked, however, the screen that appears during an incoming call is different: you’ll see two buttons, "accept" and "decline."

Either way, you get the options to set a reminder to call that person back or to immediately send them a text message. ("Dad, stop calling me at work, it’s 9 a.m.!")
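
In other words, the whole behavior reduces to a single conditional on the lock state. Here's a toy model in Python (a restatement of the behavior described above, not Apple's actual code):

```python
# Toy model of the observed behavior -- not Apple's implementation.
def incoming_call_options(phone_locked: bool) -> list[str]:
    common = ["remind me", "message"]   # offered in both states
    if phone_locked:
        # Locked: answering is a slider; declining means double-tapping
        # the power button, so no on-screen decline button is shown.
        return ["slide to answer"] + common
    return ["accept", "decline"] + common
```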

[h/t Business Insider]
