How Scientists Built a Shark-Following Robot for Shark Week

Discovery Channel

For all we know about sharks, there's still a lot we don't know about these animals that both fascinate and terrify us. Traditional tracking methods like satellite and acoustic tags have shed some light on shark behavior, but even they have their limitations.

That's where Shark Cam, an autonomous underwater vehicle, comes in. "A few years [ago], I was working with a scientist who loved the idea of trying to find out what some of these fish that we track do when we can’t follow them because they’re out of reach or they go deep or we disturb them when we get in the water," says marine biologist Greg Skomal. "We thought it’d be really interesting to develop some kind of robot that could track marine animals, specifically sharks. One of the principals at Big Wave Productions [which produces shows for Shark Week] was super excited about the concept and propelled it upwards to Discovery, and they loved it. So with their support, we were able to actually make this come to fruition."

The autonomous underwater vehicle (AUV) was developed by Skomal and scientists at the Oceanographic Systems Laboratory at Woods Hole Oceanographic Institution. It was deployed from a boat off Chatham, Massachusetts, last year, where it followed great white sharks as they swam along the coast. Shark Cam makes its debut in the Shark Week special "Return of Jaws" tonight at 9 p.m. EST on the Discovery Channel; we talked to Skomal about developing the robot and what it revealed that traditional tracking methods did not.

How long did it take to build and deploy Shark Cam?

We started the project in 2011, and were able to do some field trials in late 2011, and we had a pretty functional vehicle by the summer of 2012. So about a year of solid development. Most of that was software modifications by the engineers who run these robotic underwater vehicles.

When you’re building something like this, are you working from an existing platform or are you starting from scratch?

The Oceanographic Systems Laboratory at Woods Hole Oceanographic Institution has an existing group of vehicles that are autonomous—they’re completely untethered to the boat, and they can be programmed to do a variety of missions. So really, all we had to do was modify the software of one of their existing vehicles in order to get it to follow a live shark.

It sounds simple, but it wasn’t. It was a partnership—[between the engineers and] me, having tracked fish for years, trying to give them a sense of what we anticipate the behavior of the shark to be, so that the vehicle can adjust to it. It’s one thing to have a vehicle go in a straight line, or even mow a lawn—back and forth, back and forth—but to have it adjust to the behavior of a live animal is a much more complex process.

What kind of behaviors would they be adjusting for?

Changes in three-dimensional movement. Up, down, sideways, back, forth—you name it. Very few live animals swim in a straight line at one depth. So it had to basically adapt to random movements in three-dimensional space.

What technology did you outfit the robot with?

There were four cameras on Shark Cam—it was specially designed to carry three of them, with one mounted on top. It's battery-powered, which limits its life, but that’s fine; we can expand on that. It is modular in the sense that we can add components to it that do various kinds of things that we did not do [on this mission], like collect oceanographic data. It communicates with a transponder that we put on the shark to follow it, navigate, and recreate the track of the animal.

We actually added a rear-facing camera, but because of the fine balance on the vehicle itself—it’s a torpedo and it has to be extremely hydrodynamic—throwing the extra camera on slowed it down. So that’s something that we have to develop in the next phase of this operation.


Robot with a view. Photo courtesy of the Discovery Channel.

When you decided you were going to take the Shark Cam out, put it in the water, and send it after a shark, you guys had to go out and tag the shark first. How did the robot work in conjunction with the acoustic tags?

We’ve been tracking white sharks with a variety of technology off the coast of Cape Cod for the last four summers. So [tagging the sharks was] almost the easiest part, since we’d already done the [research and development] to get that done. Once we got the transponder on the shark, the AUV was set to go.

Most acoustic transmitters emit a ping, and the ping is picked up by people in the tracking vehicle, so we can track the fish. But this acoustic tag is a transponder, so it has two-way communication between the vehicle itself and, in essence, the shark. So we can basically have a conversation that provides for highly precise navigation and mapping of three-dimensional movement. And that really is a step forward, because it’s not just passive acoustics where you’ve got a vehicle trying to just listen for something. [The AUV] was actually listening and communicating with [the tag].
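
The navigational payoff of a transponder over a one-way pinger comes down to time-of-flight ranging: the vehicle knows when it interrogated the tag and when the reply arrived, so it can compute its distance to the shark. Here is a minimal sketch of that calculation, assuming a fixed, known tag turnaround delay; the numbers are illustrative, not the WHOI system's actual parameters.

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical approximation

def range_to_tag(round_trip_s: float, tag_turnaround_s: float) -> float:
    """Distance to the transponder, from interrogation to reply.

    The tag waits a fixed, known turnaround delay before answering,
    so that delay is removed before halving the travel time.
    """
    one_way_s = (round_trip_s - tag_turnaround_s) / 2.0
    return one_way_s * SPEED_OF_SOUND_SEAWATER

# A reply 0.9 s after the ping, minus a 0.1 s turnaround,
# puts the shark about 600 meters away.
print(range_to_tag(0.9, 0.1))  # 600.0
```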

We had to program the vehicle so that it could make decisions—very simple cause and effect decisions based on where the shark was, to follow it. We ended up getting a vehicle that can give us very precise tracks of the animal.
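
Those "simple cause and effect decisions" suggest a reactive pursuit controller: each control cycle, steer toward the shark's last estimated position and close the depth difference gradually. The sketch below is a guess at that kind of logic, not the WHOI software; every name and limit in it is hypothetical.

```python
import math

def follow_step(auv, shark, max_depth_rate_m=0.5):
    """One control cycle: return a heading and depth command.

    `auv` and `shark` are (x, y, depth) tuples in meters; the depth
    command is rate-limited so the vehicle doesn't pitch violently
    when the shark dives or surfaces suddenly.
    """
    heading_rad = math.atan2(shark[1] - auv[1], shark[0] - auv[0])
    depth_error = shark[2] - auv[2]
    step = max(-max_depth_rate_m, min(max_depth_rate_m, depth_error))
    return heading_rad, auv[2] + step
```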

Were there any glitches you had to work out?

There was a whole series of glitches. The transponder itself is larger than we want it, but the funding simply wasn’t there to miniaturize it. So we had to use what we had. It turns out the orientation of the existing transponder design had to be vertical in the water column, which is absolutely counter to normal hydrodynamics. We had to figure out a way to get it to tow vertically on the shark, and that took a few days working with our tagging crew and the engineers. And that would allow for a stronger signal so that the AUV could actually keep up with the shark in shallow water.

We’re also in the natural environment. Where these white sharks hang out is a very dynamic area in terms of tide and current. So in many ways, we’re up against trying to get a vehicle that can only go, you know, six miles an hour to keep up with a shark that was swimming steadily at five miles an hour. And then it was the fine-tuning of the vehicle so that it could stay with the shark and not lose it.

How did the sharks react to it?

Jokingly, I told the engineers that once this big white shark saw this vehicle, painted bright yum-yum yellow, it was going to turn around and just eat it. Most would think that this voracious animal, considered one of the most dangerous on earth, would not like being followed so closely. So these guys got nervous every time the AUV got in close proximity to a shark.

But the shark completely ignored it. [At one point,] the shark actually turned around and did a big loop and started following the AUV, which I thought was fantastic. The AUV couldn’t do anything about it—it was hearing the shark behind it, and a major limitation of the technology is that it can’t do hairpin turns and quick circles. So that made for some good humor.

What did you learn by deploying this robot that you couldn’t learn just from using acoustic tags or satellite tags?

Every tagging technology has its ups and downs, and there’s no silver bullet tag that gives you high-resolution, broad-scale, and fine-scale data on movement. Satellite tags are really good for looking at broad-scale movement—where the shark goes in broad migratory patterns—but they don’t tell you a lot about fine-scale behavior.

Acoustic tags will tell you a little bit about fine-scale behavior, but only in the sense that you know where the shark is at any given time. One of the problems with the technology of acoustic tags—prior to us doing this—was that instead of sending a robot after a shark, you follow the shark with your boat. And that’s usually limited by weather considerations, fuel, compatibility of crew members, provisions, all those things that can come up and go wrong. And the boat’s track doesn’t necessarily reflect the shark’s track, because the shark is going to be somewhere within a quarter or a half a mile from the boat. And it’s really hard to get a good, precise estimate of the actual movements of the shark in three-dimensional space using traditional tracking methods.

With the ability to send robots after the shark, you’re going to increase the precision of your tracking so you’ll know exactly what the shark did in three-dimensional space—the depth of water, the depth of the shark—and you’re collecting data at the same time over that same path. The vehicles can carry instrumentation on them—the simplest being water temperature, to complex instrumentation that measures current and tide—so you can determine whether the shark is swimming upstream or downstream. You can look at dissolved oxygen, so you can get a sense of what the minimal oxygen requirements of the shark are. You can also add other kinds of instrumentation that’ll answer questions about the habitat in which the shark lives.

So it’s a huge step forward—and when you throw cameras on the whole thing, you even have the potential for real behavioral observation: to see what the shark is doing. Let’s say it stops swimming and just stays in one area. If we approach it and put divers in the water, that’s going to spook the shark—and very few divers want to jump on top of a white shark to begin with. Or you speed up on it on a boat and you try to see what the shark is doing, but what if it’s 30 feet underwater? You can’t see what it’s doing. You send Shark Cam out, and you can record what’s going on in that area.

So the robot is a proxy for what we can’t do, and I think it’s a huge step forward in terms of advancing science and adding a new tool for marine scientists.

Have you used Shark Cam since?

We have not deployed the Shark Cam since last summer. The next step is going back to the drawing board—raising funding to tweak it and take it to the next level.

What's the next level?

The next level for us is to improve upon and learn from what we’ve already done. It’s a real solid analysis of the data, it’s fine-tuning the software to take into account sudden modifications in the shark’s behavior. It’s probably to integrate the camera systems a little better with the AUV so that we may be able to control them—turn them on, turn them off. It’s energy budgeting. And it’s really miniaturizing the transponder so that we can put it on much smaller sharks and maybe broaden its applicability.

Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning

iStock // Ekaterina Minaeva

Jacques Mattheij made a small, but awesome, mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many, and was now the proud owner of two tons of LEGO bricks. (This is about 4400 pounds.) He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).
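
Back-of-envelope, those per-kilogram figures explain the whole project (ignoring labor, rare parts, and the fact that sorted bricks aren't automatically sets):

```python
haul_kg = 2000          # two metric tons
bulk_eur_per_kg = 10    # price paid for unsorted bricks
sorted_eur_per_kg = 40  # rough resale value once sorted into sets

cost = haul_kg * bulk_eur_per_kg      # 20,000 euros in
resale = haul_kg * sorted_eur_per_kg  # 80,000 euros out, fully sorted
print(resale - cost)                  # 60,000 euros of value is in the sorting
```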

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex system of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.
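
In outline, that loop is: grab a frame, classify it, and fire the air jet when the piece matches the bin being filled. This sketch is not Mattheij's code; it invents the classifier and valve interfaces, and only the OpenCV webcam capture is a real API.

```python
import cv2  # OpenCV webcam capture; classifier and valve are stand-ins

def sorting_loop(classify, fire_air_jet, target_label="brick_2x4"):
    """Watch the belt and eject pieces that match the target bin.

    `classify` maps an image to a label; `fire_air_jet` triggers the
    compressed-air valve. Both are hypothetical interfaces here.
    """
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        if classify(frame) == target_label:
            fire_air_jet()  # pop the piece off the belt into its bin
```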

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
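
The workflow Mattheij describes is essentially human-in-the-loop labeling: the machine photographs a piece, guesses, and a person intervenes only to correct wrong guesses, so the labeled corpus grows almost for free. A minimal sketch, with every interface assumed:

```python
def labeling_pass(model, capture, corpus, confirm):
    """Grow the training set from the machine's own guesses.

    `capture` returns one image, `model.classify` returns a label,
    and `confirm(image, guess)` asks the human, returning the guess
    unchanged if it was right or a corrected label if not.
    """
    image = capture()
    guess = model.classify(image)
    label = confirm(image, guess)
    corpus.append((image, label))
    if label != guess:           # only retrain when the guess was wrong
        model.train(corpus)
    return label == guess        # track accuracy as the corpus grows
```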

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

How Experts Say We Should Stop a 'Zombie' Infection: Kill It With Fire

Cs California, Wikimedia Commons // CC BY-SA 3.0

Scientists are known for being pretty cautious people. But sometimes, even the most careful of us need to burn some things to the ground. Immunologists have proposed a plan to burn large swaths of parkland in an attempt to wipe out disease, as The New York Times reports. They described the problem in the journal Microbiology and Molecular Biology Reviews.

Chronic wasting disease (CWD) is a gruesome infection that’s been destroying deer and elk herds across North America. Like bovine spongiform encephalopathy (BSE, better known as mad cow disease) and Creutzfeldt-Jakob disease, CWD is caused by damaged, contagious little proteins called prions. Although it's been half a century since CWD was first discovered, scientists are still scratching their heads about how it works, how it spreads, and if, like BSE, it could someday infect humans.

Paper co-author Mark Zabel, of the Prion Research Center at Colorado State University, says animals with CWD fade away slowly at first, losing weight and starting to act kind of spacey. But "they’re not hard to pick out at the end stage," he told The New York Times. "They have a vacant stare, they have a stumbling gait, their heads are drooping, their ears are down, you can see thick saliva dripping from their mouths. It’s like a true zombie disease."

CWD has already been spotted in 24 U.S. states. Some herds are already 50 percent infected, and that number is only growing.

Prion illnesses often travel from one infected individual to another, but CWD’s expansion was so rapid that scientists began to suspect it had more than one way of finding new animals to attack.

Sure enough, it did. As it turns out, the CWD prion doesn’t go down with its host-animal ship. Infected animals shed the prion in their urine, feces, and drool. Long after the sick deer has died, others can still contract CWD from the leaves they eat and the grass in which they stand.

As if that’s not bad enough, CWD has another trick up its sleeve: spontaneous generation. That is, it doesn’t take much damage to twist a healthy protein into a zombifying prion. The illness just pops up.

There are some treatments, including immersing infected tissue in an ozone bath. But that won't help when the problem is literally smeared across the landscape. "You cannot treat half of the continental United States with ozone," Zabel said.

And so, to combat this many-pronged assault on our wildlife, Zabel and his colleagues are getting aggressive. They recommend a controlled burn of infected areas of national parks in Colorado and Arkansas—a pilot study to determine if fire will be enough.

"If you eliminate the plants that have prions on the surface, that would be a huge step forward," he said. "I really don’t think it’s that crazy."

[h/t The New York Times]
