
A Voice Recognition App Adds Sound Effects While You Read to Your Kids


Technology is coming for kids’ story time, but maybe not in the way that you think. The future of bedtime stories, as MIT Technology Review describes it, won’t involve tablets or reading off screens, but it will have sound effects.

Novel Effect is an app that uses voice recognition to follow along as you read bedtime stories to your kids and insert sound effects and music in response to certain cue words. It’s similar to a home assistant, such as the Amazon Echo or Google Home, except that instead of playing music and setting kitchen timers for you, it listens for keywords contained in certain kids’ books.

Four mobile app screenshots of the Novel Effect app. (Image: Novel Effect)

The app doesn’t work for all titles, but it offers effects for popular books you probably already own, like Where the Wild Things Are, The Very Hungry Caterpillar, and The Cat in the Hat. When you open the app on your phone, you select which book you plan to read. As you read the physical book out loud, the app listens for where you are in the text and adds sound effects, from dramatic music to monstrous roars.

It’s not going to trigger odd sound effects every time you say the word “caterpillar,” though. (Unlike the Amazon Echos that heard the words “Alexa, buy me a dollhouse” on a TV news report and rushed to fulfill the order.) The words have to correspond to the book you’ve selected in the app, though you don’t have to read the text from the beginning or keep any particular pace. The app can recognize where you are in the book no matter where you start, even if you go off on a tangent about how cool caterpillars are before resuming the story.
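To picture how this kind of cue matching might work, here is a minimal sketch in Python. It is not Novel Effect’s actual implementation: the cue phrases, sound file names, and helper function are hypothetical, and a real app would work from a live speech-recognition transcript rather than a fixed string.

```python
# A minimal sketch of cue matching, NOT Novel Effect's real code.
# The cue phrases and sound file names below are made up for illustration.
cues = {
    "let the wild rumpus start": "rumpus_music.mp3",
    "roared their terrible roars": "monster_roar.mp3",
}

def check_cues(transcript: str, already_played: set) -> list:
    """Return sound files to play for cue phrases newly heard in the transcript."""
    to_play = []
    text = transcript.lower()
    for phrase, sound in cues.items():
        if phrase in text and phrase not in already_played:
            already_played.add(phrase)  # fire each cue only once per reading
            to_play.append(sound)
    return to_play

# As speech recognition appends words to the transcript, check for new cues.
played = set()
print(check_cues("and max said let the wild rumpus start", played))
# -> ['rumpus_music.mp3']
```

Because matching is keyed to the selected book’s phrases rather than single words, stray mentions of “caterpillar” outside the story wouldn’t set anything off.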

Novel Effect is part of Amazon’s Alexa Accelerator for voice recognition technology, and it seems feasible that one day this kind of functionality could be a skill you enable on your Echo or another voice-controlled assistant. According to MIT Technology Review, the company hopes to let users create their own sound effects in the near future.

[h/t MIT Technology Review]

Watch an Antarctic Minke Whale Feed in a First-of-Its-Kind Video

New research from the World Wildlife Fund is giving us a rare glimpse into the world of the mysterious minke whale. The WWF worked with Australian Antarctic researchers to tag minke whales with cameras for the first time, watching where and how the animals feed.

The camera attaches to the whale's body with suction cups. In the case of the video below, the camera accidentally slid down the side of the minke whale's body, providing an unexpected look at the way its throat moves as it feeds.

Minke whales are one of the smallest baleen whales, but they're still pretty substantial animals, growing 30 to 35 feet long and weighing up to 20,000 pounds. Unlike other baleen whales, though, they're small enough to maneuver in tight spaces, such as within sea ice, a helpful adaptation for living in Antarctic waters. They feed by lunging through the sea, gulping huge amounts of water along with krill and small fish, and then filtering the mix through their baleen.

The WWF video shows just how quickly the minke can process this treat-laden water. The whale could lunge, process, and lunge again every 10 seconds. "He was like a Pac-Man continuously feeding," Ari Friedlaender, the lead scientist on the project, said in a press statement.

The video research, conducted under the International Whaling Commission's Southern Ocean Research Partnership, is part of WWF's efforts to protect critical feeding areas for whales in the region.

If that's not enough whale for you, you can also watch the full 13-minute research video below:

AI Could Help Scientists Detect Earthquakes More Effectively

Thanks in part to the rise of hydraulic fracturing, or fracking, earthquakes are becoming more frequent in the U.S. Even though it doesn't sit on a major fault line, Oklahoma, where gas and oil drilling activity doubled between 2010 and 2013, is now a major earthquake hot spot. As our landscape shifts (literally), our earthquake-detecting technology must evolve to keep up with it. Now, a team of researchers is changing the game with a new system that uses AI to identify seismic activity, Futurism reports.

The team, led by deep learning researcher Thibaut Perol, published the study detailing their new neural network in the journal Science Advances. Dubbed ConvNetQuake, the system uses an algorithm to analyze measurements of ground movement, a.k.a. seismograms, and determine which are small earthquakes and which are just noise. Seismic noise describes the vibrations that run almost constantly through the ground, caused by wind, traffic, or other activity at the surface. It's sometimes hard to tell the difference between noise and legitimate quakes, which is why most detection methods focus on medium and large earthquakes instead of smaller ones.
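To make the idea concrete, here is a minimal sketch of a 1D convolutional classifier for seismogram windows, written in Python with PyTorch. It is not the authors' ConvNetQuake code; the window length, sampling rate, layer sizes, and two-class setup (noise vs. earthquake) are illustrative assumptions only.

```python
# A minimal sketch, NOT the published ConvNetQuake implementation:
# a small 1D CNN that labels fixed-length seismogram windows as noise or quake.
import torch
import torch.nn as nn

class QuakeClassifier(nn.Module):
    def __init__(self, n_channels=3, window_len=1000, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Three stride-2 convolutions shrink the time axis by a factor of 8.
        self.classifier = nn.Linear(32 * (window_len // 8), n_classes)

    def forward(self, x):
        # x: (batch, channels, samples) -- a windowed, normalized seismogram
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: classify one 10-second, 3-component window sampled at 100 Hz.
model = QuakeClassifier()
window = torch.randn(1, 3, 1000)          # placeholder data, not a real trace
probs = model(window).softmax(dim=-1)     # P(noise), P(earthquake)
```

Trained on labeled windows, a network like this learns the waveform patterns that separate tiny quakes from the background hum of wind and traffic, which is exactly the distinction traditional catalog methods struggle with at small magnitudes.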

But better understanding natural and manmade earthquakes means studying them at every magnitude, including the smallest. With ConvNetQuake, that could soon become possible. After testing the system in Oklahoma, the team reports that it detected 17 times more earthquakes than were recorded in the Oklahoma Geological Survey earthquake catalog.

That level of performance is more than just good news for seismologists studying quakes caused by humans. The technology could be built into existing earthquake detection systems designed to alert the public to dangerous disasters. California alone is home to 400 seismic stations waiting for "The Big One." On a smaller scale, there's an app that uses a smartphone's accelerometers to detect tremors and alert the user directly. If AI-assisted detection could sense big earthquakes just as they begin, it could give people a few more potentially life-saving moments to prepare.

[h/t Futurism]
