Murdo Macleod/Warner Bros.

Namit Malhotra, the Man Who Helped Make 'Gravity' 3-D


For fans of in-your-face moviegoing, the difference between a 2-D film and a 3-D one is simply a matter of slapping on a pair of plastic glasses. But for the movie artists behind the rendered images, the process is a bit more complicated—especially when the 3-D movie isn’t filmed in 3-D. Case in point: Alfonso Cuarón’s Gravity.

Though lauded for its groundbreaking use of 3-D, the lost-in-space blockbuster was actually filmed in two dimensions. It was up to the conversion experts at Prime Focus World—in collaboration with the VFX masters at Framestore—to change all that.

Cuarón and Gravity's producers certainly approached the right people. Before converting 15,531 frames of film into one of the longest stereoscopic shots in cinema history for Gravity, Prime Focus World had given three-dimensional life to such films as Avatar, the first three episodes of Star Wars, and the recent re-release of The Wizard of Oz.

In the wake of Gravity's impressive 10 Oscar nominations—including nods for Best Picture and Best Visual Effects—we spoke with Namit Malhotra, Prime Focus World’s founder and CEO, about floating cameras, sculpting superstars, and making movies set in a galaxy far, far away.

It's become a favorite pastime of film industry insiders—and even moviegoers—to declare that 3-D is dead. Clearly you feel differently. In what ways has Gravity changed the 3-D playing field?
Eighty percent of the film's opening-weekend tickets were for 3-D showings—a higher share than any other film to date, even Avatar. What this says to us is not that 3-D is dead, but that good storytelling is alive and kicking. If a story is good, and the technological and creative toolset chosen work to support that story, then the audience will love it.

At what point in the production process did you guys get involved with Gravity?

Back in 2010, executive producer Nikki Penny approached Prime Focus World with a proposition to take a single eye from a native stereo test shoot the Gravity production team had done and convert it, to allow a side-by-side comparison to be made between the native and converted footage. The test footage had been shot in a tight set, designed to replicate a space capsule, but the set was too constrictive to accommodate the bulky stereo camera rig. Director Alfonso Cuarón and producer David Heyman reviewed the results and were delighted to find that the converted shots and natively shot scenes were indistinguishable from each other. Duly impressed, the production put aside plans for a native shoot and Prime Focus World were welcomed into the creative filmmaking team as the exclusive conversion partner for the movie.

Was there any element of the planned production that gave you pause about signing on for such a complicated film?
We were in a unique position where we were brought on early in the overall process. From this early involvement, we were able to foresee the most challenging element, but also one that Prime Focus World is the most proud of tackling: the integration of the converted live action shots produced by Prime Focus World and the stereo-rendered CG that would be created by VFX supervisor Tim Webber and his team at Framestore. We were able to develop customized techniques that would make the particular technological and logistical aspects of the process easy… We were able to analyze and then streamline the immense visual and technical complexity of the sequences, integrate the processes seamlessly and virtually eliminate technical snafus, allowing everyone to concentrate all [their] attention on fulfilling Alfonso and stereo supervisor Chris Parks’ vision for the film. So, in a way, it was specifically through the opportunity to take a pause, and understand the complexity, that we were able to solve for it and ultimately make the creation of great 3-D the prime focus.

You guys pulled off the 3-D equivalent of Martin Scorsese's Goodfellas tracking shot with Gravity, converting 15,531 frames into 10 minutes and 47 seconds of screen time. What was your first thought when you learned about the task at hand?
You know when you sign on to a project with Alfonso and cinematographer Emmanuel Lubezki that you can expect those signature long, seamless shots. To us it was immediately exciting to imagine how those long, unbroken, floating camera shots would work spectacularly well in stereo space. It’s also a tremendous feeling to know Prime Focus World is able, with flexibility and innovation, to meet these record-breaking challenges. It really allows us to ask filmmakers in earnest “What do you want to create?”, knowing that whatever that is, we can make it happen.

How much harder does a conversion become when your only actors are two of the world's most recognizable superstars?
It does not become harder at all. We put a lot of time into look development at the start of the conversion process, which includes detailed sculpting and depth mapping of character faces and environments. We used the same process to convert faces like Brad Pitt’s in World War Z and Dorothy’s in The Wizard of Oz, so dealing with recognizable faces is not an issue—it's part of what makes the process fun. We get to focus on bringing even more life to these characters, more of a visceral element by bringing them into the theater in a way that most filmgoers have never before been able to see.

Is it part of your job to get the science right, or do you leave that up to the director? In other words: did you feel compelled to have a firm grasp of the physics of space in order to offer scientifically sound creative solutions of what might be (or not be) possible?
With visual effects across the board there is a tremendous emphasis on believability being the benchmark. What’s great about working with stereo as a storytelling tool is that it can help use the perspective of the viewer to communicate feelings. We could use the depth to create a contrast between the vast, unending feeling of space for the exterior shots and the claustrophobia, isolation, and loneliness of the interior shots. So the science is there, but it really becomes more than the science; it is how we experience that science according to the director’s vision.

Technical teams are a bit like film editors in that the better they are at their jobs, the less the audience notices their work (which is exactly what you want). In terms of the creative elements of a film, what are some of the aspects you guys are able to control and manipulate during conversion that the average viewer may not realize?
Our integration with Framestore really allowed for seamless transitions between the rendered stereo and converted live action sequences. Alfonso’s vision for the 3-D is for the audience to feel fully immersed, as if viewers are going into space with the characters, in the capsule. So the idea is to not think about the 3-D at all, but to feel like you are there.

You guys have been involved in some of the biggest 3-D films in recent years, including a little-known film called Avatar. How has Gravity outdone Avatar in terms of pushing moviemaking technology forward?
Like Avatar, a big part of the success with Gravity is due to the fact that 3-D was part of the director’s vision for the film from the beginning. It allowed stereo to help tell the story the director wanted to tell. That is really what all moviemaking technology is all about, and we will continue to push our technology forward to meet new visions and challenges that filmmakers can dream up.

Where do you see film technology going next?
Film technology is in a continual dance with the dreams and visions of what filmmakers want to create. In a way, it’s what filmmakers want to make that will determine how technology will evolve. Gravity is proof of that. We hope Gravity inspires filmmakers to pursue what they envision, even if they are not sure the technology is there.

iStock // Ekaterina Minaeva
Man Buys Two Metric Tons of LEGO Bricks; Sorts Them Via Machine Learning
May 21, 2017

Jacques Mattheij made a small, but awesome, mistake. He went on eBay one evening and bid on a bunch of bulk LEGO brick auctions, then went to sleep. Upon waking, he discovered that he was the high bidder on many, and was now the proud owner of two tons of LEGO bricks. (This is about 4400 pounds.) He wrote, "[L]esson 1: if you win almost all bids you are bidding too high."

Mattheij had noticed that bulk, unsorted bricks sell for something like €10/kilogram, whereas sets are roughly €40/kg and rare parts go for up to €100/kg. Much of the value of the bricks is in their sorting. If he could reduce the entropy of these bins of unsorted bricks, he could make a tidy profit. While many people do this work by hand, the problem is enormous—just the kind of challenge for a computer. Mattheij writes:
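A quick back-of-the-envelope calculation shows why that entropy reduction pays. The prices come from Mattheij's writeup; the function itself is just our illustration:

```python
# Value added by sorting, at the rough prices Mattheij cites:
# ~EUR 10/kg for unsorted bulk, ~EUR 40/kg once sorted into sets.
PRICE_UNSORTED = 10  # EUR per kg
PRICE_SORTED = 40    # EUR per kg

def sorting_uplift(kilograms):
    """Extra value (in euros) gained by sorting bulk bricks into sets."""
    return kilograms * (PRICE_SORTED - PRICE_UNSORTED)

print(sorting_uplift(2000))  # two metric tons -> 60000
```

Rare parts at up to €100/kg would push the uplift higher still—hence "a tidy profit."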

There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what lego colors they remember from their youth).

In the following months, Mattheij built a proof-of-concept sorting system using, of course, LEGO. He broke the problem down into a series of sub-problems (including "feeding LEGO reliably from a hopper is surprisingly hard," one of those facts of nature that will stymie even the best system design). After tinkering with the prototype at length, he expanded it into a surprisingly complex system of conveyer belts (powered by a home treadmill), various pieces of cabinetry, and "copious quantities of crazy glue."

Here's a video showing the current system running at low speed:

The key part of the system was running the bricks past a camera paired with a computer running a neural net-based image classifier. That allows the computer (when sufficiently trained on brick images) to recognize bricks and thus categorize them by color, shape, or other parameters. Remember that as bricks pass by, they can be in any orientation, can be dirty, can even be stuck to other pieces. So having a flexible software system is key to recognizing—in a fraction of a second—what a given brick is, in order to sort it out. When a match is found, a jet of compressed air pops the piece off the conveyer belt and into a waiting bin.
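The recognize-and-eject step can be sketched like this. `Detection`, `route_brick`, and the bin mapping are illustrative stand-ins of ours, not Mattheij's actual code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "2x4 brick", as guessed by the classifier
    confidence: float  # classifier's certainty, 0.0 to 1.0

def route_brick(detection, target_bins, threshold=0.9):
    """Pick the bin whose air jet should fire, or None to let the piece pass.

    target_bins maps a class label to a bin index. Low-confidence
    detections fall through untouched rather than being mis-sorted.
    """
    if detection.confidence >= threshold and detection.label in target_bins:
        return target_bins[detection.label]
    return None
```

A piece the net is unsure about simply rides the belt past every jet, ready for another pass—better than guessing wrong at speed.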

After much experimentation, Mattheij rewrote the software (several times in fact) to accomplish a variety of basic tasks. At its core, the system takes images from a webcam and feeds them to a neural network to do the classification. Of course, the neural net needs to be "trained" by showing it lots of images, and telling it what those images represent. Mattheij's breakthrough was allowing the machine to effectively train itself, with guidance: Running pieces through allows the system to take its own photos, make a guess, and build on that guess. As long as Mattheij corrects the incorrect guesses, he ends up with a decent (and self-reinforcing) corpus of training data. As the machine continues running, it can rack up more training, allowing it to recognize a broad variety of pieces on the fly.
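That correct-the-guesses loop might look like the following sketch, where `guess` and `ask_human` are hypothetical stand-ins for the classifier and the manual check:

```python
def grow_training_set(images, guess, ask_human, training_set):
    """Bootstrap labels: the model guesses, a human fixes wrong guesses,
    and every (image, label) pair is added to the training data."""
    for image in images:
        label = guess(image)             # model's best guess
        label = ask_human(image, label)  # returned unchanged if correct
        training_set.append((image, label))
    return training_set
```

Each pass through the machine thus enlarges the corpus, so the next round of guesses needs less correction—the self-reinforcing part.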

Here's another video, focusing on how the pieces move on conveyer belts (running at slow speed so puny humans can follow). You can also see the air jets in action:

In an email interview, Mattheij told Mental Floss that the system currently sorts LEGO bricks into more than 50 categories. It can also be run in a color-sorting mode to bin the parts across 12 color groups. (Thus at present you'd likely do a two-pass sort on the bricks: once for shape, then a separate pass for color.) He continues to refine the system, with a focus on making its recognition abilities faster. At some point down the line, he plans to make the software portion open source. You're on your own as far as building conveyer belts, bins, and so forth.
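The two-pass idea composes naturally in software, even though the physical machine must run the bricks through twice (there are only so many bins). Here `shape_of` and `color_of` stand in for the classifier's shape and color predictions:

```python
from collections import defaultdict

def two_pass_sort(bricks, shape_of, color_of):
    """Group bricks by shape, then split each shape group by color."""
    bins = defaultdict(lambda: defaultdict(list))
    for brick in bricks:
        bins[shape_of(brick)][color_of(brick)].append(brick)
    return bins
```

With ~50 shape categories times 12 color groups, a single physical pass would need hundreds of bins—which is why the hardware does shape first, then color.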

Check out Mattheij's writeup in two parts for more information. It starts with an overview of the story, followed up with a deep dive on the software. He's also tweeting about the project (among other things). And if you look around a bit, you'll find bulk LEGO brick auctions online—it's definitely a thing!

Nick Briggs/Comic Relief
What Happened to Jamie and Aurelia From Love Actually?
May 26, 2017

Fans of the romantic comedy Love Actually recently got a bonus reunion in the form of Red Nose Day Actually, a short charity special that gave audiences a peek at where their favorite characters ended up almost 15 years later.

One of the most improbable pairings from the original film was between Jamie (Colin Firth) and Aurelia (Lúcia Moniz), who fell in love despite almost no shared vocabulary. Jamie is English, and Aurelia is Portuguese, and they know just enough of each other’s native tongues for Jamie to propose and Aurelia to accept.

A decade and a half on, they have both improved their knowledge of each other's languages—though not quite perfectly, in Jamie's case. But apparently, their love is much stronger than his grasp on Portuguese grammar, because they've got three bilingual kids and another on the way. (And they still enjoy having important romantic moments in the car.)

In 2015, Love Actually script editor Emma Freud revealed via Twitter what happened between Karen and Harry (Emma Thompson and Alan Rickman, who passed away last year). Most of the other couples get happy endings in the short—even if Hugh Grant's character hasn't gotten any better at dancing.

[h/t TV Guide]