Weekly News

AI software can dream up an entire digital world from a simple sketch

December 04, 2018 10:47

Creating a virtual environment that looks realistic takes time and skill. The details have to be hand-crafted using a graphics chip that renders 3D shapes, appropriate lighting, and textures. The latest blockbuster video game, Red Dead Redemption 2, for example, took a team of around 1000 developers more than eight years to create—occasionally working 100-hour weeks. That kind of workload might not be required for much longer. A powerful new AI algorithm can dream up the photorealistic details of a scene on the fly.

Developed by chipmaker Nvidia, the software won’t just make life easier for software developers. It could also be used to auto-generate virtual environments for virtual reality or for teaching self-driving cars and robots about the world.  
“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”
Nvidia’s researchers used a standard machine-learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill in realistic 3D imagery. 
The system can then be fed the outline of a scene, showing where different objects are, and it will fill in stunning, slightly shimmering detail. The effect is impressive, even if some of these objects occasionally look a bit warped or twisted. 
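The pipeline described here, a semantic label map in, filled-in pixels out, follows the general conditional-GAN recipe. The toy sketch below (plain NumPy, with made-up names and a trivial "palette" generator; this is an illustrative stand-in, not Nvidia's actual model) shows the two roles involved: a generator turns a label map plus noise into an image, and a discriminator scores how plausible an image looks given its labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "label map": each pixel holds a class id (0 = sky, 1 = road, 2 = car),
# analogous to the object outlines the system is fed.
label_map = rng.integers(0, 3, size=(8, 8))

def generator(labels, noise, palette):
    """Map each class id to a base colour, perturbed by noise -- a crude
    stand-in for the learned renderer that fills in realistic texture."""
    return palette[labels] + 0.1 * noise          # shape (8, 8, 3)

def discriminator(image, labels, weights):
    """Score how 'real' an image looks given its label map
    (a single logistic unit over the flattened pair)."""
    features = np.concatenate([image.ravel(), labels.ravel()])
    return 1.0 / (1.0 + np.exp(-features @ weights))

palette = np.array([[0.5, 0.7, 1.0],   # sky: light blue
                    [0.3, 0.3, 0.3],   # road: grey
                    [0.8, 0.1, 0.1]])  # car: red
noise = rng.standard_normal((8, 8, 3))
fake = generator(label_map, noise, palette)

weights = rng.standard_normal(fake.size + label_map.size) * 0.01
score = discriminator(fake, label_map, weights)
print(fake.shape, score)
```

In real adversarial training the two parts are neural networks updated in opposition: the discriminator learns to tell real video frames from generated ones, and the generator learns to fool it, which is what pushes the rendered detail toward photorealism.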


“Classical computer graphics render by building up the way light interacts with objects,” says Catanzaro. “We wondered what we could do with artificial intelligence to change the rendering process.”
Catanzaro says the approach could lower the barrier for game design. Besides rendering whole scenes, it could be used to add a real person to a video game after the system is fed a few minutes of video footage of that person in real life. He suggests that the approach could also be used to help render realistic settings for virtual reality, or to provide synthetic training data for autonomous vehicles or robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal.
“This is interesting and impressive work,” says Michiel van de Panne, a professor at the University of British Columbia who specializes in machine learning and computer graphics. He notes that previous work involving GANs involved synthesizing simpler elements such as individual images or character motions.
“The work points the way to a very different way of creating animated imagery,” van de Panne says, “one with a different set of capabilities”: it is less computationally intensive and could be interactive.
The Nvidia algorithm is just the latest in a dizzying procession of advances involving GANs. Invented by a Google researcher only a few years ago, GANs have emerged as a remarkable tool for synthesizing realistic, and often eerily strange, imagery and audio. This trend promises to revolutionize computer graphics and special effects, and to help artists and musicians imagine or develop new ideas. But it could also undermine public trust in video and audio evidence (see “Fake America great again”).
Catanzaro admits it could be misused. “This is a technology that could be used for a lot of things,” he says.
