Arguments that the algorithm driving the Uber self-driving car that struck and killed a pedestrian in Arizona this month may not have been at fault did not sway Arizona Governor Doug Ducey, who on Monday revoked the company’s permission to continue testing the technology in the state, calling the incident “an unquestionable failure.”
Failure or not, what if such tests came with zero risk to humans? That’s the promise of Nvidia’s new product.
Nvidia temporarily suspended tests of its own self-driving cars on public roads "to learn from the Uber incident," a company spokesman told Reuters.
At the company's annual GTC Summit in San Jose, California, Nvidia founder and CEO Jensen Huang announced a two-server solution that simulates the data a self-driving car generates and trains a driving algorithm on that data. The promise is to move self-driving-car testing off public roads and into data centers until the technology is proven out.
Nvidia announced it was working on the technology at last year’s summit. This time, it’s announcing the actual product, which it expects to make available to early access partners in the third quarter.
The basic idea isn’t new. For example, Intel Labs, together with the Toyota Research Institute and the Computer Vision Center in Barcelona, designed an open source driving simulator for training self-driving cars called CARLA (Car Learning to Act). Another open source simulator was released by Udacity, the online education platform. Autonomous-driving companies have also been using simulators built for gaming to train their algorithms.
Besides the safety benefit, virtualizing the road test makes it possible to train the algorithm much faster, the company said: the system can simulate the billions of driving miles thought necessary to make self-driving cars a reality far faster than physical cars could drive them.
“We’re able to deploy this in the cloud and scale it up to generate billions of miles that ultimately will be able to showcase and statistically show how … these self-driving algorithms” perform, Danny Shapiro, Nvidia’s senior director, automotive, said in a phone briefing with reporters.
Training autonomous vehicles using simulation is in its early stages, Shapiro said in a press Q&A at the summit Tuesday. Simulation is unlikely ever to completely replace physical driving for training and validating the algorithms, but it will substantially speed up development of the technology. "We’re not saying that it will necessarily totally replace it, but you’re going to see a lot more simulated miles," he said.
Alphabet’s self-driving car unit Waymo announced last month that its test vehicles had achieved 5 million self-driven miles since 2009, when the company started the tests. Waymo uses simulation in addition to physical testing. Its software “drove 2.7 billion miles in the virtual world” last year.
One of the servers in Nvidia’s new Drive Constellation Simulation System, powered by the company’s GPUs, runs Nvidia-developed Drive Sim software, which simulates the output of the cameras, lidar, and radar a self-driving car uses to observe its physical environment.
The second server is Nvidia’s Drive Pegasus AI car computer, which runs a full autonomous-vehicle software stack, processing the simulated data the same way it would process data from a real self-driving car. The server sends driving commands back to the simulator, completing what the company calls the “hardware-in-the-loop cycle.”
“This happens 30 times per second,” Shapiro said.
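The hardware-in-the-loop cycle described above can be sketched in a few lines of Python. Everything here is a simplification for illustration, and all names are hypothetical, not Nvidia's API: one function stands in for the Drive Sim server rendering the world into sensor data, another for the autonomous-vehicle stack on the car computer turning that data into driving commands, with the loop closing 30 times per simulated second.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the two servers; names are illustrative only.

@dataclass
class SensorFrame:
    """Simulated sensor output for one time step (cameras, lidar, radar)."""
    position: float  # car's position along a 1-D road, in meters

@dataclass
class DriveCommand:
    """Driving command sent back to the simulator."""
    speed: float  # commanded speed in meters per second

def simulate_sensors(position: float) -> SensorFrame:
    """Stand-in for the Drive Sim server: render the world into sensor data."""
    return SensorFrame(position=position)

def drive_stack(frame: SensorFrame) -> DriveCommand:
    """Stand-in for the AV software stack on the car computer."""
    return DriveCommand(speed=15.0)  # cruise at a constant 15 m/s

def run_hil_loop(steps: int, hz: float = 30.0) -> float:
    """Close the loop `steps` times at `hz` iterations per simulated second."""
    dt = 1.0 / hz
    position = 0.0
    for _ in range(steps):
        frame = simulate_sensors(position)  # simulator -> car computer
        command = drive_stack(frame)        # car computer -> simulator
        position += command.speed * dt      # simulator advances the world
    return position

# One simulated second (30 iterations) at 15 m/s covers 15 meters.
print(f"{run_hil_loop(steps=30):.1f}")  # prints 15.0
```

Because the loop is driven by simulated time rather than wall-clock time, many such loops can run in parallel and faster than real time, which is what makes the mileage claims below possible.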
"With just 10,000 Constellations we can cover 3 billion miles per year," Huang said.
The simulation software generates “photoreal data streams,” simulating conditions such as rainstorms and snowstorms, blinding glare at different times of day, reduced visibility at night, and different terrains and road surfaces.
The company envisions engineers scripting “potentially hazardous scenarios without putting anyone in harm’s way,” Shapiro said.