Automakers are investing heavily in autonomous driving technologies, using simulation and digital twins to train, test and validate the deep neural networks running inside the vehicle.
Designing software-defined vehicles is a time-consuming and costly process that requires input from teams around the world. Traditionally, automotive product development cycles stretch over many years to accommodate these efforts.
Scalable, open platforms are creating new ways to streamline the design process to increase workflow efficiency and productivity. Designers and engineers can conduct interactive, collaborative concept testing and evaluate vehicle models and simulations with high accuracy.
Teams can identify problems early in the design process and make faster decisions on critical factors such as performance and appearance.
By reviewing vehicle designs on a virtual platform, project teams can both reduce costs and accelerate design and manufacturing schedules.
Virtual proving grounds
In addition to a comprehensive design process, autonomous vehicles (AVs) require large-scale development and testing in a wide range of scenarios before they can be deployed on public roads.
To deliver on the vast potential safety benefits, AVs must be able to respond to incredibly diverse situations on the road, such as emergency vehicles, pedestrians, poor weather conditions, and a virtually infinite number of other obstacles—including scenarios that are too dangerous to test in the real world.
There is no feasible way to physically road test vehicles in all of these situations, nor is road testing controllable, repeatable, exhaustive or fast enough. That is why simulation is so important: it lets developers test all of these possibilities in the virtual world before deploying AVs in the real one.
With recent breakthroughs in AI, developers can now build simulations directly from real-world data, improving accuracy while saving valuable time and cost.
The new AI pipeline, known as a Neural Reconstruction Engine, automatically extracts the key components needed for simulation, including the environment, 3D assets and scenarios.
These pieces are then reconstructed into simulation scenes that have the realism of data recordings, but are fully reactive and can be manipulated as needed. Achieving this level of detail and diversity by hand is costly, time consuming and not scalable.
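To make the idea of "fully reactive" scenes concrete, here is a minimal sketch of how a reconstructed scene might be represented as structured, editable data. The names (`Asset`, `Scenario`, `variant`) are illustrative assumptions, not an actual Neural Reconstruction Engine API; the point is that once a recording becomes data like this, it can be manipulated to produce new test cases.

```python
# Hypothetical representation of a scene extracted from a drive recording.
# All class and field names here are illustrative, not a real simulator API.
from dataclasses import dataclass, field, replace
from typing import List, Tuple

@dataclass(frozen=True)
class Asset:
    name: str
    position: Tuple[float, float]   # x, y in meters
    speed_mps: float                # speed along the lane, m/s

@dataclass
class Scenario:
    environment: str                # e.g. "urban_intersection"
    assets: List[Asset] = field(default_factory=list)

    def variant(self, asset_name: str, **changes) -> "Scenario":
        """Return a new scenario with one asset modified,
        leaving the recorded original untouched."""
        new_assets = [replace(a, **changes) if a.name == asset_name else a
                      for a in self.assets]
        return Scenario(self.environment, new_assets)

# A scene "extracted" from a data recording...
recorded = Scenario("urban_intersection",
                    [Asset("ego", (0.0, 0.0), 10.0),
                     Asset("cyclist", (15.0, 2.0), 4.0)])

# ...can be perturbed into a harder variant for testing.
harder = recorded.variant("cyclist", speed_mps=8.0)
print(harder.assets[1].speed_mps)  # 8.0; the recording itself is unchanged
```

Because each variant is cheap to generate, one recording can seed many scenario permutations, which is exactly the diversity that would be costly to author by hand.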
Synthetic data generation is also a key component to accelerating AV development in simulation. By generating physically based sensor data for camera, radar, lidar and ultrasonics, along with the corresponding ground-truth, it is now possible to then use this data for training AI perception networks for AVs.
Using synthetic data reduces time and cost, provides precisely accurate labels by construction, and produces ground truth that humans can't label, such as depth, velocity, and occluded objects. It also generates training data for rare and dangerous scenes to augment real data for a targeted approach to solving some of AVs’ biggest challenges.
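A toy illustration of why simulators get this ground truth "for free": rendering with a z-buffer already knows every surface's distance, so per-pixel depth and occlusion labels fall out of the render itself. This is a deliberately simplified sketch with axis-aligned rectangles, not a physically based sensor model; the shapes, sizes, and function names are assumptions for illustration only.

```python
# Toy z-buffer render: depth maps and occlusion labels emerge automatically,
# with no human annotation. Not a production sensor simulation.
import numpy as np

H, W = 4, 8  # tiny "image" for the sketch

def render(boxes):
    """boxes: list of (depth_m, row0, row1, col0, col1) rectangles.
    Returns (per-pixel instance ids, ground-truth depth map, occluded ids)."""
    depth = np.full((H, W), np.inf)
    ids = np.full((H, W), -1)
    for i, (d, r0, r1, c0, c1) in enumerate(boxes):
        patch_d = depth[r0:r1, c0:c1]
        patch_i = ids[r0:r1, c0:c1]
        mask = d < patch_d            # nearest surface wins (z-buffer test)
        patch_d[mask] = d
        patch_i[mask] = i
    visible = {int(i) for i in np.unique(ids) if i >= 0}
    occluded = set(range(len(boxes))) - visible
    return ids, depth, occluded

# A far pedestrian (id 0) is fully hidden behind a near car (id 1).
ids, depth, occluded = render([(20.0, 1, 3, 2, 4),   # pedestrian, 20 m away
                               (5.0, 0, 4, 0, 8)])   # car, 5 m away
print(occluded)     # {0}: the hidden pedestrian is labeled automatically
print(depth[2, 3])  # 5.0: exact ground-truth depth at that pixel
```

A human annotator looking at the rendered camera image could never recover the fully occluded pedestrian or the exact per-pixel depths, but the simulator emits both labels as a byproduct of rendering.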
With physically accurate simulation platforms capable of generating ground-truth synthetic data, AV developers can improve productivity, efficiency, and test coverage, thereby accelerating time-to-market while minimizing real-world driving.
A safer, smarter future
At the end of the day, safety needs to be the No. 1 priority.
When we are talking about human lives, we want to make sure we are not only getting it right, but also that we are never getting it wrong.
The next generation of transportation is autonomous, so it will be crucial to develop self-driving technology that delivers a safer, more convenient and enjoyable experience for all by incorporating safety into every step, including design, production and vehicle operation.