March 24, 2023
By Edward Griffor
Many of us are enthusiastic about the prospect of self-driving cars or automated vehicles (AVs). They promise to free us from the stress of driving. Automated cars may also reduce accidents caused by distraction or poor judgment.
Cars now come with more automated features, such as automatic headlight activation, emergency braking, and collision detection and avoidance. Many vehicles also include artificial intelligence to help them drive safely.
So why aren’t fleets of self-driving cars already for sale and out on the road? There’s a lot we don’t really know about how to:
* automate driving,
* measure driving performance, and
* make sure automated vehicles drive safely.
The Human Aspect of Driving Is Challenging to Automate
Experienced drivers make many decisions instinctively, such as stopping quickly when they spot a deer in the road. We still don’t know how to describe that instinct to a machine, even with its artificial intelligence and sensors. Modeling instinct, or teaching a car to drive like a human, is an even bigger engineering and scientific challenge.
Driving is not just an individual skill; it’s a collective effort. Each driver communicates with other drivers, pedestrians, and cyclists to get from point A to point B safely. Additionally, drivers interact with a huge infrastructure that is focused on safe and effective vehicle operations, such as traffic lights, signage, and changing traffic patterns.
All these aspects of human driving—instinctive decisions, collaboration with other drivers, and interaction with road infrastructure—must be replicated for automated driving.
Many experts believe humans still have an edge over today’s technologies, but technology is rapidly improving.
Industry, government, and other partners are working toward replicating these aspects of human driving for automated vehicles and building a supporting infrastructure. The effort will require human factors engineering (a field that considers humans’ abilities and limitations), further technical development, and local and regional planning. The goal is to deploy automated vehicles that operate safely and effectively.
Automated Vehicles Require Thorough Testing to Ensure Safety
Because the driver is software, not a person, driverless vehicles can be tested more thoroughly and less expensively than human-driven vehicles before they hit the road. Simulations allow more testing in less time than test-track or public-road testing.
These simulations include models of other vehicles and all critical elements of the driving environment. Such simulations can model scenarios that automated vehicles may encounter, including vehicle failures or cyberattacks. These simulations can increase confidence in AVs.
Manufacturers of traditional vehicles test parts and systems on cars, such as steering and brakes. Because these parts are proprietary, manufacturers don't typically share the results of comprehensive testing, so regulators and other organizations responsible for safety can't confirm the results or even the measurement and testing methods used.
NIST Is Playing a Key Role in the Drive Toward Autonomous Vehicles
That's where NIST (the National Institute of Standards and Technology, part of the U.S. Department of Commerce) comes in. NIST develops and standardizes measurement methods that will be vital to the development of autonomous vehicles.
NIST is doing what it does best: building a consensus among interested parties on the measurements needed to evaluate automated vehicle safety, and developing reliable measurement methods.
In a series of workshops, NIST partnered with automated vehicle stakeholders, including the U.S. Department of Transportation, its National Highway Traffic Safety Administration, industry representatives, and other research institutions.
NIST and its partners began by examining industry's concept of an AV's "operational design domain (ODD)," a vocabulary that names the various conditions a vehicle can operate in, such as precipitation. Operational design domains distinguish types of precipitation, such as snow and rain, and even rainfall intensity, heavy or light. (Think of the ODD as the vehicle's resume, describing its capabilities.) It indicates what types of testing should be included for that vehicle.
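To make the idea concrete, here is a minimal sketch of how an ODD's vocabulary might be represented in code. This is a hypothetical illustration, not NIST's or industry's actual data model; the class and condition names are invented for this example.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical vocabulary entries for precipitation conditions,
# distinguishing snow from rain and heavy from light rainfall.
class Precipitation(Enum):
    NONE = "none"
    SNOW = "snow"
    LIGHT_RAIN = "light_rain"
    HEAVY_RAIN = "heavy_rain"

@dataclass(frozen=True)
class OperationalDesignDomain:
    """The vehicle's 'resume': conditions it is designed to handle."""
    supported_precipitation: frozenset
    max_speed_kph: float

    def supports(self, condition: Precipitation) -> bool:
        # A condition outside the ODD signals a test the vehicle
        # is not expected to pass.
        return condition in self.supported_precipitation

# A vehicle rated for dry roads and light rain, but not snow or heavy rain.
odd = OperationalDesignDomain(
    supported_precipitation=frozenset(
        {Precipitation.NONE, Precipitation.LIGHT_RAIN}
    ),
    max_speed_kph=100.0,
)
print(odd.supports(Precipitation.LIGHT_RAIN))  # True
print(odd.supports(Precipitation.SNOW))        # False
```

A tester could then select test scenarios by iterating over exactly the conditions the ODD claims to support.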
NIST and its partners then developed a concept that complements industry's operational design domain, called the operating envelope specification (OES) of an AV. It includes all the factors in the vehicle's ODD, plus criteria for assessing the vehicle's performance.
The OES provides a framework for measuring the automated vehicle’s operating environment. Critical vehicle behaviors, such as lane changing and navigating intersections, are specified as behavioral competencies. Automated vehicle developers, regulators, and researchers can then use this framework to create testing scenarios. Think of the OES as the job description and associated metrics that all autonomous vehicles should meet.
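The pairing of behavioral competencies with assessment criteria can be sketched as follows. This is an assumed illustration of the idea, not the actual OES specification; the competency names, metrics, and thresholds are made up for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BehavioralCompetency:
    """A critical vehicle behavior plus the criterion used to grade it."""
    name: str
    metric: str                       # what the test measures
    passes: Callable[[float], bool]   # criterion applied to the measurement

# Hypothetical OES entries for two behaviors named in the text:
# lane changing and navigating intersections.
oes_competencies = [
    BehavioralCompetency(
        name="lane_change",
        metric="min_gap_to_trailing_vehicle_m",
        passes=lambda gap_m: gap_m >= 10.0,  # assumed threshold, for illustration
    ),
    BehavioralCompetency(
        name="intersection_stop",
        metric="stop_distance_before_line_m",
        passes=lambda d_m: d_m >= 0.0,       # must not cross the stop line
    ),
]

# A test run reports measured values; the OES criteria grade them.
measured = {"lane_change": 12.4, "intersection_stop": 0.8}
results = {c.name: c.passes(measured[c.name]) for c in oes_competencies}
print(results)  # {'lane_change': True, 'intersection_stop': True}
```

In this framing, the OES really does act like a job description: each competency says what the vehicle must do, and each criterion says how well it must do it.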
NIST worked with leading vehicle system developers to create a co-simulation platform for studying automated vehicle safety. The platform is called co-simulation because it brings together best-in-class simulations for braking, engines, sensors, and much more. It evaluates multiple safety factors and vehicle models at the same time. Testers can swap out vehicle models, levels of automation, driving scenarios, and evaluation methods during co-simulation.
Using the platform, stakeholders can address such questions as:
* Will automated braking work in time to avoid a collision?
* Will automated cars obey speed limits and other traffic laws?
* Can an automated vehicle safely pass another vehicle on a busy highway?
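The first question, whether automated braking works in time, reduces to a simple physics check that a simulation can run for many combinations of vehicle model and scenario. The sketch below is an assumed simplification (constant deceleration, a single latency term), not NIST's co-simulation platform; all parameter values are illustrative.

```python
def stops_in_time(speed_mps: float, obstacle_m: float,
                  latency_s: float, decel_mps2: float) -> bool:
    """Does the vehicle stop before reaching the obstacle?

    Distance covered during sensing/actuation latency (v * t)
    plus braking distance (v^2 / 2a) must not exceed the gap.
    """
    reaction_distance = speed_mps * latency_s
    braking_distance = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_distance + braking_distance <= obstacle_m

# Swapping in different "vehicle models" means changing the latency and
# deceleration parameters, much as testers swap components in co-simulation.
# 25 m/s is roughly 90 km/h (about 56 mph).
print(stops_in_time(speed_mps=25.0, obstacle_m=60.0,
                    latency_s=0.2, decel_mps2=8.0))  # True: 5 + 39.1 < 60
print(stops_in_time(speed_mps=25.0, obstacle_m=40.0,
                    latency_s=0.5, decel_mps2=6.0))  # False: 12.5 + 52.1 > 40
```

A real co-simulation replaces these closed-form terms with detailed brake, tire, and sensor models, but the question it answers has the same shape: given this vehicle and this scenario, is the margin positive?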
The OES will help inform testing of the vehicle, but what about testing the automated driver? This is the most significant challenge in automated driving. We know how to test human drivers and give them a driver's license. We don't yet know how to test a software driver. While the OES supports testing of vehicles, in the future it may also help us test automated "drivers."
Additionally, NIST held a workshop last year, asking AV community stakeholders how NIST can support automated vehicle development. NIST identified the need to study:
* interactions between the systems on the car;
* sensing and perception, such as recognizing and responding to objects in or around the road;
* cybersecurity research related to adversarial machine learning;
* risk associated with AI components; and
* automated vehicle communications.
Once facilities, equipment, and expertise are in place, NIST will work with automotive and government experts to identify critical automated vehicle behaviors and how to best measure these behaviors.
The goal of NIST’s efforts will be to set the standards for AV testing nationwide so these vehicles can operate safely.
What Will a Driverless Future Look Like?
We are exploring the use of our simulation tools to help test the safety of automated vehicles. Government, manufacturing and technology companies, researchers, and other stakeholders can contribute their best-in-class models. These contributions will help NIST explore trustworthiness metrics, including safety metrics, that will be vital to automated vehicle success.
As a driverless future nears, replacing human drivers with automated systems still raises critical questions. For example, perception by humans or machines will never be perfect, so investment in infrastructure may take up some of the slack. The NIST team will hold a workshop with infrastructure stakeholders this year to gather perspectives on how to measure the impact of these investments.
There are important questions to address regarding automated vehicles and their software drivers. Are they safe, secure, and trustworthy? Should safety be at least as good as the best human drivers? How can we compare the two?
Answering such questions will be a collaborative effort among experts in science, transportation, and human factors engineering.
Automated vehicles must be safe, with their risks understood and measured. Researchers, developers, and government agencies must share their safety assessments with each other. Just as drivers collaborate to keep everyone safe on the road, many organizations will be involved in keeping the autonomous vehicle future safe for everyone.
As drivers, we know how to drive, but that knowledge is difficult to describe or teach to others. Building a robot or a self-driving car that does what we humans do is a challenge.
And while fully automated vehicles may still be far down the road, when they do arrive, you can be sure NIST will be under the hood.
About the Author
Edward Griffor serves as associate director for cyber-physical systems (CPS) and Internet of Things (IoT) in the Smart Connected Systems Division at NIST. He has 10 years of experience as chief scientist in the auto industry, where he led the development of technologies for self-driving vehicles.
Griffor is an adjunct professor at the Center for Molecular Medicine and Genetics at Wayne State University and an adjunct professor of artificial intelligence at the University of Grenoble Alpes in France. His books include the Handbook of System Safety and Security, the Handbook of Computability, and the Mathematical Theory of Domains.
Griffor has a Ph.D. in mathematics from MIT and a habilitation (a European academic designation) in electrical engineering from the University of Oslo. His current work applies methods of physics, computing, mathematics, and biosciences to the assurance of CPS and IoT systems, including automated driving system (ADS), reactor protection system (RPS), and biomedical system assurance.
This article was originally published by NIST on its blog, Taking Measure.