Editor’s Note:

Consumer interest in autonomous vehicles is on the rise across the world, largely due to their potential for making transportation and mobility safer. Yet many consumers will remain wary until the vehicles are demonstrated, through rigorous road and highway testing, to have an impeccable safety record. For the automotive industry, achieving a future of “zero crashes” is a herculean task that requires multi-disciplinary research and development by some of the world’s best engineers.

MIT researchers are currently working to develop models and systems that can be used to improve the safety of driverless vehicles. In the two reports that follow, MIT News writer Rob Matheson describes two research projects aimed at helping autonomous vehicles avoid collisions in areas where they’re most vulnerable to risk—at traffic intersections, corners, and other areas where cars or pedestrians might suddenly emerge.

 

Better Autonomous ‘Reasoning’ at Tricky Intersections

MIT and Toyota researchers have designed a new model that weighs various uncertainties and risks to help autonomous vehicles determine when it’s safe to merge into traffic at intersections with objects obstructing views, such as buildings blocking the line of sight. Image courtesy of the researchers.

Model alerts driverless cars when it’s safest to merge into traffic at intersections with obstructed views.  

By Rob Matheson | MIT News

CAMBRIDGE, Mass.—MIT and Toyota researchers have designed a new model to help autonomous vehicles determine when it’s safe to merge into traffic at intersections with obstructed views.

Navigating intersections can be dangerous for driverless cars and humans alike. In 2016, roughly 23 percent of fatal and 32 percent of nonfatal U.S. traffic accidents occurred at intersections, according to a 2018 Department of Transportation study. Automated systems that help driverless cars and human drivers steer through intersections can require direct visibility of the objects they must avoid. When their line of sight is blocked by nearby buildings or other obstructions, these systems can fail.

The researchers developed a model that instead uses its own uncertainty to estimate the risk of potential collisions or other traffic disruptions at such intersections. It weighs several critical factors, including all nearby visual obstructions, sensor noise and errors, the speed of other cars, and even the attentiveness of other drivers. Based on the measured risk, the system may advise the car to stop, pull into traffic, or nudge forward to gather more data.

“When you approach an intersection, there is potential danger for collision. Cameras and other sensors require line of sight. If there are occlusions, they don’t have enough visibility to assess whether it’s likely that something is coming,” said Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “In this work, we use a predictive-control model that’s more robust to uncertainty, to help vehicles safely navigate these challenging road situations.”

The researchers tested the system in more than 100 trials of remote-controlled cars turning left at a busy, obstructed intersection in a mock city, with other cars constantly driving through the cross street. Experiments involved fully autonomous cars and cars driven by humans but assisted by the system. In all cases, the system successfully helped the cars avoid collision from 70 to 100 percent of the time, depending on various factors. Other similar models implemented in the same remote-control cars sometimes couldn’t complete a single trial run without a collision.

Risk is visualized here by vertical bars. Higher bars indicate a higher likelihood that a given spot in the intersection is occupied by another vehicle, meaning it’s unsafe to pull into the road. Instead, the vehicle must wait for a safe gap or nudge forward to gather more data. Image courtesy of the researchers.

Joining Rus on the paper are first author Stephen G. McGill, Guy Rosman, and Luke Fletcher of the Toyota Research Institute (TRI); graduate students Teddy Ort and Brandon Araki, researcher Alyssa Pierson, and postdoc Igor Gilitschenski, all of CSAIL; Sertac Karaman, an MIT associate professor of aeronautics and astronautics; and John J. Leonard, the Samuel C. Collins Professor of Mechanical and Ocean Engineering of MIT and a TRI technical advisor.

Modeling Road Segments

The model is specifically designed for road junctions in which there is no stoplight and a car must yield before maneuvering into traffic on the cross street, such as when taking a left turn across multiple lanes or entering a roundabout. In their work, the researchers split a road into small segments. This lets the model determine whether any given segment is occupied and, from that, estimate a conditional risk of collision.

Autonomous cars are equipped with sensors that measure the speed of other cars on the road. When a sensor clocks a passing car traveling into a visible segment, the model uses that speed to predict the car’s progression through all other segments. A probabilistic “Bayesian network” also considers uncertainties — such as noisy sensors or unpredictable speed changes — to determine the likelihood that each segment is occupied by a passing car.
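
As a rough illustration of this step (not the researchers’ implementation), the sketch below propagates one noisy speed measurement through a row of road segments and estimates, for each segment, the probability that it will be occupied during the planned merge window. The segment length, merge window, and noise level are assumed values chosen only for illustration.

    import numpy as np

    # Illustrative parameters (assumptions, not values from the paper)
    SEGMENT_LENGTH_M = 5.0      # length of each road segment
    NUM_SEGMENTS = 20           # segments along the cross street
    MERGE_WINDOW_S = 3.0        # time the merge maneuver is expected to take
    SPEED_NOISE_STD = 1.5       # std. dev. of the sensor's speed estimate (m/s)

    def occupancy_probabilities(detected_segment, measured_speed, n_samples=5000):
        """Monte Carlo approximation of per-segment occupancy probability.

        A car was detected in `detected_segment` moving at `measured_speed` (m/s).
        Sampling over speed noise, estimate the probability that each downstream
        segment is reached at some point during the merge window.
        """
        rng = np.random.default_rng(0)
        speeds = rng.normal(measured_speed, SPEED_NOISE_STD, n_samples)
        speeds = np.clip(speeds, 0.0, None)

        # Distance the detected car could travel during the merge window
        distances = speeds * MERGE_WINDOW_S
        segments_advanced = distances / SEGMENT_LENGTH_M

        probs = np.zeros(NUM_SEGMENTS)
        for seg in range(NUM_SEGMENTS):
            offset = seg - detected_segment
            if offset < 0:
                continue  # segments behind the detected car
            # Fraction of samples in which the car reaches this segment in time
            probs[seg] = np.mean(segments_advanced >= offset)
        return probs

    probs = occupancy_probabilities(detected_segment=2, measured_speed=10.0)
    print(np.round(probs, 2))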

Because of nearby occlusions, however, this single measurement may not suffice. If a sensor can never see a designated road segment, the model treats that segment as likely to be occupied, meaning there is increased risk of collision if the car simply pulls out into traffic from where it is positioned. This encourages the car to nudge forward to get a better view of all occluded segments. As the car does so, the model lowers its uncertainty and, in turn, the estimated risk.
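
Continuing in the same illustrative vein, occluded segments can be handled by assigning them a conservative occupancy prior, and nudging forward can be modeled as revealing more of those segments to the sensor. The 0.9 prior and the simple mean used as an aggregate risk proxy below are assumptions, not quantities from the paper.

    import numpy as np

    # Illustrative sketch (not the paper's formulation): segments the sensor
    # cannot see get a conservative occupancy prior, and nudging forward is
    # modeled as revealing more of those segments.

    NUM_SEGMENTS = 20
    OCCLUDED_PRIOR = 0.9   # assumed prior for segments with no line of sight

    def combined_occupancy(measured_probs, visible_mask):
        """Replace estimates for occluded segments with the conservative prior."""
        probs = measured_probs.copy()
        probs[~visible_mask] = OCCLUDED_PRIOR
        return probs

    # Sensed occupancy probabilities for visible segments (illustrative values)
    measured = np.linspace(0.6, 0.0, NUM_SEGMENTS)

    # Before nudging forward: a building hides segments 5-9
    visible = np.ones(NUM_SEGMENTS, dtype=bool)
    visible[5:10] = False
    before = combined_occupancy(measured, visible).mean()   # crude aggregate risk proxy

    # After nudging forward: only segments 8-9 remain hidden
    visible[5:8] = True
    after = combined_occupancy(measured, visible).mean()

    print(f"aggregate risk proxy before nudge: {before:.2f}, after nudge: {after:.2f}")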

But even if the model does everything correctly, there’s still human error, so the model also estimates the awareness of other drivers. “These days, drivers may be texting or otherwise distracted, so the amount of time it takes to react may be a lot longer,” McGill said. “We model that conditional risk, as well.”

Estimating that awareness means computing the probability that an approaching driver saw or didn’t see the autonomous car pulling into the intersection. To do so, the model looks at the number of segments the approaching car has passed through before reaching the intersection. The more segments it has occupied, the higher the likelihood it has spotted the autonomous car and the lower the estimated risk of collision.
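
One way to picture this, as a hedged sketch rather than the paper’s exact formulation, is to let the probability that an approaching driver has noticed the autonomous car grow with the number of segments that driver has traversed in view of the intersection, and to discount the collision risk accordingly. The saturation rate used below is an assumed value.

    import math

    # Illustrative sketch: awareness grows with segments traversed in view.
    # The saturation rate (0.3 per segment) is an assumption for illustration.

    def awareness_probability(segments_in_view, rate=0.3):
        """Probability the other driver has seen the autonomous car."""
        return 1.0 - math.exp(-rate * segments_in_view)

    def awareness_adjusted_risk(base_risk, segments_in_view):
        """Discount the base collision risk by the chance the driver is aware
        (an aware driver is assumed to brake or yield)."""
        p_aware = awareness_probability(segments_in_view)
        return base_risk * (1.0 - p_aware)

    for n in (0, 2, 5, 10):
        print(n, round(awareness_adjusted_risk(0.5, n), 3))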

The model sums all risk estimates from traffic speed, occlusions, noisy sensors, and driver awareness. It also considers how long it will take the autonomous car to steer a preplanned path through the intersection, as well as all safe stopping spots for crossing traffic. This produces a total risk estimate.

That risk estimate is updated continuously for wherever the car is located at the intersection. In the presence of multiple occlusions, for instance, the car will nudge forward, little by little, to reduce uncertainty. When the risk estimate is low enough, the model tells the car to drive through the intersection without stopping. Lingering in the middle of the intersection for too long, the researchers found, also increases the risk of a collision.
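
The overall decision logic might be sketched as a loop that recomputes the total risk at the car’s current position and chooses one of three actions. The thresholds and the placeholder functions below stand in for the quantities described above and are not the researchers’ actual implementation.

    # Illustrative decision loop. estimate_risk() stands in for the model's
    # combined estimate (traffic speed, occlusions, sensor noise, driver
    # awareness); the thresholds are assumptions chosen for illustration.

    GO_THRESHOLD = 0.1      # risk below this: safe to drive through
    NUDGE_THRESHOLD = 0.5   # risk below this: creep forward for a better view

    def choose_action(total_risk):
        if total_risk < GO_THRESHOLD:
            return "go"          # merge without stopping
        if total_risk < NUDGE_THRESHOLD:
            return "nudge"       # inch forward to reduce uncertainty
        return "stop"            # wait for a safer gap

    def merge_loop(estimate_risk, step_forward, drive_through, max_nudges=10):
        """Repeatedly re-evaluate risk at the current position until the car
        either merges or decides to keep waiting."""
        for _ in range(max_nudges):
            action = choose_action(estimate_risk())
            if action == "go":
                drive_through()
                return "merged"
            if action == "nudge":
                step_forward()   # lowers uncertainty about occluded segments
            # "stop": stay put and re-evaluate on the next iteration
        return "still waiting"

    # Toy usage: risk falls as the (simulated) car nudges forward
    risks = iter([0.8, 0.6, 0.4, 0.3, 0.05])
    print(merge_loop(lambda: next(risks), lambda: None, lambda: None))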

Assistance and Intervention

Running the model on remote-control cars in real time indicates that it’s efficient and fast enough to deploy into full-scale autonomous test cars in the near future, the researchers say. (Many other models are too computationally heavy to run on those cars.) The model still needs far more rigorous testing before it can be deployed in production vehicles.

The model would serve as a supplemental risk metric that an autonomous vehicle system can use to better reason about driving through intersections safely. The model could also potentially be implemented in certain “advanced driver-assistance systems” (ADAS), where humans maintain shared control of the vehicle.

Next, the researchers aim to include other challenging risk factors in the model, such as the presence of pedestrians in and around the road junction.

Reprinted with permission of MIT News (http://news.mit.edu/).

 

Helping Autonomous Vehicles See Around Corners

By sensing tiny changes in shadows, a new system identifies approaching objects that may cause a collision.

By Rob Matheson | MIT News

To improve the safety of autonomous systems, MIT engineers have developed a system that can sense tiny changes in shadows on the ground to determine if there’s a moving object coming around the corner.

Autonomous cars could one day use the system to quickly avoid a potential collision with another car or pedestrian emerging from around a building’s corner or from in between parked cars. In the future, robots that may navigate hospital hallways to make medication or supply deliveries could use the system to avoid hitting people.

In a paper presented at the International Conference on Intelligent Robots and Systems (IROS), the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, the car-based system beats traditional LiDAR — which can only detect visible objects — by more than half a second.

That may not seem like much, but fractions of a second matter when it comes to fast-moving autonomous vehicles, the researchers said.

“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” added co-author Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

Currently, the system has only been tested in indoor settings. Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.

Joining Rus on the paper are first author Felix Naser (SM ’19), a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; recent graduate Christina Liao (’19); Guy Rosman of the Toyota Research Institute; and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Extending ShadowCam

For their work, the researchers built on their system, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper, collaborated on the earlier versions of the system, which were presented at conferences in 2017 and 2018.

For input, ShadowCam uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner. It detects changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. Some of those changes may be difficult or impossible to detect with the naked eye, depending on various properties of the object and the environment. ShadowCam computes that information and classifies each image as containing a stationary object or a dynamic, moving one. If it detects a dynamic image, it reacts accordingly.
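
Stripped of the registration and amplification machinery described below, the classification step might look something like this minimal sketch, which compares mean intensity in a fixed floor patch across consecutive frames and flags the sequence as dynamic when the change exceeds a threshold. The patch coordinates, threshold, and synthetic frames are assumed values for illustration, not ShadowCam’s actual parameters.

    import numpy as np

    # Illustrative sketch of classifying a frame sequence as static or dynamic
    # from intensity changes in a region of interest (ROI). The ROI and the
    # threshold are assumptions for illustration.

    ROI = (slice(300, 400), slice(100, 300))   # rows, cols of the watched floor patch
    DYNAMIC_THRESHOLD = 1.0                    # mean absolute intensity change

    def classify_sequence(frames):
        """frames: list of grayscale images (2D numpy arrays) of equal size."""
        patches = [f[ROI].astype(np.float32) for f in frames]
        changes = [np.mean(np.abs(b - a)) for a, b in zip(patches, patches[1:])]
        return "dynamic" if max(changes) > DYNAMIC_THRESHOLD else "static"

    # Toy usage with synthetic frames: a faint "shadow" darkens part of the patch
    rng = np.random.default_rng(0)
    base = rng.uniform(100, 110, size=(480, 640)).astype(np.float32)
    shadowed = base.copy()
    shadowed[320:380, 150:250] -= 5.0          # subtle darkening, as a shadow would cause
    print(classify_sequence([base, base.copy(), shadowed]))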

Adapting ShadowCam for autonomous vehicles required a few advances. The early version, for instance, relied on lining an area with augmented reality labels called “AprilTags,” which resemble simplified QR codes. Robots scan AprilTags to detect and compute their precise 3D position and orientation relative to the tag. ShadowCam used the tags as features of the environment to zero in on specific patches of pixels that may contain shadows. But modifying real-world environments with AprilTags is not practical.

The researchers developed a novel process that combines image registration and a new visual-odometry technique. Often used in computer vision, image registration essentially overlays multiple images to reveal variations in the images. Medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.

Visual odometry, used for Mars rovers, estimates the motion of a camera in real time by analyzing pose and geometry in sequences of images. The researchers specifically employ “Direct Sparse Odometry” (DSO), which can compute feature points in environments similar to those captured by AprilTags. Essentially, DSO plots features of an environment on a 3D point cloud, and then a computer-vision pipeline selects only the features located in a region of interest, such as the floor near a corner. (Regions of interest were annotated manually beforehand.)
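
The selection step can be pictured as a simple geometric filter: keep only the feature points that fall inside the annotated region of interest. The rectangular region and synthetic points below are assumptions for illustration; the actual pipeline operates on DSO’s sparse point output.

    # Illustrative ROI filter over feature points from a visual-odometry front end.
    # The rectangular ROI and the synthetic points are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float   # image column (pixels)
        y: float   # image row (pixels)

    # Manually annotated region of interest: the floor patch near the corner
    ROI_X_MIN, ROI_X_MAX = 100, 300
    ROI_Y_MIN, ROI_Y_MAX = 300, 400

    def in_roi(p: Point) -> bool:
        return ROI_X_MIN <= p.x <= ROI_X_MAX and ROI_Y_MIN <= p.y <= ROI_Y_MAX

    def select_roi_features(points):
        """Keep only feature points that fall inside the annotated floor region."""
        return [p for p in points if in_roi(p)]

    tracked = [Point(150, 350), Point(50, 320), Point(250, 390), Point(400, 100)]
    print(select_roi_features(tracked))   # -> the two points on the watched floor patch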

As ShadowCam takes input image sequences of a region of interest, it uses the DSO-image-registration method to overlay all the images from the same viewpoint of the robot. Even as the robot is moving, ShadowCam is able to zero in on the exact same patch of pixels where a shadow is located, helping it detect any subtle deviations between images.
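
As a rough sketch of the registration idea, using a generic feature-and-homography approach in OpenCV rather than the paper’s DSO-based method, each incoming frame can be warped into the viewpoint of a reference frame so that the same patch of floor lines up across the sequence.

    # Illustrative registration of a frame to a reference viewpoint using ORB
    # features and a homography (a generic OpenCV approach, not the paper's
    # DSO-based pipeline).

    import cv2
    import numpy as np

    def register_to_reference(reference_gray, frame_gray):
        """Warp frame_gray into the viewpoint of reference_gray (grayscale images)."""
        orb = cv2.ORB_create(1000)
        kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
        kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:100]

        src = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference_gray.shape
        return cv2.warpPerspective(frame_gray, H, (w, h))

    # After registration, the same ROI indices refer to the same patch of floor
    # in every frame, so subtle shadow changes can be compared pixel by pixel.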

Next is signal amplification, a technique introduced in the first paper. Pixels that may contain shadows get a boost in color that raises the signal-to-noise ratio, making extremely weak signals from shadow changes far more detectable. If the boosted signal reaches a certain threshold, based partly on how much it deviates from other nearby shadows, ShadowCam classifies the image as “dynamic.” Depending on the strength of that signal, the system may tell the robot to slow down or stop.
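
A minimal sketch of the amplification-and-threshold step, under assumed gain and threshold values, might amplify each pixel’s deviation from its temporal mean inside the registered region and flag a frame as dynamic when the average amplified deviation crosses the threshold.

    import numpy as np

    # Illustrative sketch of amplifying weak shadow signals inside a registered
    # ROI and thresholding the result. Gain and threshold are assumptions.

    GAIN = 8.0                # amplification applied to deviations from the mean
    DYNAMIC_THRESHOLD = 4.0   # amplified mean deviation needed to call a frame dynamic

    def amplify(roi_frames):
        """roi_frames: (T, H, W) stack of registered grayscale ROI crops."""
        stack = roi_frames.astype(np.float32)
        baseline = stack.mean(axis=0)                  # per-pixel temporal mean
        return (stack - baseline) * GAIN + baseline    # exaggerate small deviations

    def classify(roi_frames):
        amplified = amplify(roi_frames)
        deviation = np.abs(amplified - amplified.mean(axis=0)).mean(axis=(1, 2))
        return ["dynamic" if d > DYNAMIC_THRESHOLD else "static" for d in deviation]

    # Toy usage: the last frame carries a faint darkening, as an approaching shadow would
    rng = np.random.default_rng(1)
    frames = rng.uniform(100, 101, size=(4, 60, 80)).astype(np.float32)
    frames[-1, 20:40, 30:60] -= 5.0
    print(classify(frames))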

“By detecting that signal, you can then be careful. It may be a shadow of some person running from behind the corner or a parked car, so the autonomous car can slow down or stop completely,” Naser said.

Tag-free Testing

In one test, the researchers evaluated the system’s performance in classifying moving or stationary objects using AprilTags and the new DSO-based method. An autonomous wheelchair steered toward various hallway corners while humans turned the corner into the wheelchair’s path. Both methods achieved the same 70-percent classification accuracy, indicating AprilTags are no longer needed.

In a separate test, the researchers implemented ShadowCam in an autonomous car in a parking garage, where the headlights were turned off, mimicking nighttime driving conditions. They compared car-detection times versus LiDAR. In an example scenario, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR. Moreover, because the researchers had tuned ShadowCam specifically to the garage’s lighting conditions, the system achieved a classification accuracy of around 86 percent.

Next, the researchers are developing the system further to work in different indoor and outdoor lighting conditions. In the future, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.

This work was funded by the Toyota Research Institute.

Reprinted with permission of MIT News (http://news.mit.edu/).
