Self-Driving Cars Could Soon Have Their Own Special Memories


Detection systems for future autonomous vehicles can use new algorithms that keep learning from data collected while the car drives or sits parked, so self-driving cars get safer and smarter over time.
Many of us have fond memories of the cars we’ve owned: the ’55 Bel Air that served a successful courtship, the ’67 VW van you and your pals drove cross-country on an unforgettable road trip, the late-’90s Toyota Sienna that quickly became a loyal family member. Self-driving cars can now create their own special memories, ones that could help drivers safely navigate new routes in bad weather and unfamiliar environments.
  
The main problem with detection systems used to develop future autonomous vehicles is that their neural networks don’t store information the car’s sensors pick up during each trip, no matter how many times the vehicle travels the same route. The systems also tend to treat every scene as an unknown and ignore potentially valuable information that could make detection far more accurate, according to researchers at Cornell University, who recently wrote two papers to help solve those and other problems.
 
“Most of the time the systems work surprisingly well, especially in good conditions,” said Kilian Weinberger, a senior author of the papers and professor of computer science at Cornell. 


“The issue is that if car designs change over time, or if the weather changes, or if you go to a different destination, things suddenly look different to the vehicle’s detection system. If you want self-driving cars everywhere, then this method does not scale up.”
 
At an IEEE conference in June, researchers led by doctoral student Yurong You presented two papers: “Hindsight Is 20/20: Leveraging Past Traversals To Aid 3D Perception” and “Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection in Self-Driving Cars.”
 
“The car manufacturer teaches the algorithm to learn, but once the car’s deployed there is no more learning,” Weinberger said. “We came up with a safe method that enables learning throughout.”

Autonomous vehicles employ a combination of machine-learning algorithms, sensors, radar, and LiDAR to detect movement around the vehicle, avoid accidents, read signs and traffic lights, and identify objects like trees and pedestrians. LiDAR uses pulses of light to sense the surrounding scene, capturing each object’s location and shape as a set of points known as a 3D point cloud. Car manufacturers and detection-system developers typically pay people to record and “label” millions of images from different areas; data from those coded images are then fed into the algorithms to help the vehicle recognize certain objects and navigate safely around them.
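
To make those ingredients concrete, here is a minimal Python sketch of what a single LiDAR sweep and one human-made label might look like. The field names and values are hypothetical illustrations for this article, not the researchers’ data format.

    import numpy as np

    # One LiDAR sweep: N points, each with x, y, z (meters) and reflectance.
    # Real sweeps contain on the order of 100,000 points; four are shown here.
    point_cloud = np.array([
        [12.1,  0.4, -1.2, 0.55],  # return from the road surface
        [12.3,  0.5,  0.3, 0.80],  # side of a parked car
        [30.7, -2.1,  1.9, 0.20],  # tree canopy
        [ 8.9,  3.0,  0.8, 0.95],  # pedestrian torso
    ])

    # A human "label": a 3D box drawn around one object, used to train
    # the detector to recognize that object class.
    label = {
        "class": "pedestrian",
        "center_xyz": (8.9, 3.0, 0.3),  # box center, meters
        "size_lwh": (0.6, 0.6, 1.7),    # length, width, height, meters
        "heading_rad": 1.57,            # orientation (yaw) of the box
    }

    print(point_cloud.shape, label["class"])  # (4, 4) pedestrian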
 
The systems, however, have trouble accurately identifying objects in heavy snow and rain, on snow-covered roads, and from a distance. The systems don’t “remember” or record what they detect and do not put the information into context, Weinberger said, adding that they also don’t yet share, receive, and store information collected by other vehicles in real time.

The new research, which combines two new frameworks developed by the Cornell team, is designed to address those limitations.
 
“By putting the collected information into context, we’re showing that self-driving cars can get better and better over time,” Weinberger said. “They can improve significantly as they drive around because the car still learns while it’s in operation.”


The researchers’ first paper focuses on HINDSIGHT, a trainable framework that collects rich, contextual information from past travels that current systems often disregard and adds it to a data set the system can easily query. The second paper focuses on MODEST, a framework that builds on HINDSIGHT by detecting, aggregating, and sharing unlabeled LiDAR information collected from vehicles traveling the same route. The vehicle also uses the MODEST information to improve its detection accuracy while it is parked or offline.
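
One way to picture HINDSIGHT’s queryable data set is as a memory indexed by map location: features computed on past drives are filed under the grid cell where they were observed, and the live detector looks up whatever is stored for the spot the car is currently passing. The Python sketch below is our own simplified reading of that idea, not the published implementation; the grid size, the averaging step, and every name in it are invented for illustration.

    from collections import defaultdict
    import numpy as np

    CELL_SIZE_M = 5.0  # assumed grid resolution for indexing past drives

    def cell_key(x, y):
        """Quantize a world position (meters) into a map-grid cell."""
        return (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))

    # Location-indexed memory: grid cell -> feature vectors from past drives.
    past_traversals = defaultdict(list)

    def store_traversal(positions, features):
        """After a drive, file each frame's features under its map cell."""
        for (x, y), feat in zip(positions, features):
            past_traversals[cell_key(x, y)].append(feat)

    def query_memory(x, y):
        """At detection time, fetch aggregated context for the current spot."""
        feats = past_traversals.get(cell_key(x, y))
        if not feats:
            return None  # never driven here: fall back to live sensing alone
        return np.mean(feats, axis=0)  # one simple aggregation choice

    # Two past drives past the same intersection, then a live query.
    store_traversal([(10.0, 4.0)], [np.array([0.1, 0.9])])
    store_traversal([(11.0, 4.5)], [np.array([0.3, 0.7])])
    print(query_memory(10.5, 4.2))  # context the detector can condition on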
 
To test HINDSIGHT, the researchers, led by doctoral student Carlos Andres Diaz, drove a car equipped with LiDAR sensors 40 times over 18 months around a nine-mile loop in Ithaca, N.Y., compiling a dataset of more than 600,000 images in a variety of environments and weather conditions. In the most challenging scenarios, especially when detecting objects at a far distance, HINDSIGHT improved the existing detection system’s precision by more than 300 percent, the researchers wrote. MODEST underwent similar tests.
 
The frameworks are compatible with most modern 3D detection systems at no extra cost and require no equipment beyond what’s typically found in modern vehicles. The HINDSIGHT code is available at https://github.com/YurongYou/Hindsight. The researchers are working on a third paper aimed at further improving both frameworks.

“As researchers, our goal is to identify open problems, find solutions, and make those solutions available to improve society for everybody,” Weinberger said. “The thing that’s really exciting about HINDSIGHT is that it improves the perception and accuracy in every scenario, with every single system we tested it on. We couldn’t find a single setting in which it didn’t actually do better than the same system without HINDSIGHT.”

 
The large amount of detailed information HINDSIGHT collected showed the researchers how much existing detection systems underutilize data that is essentially free. With so many people driving the same routes over and over, the researchers began to think about how to share the data between vehicles, improve its accuracy, and optimize how the detection system uses it. They came up with the MODEST framework.
 
MODEST takes HINDSIGHT to the next level by removing any reliance on human-annotated data. Once a car has driven some routes repeatedly, it can detect objects no matter where it is driven. The car then uses the data it collects to autonomously retrain itself while not in use. The researchers call it “dreaming.”
 
The system works by taking the best or “confident” detections of nearby objects and estimating their states, such as location, size, and speed. It then compares that information with past data collected on the same route to discover and predict challenging situations the detection system may have missed or misread. While the idle car plays back the data, the system autonomously labels the frames that were missed.
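
In code terms, that loop might look something like the sketch below: a hedged reconstruction from the description above, not the MODEST source. The detector, the confidence threshold, and the retraining routine are all stand-ins.

    CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for a "confident" detection

    def dream(detector, logged_frames, retrain):
        """Self-training while parked: replay logs, label them, retrain."""
        pseudo_labels = []
        for frame in logged_frames:
            # Each detection is assumed to carry a box, a class, and a score.
            for det in detector(frame):
                # Keep only the most confident outputs as machine-made labels,
                # so the loop is less likely to reinforce its own mistakes.
                if det["score"] >= CONFIDENCE_THRESHOLD:
                    pseudo_labels.append((frame, det))
        # Retrain on the pseudo-labels; repeating this over many idle periods
        # lets the detector recover objects it missed on earlier passes.
        return retrain(detector, pseudo_labels)

Filtering on confidence is the safety valve in this kind of loop: because only the detections the system already trusts become training data, the detector can improve without a human ever labeling a frame.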
 
“That’s powerful because it unlocks the option to have self-driving cars everywhere,” Weinberger said.
 
Jeff O’Heir is a science and technology writer in Huntington, NY.
 
