Autonomous Vehicles with Depth Perception, Part 1

When we look at something, we see quite a bit more than the object we’re focusing on thanks to our peripheral vision. And we can immediately and intuitively tell how far away that something is. Not so with robots.

While imaging systems built around cameras and software allow robots to “see,” giving them panoramic vision and depth perception requires far more complicated vision systems.

But the wide-angle views and depth perception that robots typically lack would give machines such as drones and self-driving cars much more feedback to use as they navigate their worlds, says Donald Dansereau, a Stanford University postdoctoral fellow in electrical engineering.

Dansereau was part of a research team at two California universities that developed a 4D camera with capabilities not found in the single-lens image-capture systems typically used for robotic vision.

Researchers with a prototype of their single-lens, panoramic-view camera for easier robotic navigation. Image credit: L.A. Cicero

The single-lens panoramic light-field camera gives robots a 138-degree field of view and can quickly calculate the distance to objects—lending depth perception to robotic sight, say researchers at Stanford University and the University of California, San Diego, which teamed up on the project.

The capabilities will help robots move through crowded areas and across landscapes partially obscured by sleet and snow.

“As autonomous cars take to the streets and delivery drones to the skies, it's crucial that we endow them with truly reliable vision,” Dansereau says.

Current robots have to move through their environment while their onboard imaging systems gather different images and piece them together into a complete view built from separate perspectives, he adds.

With the new 4D camera, robots could gather the same information from a single image, says Gordon Wetzstein, Stanford assistant professor of electrical engineering. Wetzstein’s lab collaborated on the project with the lab of Joseph Ford, an electrical engineering professor at the University of California, San Diego.
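To give a rough sense of how depth can come out of a single light-field exposure, the sketch below shows a simplified, hypothetical calculation—not the Stanford/UCSD team’s actual algorithm. A light-field camera records the same scene from slightly shifted viewpoints at once; if two of those sub-aperture views, a known baseline apart, are compared patch by patch, the pixel shift (disparity) of the best match converts to distance through the standard relation depth = focal length × baseline / disparity. The function and parameter names here are illustrative assumptions.

# Minimal sketch: depth from two hypothetical sub-aperture views of a light field.
# `left` and `right` are grayscale images (H x W NumPy arrays) captured in one exposure.

import numpy as np

def estimate_depth(left, right, focal_px, baseline_m, patch=7, max_disp=32):
    """Brute-force block matching; returns a per-pixel depth map in meters."""
    h, w = left.shape
    half = patch // 2
    depth = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 1, np.inf
            for d in range(1, max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((ref - cand) ** 2)  # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            # Larger apparent shift between views means the object is closer.
            depth[y, x] = focal_px * baseline_m / best_d
    return depth

In practice, a light-field camera offers many such view pairs in one shot, so a robot can recover depth without physically moving to collect images from different positions.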

The extra dimension is the wider field of view.

Humans typically have about a 135-degree vertical and a 155-degree horizontal field of view. Those numbers refer to the total area across which a person can see objects with peripheral vision while focusing on a central point, according to the researchers.

For a robot, the difference between the new lens and a typical lens is the difference between looking through a window and looking through a peephole, Dansereau says.

Researchers in Ford’s lab designed the new camera’s spherical lens, which gives it a sightline spanning nearly one third of the circle around it. The camera no longer needs the fiber bundles used in an earlier version the lab developed. Instead, the new lens relies on several smaller lenses nested behind it, along with digital signal processing.

All of this solved one set of issues. But how would the researchers add another critical component: depth perception? To find out, read Part 2.

Jean Thilmany is an independent writer.

