
Using computer vision to help autonomous vehicles see around corners

Computer vision researchers say they have devised a new technique to enable autonomous vehicles and other machine intelligence systems to “see around corners.”

The researchers at Carnegie Mellon University, the University of Toronto and University College London used special sources of light, combined with sensors and computer vision processing, to reconstruct the shapes of unseen objects in great detail.

This means future autonomous vehicles and other machine intelligence systems might not need a direct line of sight to gather highly detailed image data.

It is the first time researchers have been able to compute millimetre- and micrometre-scale shapes of curved objects, said Ioannis Gkioulekas, an assistant professor in Carnegie Mellon's Robotics Institute.

This provides an important new component to a larger suite of non-line-of-sight (NLOS) imaging techniques now being developed by computer vision researchers.

However, there are some limitations. So far, researchers working on the project have only been able to use this technique effectively for “relatively small areas,” according to CMU Robotics Institute Professor Srinivasa Narasimhan.

But that limitation could be mitigated by combining this technique with others being developed in NLOS computer vision research.

"It is exciting to see the quality of reconstructions of hidden objects get closer to the scans we're used to seeing for objects that are in the line of sight," said Professor Narasimhan.

"This paper makes significant advances in non-line-of-sight reconstruction – in essence, the ability to see around corners," the award citation says. "It continues to push the boundaries of what is possible in computer vision."

How it works

Most of what people see – and what cameras detect – comes from light that reflects off an object and bounces directly to the eye or the lens. But light also reflects in other directions, bouncing off walls and other objects. A faint bit of this scattered light may ultimately reach the eye or the lens, but it is washed out by stronger, more direct light.
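
To get a rough sense of why that scattered light is so faint, consider a deliberately crude toy model in which every diffuse bounce attenuates the signal by the surface albedo and by an inverse-square falloff over the following segment. The albedo and distances below are assumptions chosen for illustration, not values from the study.

    def surviving_fraction(albedo, bounce_distances_m):
        """Toy estimate of how much light survives a series of diffuse bounces."""
        fraction = 1.0
        for distance in bounce_distances_m:
            fraction *= albedo / (distance ** 2)   # reflect, then spread out
        return fraction

    # One bounce: object -> camera over 2 m.
    direct = surviving_fraction(0.5, [2.0])
    # Three bounces: wall -> hidden object (1.5 m), object -> wall (1.5 m), wall -> camera (2 m).
    indirect = surviving_fraction(0.5, [1.5, 1.5, 2.0])
    print(f"scattered signal is roughly {indirect / direct:.0%} of the direct one in this toy model")

Even in this forgiving toy model the scattered signal is only a few per cent of the direct one; real scenes are far harsher, which is why that light is so easily washed out.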

NLOS techniques try to extract information from scattered light – naturally occurring or otherwise – and produce images of scenes, objects or parts of objects not otherwise visible.

"Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light," Gkioulekas said. "We're doing something that's complementary to those approaches – enabling NLOS systems to capture fine detail over a small area."

In this case, the researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By knowing when the laser fired pulses of light, the researchers could calculate the time the light took to reflect off the object, bounce off the wall on its return trip and reach a sensor.
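
As a minimal sketch of that arithmetic (with made-up timings and distances, not the study's actual setup), the photon arrival time fixes the total path length, and subtracting the known laser-to-wall and wall-to-sensor legs leaves the round trip between the wall and the hidden object:

    C = 299_792_458.0  # speed of light in metres per second

    def hidden_round_trip_m(arrival_time_s, laser_to_wall_m, wall_to_sensor_m):
        """Path length between the relay wall and the hidden object, out and back."""
        total_path_m = arrival_time_s * C   # total distance the pulse travelled
        return total_path_m - laser_to_wall_m - wall_to_sensor_m

    # Example: photons arrive 20 ns after the pulse fires; the laser-to-wall and
    # wall-to-sensor legs are each 1.5 m.
    round_trip_m = hidden_round_trip_m(20.0e-9, 1.5, 1.5)
    print(f"wall-to-object round trip: {round_trip_m:.2f} m")  # ~3.00 m

Halving that round trip puts the hidden surface roughly 1.5 m from the illuminated patch of wall.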

Similar to lidars used by self-driving cars

"This time-of-flight technique is similar to that of the lidars often used by self-driving cars to build a 3D map of the car's surroundings," said Shumian Xin, a Ph.D. student in robotics.

Previous attempts to use these time-of-flight calculations to reconstruct an image of the object have depended on the brightness of the reflections off it.

But in this study, Gkioulekas said the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.
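
As an illustration of what estimating curvature from geometry can look like (a generic sketch, not the algorithm described in the paper), the local curvature of a reconstructed surface profile can be recovered by fitting a circle through three neighbouring surface points and taking the reciprocal of its radius:

    import math

    def curvature_from_three_points(p1, p2, p3):
        """Curvature of the circle through three 2D points (0.0 if they are collinear)."""
        a = math.dist(p2, p3)
        b = math.dist(p1, p3)
        c = math.dist(p1, p2)
        # Twice the triangle area, via the cross product of two edge vectors.
        area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p3[0] - p1[0]) * (p2[1] - p1[1]))
        if area2 == 0.0:
            return 0.0
        circumradius = (a * b * c) / (2.0 * area2)
        return 1.0 / circumradius

    # Three points sampled from a circle of radius 0.5 m should give a curvature of ~2 per metre.
    points = [(0.5 * math.cos(t), 0.5 * math.sin(t)) for t in (0.0, 0.1, 0.2)]
    print(round(curvature_from_three_points(*points), 3))  # ~2.0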

The researchers used an imaging system that is effectively a lidar capable of sensing single particles of light to test the technique on objects such as a plastic jug, a glass bowl, a plastic bowl and a ball bearing. They also combined this technique with an imaging method called optical coherence tomography to reconstruct the images of US silver coins.

In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

The researchers are part of a larger collaborative team, which includes researchers from Stanford University, the University of Wisconsin-Madison, the University of Zaragoza, Politecnico di Milano and the French-German Research Institute of Saint-Louis, that is developing a suite of complementary techniques for NLOS imaging.
