A new system works by analyzing light at the edge of a wall, which is subtly affected by reflections from objects around the corner, out of the camera's view (via MIT CSAIL)

Self-driving cars, like people, are limited by their line of sight. But while humans have developed fixes—my favorite: the shapely periscope—smart vehicles still struggle to see around corners.

So, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created an imaging system that uses light reflections to detect objects in a hidden scene.

Imagine you’re walking down an L-shaped hallway, where your friend is pacing around the corner. He or she reflects a small amount of light onto the ground—nearly invisible to the naked eye—creating a sort of fuzzy shadow, or “penumbra.”

By filming those reflections and magnifying the colors, MIT’s “CornerCameras” system reveals changes in the lighting. Those changes are then used to construct a series of one-dimensional images, stitched together to reveal details about the unseen body.
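The paper’s full reconstruction algorithm is more sophisticated, but the rough intuition can be sketched in a few lines of Python with NumPy. The snippet below is only an illustrative approximation, not the authors’ implementation, and every name in it (frames, corner_xy, angular_profile_over_time) is hypothetical: it subtracts the static background from video of the floor patch near the corner, then averages the faint residual brightness changes into angular bins around the wall edge, producing the kind of one-dimensional, angle-versus-time trace the article describes.

```python
# Illustrative sketch only (not the CornerCameras implementation): pixels on
# the floor near a wall edge see different angular slices of the hidden scene,
# so binning small brightness changes by angle around the corner gives a rough
# 1-D "angle vs. time" picture of motion around the corner.
#
# Assumed, hypothetical inputs: `frames` is a (T, H, W) array of grayscale
# floor-patch video and `corner_xy` is the pixel location of the wall edge.

import numpy as np

def angular_profile_over_time(frames: np.ndarray, corner_xy, n_bins: int = 90) -> np.ndarray:
    """Return a (T, n_bins) array: mean brightness change per angular bin."""
    T, H, W = frames.shape
    cx, cy = corner_xy

    # Angle of every floor pixel as seen from the corner; keep the half-plane
    # in front of the wall edge (angles between 0 and pi).
    ys, xs = np.mgrid[0:H, 0:W]
    angles = np.arctan2(ys - cy, xs - cx)
    valid = (angles >= 0) & (angles <= np.pi)
    bin_idx = np.clip((angles / np.pi * n_bins).astype(int), 0, n_bins - 1)

    # Remove the static background so only the faint penumbra changes remain.
    background = frames.mean(axis=0)
    residual = frames - background

    # Average the residual brightness within each angular bin, per frame.
    counts = np.bincount(bin_idx[valid], minlength=n_bins)
    counts = np.maximum(counts, 1)  # avoid division by zero for empty bins
    profiles = np.zeros((T, n_bins))
    for t in range(T):
        sums = np.bincount(bin_idx[valid], weights=residual[t][valid], minlength=n_bins)
        profiles[t] = sums / counts
    return profiles

if __name__ == "__main__":
    # Synthetic noise stands in for real video of the floor beside a corner.
    rng = np.random.default_rng(0)
    fake_frames = rng.normal(size=(60, 120, 160))  # 60 frames, 120x160 pixels
    prof = angular_profile_over_time(fake_frames, corner_xy=(0.0, 60.0))
    print(prof.shape)  # (60, 90): one angular profile per frame
```

Stacking those one-dimensional profiles over time is what lets such a system infer not just where something is around the corner, but which way it is moving.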

“Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going,” Katherine Bouman, lead author on a paper about the system, said in a statement.

“In this way, we show that walls and other obstructions with edges can be exploited as naturally occurring ‘cameras’ that reveal the hidden scenes beyond them,” she continued.

The technology, which works in a wide range of indoor and outdoor environments and with off-the-shelf cameras, could prove useful to firefighters searching for people in burning buildings and to motorists detecting pedestrians in their blind spots.

“If a little kid darts into the street, a driver might not be able to react in time,” Bouman said. “While we’re not there yet, a technology like this could one day be used to give drivers a few seconds of warning time and help in a lot of life-or-death situations.”

Not yet ready for prime time, CSAIL’s CornerCameras system must still overcome obstacles such as low or changing light conditions and a signal that weakens farther from the corner.

The team will soon test their tech on a wheelchair, in hopes of eventually adapting it for various types of vehicles, including autonomous cars.

The full study—written by Bouman, with MIT professors Bill Freeman, Antonio Torralba, Greg Wornell, and Fredo Durand, master’s student Vickie Ye, and PhD student Adam Yedidia—is available online. Bouman will present the work at this month’s International Conference on Computer Vision in Italy.