Computation bringing 3-D imaging to light

18 May 2015

Photography used to be limited to flat, two-dimensional images. Today, the art and science of image capture is spurred by advances in optics, computation and electronics, and Cornell engineers are working at the cutting edge of 3-D imaging.

Suren Jayasuriya, a graduate student in the lab of Alyosha Molnar, associate professor of electrical and computer engineering, is developing a 3-D camera with specially designed image sensors that could lead to previously unimagined applications, from smart cars to medical imaging to visually stunning computer graphics.

The sensors, which are made of pixels that can detect both the intensity and incident angle of light, can digitally refocus a photograph after an image is taken, get different perspective views of a scene from a single shot, and compute an image depth map.
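
The digital refocusing step, in particular, can be thought of as a standard light-field computation: each angular view of the scene is shifted in proportion to its viewing angle, and the shifted views are averaged. Below is a minimal Python sketch of that shift-and-add idea; the array layout, the function name and the alpha parameter are illustrative assumptions, not the Molnar lab's actual processing pipeline.

    import numpy as np

    def refocus(light_field, alpha):
        # light_field: array of shape (U, V, H, W), one sub-aperture view
        # per angular sample (u, v). alpha sets the virtual focal depth;
        # 0 keeps the captured focus, other values pull it nearer or farther.
        # (Names and layout are assumptions for illustration only.)
        U, V, H, W = light_field.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its angular offset from
                # the centre of the aperture, then accumulate.
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

A depth map can be derived from the same data: the shift that brings a given image patch into sharpest focus indicates how far away it is.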

In support of the work, Jayasuriya recently received a $100,000 Qualcomm Innovation Fellowship for his joint proposal with Achuta Kadambi, a doctoral student in Ramesh Raskar's MIT Media Lab Camera Culture group. Their proposal is called "Nanophotography: Computational CMOS Sensor Design for 3-D Imaging."

"What's exciting about angle-sensitive pixels is that it's innovating on the detector side, to help motivate new applications in computer graphics and vision, where we're giving more dimensionality to our data, at a cost of computation," Jayasuriya said. "But the way things are scaling, with Moore's Law and [graphics processing units] and parallel computing, computation is becoming less and less of a problem. The age of big data is here. Now, it's more like, what data do we present to these algorithms to make them smarter?"

 
A prototype angle-sensitive pixel camera is on the left. In the center, data is recorded and processed to recover a high-resolution light field. At right, details can be recovered from a single camera image. Image credit: Suren Jayasuriya
In other words, image capture is no longer about just taking a picture. It's about capturing an image and then using machine learning and computation to post-process it, all in the blink of an eye.

For the Qualcomm project, they are working on a depth sensor based on an imaging technique called "time of flight," an increasingly popular approach used, notably, in Microsoft Kinect cameras.

Time-of-flight imaging measures the time it takes photons to reflect off objects in a scene. The researchers are adding time-of-flight coding to enable their imaging system to visualize light as it travels through a scene, and to see around corners. By capturing light in flight, the researchers can build a camera that effectively performs at 1 billion frames per second through post-processing computation.
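
The depth calculation behind time of flight is simple: light travels out to the object and back, so distance is half the speed of light multiplied by the measured round-trip time. Here is a quick numerical illustration, assuming an idealized single-pulse measurement (real sensors such as the Kinect typically recover the delay from the phase of a modulated signal, and the function name here is hypothetical):

    C = 299792458.0  # speed of light, metres per second

    def tof_depth(round_trip_seconds):
        # The photon covers the camera-to-object distance twice,
        # so depth is half the total path: d = c * t / 2.
        return C * round_trip_seconds / 2.0

    # A 10-nanosecond round trip corresponds to roughly 1.5 metres.
    print(tof_depth(10e-9))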

The angle-sensitive pixel image sensors are made using what's called a complementary metal-oxide-semiconductor (CMOS) process, a well-established chip-making technique. That's one of the advantages Jayasuriya brings to the project; his adviser, Molnar, has many years of experience designing CMOS-based chips for imaging, biomedical and radio frequency applications.

Jayasuriya and Kadambi's proposal was one of eight Qualcomm Fellowship winners chosen from 146 applicants. They will share the $100,000 prize.