New camera chip provides superfine 3-D resolution
04 Apr 2015
Imagine you need an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer, and within minutes have a replica accurate to within microns of the original object.
A 3-D image produced by the new NCI chip. The image, taken from roughly half a meter (about 1.6 feet) away, shows the height of a US penny at various points. Image credit: Ali Hajimiri/Caltech
This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.
Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.
A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.
The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in the February 2015 issue of Optics Express.
In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera, meaning that the pixels provide no information about the relative distance of the object from the camera. In contrast, each pixel in an image created by the Caltech team's NCI provides both distance and intensity information. "Each pixel on the chip is an independent interferometer (an instrument that uses the interference of light waves to make precise measurements) which detects the phase and frequency of the signal in addition to the intensity," says Hajimiri.
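To make the idea of a per-pixel interferometer concrete, the short Python sketch below simulates one such pixel: a reflected wave is mixed with a reference copy of the same laser, and the phase of the mixed signal reveals how much farther the reflected light traveled. The wavelength and path length are assumed values chosen for illustration; this is a simplified model, not the chip's actual signal chain.

```python
import numpy as np

wavelength = 1.55e-6      # assumed laser wavelength in metres (telecom band)
extra_path = 3.7e-6       # assumed extra round-trip path of the reflection, metres

t = np.linspace(0.0, 1e-12, 4096)            # a short observation window
f_opt = 3.0e8 / wavelength                   # optical frequency of the laser
phase_shift = 2 * np.pi * extra_path / wavelength
reflected = 0.5 * np.cos(2 * np.pi * f_opt * t - phase_shift)

# Homodyne mixing: multiply the reflection by in-phase and quadrature copies
# of the reference laser and keep the slowly varying (averaged) part.
i_comp = np.mean(reflected * np.cos(2 * np.pi * f_opt * t))
q_comp = np.mean(reflected * np.sin(2 * np.pi * f_opt * t))
recovered_phase = np.arctan2(q_comp, i_comp) % (2 * np.pi)

# The phase wraps every wavelength of extra path, so phase alone recovers
# only the fractional part of the path (modulo one wavelength).
recovered_path = recovered_phase / (2 * np.pi) * wavelength
print(f"recovered fractional path: {recovered_path * 1e6:.2f} micrometres")
```

Because a full wavelength of extra path corresponds to only about a micron and a half here, even a small measurable phase change corresponds to a sub-micron change in distance, which is where the fine depth sensitivity comes from.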
Three-dimensional map of the hills and valleys on a U.S. penny, obtained with the nanophotonic coherent imager at a distance of 0.5 meters.
The new chip utilizes an established technology called LIDAR (light detection and ranging), in which a target object is illuminated with scanning laser beams. The light that reflects off the object is analyzed based on the wavelength of the laser light used, allowing the LIDAR to gather information about the object's size and its distance from the laser and so build an image of its surroundings. "By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager," Hajimiri says.
The high-resolution images and depth information provided by the NCI are made possible by an optical concept known as coherence. If two light waves are coherent, they have the same frequency, and their peaks and troughs are exactly aligned with one another. In the NCI, the object is illuminated with this coherent light. The light reflected off the object is then picked up by on-chip detectors called grating couplers, which serve as "pixels": the light detected by each coupler represents one pixel in the 3-D image. On the NCI chip, the phase, frequency, and intensity of the reflected light from different points on the object are detected and used to determine the exact distance of each target point.
Because the coherent light has a consistent frequency and wavelength, it serves as a reference against which to measure differences in the reflected light. In this way, the NCI uses the coherent light as a sort of very precise ruler to measure the size of the object and the distance of each point on the object from the camera. The light is then converted into an electrical signal that contains intensity and distance information for each pixel, all of the information needed to create a 3-D image.
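As a rough illustration of that "ruler" idea, one widely used coherent-ranging scheme, frequency-modulated continuous-wave (FMCW) LIDAR, sweeps the laser frequency and measures the beat frequency between the outgoing light and its delayed reflection. The sketch below uses that scheme with assumed sweep parameters; it is meant only to show how a distance falls out of a frequency measurement, and the numbers are not taken from the Caltech chip.

```python
# A minimal sketch of frequency-modulated continuous-wave (FMCW) ranging,
# one common way a coherent LIDAR turns its laser into a distance "ruler".
# The sweep parameters below are assumed for illustration only.
C = 3.0e8              # speed of light, m/s
BANDWIDTH = 100e9      # assumed laser frequency sweep, Hz
SWEEP_TIME = 1e-3      # assumed duration of one sweep, s

def beat_from_distance(distance_m: float) -> float:
    """Beat frequency between the swept laser and its reflection delayed
    by the round trip to the target: f_beat = 2 * d * B / (c * T)."""
    return 2.0 * distance_m * BANDWIDTH / (C * SWEEP_TIME)

def distance_from_beat(beat_hz: float) -> float:
    """Inverse relation: d = c * f_beat * T / (2 * B)."""
    return C * beat_hz * SWEEP_TIME / (2.0 * BANDWIDTH)

# A target half a metre away (like the penny in the article) gives a beat
# of roughly 330 kHz with these assumed parameters, while a 1-micrometre
# change in depth shifts that beat by well under 1 Hz, which is why the
# beat's frequency and phase must be measured very precisely.
print(distance_from_beat(beat_from_distance(0.5)))                  # 0.5
print(beat_from_distance(0.5 + 1e-6) - beat_from_distance(0.5))     # ~0.67 Hz
```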
The use of coherent light not only enables 3-D imaging with the highest depth-measurement accuracy ever achieved in silicon photonics, but also allows the device to be made very small. "By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size, small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns," Hajimiri says.
The first proof of concept of the NCI has only 16 coherent pixels, meaning that the 3-D images it produces contain just 16 pixels at any given instant. However, the researchers also developed a method for imaging larger objects: first image a four-pixel-by-four-pixel section, then move the object in four-pixel increments to image the next section. With this method, the team used the device to scan and create a 3-D image of the "hills and valleys" on the front face of a U.S. penny, with micron-level resolution, from half a meter away.
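A hypothetical sketch of that stitching procedure is shown below: each exposure yields a 4 x 4 tile of depth values, and the tiles are assembled into a larger depth map as the object is stepped along. The `capture_tile` function is a stand-in for the real hardware readout and returns made-up data.

```python
# Hypothetical sketch of the tile-by-tile scanning described above.
import numpy as np

TILE = 4  # the prototype images 4x4 = 16 coherent pixels per exposure

def capture_tile(row0: int, col0: int) -> np.ndarray:
    """Placeholder for one NCI exposure: returns a TILE x TILE array of
    depths (in microns) for the region starting at (row0, col0)."""
    rng = np.random.default_rng(row0 * 1000 + col0)
    return rng.uniform(0.0, 50.0, size=(TILE, TILE))   # fake depth data

def scan_object(rows: int, cols: int) -> np.ndarray:
    """Build a (rows*TILE) x (cols*TILE) depth map by stepping the object
    one tile at a time and pasting each tile into place."""
    depth_map = np.zeros((rows * TILE, cols * TILE))
    for r in range(rows):
        for c in range(cols):
            depth_map[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE] = capture_tile(r*TILE, c*TILE)
    return depth_map

penny_scan = scan_object(rows=8, cols=8)   # a 32x32-pixel composite depth map
print(penny_scan.shape)                    # (32, 32)
```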
In the future, Hajimiri says, the current array of 16 pixels could easily be scaled up to hundreds of thousands. By creating such vast arrays of these tiny LIDARs, the imager could one day be applied to a broad range of applications, from very precise 3-D scanning and printing, to helping driverless cars avoid collisions, to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient's eyes and the most minute changes in a patient's heartbeat can be detected on the fly.
"The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands new of uses for such systems by incorporating them into personal devices such as smartphones," he says.
The study is described in a paper titled "Nanophotonic coherent imager." In addition to Hajimiri, Caltech coauthors include Firooz Aflatouni, a former postdoctoral scholar who is now an assistant professor at the University of Pennsylvania; graduate student Behrooz Abiri; and Angad Rekhi (BS '14). The work was partially funded by the Caltech Innovation Initiative.