New technique accurately digitizes transparent objects

22 Sep 2017

A new imaging technique makes it possible to precisely digitize clear objects and their surroundings, an achievement that has eluded current state-of-the-art 3D rendering methods.

The ability to create detailed, 3D digital versions of real-world objects and scenes can be useful for movie production, creating virtual reality experiences, improving design or quality assurance in the production of clear products, and even for preserving rare or culturally significant objects.

"By more accurately digitising transparent objects, our method helps move us closer to eliminating the barrier between the digital and physical world," says Jonathan Stets, Technical University of Denmark, and co-leader of the research team that developed the pipeline. "For example, it could allow a designer to place a physical object into a digital reality and test how changes to the object would look."

Transparent objects are challenging to digitise because their appearance comes almost entirely from their surroundings. A CT scanner can capture a clear object's shape, but doing so requires removing the object from its surroundings and lighting, and those surroundings must themselves be captured to accurately recreate the object's appearance.

The researchers detail their approach to digitising transparent objects in The Optical Society journal Applied Optics. A key innovation in developing the new method was the use of a robotic arm to record the precise locations of two cameras used to image scenes containing a clear object. This detailed spatial information allowed the researchers to photograph the scene, remove the object and scan it in a CT scanner, and then place it back into the scene, both digitally and in real life, to accurately compare the real-life scene with its virtual reconstruction.

Pixel-by-pixel comparison

"The robotic arm allows us to obtain a photograph and a 2D computed, or rendered, image that can be compared pixel by pixel to measure how well the images match," said Alessandro Dal Corso, co-leader of the research team. "This quantitative comparison was not possible with previous techniques and requires extremely precise alignment between the digital rendering and photograph."
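
A minimal sketch of the kind of pixel-by-pixel comparison described above, assuming two pre-aligned images of the same size; the file names and the choice of RMSE as the score are illustrative, not taken from the paper:

```python
import numpy as np
import imageio.v3 as iio

# Load the photograph and the rendered image; file names are placeholders.
photo = iio.imread("photograph.png").astype(np.float64) / 255.0
render = iio.imread("rendering.png").astype(np.float64) / 255.0

assert photo.shape == render.shape, "Images must be aligned and equal in size"

# Per-pixel absolute difference, useful for visualising where the match fails.
diff = np.abs(photo - render)

# A single scalar score: root-mean-square error over all pixels and channels.
rmse = np.sqrt(np.mean(diff ** 2))
print(f"RMSE between photograph and rendering: {rmse:.4f}")
```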

Once the digital versions of the objects are finalised, the method provides information about the object's material properties that are distinct from its shape. "This allows the scanned glass objects to still look realistic when placed in a completely different digital environment," explained Jeppe Frisvad, a member of the research team. "For example, it could be placed on a table in a digital living room or on the counter in a virtual kitchen."

Using an optical setup containing readily available components, the researchers tested their new workflow by digitising three scenes, each containing a different glass object on a table with a white and gray checkerboard backdrop.

They began by acquiring structured light scans of the scene, an imaging method that uses the deformation of a projected pattern to calculate the depth and surfaces of objects in the scene.
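The article does not give implementation details, but a rough sketch of how Gray-code structured light is commonly decoded, together with a toy rectified triangulation, conveys the idea; the function names, threshold, and rectified camera-projector assumption are all illustrative:

```python
import numpy as np

def decode_gray_code(captures, threshold=0.5):
    """Recover the projector column index seen by each camera pixel.

    `captures` is a list of grayscale images (values in [0, 1]), one per
    projected Gray-code bit plane, coarsest bit first.
    """
    bits = [(img > threshold).astype(np.uint32) for img in captures]
    # Convert Gray code to binary: b[0] = g[0], b[i] = b[i-1] XOR g[i].
    binary = [bits[0]]
    for g in bits[1:]:
        binary.append(binary[-1] ^ g)
    # Pack the bit planes into a single integer column index per pixel.
    column = np.zeros_like(binary[0])
    for b in binary:
        column = (column << 1) | b
    return column

def triangulate_depth(column, focal_px, baseline_m, pixel_x):
    """Toy depth for a rectified camera-projector pair: z = f * b / disparity."""
    disparity = np.abs(pixel_x - column).astype(np.float64)
    disparity[disparity == 0] = np.nan  # avoid division by zero
    return focal_px * baseline_m / disparity
```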

They also used a chrome sphere to acquire a 360-degree image of the surroundings. The scene was illuminated with LEDs arranged in an arc to capture how light coming from different angles interacted with the opaque parts of the scene. The researchers also separately scanned the glass objects in a CT scanner, which provided information to reconstruct the object's surface. Finally, the digital version of the scene and the rendered glass object were combined to produce a 3D representation of the whole scene.
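Capturing the surroundings with a chrome sphere is a standard light-probe technique: every pixel on the mirrored sphere reflects a different direction of the environment. A sketch of the geometry, under the simplifying assumption of an orthographic view of a mirror ball centred in the image (not the authors' exact procedure):

```python
import numpy as np

def mirror_ball_directions(size):
    """Environment directions for a mirror-ball (light probe) image.

    Assumes an orthographic view of the sphere filling a size x size image,
    with the camera looking along -z. Returns a (size, size, 3) array of
    unit world directions, NaN outside the sphere.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    # Normalised coordinates in [-1, 1] across the sphere.
    u = (xs + 0.5) / size * 2.0 - 1.0
    v = (ys + 0.5) / size * 2.0 - 1.0
    r2 = u**2 + v**2
    inside = r2 <= 1.0

    # Surface normal of the sphere at each pixel.
    nz = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    normal = np.stack([u, v, nz], axis=-1)

    # Reflect the view direction d = (0, 0, -1) about the normal:
    # r = d - 2 (d . n) n
    d = np.array([0.0, 0.0, -1.0])
    dn = normal @ d  # per-pixel dot product
    reflected = d - 2.0 * dn[..., None] * normal

    reflected[~inside] = np.nan
    return reflected
```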

Quantitative analysis showed that the images of the digital scene and the real-world scene matched well and that each step of the new imaging workflow contributed to the similarity between the rendered images and the photographs.

"Because the photographs are taken under controlled conditions, we can make quantitative comparisons that can be used to improve the reconstruction," said Frisvad. "For example, it is difficult to judge by eye if the object surface reconstructed from the CT scan is accurate, but if the comparison shows errors, then we can use that information to improve the algorithms that reconstruct the surface from the CT scan."

A new way to measure optical properties

The approach also provides a non-contact way to measure a material's optical properties. This makes the technique potentially useful for a wide range of applications beyond movies and virtual reality.

For example, the approach could allow researchers to create a digital rendering of an object and then tweak a parameter, such as the index of refraction, to better understand the properties of the real-life material. While previous technologies sometimes require chipping off a piece of the object to measure its optical properties, the new technique could be useful for analyzing rare or valuable transparent objects without harming them. The technique could also be applied to help engineers refine the design or manufacture of clear products.
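To give a sense of how a single parameter like the index of refraction drives appearance, the reflectance of a dielectric surface follows from the Fresnel equations; a small sketch below shows how tweaking that one value changes the result (the indices and angle are placeholder values, not measurements from the study):

```python
import numpy as np

def fresnel_reflectance(cos_theta_i, n1=1.0, n2=1.5):
    """Unpolarised Fresnel reflectance at a dielectric interface.

    n1 is the index of the incident medium (air), n2 the index of the glass;
    1.5 is just a typical placeholder value to tweak.
    """
    sin_theta_i = np.sqrt(np.clip(1.0 - cos_theta_i**2, 0.0, 1.0))
    sin_theta_t = n1 / n2 * sin_theta_i            # Snell's law
    if sin_theta_t >= 1.0:
        return 1.0                                 # total internal reflection
    cos_theta_t = np.sqrt(1.0 - sin_theta_t**2)
    rs = ((n1 * cos_theta_i - n2 * cos_theta_t) /
          (n1 * cos_theta_i + n2 * cos_theta_t)) ** 2
    rp = ((n1 * cos_theta_t - n2 * cos_theta_i) /
          (n1 * cos_theta_t + n2 * cos_theta_i)) ** 2
    return 0.5 * (rs + rp)

# Example: how the reflectance at 45 degrees changes with the index.
for n in (1.4, 1.5, 1.6):
    r = fresnel_reflectance(np.cos(np.radians(45.0)), n2=n)
    print(f"n = {n}: reflectance = {r:.3f}")
```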

The researchers want to expand their approach to other challenges in 3D rendering, such as rendering objects that exhibit a metallic shine or that are translucent. They are also working on ways to speed up acquisition of the various images and scans so that the approach could be used for quality assurance in the production of clear products.