Depth Image Rendering

By Dave
Published: 24 Oct 2012 19:01
Last modified: 13 Jan 2013 17:15

There are numerous ways to render depth data captured from Kinect. One option is a point-cloud, in which each depth sample is drawn as a point positioned in 3D space. In the absence of a 3D display, one way to convey depth in a still image is a stereogram, as shown below in Figure 1.
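As an aside, the point-cloud itself comes from back-projecting each depth pixel through a pinhole camera model. Here is a minimal sketch of that step; the intrinsics (fx, fy, cx, cy) are ballpark illustrative values, not the calibrated ones a real pipeline would use:

    import numpy as np

    def depth_to_points(depth_mm, fx=585.0, fy=585.0, cx=319.5, cy=239.5):
        # Back-project a Kinect depth image (in millimeters) to 3D points
        # using a pinhole camera model with illustrative intrinsics.
        h, w = depth_mm.shape
        v, u = np.mgrid[0:h, 0:w]                  # pixel row/column coordinates
        z = depth_mm.astype(np.float64) / 1000.0   # depth in meters
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.dstack((x, y, z)).reshape(-1, 3)
        return points[points[:, 2] > 0]            # drop zero-depth (invalid) pixels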

[Image: point-cloud stereogram]

Figure 1. Point-cloud stereogram1.
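A stereogram like this is just the same point-cloud rendered from two horizontally offset virtual cameras, with the two views placed side by side. A sketch, assuming a hypothetical render_view callable that renders the cloud for a given camera x-offset:

    import numpy as np

    def render_stereogram(render_view, eye_separation=0.065):
        # Compose a parallel-viewing ("look-through") stereogram. render_view
        # is a hypothetical callable taking a camera x-offset in meters and
        # returning an H x W x 3 image; 0.065 m is a typical eye separation.
        left = render_view(-eye_separation / 2)
        right = render_view(+eye_separation / 2)
        return np.hstack((left, right))  # left-eye view on the left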

In case you are wondering, I'm holding a wireless keyboard to control the image capture. Next I needed to map the texture from the color camera onto the point-cloud, as shown below in Figure 2.

[Image: point-cloud color stereogram]

Figure 2. Point-cloud stereogram1 with color mapping.
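The color mapping amounts to projecting each 3D point into the color camera and sampling the pixel it lands on. A sketch, again with illustrative intrinsics, and identity extrinsics standing in for the real depth-to-color calibration:

    import numpy as np

    def colors_for_points(points, color_image, R=np.eye(3), t=np.zeros(3),
                          fx=525.0, fy=525.0, cx=319.5, cy=239.5):
        # Project each 3D point into the color camera and sample its RGB
        # value. R and t are the depth-to-color extrinsics; identity here
        # as placeholders, since real values come from calibration.
        cam = points @ R.T + t
        u = np.round(cam[:, 0] * fx / cam[:, 2] + cx).astype(int)
        v = np.round(cam[:, 1] * fy / cam[:, 2] + cy).astype(int)
        h, w, _ = color_image.shape
        valid = (cam[:, 2] > 0) & (0 <= u) & (u < w) & (0 <= v) & (v < h)
        colors = np.zeros((len(points), 3), dtype=color_image.dtype)
        colors[valid] = color_image[v[valid], u[valid]]
        return colors, valid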

Another approach to simulating 3D without special display hardware is the anaglyph, as shown below in Figure 3. Anaglyphs do require special glasses2, but they avoid the training needed to "see" images such as stereograms.

[Image: point-cloud color anaglyph]

Figure 3. Point-cloud color anaglyph2.
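Compositing the anaglyph is a simple channel merge matched to the glasses described in footnote 2: the red channel comes from the left-eye render, and green plus blue (cyan) from the right. A sketch, assuming RGB channel order:

    import numpy as np

    def make_anaglyph(left_rgb, right_rgb):
        # Red-cyan anaglyph: red channel from the left-eye view,
        # green and blue channels from the right-eye view.
        anaglyph = right_rgb.copy()
        anaglyph[..., 0] = left_rgb[..., 0]
        return anaglyph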

Anaglyphs can be adjusted to move the image plane "forwards" or "backwards" in relation to the screen, as shown by the grayscale anaglyphs in Figures 4-6 below.

[Images: three point-cloud grayscale anaglyphs]

Figures 4-6. Point-cloud grayscale anaglyphs2 "behind", "co-planar" with, and "in front of" the screen plane.
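One way to make that adjustment after rendering is to shift one view horizontally before the channel merge: a uniform shift changes the disparity of every pixel, which moves the apparent screen plane. A sketch along the same lines as the merge above; which sign moves the scene "forwards" depends on the camera convention, so it is left as a parameter to tune:

    import numpy as np

    def anaglyph_with_plane_shift(left_rgb, right_rgb, offset_px):
        # Shift the right-eye view horizontally, then compose the red-cyan
        # anaglyph. In practice the wrapped edge columns would be cropped.
        shifted = np.roll(right_rgb, offset_px, axis=1)
        anaglyph = shifted.copy()
        anaglyph[..., 0] = left_rgb[..., 0]  # red from left, green/blue from right
        return anaglyph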

1 In order to perceive a 3D image the viewer must decouple the convergence and focusing of their eyes. Looking "through" the image initially results in four images; the eyes are correctly converged when the two center images overlap. At that point the eyes must be refocused without changing their convergence.

2 In order to perceive a 3D image the viewer must use colored filters, in this case red for the left eye and cyan for the right.
