I previously discussed an approach to visualising 3D on a Microsoft Surface device using autostereograms. This had the advantage of supporting more than a single user, since simultaneous depth perception is possible from opposite sides of the device. However, it suffered from two disadvantages: a degree of training is needed to "see" the image (particularly when the image is animated and uses a random dot pattern), and this type of autostereogram is unable to convey color.
I thought I'd start a new project to explore the use of Microsoft Kinect to work with 3D.
Kinect is a great example of the powerful combination of hardware (e.g. the depth camera) and software (skeletal tracking). Intriguingly, one way to think about how the depth sensor in Kinect actually works is to compare it to an autostereogram. These images allow depth perception because the human brain has a remarkable ability to infer depth from a random dot pattern that has been shifted in a particular way. The depth sensor in Kinect likewise uses shifts in the position of a random dot pattern (due to parallax between the emitter and receiver) to infer depth values.
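The parallax principle can be sketched with the standard triangulation relation: depth is proportional to focal length times the emitter–receiver baseline, divided by the observed pixel shift (disparity) of a dot. A minimal illustration, with made-up calibration values rather than Kinect's actual constants:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (metres) from the observed shift of a projected dot.

    disparity_px: horizontal shift of a dot between the reference
                  pattern and the received image, in pixels
    focal_px:     focal length of the receiving camera, in pixels
    baseline_m:   distance between emitter and receiver, in metres

    The values used below are illustrative only.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A dot that shifts by fewer pixels lies further from the sensor:
near = depth_from_disparity(disparity_px=60, focal_px=580, baseline_m=0.075)
far = depth_from_disparity(disparity_px=15, focal_px=580, baseline_m=0.075)
print(near, far)  # the smaller disparity yields the larger depth
```

This is the same reason a dot pattern in an autostereogram encodes depth: the amount of horizontal shift between repetitions is what the brain (or, in Kinect's case, the onboard processing) converts into a distance.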
Capturing depth images using Kinect is straightforward, as demonstrated extensively in the Software Development Kit.