Surface 3D Demo Part 3

By Dave | Project | Published 18 Mar 2011 20:32 | Last Modified 13 Jan 2013 18:24

In Part 1 and Part 2 I described an interactive autostereogram viewer. In this post I thought I'd start by including a link to a video.

The video shows how multitouch gestures on Microsoft Surface can be used to interact with the models. A depth-map shader processes the model, and a second pass converts the depth map to an autostereogram. As described previously, to avoid distracting the eye when animating a tile-based autostereogram, a random-dot pattern is used which is regenerated each frame. Unfortunately the video doesn't have enough resolution to show much detail while the autostereogram is animating, but the 3D effect seen when the animation stops is maintained when viewing the application itself.

Here are some further screenshots:

Figure 1. Depth map

Figure 2. SIRDS for animation

Figure 3. Texture for static image

Surface 3D Demo Part 2

By Dave | Project | Published 7 Aug 2010 11:03 | Last Modified 13 Jan 2013 18:23

In Part 1 I discussed a simple shader which generated autostereograms from depth maps. The next step was to make the application interactive, so that instead of generating an autostereogram from a static depth map, an interactive 3D model could be used to generate dynamic autostereograms.

To create a depth map from a model, I added a pixel shader which returns the depth value; a similar example can be found in How To: Create a Depth Texture. In order to obtain a sensible range of depth values, I needed to calculate the bounds of the model using bounding spheres. I could then set the near and far planes of my projection matrix to correspond to the closest and furthest points of the model respectively. The depth texture is extracted by rendering to a render target, and can then be passed as a shader parameter, allowing autostereograms to be rendered in real time as the model is manipulated.
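As a rough illustration, the depth pass might be structured as in the sketch below. This is a minimal sketch assuming XNA 3.1 (which the Texture2D constructor later in this post suggests); the DrawModel helper and the effect parameter names are illustrative assumptions rather than the project's actual code.

private Texture2D CreateDepthTexture(Model model, Matrix view, Vector3 cameraPosition, RenderTarget2D depthTarget, Effect depthEffect)
{
    // Merge the per-mesh bounding spheres into a single sphere for the whole model.
    BoundingSphere bounds = new BoundingSphere();
    foreach (ModelMesh mesh in model.Meshes)
        bounds = BoundingSphere.CreateMerged(bounds, mesh.BoundingSphere);

    // Set the near and far planes to hug the closest and furthest points of the model.
    float distance = Vector3.Distance(cameraPosition, bounds.Center);
    float nearPlane = Math.Max(distance - bounds.Radius, 0.01f);
    float farPlane = distance + bounds.Radius;
    float aspectRatio = (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height;
    Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, nearPlane, farPlane);

    // Render the model with the depth shader into the render target.
    GraphicsDevice.SetRenderTarget(0, depthTarget);
    GraphicsDevice.Clear(Color.White);
    depthEffect.Parameters["View"].SetValue(view);             // hypothetical parameter names
    depthEffect.Parameters["Projection"].SetValue(projection);
    DrawModel(model, depthEffect);                             // hypothetical helper that draws the model with the depth effect
    GraphicsDevice.SetRenderTarget(0, null);

    // Resolve the render target so the depth texture can be passed to the autostereogram pass.
    return depthTarget.GetTexture();
}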

One of the main issues with animating an autostereogram is that manipulating the model results in changes across areas of the image which do not correspond to the location of the model itself.1 One way around this distracting side-effect is to use a SIRDS, or Single Image Random Dot Stereogram, and alter the pattern each frame in which the model is manipulated.

Instead of passing a pre-defined pattern texture, I generate a random texture on the CPU as follows:

private Texture2D CreateStaticMap(int resolution)
{
    Random random = new Random();

    // Fill the pixel array with random grayscale values.
    Color[] colors = new Color[resolution * resolution];
    for (int x = 0; x < resolution; x++)
        for (int y = 0; y < resolution; y++)
            colors[x + y * resolution] = new Color(new Vector3((float)random.NextDouble()));

    // Copy the pixel data into a texture for use as the stereogram pattern.
    Texture2D texture = new Texture2D(GraphicsDevice, resolution, resolution, 1, TextureUsage.None, SurfaceFormat.Color);
    texture.SetData(colors);
    return texture;
}
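
In the running application the pattern only needs to change on frames where the model moves. A sketch of how this might be wired up each frame follows; modelManipulated, patternResolution, stereogramEffect and the parameter names are hypothetical placeholders, not the project's actual code.

// Regenerate the random pattern only on frames where the model has been manipulated,
// then bind it alongside the depth texture for the autostereogram pass.
if (modelManipulated)
    patternTexture = CreateStaticMap(patternResolution);

stereogramEffect.Parameters["PatternTexture"].SetValue(patternTexture);
stereogramEffect.Parameters["DepthTexture"].SetValue(depthTexture);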

The resulting grayscale pattern texture, as returned by CreateStaticMap, is shown below in Figure 1.

Figure 1. SIRDS for animation

1This is exacerbated by my current simplistic stereogram shader, since it doesn't yet implement features such as hidden-surface removal (where a part of the model should only be visible to one eye), which in turn leads to additional "echoes" in the image.

Surface 3D Demo Part 1

By Dave | Project | Published 1 Aug 2010 19:19 | Last Modified 13 Jan 2013 18:22

I've always been fascinated by stereoscopic images and their ability to convey depth from just two dimensions, and I was keen to explore how effective they could be in a Surface application for 3D visualisation.

As a multiuser and multidirectional platform, Microsoft Surface is ideal for viewing 2D content. Since stereoscopic images can be viewed1 from angles orthogonal to an axis designed to align with the plane of the viewer's eyes, they enable depth perception from two opposing viewing positions on either side of a Surface device.

The first step was to generate a stereoscopic image from a depth-map by implementing a pixel shader to render an autostereogram. My initial algorithm was very basic, and produced images as per the example shown below in Figure 1.

Figure 1. Initial autostereogram rendering from depth map.

The initial shader takes the following parameters:

  • Depth texture
  • Tile texture
  • Size of tile (number of vertical strips)
  • Depth factor
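To make the role of these parameters concrete, below is a CPU-side sketch of the kind of per-pixel rule a basic tile-based autostereogram shader applies. This is my own illustrative reconstruction in C# rather than the project's HLSL; the depth convention, the assumption that the tile is one strip wide, and all names are assumptions.

// For each output pixel, sample the tile texture at the pixel's position within its strip,
// shifted horizontally in proportion to the depth value read from the depth map.
Color StereogramPixel(float[,] depthMap, Color[,] tile, int x, int y, int imageWidth, int numStrips, float depthFactor)
{
    int stripWidth = imageWidth / numStrips;              // width in pixels of one vertical strip
    float depth = depthMap[x, y];                         // 0 = far plane, 1 = near plane (assumed)
    int shift = (int)(depth * depthFactor * stripWidth);  // nearer points shift the pattern further
    int tileX = (x + shift) % stripWidth;                 // wrap within the repeated tile (tile assumed stripWidth wide)
    int tileY = y % tile.GetLength(1);
    return tile[tileX, tileY];
}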

1In order to perceive a 3D image the viewer must decouple the convergence and focusing of their eyes. To aid convergence, a disc and a ring have been placed at the correct separation. Looking "through" the image results in four shapes (a disc and a ring from each eye); the eyes are correctly converged when the two centre shapes "overlap" to give a disc within a ring. At this point the eyes must be refocussed without changing their convergence. Bright light can help, since a contracted iris gives an increased depth of field.