Normal Mapping

25 Nov 2012 18:40
Last Modified: 13 Jan 2013 17:12

There are some excellent solutions to surface reconstruction using Kinect, such as Kinect Fusion; however, I was still keen to understand the feasibility of extracting a basic normal map from depth data.

To determine the normal vector for a given depth pixel, I simply sample the surrounding pixels and look at the local surface gradient. However, because the depth values are stepped, a small sample area, particularly at larger depth values, produces a lot of forward-facing normals from the surfaces of the depth "planes", as shown below in Figure 1. Using a larger sample area improves things significantly, as shown in the second image.
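The gradient-sampling approach can be sketched as follows. This is a minimal illustration, not the original implementation: it assumes the depth map is a plain 2-D grid of depth values, and uses central differences over a configurable radius (the "sample area" mentioned above) to estimate the local gradient.

```python
import math

def normal_from_depth(depth, x, y, radius=1):
    """Estimate the surface normal at pixel (x, y) of a 2-D depth grid.

    Central differences over a +/- radius window approximate the local
    surface gradient; a larger radius averages across the stepped depth
    values and reduces the forward-facing-normal artefact.
    """
    # Depth change per pixel in x and y.
    dzdx = (depth[y][x + radius] - depth[y][x - radius]) / (2.0 * radius)
    dzdy = (depth[y + radius][x] - depth[y - radius][x]) / (2.0 * radius)

    # For a surface z = f(x, y), a normal is (-dz/dx, -dz/dy, 1), normalised.
    nx, ny, nz = -dzdx, -dzdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

On a locally flat region this returns a forward-facing normal of (0, 0, 1); on a sloped region the normal tilts away from the gradient, which is what the lighting pass below relies on.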


Figure 1. Normal maps from raw depth data, using smaller and larger sample areas.

The normal map then enables the point cloud to be rendered using directional lighting, as shown below in Figure 3.


Figure 3. Diffuse and specular lighting applied to point cloud.

Note that the images above are still rendered as point clouds, rather than a surface mesh.
