Kinect Animated GIFs

By Dave | Project | Published 25 Nov 2013 02:44 | Last Modified 25 Nov 2013 06:12

Animated GIFs seem to be becoming popular again, and I thought it would be fun to create some using depth data captured from Kinect.

Of course, this could also be done using an ordinary (non-depth) camera. However, using Kinect I can capture data from a single viewpoint (i.e. without moving the camera or the subject), then create "Bullet Time"-style animations by moving a virtual camera around the recorded data. Pixels occluded in the original data will be missing, but the flexibility of a virtual camera may outweigh this depending on the desired effect. The sketch below outlines the back-projection and virtual-camera steps, and the figures that follow show simple camera animations.
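
As a rough sketch of the back-projection involved (assumed, illustrative code rather than the project's implementation; the pinhole-model parameters fx, fy, cx, cy are whatever the depth camera's intrinsics provide):

// A minimal sketch (assumed, not the project's code): back-project depth pixels
// into a camera-space point cloud, then view it from an orbiting virtual camera.
using System;
using Microsoft.Xna.Framework;

static class BulletTime
{
    // Convert one depth pixel (u, v in pixels, depth in metres) to a 3D point,
    // using a pinhole model with focal lengths in pixels (fx, fy) and the
    // principal point at (cx, cy).
    public static Vector3 BackProject(int u, int v, float depthMetres,
                                      float fx, float fy, float cx, float cy)
    {
        float x = (u - cx) * depthMetres / fx;
        float y = (cy - v) * depthMetres / fy;   // flip v so +y is up
        return new Vector3(x, y, -depthMetres);  // scene lies in front of the sensor
    }

    // A virtual camera orbiting the subject produces the "Bullet Time" effect
    // without moving the physical sensor.
    public static Matrix OrbitView(Vector3 target, float radius, float angle)
    {
        Vector3 eye = target + new Vector3(
            radius * (float)Math.Sin(angle), 0f, radius * (float)Math.Cos(angle));
        return Matrix.CreateLookAt(eye, target, Vector3.Up);
    }
}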

Kinect Princess Leia

Figure 1. Princess Leia.1

Kinect Wave

Figure 2. Wave.

1Many thanks to Mojo Jones for both creating the costume and playing Princess Leia.

Additional Datasets

By Dave | Project | Published 27 Aug 2013 22:23 | Last Modified 16 Sep 2013 17:05

I wanted to add some additional content to the project, and from a geographic perspective a set of airline flight paths was a convenient choice, as shown below in Figure 1. Note that the routes are plotted as minor arcs of great circles rather than actual flight paths, and that the height has been exaggerated for clarity.

Globe flight paths

Figure 1. Flight-path1 data plotted over earth-lights2, filtered to a single airline (British Airways). Shown in both color and as a red-cyan anaglyph.
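
As a rough sketch of how such an arc might be generated (assumed, illustrative code rather than the project's implementation), the route can be interpolated along the great circle between the two airport positions, with a height term that is zero at the endpoints and peaks mid-route:

// A minimal sketch (assumed, not the project's code): points along the minor
// great-circle arc between two airports, with exaggerated height at mid-route.
// Assumes the two airports are neither identical nor antipodal.
using System;
using Microsoft.Xna.Framework;

static class FlightArc
{
    public static Vector3[] Build(Vector3 fromUnit, Vector3 toUnit,
                                  float earthRadius, float maxExtraHeight, int segments)
    {
        // Angle between the two unit direction vectors (airport positions).
        float omega = (float)Math.Acos(MathHelper.Clamp(Vector3.Dot(fromUnit, toUnit), -1f, 1f));
        Vector3[] points = new Vector3[segments + 1];
        for (int i = 0; i <= segments; i++)
        {
            float t = i / (float)segments;
            // Spherical linear interpolation keeps the path on the great circle.
            Vector3 dir = (float)(Math.Sin((1 - t) * omega) / Math.Sin(omega)) * fromUnit
                        + (float)(Math.Sin(t * omega) / Math.Sin(omega)) * toUnit;
            // Height bump, zero at the endpoints and maximal at mid-route.
            float height = earthRadius + maxExtraHeight * (float)Math.Sin(t * Math.PI);
            points[i] = Vector3.Normalize(dir) * height;
        }
        return points;
    }
}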

One implication of this was that I had to implement a tile-based earth renderer to achieve sufficient levels of detail. Fortunately I already had some code for this from another project, which required only minimal modification.

1Airport, airline and route data credit OpenFlights.

2Earth light data from image created using Suomi NPP VIIRS (http://npp.gsfc.nasa.gov/viirs.html) data provided courtesy of Chris Elvidge (NOAA National Geophysical Data Center, http://www.ngdc.noaa.gov/dmsp/dmsp.html). Suomi NPP is the result of a partnership between NASA, NOAA, and the Department of Defense. Caption by Mike Carlowicz.

CAPTCHA Update

By Dave | Project | Published 27 Aug 2013 19:04 | Last Modified 27 Aug 2013 19:18

Even though I previously added a CAPTCHA, I've recently been getting a lot more comment spam.

While I'm tempted to turn off commenting altogether because of the nuisance caused by spammers, I thought I'd first try to tighten up the implementation. Hopefully this will reduce the comment spam, and it shouldn't make posting genuine comments any more inconvenient than it is now.

NUIverse for Windows

By Dave | Project | Published 2 Apr 2013 15:05 | Last Modified 2 Apr 2013 22:31

As previously discussed, my manipulation processor now supports WM_TOUCH messages, which means that I can do native multitouch on both Windows 7 and Windows 8. I have therefore updated NUIverse for a Windows release, as shown below in Figure 1.

NUIverse for Windows

Figure 1. NUIverse for Windows (running windowed).

There are some key differences from the PixelSense release, as follows:

  • No support for tagged objects, since it neither uses the Surface 2.0 runtime nor requires PixelSense hardware (though it will run on the latter outside of the Surface Shell).
  • Since horizontal form-factor multitouch hardware is generally less common than vertical form factors, I have added a single-orientation configuration setting. This is true by default, since even if mounted horizontally, many touchscreens will not deliver the multitouch performance required for simultaneous multi-user interaction.
  • Since the Surface Shell previously provided chrome to close the application, and by default the application runs full-screen, to exit either drag in a menu control (see note below) and use the exit menu, or press ESC if a keyboard is present.

One of the current issues when running a full-screen desktop app on Windows 8 is that the operating system captures the initial touches used for edge-swipes. If a touch is maintained after an initial edge-swipe, further edge-swipes are not captured by the operating system and will therefore reach the application, adding NUIverse controls to the screen. Alternatively, touch the screen and edge-swipe simultaneously, or use two fingers when edge-swiping.

Several key configuration settings (in NUIverse.exe.config) are worth mentioning. Note that there is no graphical interface for these settings, and that the configuration file needs to be edited by hand (I would recommend saving a copy first):

  • PixelWidth and PixelHeight control the resolution used for both windowed and full-screen mode.
  • FullScreen controls whether the application runs full-screen (true) or windowed (false).
  • For the configuration settings specified in mm to work correctly, set PixelsPerMm to the appropriate value, taking into account the physical screen size and either PixelWidth or PixelHeight (square pixels are assumed); see the worked example below.
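
For example (an illustrative figure, not a recommended setting), a full-screen width of 1920 pixels on a display measuring roughly 598 mm across (a typical 27-inch 16:9 panel) would give PixelsPerMm ≈ 1920 / 598 ≈ 3.2.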

To install NUIverse for Windows, proceed as follows:

  1. Install the XNA 4.0 runtime.
  2. Download and extract NUIverse for Windows (2.75Mb) to a suitable location.
  3. Low-resolution textures for several planets and moons are included, but extras can be created or downloaded from http://nuiverse.com.

Kinect Fusion

By Dave | Project | Published 1 Apr 2013 15:20 | Last Modified 9 Apr 2013 15:43

Kinect Fusion was released along with version 1.7 of the Kinect for Windows SDK, and allows reconstruction of a 3D surface from Kinect data captured from multiple angles. The SDK samples, available in both WPF and Direct2D, support saving the scan as either an .STL or .OBJ file.

The scan itself does not currently include color information; however, it is possible to add it by post-processing with additional tools. Figures 1-4 below show editing a Kinect Fusion scan using MeshLab, "an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes", which can be used to re-project multiple 2D color images onto the model.

Fusion Scan Fusion Scan Fusion Scan Fusion Scan

Figures 1-4. Kinect Fusion scan, showing raw output, normal mapping, ambient occlusion, and color re-projection.

Further scans will be posted in the gallery.

Windows Multitouch and XNA

By Dave | Project | Published 9 Feb 2013 23:56 | Last Modified 10 Apr 2013 00:08

I wanted to support multitouch in an XNA application on Windows, without using the Microsoft Surface 2.0 SDK and runtime. Unlike on Windows Phone, however, touch input on Windows is not natively supported by the XNA framework. I have therefore followed the recommended approach and added a new input source to my manipulation processor for WM_TOUCH messages, which means it works on both Windows 7 and Windows 8.

I hook the Windows message loop to a managed function pointer using GetFunctionPointerForDelegate and Get/SetWindowLongPtr. I then register the window for multitouch using RegisterTouchWindow and process the resulting WM_TOUCH messages, distinguishing touch-down, move and up events from the per-touch-point flags.
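
For anyone wanting to do something similar, the sketch below shows the general shape of such a hook. It is an assumed illustration rather than the code used here; the constants come from winuser.h, and GetTouchInputInfo / CloseTouchInputHandle are only referenced in comments.

// A minimal sketch (assumed, not the NUIverse code) of hooking an XNA Game
// window for WM_TOUCH. Names are illustrative.
using System;
using System.Runtime.InteropServices;

static class TouchHook
{
    const int GWLP_WNDPROC = -4;
    const uint WM_TOUCH = 0x0240;

    delegate IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool RegisterTouchWindow(IntPtr hWnd, uint flags);

    [DllImport("user32.dll")]
    static extern IntPtr CallWindowProc(IntPtr prevProc, IntPtr hWnd, uint msg,
                                        IntPtr wParam, IntPtr lParam);

    // SetWindowLongPtr only exists as an export on 64-bit user32; 32-bit builds
    // (typical for XNA) must use the SetWindowLong export instead.
    [DllImport("user32.dll", EntryPoint = "SetWindowLong")]
    static extern IntPtr SetWindowLong32(IntPtr hWnd, int index, IntPtr value);

    [DllImport("user32.dll", EntryPoint = "SetWindowLongPtr")]
    static extern IntPtr SetWindowLongPtr64(IntPtr hWnd, int index, IntPtr value);

    static WndProc hook;            // keep a reference so the delegate isn't collected
    static IntPtr previousWndProc;

    public static void Attach(IntPtr windowHandle)   // e.g. the game's Window.Handle
    {
        RegisterTouchWindow(windowHandle, 0);
        hook = HookProc;
        IntPtr pointer = Marshal.GetFunctionPointerForDelegate(hook);
        previousWndProc = IntPtr.Size == 8
            ? SetWindowLongPtr64(windowHandle, GWLP_WNDPROC, pointer)
            : SetWindowLong32(windowHandle, GWLP_WNDPROC, pointer);
    }

    static IntPtr HookProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam)
    {
        if (msg == WM_TOUCH)
        {
            // Read the touch points with GetTouchInputInfo(lParam, ...), forward
            // them to the manipulation processor (down/move/up is indicated by
            // TOUCHINPUT.dwFlags), then call CloseTouchInputHandle(lParam).
        }
        return CallWindowProc(previousWndProc, hWnd, msg, wParam, lParam);
    }
}

In this sketch, TouchHook.Attach(Window.Handle) would be called once during the game's initialization.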

Although the application no longer requires the Microsoft Surface 2.0 runtime, one option for injecting WM_TOUCH messages on non-touch hardware is the Microsoft Surface 2.0 SDK Input Simulator, as shown below in Figure 1. Note, however, that this method of simulating input only works on Windows 7.

WM_TOUCH manipulation

Figure 1. WM_TOUCH manipulation, using the Surface 2.0 SDK Input Simulator.

Curiously, WM_TOUCH messages are only provided while the mouse is not being used. If the mouse is used, touch-up events are reported for all current touches, so mixed interactions using touch and mouse simultaneously are not possible.

Screen Orientation

By Dave | Project | Published 29 Jan 2013 20:04

Screens have traditionally been oriented vertically to face the user. However, an increasing number of screens are available in horizontal ("table-top") form factors. When designing a 3D user interface, one of the first tasks is to define the coordinate space (i.e. which directions are "up", "right", and "forward").

If we pick a coordinate space that corresponds to the real world, then for a vertical screen "up" and "right" are along the screen edges, and "forward" (in a right-handed coordinate system) is out of the screen toward the viewer. A representation of a horizontal surface in the real world disappears "into" the screen, as shown below in Figure 1.

Vertical

Figure 1. User interface for vertical screen with horizontal plane "into" the screen.

In contrast, for a horizontal screen "up" is out of the screen and "right" and "forward" are along the edges. A representation of a horizontal surface in the real world is in the same plane as the screen, as shown below in Figure 2.

Horizontal

Figure 2. User interface for horizontal screen with horizontal plane "on" the screen.

A particular screen orientation therefore lends itself more naturally to certain experiences. For example, a vertical screen behaves as a "virtual window" onto the world, such as in a first-person 3D game. A horizontal screen, on the other hand, behaves as a "virtual table" for viewing and manipulating objects, or as the view of a third-person game looking "down" onto the world.

This project aims to build an orientation-independent user interface for a holographic display, one that naturally suits the type of display being used. Many apps now support both landscape and portrait modes; I wanted to support both horizontal and vertical modes, and since I specify orientation using vectors I can also specify any intermediate angle, as sketched below.
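
As a hedged illustration of the vector approach (assumed code, not the project's implementation), world "up" can be expressed relative to the screen and blended between the two extremes by a tilt angle:

// A minimal sketch (assumed, not the project's code): express world "up" as a
// vector relative to the screen so any tilt between vertical and horizontal works.
using System;
using Microsoft.Xna.Framework;

static class ScreenOrientation
{
    // tiltRadians = 0 for a vertical screen ("up" runs along the screen surface);
    // tiltRadians = Pi/2 for a horizontal table-top ("up" comes out of the screen).
    public static Vector3 WorldUp(float tiltRadians)
    {
        Vector3 alongScreen = Vector3.Up;        // up the screen surface
        Vector3 outOfScreen = Vector3.Backward;  // toward the viewer
        return Vector3.Normalize(
            alongScreen * (float)Math.Cos(tiltRadians) +
            outOfScreen * (float)Math.Sin(tiltRadians));
    }
}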

Holographic Display

By Dave | Project | Published 24 Jan 2013 15:40 | Last Modified 15 Mar 2013 23:51

While continuing to explore natural user interaction with 3D data on a 2D display, I also wanted to start a new project to explore interaction with 3D data on a 3D display.

Interaction with a user interface can be in two or three dimensions. 2D interaction is pervasive with the availability of multitouch-enabled displays, and 3D interaction is becoming more accessible with products such as Microsoft's Kinect for Windows and the Leap Motion controller.

3D displays utilising active shutter glasses have been available as consumer products for some time, though many users remain uncomfortable with wearing 3D glasses. Displays are beginning to emerge for which 3D glasses are not required, such as the Nintendo 3DS. An example of a display used for this project is shown below in Figure 1.

Holographic Display

Figure 1. 3D Display

I've added a couple more images to the gallery.

Extensibility Model Part 2

By Dave | Project | Published 20 Jan 2013 10:20 | Last Modified 20 Jan 2013 21:31

I previously mentioned that I had implemented an extensibility model, and thought it useful to discuss an example of adding a simple model to earth orbit, as shown below in Figure 1 (further images of which are in the gallery).

Model

Figure 1. Model added to earth orbit. Colonial Raptor model (based on new TV series) by Coxxon.

The "extra" is defined as a folder containing the following items:

  • A model in XNB format. XNA has built-in content importers for .x and .fbx (2009.1) formats.
  • An optional pair of diffuse and emissive textures. These are standard image files.
  • An XML file defining the "extra", in this case as shown below in Listing 1.
<?xml version="1.0" encoding="utf-8"?>
<system name="solar">
  <planet name="earth">
     <!-- satellite
       name = object name
       box = box for pick tests, normalized relative xyz size (currently not used)
       size = maximum length, km
       description = label text
       specintensity = specular intensity, default 0.50 in app.config
       specpower = specular power, default 10 in app.config
       scale = scale factor to unit length, default 1
       model = path to model .xnb file
       texture = path to texture file
        emissive = path to emissive texture file
        rotation = sidereal rotation period, days
    -->
    <satellite name="raptor" box="1,1,1" size="0.0086" description="raptor" specintensity="0.1" specpower="10" scale="1" model="colrap1cox.xnb" texture="texture/colrap1cox.jpg" emissive="emissive/colrap1cox.jpg" rotation="1000" >
      <!-- orbit
         a = semi-major axis, km
         e = eccentricity
         w = argument of perifocus, degrees (aka longitude of perihelion, argument of perigee)
         i = inclination to xy plane, degrees
         node = longitude of ascending node, degrees
         M = mean anomaly, degrees (J2000.0)
         P = period, days
         plane = orbital plane (Ecliptic, Equatorial, Laplace), default Ecliptic 
      -->
      <orbit a="6871" e="0" i="0" node="0" w="0" M="0" P="1000" plane="Equatorial" />
    </satellite>
  </planet>
</system>

Listing 1. XML configuration file for satellite model "extra"

This configuration file specifies that the model should be added to the planetoid "earth" in the "solar" system, both of which are defined in the system.xml configuration file.

In order to scale the model correctly, a scale factor is applied to normalize the model to unit length. This can either be applied in the XML scale attribute, or specified in the XNA content processor scale attribute, in which case the XML attribute can be set to 1. A size attribute then defines the maximum length of the model in km. The Colonial Raptor shown in Figure 1 was defined with a size of 8.6m.
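
As a made-up illustration: if the source model measured 5.2 units along its longest axis, a scale of 1 / 5.2 ≈ 0.192 would normalize it to unit length, and size="0.0086" would then render that unit length as 8.6 m (0.0086 km).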

The textures are defined in sub-folders "texture" and "emissive". If an emissive texture is not available, an all-black image (e.g. JPEG file) can be used.

The rotation period defines how long it takes for the model to rotate while orbiting the planetoid. If this is the same as the P orbital element, then the same face of the model is presented to the planetoid throughout the orbit. The remaining standard orbital elements specify that the model is in a circular equatorial orbit at an altitude of 500km (the earth has a radius of 6,371km).

The extra is included automatically when added to the /data/extras folder.

Blog Indexes

By Dave | Project | Published 17 Jan 2013 13:31 | Last Modified 17 Jan 2013 13:32

As the number of blog posts grows, it becomes increasingly important to provide flexible ways to access individual entries. I've now added a basic index by both date and project, using the following URL structure:

  • http://{domain}/index/blog/{year}/{month}/{day}/ for posts, optionally filtered by year, month, and day
  • http://{domain}/index/blog/category/{slug} for posts filtered by project category
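
For example, under this scheme http://{domain}/index/blog/2013/ lists every post from 2013, while http://{domain}/index/blog/2013/01/17/ lists only those from 17 January 2013.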