Vanity URLs Part 2

By
Dave
Project
Published
22 Apr 2012 11:54
Last Modified
13 Jan 2013 18:14

I previously described an approach for supporting vanity URLs through the use of the MVC routing table. Since my hosting company now supports IIS7 Remote Administration, an alternative approach is to use IIS HTTP Redirection.

I simply add a folder to my web site, connect using IIS Manager, and add an HTTP Redirect to the original URL. The advantage of this approach is that I can redirect URLs from the root of the web site (which will not be parsed by the MVC routing table), such as:

http://{domain}/{slug}
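
For reference, the web.config that IIS Manager writes into such a folder looks something like this (a sketch; the destination here is illustrative, not one of my actual URLs):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <!-- Permanently redirect all requests for this folder to the original URL -->
    <httpRedirect enabled="true"
                  destination="http://{domain}/original/path"
                  httpResponseStatus="Permanent" />
  </system.webServer>
</configuration>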

NUIverse Video

By
Dave
Project
Published
16 Feb 2012 16:02
Last Modified
13 Jan 2013 17:48

One of my colleagues came up with the name 'NUIverse'. Since a major motivation behind creating this app was the opportunity to explore the use of Natural User Interface (NUI), and the name is a nice play on 'Universe', the name has stuck.

While there are still a heap of tweaks and new features on my TODO list, I thought it was a good time to share a short video captured from the app.

Video 1. NUIverse on Microsoft Surface 2.0.

Note that rather than using an overhead camera, I used FRAPS to capture the video. Touch-points are highlighted as translucent circular markers.

Credits:

  • Data provided by NASA
  • Music by iStockphoto®, ©AudioQuattro, Sky
  • Battlestar Galactica models (based on new TV series) by Coxxon

Update: I thought I'd also include a link to the other video mentioned in the comments, which films the app being used rather than capturing the screen.

This video has now moved, so I've updated the link below.

Video 2. NUIverse being used on Microsoft Surface 2.0.

Equirectangular or TOAST

By
Dave
Project
Published
16 Feb 2012 15:05
Last Modified
16 Feb 2012 15:16

I previously discussed the use of Hierarchical Triangular Mesh (HTM) geometry with a Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projection, and a geometry based on latitude and longitude with an equirectangular projection. Currently the project uses the following:

  • HTM for background stars (both for spatial indexing and deciding which cells to render).
  • Equirectangular projections for texturing on planetary bodies.

I thought it would be useful to discuss the combination of geometry and projection with respect to the following:

  • Polar Distortion
  • Tile Distribution
  • Virtual Textures
  • Normal Mapping
  • Texture Scaling
  • Cloud Shadowing

Polar Distortion

TOAST projections have the advantage of suffering less distortion than equirectangular projections near the poles. Currently, however, it appears to be easier to find data in equirectangular projection. The Sphere TOASTER tool for WorldWide Telescope allows re-projection from equirectangular to TOAST, but since the source maps already suffer from polar distortion, the resultant TOAST projections also have poor resolution in polar regions. Finding undistorted source data would solve this issue.

Tile Distribution

Spatial mapping using grid cells based on HTM gives a more even distribution than cells based on latitude and longitude. I am therefore using HTM for background stars. This gives an evenly distributed spatial index, and a roughly constant number of cells to render regardless of camera orientation.

Virtual Textures

It is easy to generate level-of-detail tiles from an equirectangular source map using a simple tool. Generating tiles for a TOAST projection from an equirectangular projection is possible using the Sphere TOASTER tool for WorldWide Telescope.

Normal Mapping

Normal mapping using equirectangular projections is straightforward; with TOAST, however, it leads to texture seams. Since I wanted to use normal mapping for specular lighting, equirectangular maps were more appropriate.

Texture Scaling

While waiting for virtual texture tiles to load from a cache, I take a parent tile and divide and scale it accordingly to give a lower-resolution texture. In the case of an equirectangular projection this is a simple algorithm.
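
As a sketch (variable names assumed, not the app's actual code), the child tile at (level, col, row) covers one quadrant of its parent at (level - 1, col / 2, row / 2), so sampling the parent in its place only needs an offset and a scale:

// Passed to the shader, which samples the parent at uv * uvScale + uvOffset.
Vector2 uvScale = new Vector2(0.5f, 0.5f);
Vector2 uvOffset = new Vector2((col % 2) * 0.5f, (row % 2) * 0.5f);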

Cloud Shadowing

I also wanted to cast shadows from clouds. In the case of an equirectangular projection, there is a simple algorithm in the pixel shader for converting between texture and model space.
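
In C# form the mapping might look like this (a sketch; the real version lives in the pixel shader, and the axis conventions here are assumptions):

// Convert equirectangular texture coordinates (u, v in 0..1) to a unit
// direction in model space (y up).
double lon = (u - 0.5) * 2.0 * Math.PI;  // longitude, -π to π
double lat = (0.5 - v) * Math.PI;        // latitude, -π/2 to π/2
Vector3 direction = new Vector3(
    (float)(Math.Cos(lat) * Math.Sin(lon)),
    (float)Math.Sin(lat),
    (float)(Math.Cos(lat) * Math.Cos(lon)));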

Camera Control

By
Dave
Project
Published
31 Jan 2012 17:58
Last Modified
13 Jan 2013 17:26

Currently the app supports a number of different types of manipulation, depending on the camera mode. These are demonstrated in the following short video and summarised below.

Video 1. Touch Manipulation used for camera control, recorded on a Microsoft Surface 2.0.

Free Camera

  • Pitch and Yaw by moving one or more touchpoints on the background
  • Roll by rotating two or more touchpoints on the background

Orbit Camera

  • Orbit body by moving one or more touchpoints on the background
  • Roll by rotating two or more touchpoints on the background
  • Adjust distance to body by pinch-zooming two or more touchpoints on the background

Geosync Camera

  • Roll by rotating two or more touchpoints on the background
  • Adjust distance to body by pinch-zooming two or more touchpoints on the background

Tracking Camera

  • Roll by rotating two or more touchpoints on the background

As shown in the video, when time is running, an orbit camera travels with the body, maintaining constant orientation so that the background stars remain fixed. The geosync camera always points at a specific coordinate on the body, and maintains a constant north bearing.

Smoothing

Touch resolution often corresponds approximately to the screen resolution in pixels, so smoothing is necessary to avoid "jumps" in orientation or position. Also important is the use of momentum and inertia to provide a more "natural" touch experience.

I initially used a simple spring algorithm to add smoothing, momentum and inertia, and tracked manipulation speed to continue inertia when a manipulation ended. This worked well at high frame rates, but when using vertical sync (i.e. 60fps) the experience degraded.

Switching to simple linear interpolation (LERP) for position and spherical linear interpolation (SLERP) for orientation behaves well at lower frame rates, and still gives the impression of momentum and inertia. I no longer track manipulation speed, nor continue inertia once the interpolation is complete.
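
A minimal sketch of the idea (the names and smoothing factor are assumptions, not the app's actual values):

// Each frame, ease the camera toward the manipulation target. The fixed
// factor gives an exponential ease-out that reads as momentum and inertia.
const float smoothing = 0.15f;
position = Vector3.Lerp(position, targetPosition, smoothing);
orientation = Quaternion.Slerp(orientation, targetOrientation, smoothing);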

HDR Tone Mapping

By
Dave
Project
Published
13 Jan 2012 15:54
Last Modified
8 Dec 2012 16:30

The XNA4 HiDef profile supports a SurfaceFormat.HdrBlendable format, allowing color values in a higher dynamic range than can be displayed. One of the most important steps in High Dynamic Range (HDR) post-processing is mapping color values to a displayable range, such as that shown in Figure 1 below.

Figure 1. Tone-mapped rendering of North America.
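
A render target in this format can be created as follows (a sketch; the dimensions are assumed):

// Render target able to store luminance values above 1.0 for later tone mapping.
RenderTarget2D hdrTarget = new RenderTarget2D(device, 1280, 720, false,
    SurfaceFormat.HdrBlendable, DepthFormat.Depth24);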

I initially scale the pixel luminance using the average luminance (1). If the average luminance is greater or less than the mid-gray value, the pixel is scaled accordingly.

\[ L_{\mathrm{scaled}}(x,y) = \frac{a}{L_{\mathrm{avg}}}\,L(x,y) \tag{1} \]

where L(x,y) is the pixel luminance, L_avg is the average luminance, and a is the mid-gray value.

Figure 2. Scaled Luminance by Average Luminance.

I then compress the luminance using the scaled and maximum luminance (2). If the average luminance is the same as the mid-gray value, the pixel values are compressed to within the 0-1 range.

\[ L_{\mathrm{final}}(x,y) = \frac{L_{\mathrm{scaled}}(x,y)\left(1 + L_{\mathrm{scaled}}(x,y)/L_{\mathrm{max}}^{2}\right)}{1 + L_{\mathrm{scaled}}(x,y)} \tag{2} \]

where L_max is the maximum luminance.

Figure 3. Tone-Mapped Luminance by Average Luminance.
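
In C#, equations (1) and (2) amount to the following sketch (in practice this runs per pixel in a shader; the names are illustrative):

// Scale luminance by the ratio of mid-gray to average luminance (1), then
// compress so that the maximum luminance maps to 1.0 (2).
static float ToneMapLuminance(float l, float lAvg, float midGray, float lMax)
{
    float scaled = l * midGray / lAvg;
    return scaled * (1 + scaled / (lMax * lMax)) / (1 + scaled);
}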

I can then downsample, apply the tone mapping algorithm, and reject any color values below a given threshold for a bright-pass filter. After applying appropriate post-processing filters I can then recombine with the original tone-mapped image. An example is shown below in Figure 4, where an emissive texture on the ship's engines has exceeded the bloom threshold after tone-mapping.

Figure 4. HDR Image showing engine bloom. Battlestar Galactica models (based on new TV series) by Coxxon.

Geosynchronous Camera

By
Dave
Project
Published
4 Dec 2011 19:55
Last Modified
13 Jan 2013 17:28

I've added a new geosynchronous camera type. This allows me to keep focus on a planet or moon at a specific latitude and longitude, while allowing adjustment of height, field-of-view, and bearing.

This required calculating the north bearing, to counteract changes in direction as the camera tracks an object over time due to the object's obliquity.
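
One way to compute it, as a sketch in the body's coordinate frame (the names here are assumed):

// Local vertical at the tracked surface point, and tangent-plane north
// derived from the body's rotation axis.
Vector3 up = Vector3.Normalize(surfacePoint);
Vector3 north = Vector3.Normalize(axis - Vector3.Dot(axis, up) * up);
Vector3 east = Vector3.Cross(north, up); // assumes a right-handed frame
// Project the camera's up vector onto the tangent plane and measure the
// clockwise angle from north, in radians.
Vector3 heading = Vector3.Normalize(cameraUp - Vector3.Dot(cameraUp, up) * up);
float bearing = (float)Math.Atan2(Vector3.Dot(heading, east),
                                  Vector3.Dot(heading, north));

I initially tried the design shown below in Figures 1-2.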

Figures 1-2. Compass designs.

In this design, there is an outer radial scale for bearing. The inner scales show latitude and longitude, and always pass through the origin of the enclosing circle. The latitude scale remains straight. The longitude scale shows a circle of constant latitude. The bearing is shown in degrees, latitude and longitude in degrees, minutes, and seconds, and altitude in km.

I then tried the following design shown below in Figures 3-4.

Figures 3-4. Compass designs.

In this design, there is again an outer radial scale for bearing. Inside the circle is a spherical representation of the orientation of the planet or moon, which may be more intuitive than the previous design. A screenshot including a planet is shown below in Figure 5, which I've also uploaded to the project gallery.

Figure 5. Compass with planet.

Texture Caching

By
Dave
Project
Published
12 Nov 2011 19:10
Last Modified
13 Jan 2013 17:39

Since I am dynamically loading Level of Detail textures, I needed to control the number of Texture2D objects being used.

In order to support a texture cache, I initially create a specific number of Texture2D objects for each level of detail. When a new texture is required, I queue the image for loading using a background process, as described previously. If the texture is still required when the image has loaded, I find the next available texture which has not been set on the GraphicsDevice and create the texture from the image data.
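
Something like the following sketch (the structure and names are assumptions, not the app's actual code):

// Fixed-size pool of reusable textures for one level of detail.
class TexturePool
{
    private readonly Queue<Texture2D> available = new Queue<Texture2D>();

    public TexturePool(GraphicsDevice device, int count, int size)
    {
        for (int i = 0; i < count; i++)
            available.Enqueue(new Texture2D(device, size, size));
    }

    // Called once image data has loaded and the tile is still required.
    // The full version would skip over textures currently set on the
    // GraphicsDevice rather than simply dequeuing.
    public Texture2D Acquire(Color[] imageData)
    {
        Texture2D texture = available.Dequeue();
        texture.SetData(imageData);
        return texture;
    }

    public void Release(Texture2D texture)
    {
        available.Enqueue(texture);
    }
}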

In order to maximise the work done on the background thread, and minimise the number of Texture2D objects set on the GraphicsDevice, I combine the surface and specular maps in a single Texture2D by using the alpha channel for specularity (see Figure 1 below). In a similar way, I can combine normal and cloud maps in another Texture2D. A third texture is used for night-maps (see Figure 2 below), with the alpha channel still available for future use.

Figure 1. Rendering of North America with specular map (without atmosphere).

Figure 2. Rendering of Australia with specular and night maps (without atmosphere).

If a texture is required in the render loop but not yet cached, I check the cache for a parent texture and scale it in the shader until the child texture is available. If a texture is no longer required by the time its image data has loaded, I periodically expire the data to conserve memory. In addition, I only start loading image data when a texture has been requested repeatedly for a configurable interval. This means that I won't be loading data unnecessarily during fast flybys.

Satellites

By
Dave
Project
Published
12 Nov 2011 19:09
Last Modified
13 Jan 2013 17:29

Now that I have implemented Level of Detail texturing on the planetary bodies, I have sufficient ground resolution to render planetary satellites.

One challenge to overcome is the fact that satellites are so much smaller than planetary bodies, which has implications both for the near plane of the projection matrix and for the resolution of the depth buffer. I opted to render satellites to a render target with a different projection matrix and overlay the result. Figures 1-3 below show a low-poly, untextured model of the International Space Station (ISS)1, approximately 100m in size and at an altitude of 400km. In contrast, the Earth has a diameter of over 12,000km.

Figures 1-3. International Space Station model1.
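
A sketch of the overlay approach (the names and near/far range are assumptions):

// Draw satellites into their own target with a projection suited to
// metre-scale geometry, then composite the result over the main scene.
device.SetRenderTarget(satelliteTarget);
device.Clear(Color.Transparent);
Matrix satelliteProjection = Matrix.CreatePerspectiveFieldOfView(
    fieldOfView, aspectRatio, 1f, 100000f);
DrawSatellites(view, satelliteProjection); // hypothetical helper
device.SetRenderTarget(null);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
spriteBatch.Draw(satelliteTarget, Vector2.Zero, Color.White);
spriteBatch.End();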

Another interesting challenge is the fact that satellites move very quickly with respect to a camera fixed in space. The International Space Station, for example, has an orbital speed of over 27,000km/h. While I can adjust the time control, slowing time to less than 1/1,000 of real time exceeds the precision used by the GameTime timer, and results in orbital positions updating at a lower frequency than drawing.

The level of detail on the planet surface is currently shown to a maximum (0-based) level of 5. For a tiling system using an equirectangular projection and 256px-square tiles, this equates to a 16,384x8192px ("16k") image in 64x32=2048 tiles. Tiles which tessellate to this level can have a single vertex buffer with an index of type short (23,082 vertices at 0-based level 5), which allows me to use the XNA Reach profile. I could easily switch to the HiDef profile, use an index of type int, and support higher levels of detail.

1 3D model of International Space Station provided by NASA

Tile Generation

By
Dave
Project
Published
12 Nov 2011 18:30
Last Modified
13 Jan 2013 17:30

I needed to create image tiles to provide textures for level of detail rendering of planetary bodies. After looking around for a bit, I decided to write a simple tool for the job. For equirectangular1 projections, all I needed was to load big images and chop them up into a number of tiles according to a naming convention.

I decided not to support image resizing, as there are plenty of tools available which can do the job with an appropriate filter for the type of image.

The System.Drawing namespace can load big images using Bitmap.FromFile(), providing the image fits into memory, which in practical terms means 64-bit Windows. The Graphics.DrawImageUnscaled() method can then be used to draw a tile, providing I maintain the dpi of the source image.
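
The core of the tool is then little more than the following sketch (the parameter names, the output extension, and the lack of error handling are assumptions; System.Drawing and System.IO are imported):

using (var source = (Bitmap)Bitmap.FromFile(fileName))
{
    int rows = 1 << level;  // height (px) = 2^level * size
    int cols = 2 * rows;    // width (px) = 2 * 2^level * size
    for (int row = 0; row < rows; row++)
    {
        for (int col = 0; col < cols; col++)
        {
            using (var tile = new Bitmap(size, size))
            {
                // Match the source dpi so DrawImageUnscaled copies
                // pixel-for-pixel at the requested offset.
                tile.SetResolution(source.HorizontalResolution,
                                   source.VerticalResolution);
                using (var g = Graphics.FromImage(tile))
                    g.DrawImageUnscaled(source, -col * size, -row * size);
                tile.Save(Path.Combine(
                    string.Format("level{0}", level),
                    string.Format("{0}_{1}.png", col, row)));
            }
        }
    }
}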

The command-line version is available for download. Usage is as follows:

C:\Tiles>tilegen /?
Generates a directory of tiles from a source image in equirectangular projection

TILEGEN [drive:][path]filename level [/S=size] [/D=directory] [/F=filename]

   [drive:][path]filename   Source image
   level                    Level of detail (0-based)
   /S=size                  Size of tile, default 256px
   /D=directory             Directory format, default level{0}
   /F=filename              Filename format, default {col}_{row}

Note that the source image is not scaled, so it must be the correct size for the
level, i.e. width (px) = 2 * 2^level * size, height (px) = 2^level * size

C:\Tiles>

To generate the eight 256px tiles at level 1 from an appropriately-named 1024x512px image, I use:

C:\Tiles>tilegen 1024x512.jpg 1
Loading source image...done, 0.02s
Creating folder...done
Generating tile 8/8...done, 0.05s

C:\Tiles>

The optional parameters allow generation of different tile sizes, and output of custom directory and file path names.

1 Equirectangular is also known as Simple Cylindrical and Plate Carrée.

Migration to XNA 4.0

By
Dave
Project
Published
5 Nov 2011 18:40
Last Modified
13 Jan 2013 17:31

I decided to update the app from XNA 3.1 to 4.0. As ever, this provided an opportunity to tidy up some code and make use of some new features of the updated framework; however, there were several things that required changing:

  • The current starfield shader makes use of Point Sprites, which are no longer available in version 4 (see Shawn Hargreaves' post on Point sprites in XNA Game Studio 4.0). I've switched to a set of 4 indexed vertices per star.
  • Removed VertexDeclaration code.
  • Updated VertexBuffer and IndexBuffer constructors, and code to set VertexBuffers.
  • Replaced effect.Begin(), effect.CurrentTechnique.Passes[...].Begin() and .End() with effect.CurrentTechnique.Passes[...].Apply() (see the sketch after this list).
  • Reduced GPU buffer sizes, e.g. switching VertexPositionColor structures to a custom VertexPosition structure (since color information was being set using a shader parameter), and using packed vector classes such as Short2 where appropriate.
  • The new dynamic audio features of XNA 4.0 gave me an opportunity to synthesise the sound effects for which I previously had to find appropriate source files.
  • Added support for the new multi-touch APIs targeting Surface 2.0, which are based on .NET 4.
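
The new pass pattern looks like this (a sketch; device, vertexCount, and primitiveCount stand in for the real device and buffer counts):

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
        0, 0, vertexCount, 0, primitiveCount);
}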