Filters Part 2

By Dave · Project · Published 12 May 2012 18:29 · Last Modified 13 Jan 2013 17:21

I previously discussed a method for rendering background images over specific areas of the screen using filters. In order to combine these images with the entire starfield, I followed a level-of-detail approach similar to that used for planetary bodies. Figures 1 and 2 below show example screenshots.

Figure 1. H-alpha emission1 filter, 225,000 km from Jupiter. The field-of-view is 90°.

Figure 2. Wide-field Infrared Survey Explorer2 (WISE) filter, 20,000 km from Earth. The field-of-view is 70°.

Now that I am using an extensibility model, once I have generated the image tiles and added the configuration file, I simply need to copy the data to the application data directory for use.

Check out the gallery for more images.

1 Composite of the Virginia Tech Spectral-Line Survey (VTSS) and the Southern H-Alpha Sky Survey Atlas (SHASSA). Credit: Douglas Finkbeiner.
2 Credit: NASA/JPL-Caltech/WISE Team.

Tile Generation Part 2

By Dave · Project · Published 12 May 2012 18:15 · Last Modified 13 Jan 2013 17:22

I previously discussed a command-line tool for generating image tiles. I've updated the tool to include an additional parameter for image format. Usage is as follows, and the tool is available for download:

C:\>tilegen /?
Generates a directory of tiles from a source image in equirectangular projection

TILEGEN [drive:][path]filename level [/S=size] [/D=directory] [/F=filename] [/I=image]

   [drive:][path]filename   Source image
   level                    Level of detail (0-based)
   /S=size                  Size of tile, default 256px
   /D=directory             Directory format, default level{0}
   /F=filename              Filename format, default {col}_{row}
   /I=image                 Image format (JPEG, PNG, TIFF), default JPEG (Compression=90)

Note that the source image is not scaled, so it must be the correct size for the
given level, i.e. width (px) = 2 * 2^level * size, height (px) = 2^level * size

C:\>
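
As a quick check of that constraint, here is a sketch (assuming the default 256 px tile size) of the dimensions a source image needs for a given level:

// Sketch: compute the source image dimensions tilegen expects for a given level.
// Assumes the default tile size of 256 px; change tileSize to match the /S option.
int level = 2;
int tileSize = 256;
int widthPx = 2 * (1 << level) * tileSize;   // width (px) = 2 * 2^level * size
int heightPx = (1 << level) * tileSize;      // height (px) = 2^level * size
Console.WriteLine("Level {0}: source must be {1} x {2} px", level, widthPx, heightPx);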

Planetary Body Shader Part 3

By Dave · Project · Published 22 Apr 2012 11:56 · Last Modified 8 Dec 2012 16:23

It's been a while since I've updated my planet shader. One of the items on my list was to make the shader more modular so that I could cope with different "types" of planet or moon, without proliferating the number of techniques. This is particularly important now that I am using an extensibility model for adding or updating data.

The planetary shader currently deals with the following options:

  • Color map
  • Normal map
  • Specular map
  • Night map
  • Cloud map
  • Atmosphere
  • Rings (with shadows on both the body and ring)

One approach to support multiple combinations of these options is to use separate shader techniques. The number of combinations when picking r items from n options is defined by Equation 1.

C(n, r) = n! / (r! (n - r)!)   (1)

Since I could use any number of these options, the number is given by Equation 2.

C(n, 1) + C(n, 2) + ... + C(n, n) = 2^n - 1   (2)

For 7 shader options this gives 2^7 - 1 = 127 shader techniques. Clearly this would not be viable, so I opted to use static shader branching instead. Since certain aspects of my rendering required Shader Model 3.0 anyway, this seemed a good approach, and it resulted in acceptable performance.
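
For illustration, a sketch of what this looks like on the application side (the parameter and field names here are hypothetical): each optional map simply enables a boolean effect parameter, and the effect branches on those flags within a single technique.

// Sketch (hypothetical names): set boolean flags for whichever optional maps a body has,
// and let static branching in the shader skip the unused paths.
effect.Parameters["EnableSpecularMap"].SetValue(specularMap != null);
effect.Parameters["EnableNormalMap"].SetValue(normalMap != null);
effect.Parameters["EnableNightMap"].SetValue(nightMap != null);
effect.Parameters["EnableCloudMap"].SetValue(cloudMap != null);
effect.Parameters["EnableAtmosphere"].SetValue(hasAtmosphere);
effect.Parameters["EnableRings"].SetValue(hasRings);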

Figures 1-6 below show individual shader options for color, specular mapping, normal mapping, night, atmosphere, and clouds, with a cumulative combination alongside.

Figure 1. Color Map.

Figure 2. Specular Map.

Figure 3. Normal Map.

Figure 4. Night Map.

Figure 5. Atmosphere.

Figure 6. Clouds.

Figures 7 and 8 below show individual shader options for color and rings with a cumulative combination alongside.

Figure 7. Color.

Figure 8. Rings.

Extensibility Model

By Dave · Project · Published 22 Apr 2012 11:56 · Last Modified 20 Jan 2013 10:23

I wanted to provide an easy way to extend the application with additional bodies, textures etc. I had previously used a single configuration file for visual elements, so while it was possible to configure existing elements or add new ones, doing so required a configuration change.

I wanted to support a "drag-and-drop" approach, so that additional items could be added, or existing items updated, just by copying files to a directory. This has the additional benefit of allowing the app to "ship" with minimal data (hence size) and be easily updated.

I'm using an XML-based configuration to define star systems. New files added to an "extras" folder can add or amend as many nodes of the XML configuration as they wish. In this way, one or more new or existing objects can be added or amended per file, with additional data (e.g. textures) referenced by a relative path. The application recursively searches for .xml files, descending a given branch of subdirectories only until an .xml file is found.
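
A sketch of that search (the method name is assumed; requires System.Collections.Generic, System.IO and System.Linq):

// Sketch: return extension .xml files beneath a root folder, descending each branch of
// subdirectories only until a directory containing an .xml file is found.
static IEnumerable<string> FindExtensionFiles(string root)
{
    string[] xmlFiles = Directory.GetFiles(root, "*.xml");
    if (xmlFiles.Length > 0)
        return xmlFiles;   // stop descending this branch

    return Directory.GetDirectories(root).SelectMany(FindExtensionFiles);
}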

I'm currently defining a set of sample extras to serve as examples.

NUIverse Video

By Dave · Project · Published 16 Feb 2012 16:02 · Last Modified 13 Jan 2013 17:48

One of my colleagues came up with the name 'NUIverse'. Since a major motivation behind creating this app was the opportunity to explore the use of Natural User Interface (NUI), and the name is a nice play on the spelling of 'Universe', the name has stuck.

While there are still a heap of tweaks and new features on my TODO list, I thought it was a good time to share a short video captured from the app.

Video 1. NUIverse on Microsoft Surface 2.0.

Note that rather than using an overhead camera, I used FRAPS to capture video. Touch-points are highlighted as translucent circular markers.

Credits:

  • Data provided by NASA
  • Music by iStockphoto®, ©AudioQuattro, Sky
  • Battlestar Galactica models (based on new TV series) by Coxxon

Update: I thought I'd also include a link to the other video mentioned in the comments, which films the app being used rather than capturing the screen.

This video has now moved, so I've updated the link below.

Video 2. NUIverse on Microsoft Surface 2.0.

Equirectangular or TOAST

By Dave · Project · Published 16 Feb 2012 15:05 · Last Modified 16 Feb 2012 15:16

I previously discussed the use of Hierarchical Triangular Mesh (HTM) geometry with a Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projection, and a geometry based on latitude and longitude with an Equirectangular projection. Currently the project is using the following:

  • HTM for background stars (both for spatial indexing and deciding which cells to render).
  • Equirectangular projections for texturing on planetary bodies.

I thought it would be useful to discuss the combination of geometry and projection with respect to the following:

  • Polar Distortion
  • Tile Distribution
  • Virtual Textures
  • Normal Mapping
  • Texture Scaling
  • Cloud Shadowing

Polar distortion

TOAST projections have the advantage of suffering less distortion than equirectangular projections near to the poles. Currently it appears to be easier to find data in equirectangular projection. The Sphere TOASTER tool for WorldWide Telescope allows re-projection from equirectangular to TOAST. However, since the source maps already suffer from polar distortion, the resultant TOAST projections also have poor resolution in polar regions. Finding undistorted data would solve this issue.

Tile Distribution

Spatial mapping using grid cells based on HTM has a more even distribution than cells based on latitude and longitude. I am therefore using HTM for background stars. This gives an even spatial index, and a more even number of cells to render with respect to camera orientation.

Virtual Textures

It is easy to generate level-of-detail tiles from an equirectangular source map using a simple tool. Generating tiles for a TOAST projection from an equirectangular projection is possible using the Sphere TOASTER tool for WorldWide Telescope.

Normal mapping

Normal mapping using equirectangular projections is straightforward, but leads to texture seams when using TOAST. Since I wanted to use normal mapping for specular lighting, equirectangular maps were more appropriate.

Texture scaling

While waiting for virtual texture tiles to load from a cache, I take a parent tile and divide and scale it accordingly to give a lower-resolution texture. In the case of an equirectangular projection this is a simple algorithm.
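
As a sketch (tile addressing assumed: column and row indices double at each level of detail), a child tile's texture coordinate maps into one quadrant of its parent:

// Sketch: map a child tile's texture coordinate into the corresponding quadrant of its
// parent tile (one level coarser), while waiting for the child tile to load.
Vector2 ParentUv(int col, int row, Vector2 childUv)
{
    float offsetU = (col & 1) * 0.5f;
    float offsetV = (row & 1) * 0.5f;
    return new Vector2(offsetU + childUv.X * 0.5f,
                       offsetV + childUv.Y * 0.5f);
}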

Cloud shadowing

I also wanted to cast shadows from clouds. In the case of an equirectangular projection, there is a simple algorithm in the pixel shader for converting between texture and model space.
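
In the app this lives in the pixel shader, but the idea, expressed here in C# with assumed axis conventions, is just the equirectangular mapping from a direction in model space to texture coordinates:

// Sketch: equirectangular mapping from a unit direction in model space to texture
// coordinates, e.g. to look up the cloud texel lying between the surface and the sun.
Vector2 DirectionToUv(Vector3 dir)
{
    float lon = (float)Math.Atan2(dir.Z, dir.X);   // longitude, -pi..pi
    float lat = (float)Math.Asin(dir.Y);           // latitude,  -pi/2..pi/2
    return new Vector2(0.5f + lon / MathHelper.TwoPi,
                       0.5f - lat / MathHelper.Pi);
}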

Camera Control

By Dave · Project · Published 31 Jan 2012 17:58 · Last Modified 13 Jan 2013 17:26

Currently the app supports a number of different types of manipulation, depending on the camera mode. These are demonstrated in the following short video and summarised below.

Video 1. Touch Manipulation used for camera control, recorded on a Microsoft Surface 2.0.

Free Camera

  • Pitch and Yaw by moving one or more touchpoints on the background
  • Roll by rotating two or more touchpoints on the background

Orbit Camera

  • Orbit body by moving one or more touchpoints on the background
  • Roll by rotating two or more touchpoints on the background
  • Adjust distance to body by pinch-zooming two or more touchpoints on the background

Geosync Camera

  • Roll by rotating two or more touchpoints on the background
  • Adjust distance to body by pinch-zooming two or more touchpoints on the background

Tracking Camera

  • Roll by rotating two or more touchpoints on the background

As shown in the video, when time is running, an orbit camera travels with the body, maintaining constant orientation so that the background stars remain fixed. The geosync camera always points at a specific coordinate on the body, and maintains a constant north bearing.

Smoothing

Touch resolution often corresponds approximately to the screen resolution in pixels, so smoothing is necessary to avoid "jumps" in orientation or position. Also important is the use of momentum and inertia to provide a more "natural" touch experience.

I initially used a simple spring algorithm to add smoothing, momentum and inertia, and tracked manipulation speed to continue inertia when manipulation ended. This worked well at high framerates, but when using vertical sync (i.e. 60fps) the experience degraded.

Switching to a simple linear interpolation (LERP) for position and a spherical linear interpolation (SLERP) for orientation behaves well at lower frame rates, and also gives the impression of momentum and inertia. I no longer track manipulation speed, nor continue inertia when the interpolation is complete.
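
In outline (field names and the smoothing factor are illustrative), the per-frame update reduces to:

// Sketch: per-frame camera smoothing. Each frame the camera covers a fixed fraction of the
// remaining distance to its target, which also reads as momentum and inertia.
const float Smoothing = 0.15f;   // illustrative value

cameraPosition    = Vector3.Lerp(cameraPosition, targetPosition, Smoothing);
cameraOrientation = Quaternion.Slerp(cameraOrientation, targetOrientation, Smoothing);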

HDR Tone Mapping

By Dave · Project · Published 13 Jan 2012 15:54 · Last Modified 8 Dec 2012 16:30

The XNA4 HiDef profile supports a SurfaceFormat.HdrBlendable format, allowing color values in a higher dynamic range than can be displayed. One of the most important steps in High Dynamic Range (HDR) post-processing is mapping color values to a displayable range, such as that shown in Figure 1 below.

Figure 1. Tone-mapped rendering of North America.

I initially scale the pixel luminance using the average luminance (1). If the average luminance is greater or less than the mid-gray value, the pixel is scaled accordingly.

L_scaled(x, y) = (mid-gray / L_average) * L(x, y)   (1)

Figure 2. Scaled Luminance by Average Luminance.

I then compress the luminance using the scaled and maximum luminance (2). If the average luminance is the same as the mid-gray value, the pixel values are compressed to within the 0-1 range.

L_compressed(x, y) = L_scaled(x, y) * (1 + L_scaled(x, y) / L_max^2) / (1 + L_scaled(x, y))   (2), where L_max is the maximum scaled luminance

Figure 3. Tone-Mapped Luminance by Average Luminance.
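
Putting (1) and (2) together, here is a sketch of the operator (a Reinhard-style mapping is assumed from the description above; names are illustrative):

// Sketch: tone-map a pixel's luminance given the scene's average and maximum luminance.
float ToneMap(float luminance, float averageLuminance, float maxLuminance, float midGray)
{
    float scaled    = luminance * midGray / averageLuminance;      // equation (1)
    float scaledMax = maxLuminance * midGray / averageLuminance;
    return scaled * (1.0f + scaled / (scaledMax * scaledMax))      // equation (2)
                  / (1.0f + scaled);
}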

I can then downsample, apply the tone mapping algorithm, and reject any color values below a given threshold for a bright-pass filter. After applying appropriate post-processing filters I can then recombine with the original tone-mapped image. An example is shown below in Figure 4, where an emissive texture on the ship's engines has exceeded the bloom threshold after tone-mapping.
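
The bright-pass itself can be as simple as the following sketch (threshold handling assumed):

// Sketch: reject tone-mapped colour below a threshold; only the excess feeds the bloom filters.
Vector3 BrightPass(Vector3 toneMapped, float threshold)
{
    return Vector3.Max(toneMapped - new Vector3(threshold), Vector3.Zero);
}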

Figure 4. HDR Image showing engine bloom. Battlestar Galactica models (based on new TV series) by Coxxon.

Geosynchronous Camera

By Dave · Project · Published 4 Dec 2011 19:55 · Last Modified 13 Jan 2013 17:28

I've added a new geosynchronous camera type. This allows me to keep focus on a planet or moon at a specific latitude and longitude, while allowing adjustment of height, field-of-view, and bearing.

This required calculation of the north bearing to counteract changes in direction as the camera tracks an object over time due to the object's obliquity. I initially tried the design shown below in Figures 1-2.
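
As a sketch of that bearing calculation (vector conventions assumed and names illustrative): project the body's spin axis into the tangent plane at the camera's surface point and measure the angle from the current camera up direction.

// Sketch: bearing of planetary north as seen from a point on the surface.
// normal: unit surface normal at the camera's latitude/longitude (model space)
// northAxis: unit vector along the body's spin axis
// up: unit vector the camera currently treats as "up" in the tangent plane
float NorthBearing(Vector3 normal, Vector3 northAxis, Vector3 up)
{
    // Project both vectors into the tangent plane at the surface point.
    Vector3 north = Vector3.Normalize(northAxis - normal * Vector3.Dot(northAxis, normal));
    Vector3 reference = Vector3.Normalize(up - normal * Vector3.Dot(up, normal));

    float cosAngle = MathHelper.Clamp(Vector3.Dot(north, reference), -1f, 1f);
    float bearing = (float)Math.Acos(cosAngle);

    // Sign the angle using the surface normal so east and west are distinguished.
    if (Vector3.Dot(Vector3.Cross(reference, north), normal) < 0f)
        bearing = -bearing;

    return bearing;   // radians
}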

Figures 1-2. Compass designs.

In this design, there is an outer radial scale for bearing. The inner scales show latitude and longitude, and always pass through the origin of the enclosing circle. The latitude scale remains straight. The longitude scale shows a circle of constant latitude. The bearing is shown in degrees, latitude and longitude in degrees, minutes, and seconds, and altitude in km.

I then tried the following design shown below in Figures 3-4.

Figures 3-4. Compass designs.

In this design, there is again an outer radial scale for bearing. Inside the circle is a spherical representation of the orientation of the planet or moon, which may be more intuitive than the previous design. A screenshot including a planet is shown below in Figure 5, which I've also uploaded to the project gallery.

Figure 5. Compass with planet.

Texture Caching

By
Dave
Project
Published
12 Nov 2011 19:10
Last Modified
13 Jan 2013 17:39

Since I am dynamically loading Level of Detail textures, I needed to control the number of Texture2D objects being used.

In order to support a texture cache, I initially create a specific number of Texture2D objects for each level of detail. When a new texture is required, I queue the image for loading using a background process as described previously. If the texture is still required when the image has loaded, I find the next available texture which has not been set on the GraphicsDevice and create the texture from the image data.

In order to maximise the work done on the background thread, and minimise the number of Texture2D objects set on the GraphicsDevice, I combine both surface and specular maps in a single Texture2D by using the alpha channel for specularity (see Figure 1 below). In a similar way, I can combine normal and cloud maps in another Texture2D. A third texture is used for night maps (see Figure 2 below), with the alpha channel still available for future use.

Figure 1. Rendering of North America with specular map (without atmosphere).

Figure 2. Rendering of Australia with specular and night maps (without atmosphere).
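
The packing described above could look like the following sketch (byte-per-channel Color data assumed), run on the background thread before the data is set on a cached Texture2D:

// Sketch: pack a greyscale specular map into the alpha channel of the surface colour data.
Color[] PackSurfaceAndSpecular(Color[] surface, byte[] specular)
{
    var packed = new Color[surface.Length];
    for (int i = 0; i < surface.Length; i++)
        packed[i] = new Color(surface[i].R, surface[i].G, surface[i].B, specular[i]);
    return packed;   // later applied to a cached texture with texture.SetData(packed)
}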

If a texture is required in the render loop but not yet cached, I check the cache for a parent texture and scale it in the shader until the child texture is available. If a texture is no longer required by the time that a corresponding set of image data has been loaded, I periodically expire the data to conserve memory. In addition, I only start loading image data when a texture has been requested repeatedly for a configurable interval. This means that I won't be loading data unnecessarily during fast flybys.
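
In outline (the Tile type and field names here are illustrative), the request tracking looks something like:

// Sketch: queue a tile's image data for background loading only once it has been
// requested continuously for longer than a configurable delay.
void RequestTile(Tile tile, TimeSpan now, TimeSpan loadDelay)
{
    if (tile.FirstRequested == null)
        tile.FirstRequested = now;                    // start timing this request
    else if (!tile.LoadQueued && now - tile.FirstRequested.Value > loadDelay)
    {
        loadQueue.Enqueue(tile);                      // background thread loads the image
        tile.LoadQueued = true;
    }
}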
