Geosynchronous Camera

By Dave · Published 4 Dec 2011 19:55 · Last Modified 13 Jan 2013 17:28

I've added a new geosynchronous camera type. This allows me to keep focus on a planet or moon at a specific latitude and longitude, while allowing adjustment of height, field-of-view, and bearing.

This required calculating the north bearing to counteract changes in direction, due to the object's obliquity, as the camera tracks an object over time. I initially tried the design shown below in Figures 1-2.
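
As an illustration of the bearing calculation, the sketch below (a simplification of my own, not the exact code used in the app) projects the body's north axis onto the camera's view plane and measures the angle to the camera's up vector:

// Hypothetical helper: bearing of the camera's up vector relative to the
// body's north axis, projected onto the plane perpendicular to the view direction.
private static float CalculateNorthBearing(Vector3 cameraForward, Vector3 cameraUp, Vector3 bodyNorthAxis)
{
   Vector3 forward = Vector3.Normalize(cameraForward);

   // project the north axis onto the view plane
   Vector3 northInPlane = bodyNorthAxis - Vector3.Dot(bodyNorthAxis, forward) * forward;
   northInPlane.Normalize();

   Vector3 up = Vector3.Normalize(cameraUp);

   // signed angle between the projected north axis and the camera's up vector
   float angle = (float)Math.Acos(MathHelper.Clamp(Vector3.Dot(up, northInPlane), -1f, 1f));
   if (Vector3.Dot(Vector3.Cross(up, northInPlane), forward) < 0f)
      angle = MathHelper.TwoPi - angle;

   return MathHelper.ToDegrees(angle);
}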

Figures 1-2. Compass designs.

In this design, there is an outer radial scale for bearing. The inner scales show latitude and longitude, and always pass through the origin of the enclosing circle. The latitude scale remains straight. The longitude scale shows a circle of constant latitude. The bearing is shown in degrees, latitude and longitude in degrees, minutes, and seconds, and altitude in km.

I then tried the following design shown below in Figures 3-4.

Figures 3-4. Compass designs.

In this design, there is again an outer radial scale for bearing. Inside the circle is a spherical representation of the orientation of the planet or moon, which may be more intuitive than the previous design. A screenshot including a planet is shown below in Figure 5, which I've also uploaded to the project gallery.

Figure 5. Compass with planet.

Texture Caching

By Dave · Published 12 Nov 2011 19:10 · Last Modified 13 Jan 2013 17:39

Since I am dynamically loading Level of Detail textures, I needed to control the number of Texture2D objects being used.

In order to support a texture cache, I initially create a specific number of Texture2D objects for each level of detail. When a new texture is required, I queue the image for loading using a background process as described previously. If the texture is still required when the image has loaded, I find the next available texture which has not been set on the GraphicsDevice and create the texture from the image data.
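
A minimal sketch of this pooling idea (the class and method names here are illustrative, not the project's actual code):

// Hypothetical texture pool for one level of detail: a fixed number of
// Texture2D objects created up front, assigned as image data arrives.
class TexturePool
{
   private readonly List<Texture2D> pool = new List<Texture2D>();
   private readonly HashSet<Texture2D> inUse = new HashSet<Texture2D>();

   public TexturePool(GraphicsDevice device, int count, int tileSize)
   {
      for (int i = 0; i < count; i++)
         pool.Add(new Texture2D(device, tileSize, tileSize, false, SurfaceFormat.Color));
   }

   // called once a queued image has loaded and the tile is still required
   public Texture2D Assign(int[] pixels)
   {
      foreach (Texture2D texture in pool)
      {
         if (inUse.Contains(texture))
            continue;

         texture.SetData(pixels);   // brief lock on the GraphicsDevice
         inUse.Add(texture);
         return texture;
      }
      return null;                  // pool exhausted; caller falls back to a parent tile
   }

   public void Release(Texture2D texture)
   {
      inUse.Remove(texture);
   }
}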

In order to maximise the work done on the background thread, and minimise the number of Texture2D objects set on the GraphicsDevice, I combine both surface and specular maps in a single Texture2D by using the alpha channel for specularity (see Figure 1 below). In a similar way, I can combine normal and cloud maps in another Texture2D. A third texture is used for night maps (see Figure 2 below), with the alpha channel still available for future use.
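
For illustration, the packing can be done offline with System.Drawing; this sketch assumes 256px greyscale specular tiles and hypothetical file names:

// Pack a greyscale specular map into the alpha channel of a surface tile
// (file names are placeholders; GetPixel/SetPixel kept for clarity, not speed).
using (Bitmap surface = (Bitmap)Bitmap.FromFile("surface_0_0.png"))
using (Bitmap specular = (Bitmap)Bitmap.FromFile("specular_0_0.png"))
using (Bitmap combined = new Bitmap(surface.Width, surface.Height, PixelFormat.Format32bppArgb))
{
   for (int y = 0; y < surface.Height; y++)
   {
      for (int x = 0; x < surface.Width; x++)
      {
         Color colour = surface.GetPixel(x, y);
         int alpha = specular.GetPixel(x, y).R;   // greyscale map: any channel will do
         combined.SetPixel(x, y, Color.FromArgb(alpha, colour));
      }
   }
   combined.Save("surface_specular_0_0.png", ImageFormat.Png);
}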

Figure 1. Rendering of North America with specular map (without atmosphere).

Figure 2. Rendering of Australia with specular and night maps (without atmosphere).

If a texture is required in the render loop but not yet cached, I check the cache for a parent texture and scale it in the shader until the child texture is available. If a texture is no longer required by the time that the corresponding image data has been loaded, I periodically expire the data to conserve memory. In addition, I only start loading image data when a texture has been requested repeatedly for a configurable interval. This means that I won't be loading data unnecessarily during fast flybys.
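
As an illustration of the deferred loading, a sketch (the names and the half-second threshold are assumptions, not the project's actual values):

// Only queue a background load once a tile has been requested continuously
// for a configurable interval; QueueBackgroundLoad is a hypothetical call.
private readonly Dictionary<string, TimeSpan> firstRequested = new Dictionary<string, TimeSpan>();
private static readonly TimeSpan LoadDelay = TimeSpan.FromSeconds(0.5);

public void RequestTile(string key, GameTime gameTime)
{
   TimeSpan first;
   if (!firstRequested.TryGetValue(key, out first))
   {
      // first sighting: remember when the tile was requested, but don't load yet
      firstRequested[key] = gameTime.TotalGameTime;
      return;
   }

   if (gameTime.TotalGameTime - first >= LoadDelay)
   {
      QueueBackgroundLoad(key);
      firstRequested.Remove(key);
   }
}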

Satellites

By Dave · Published 12 Nov 2011 19:09 · Last Modified 13 Jan 2013 17:29

Now that I have implemented Level of Detail texturing on the planetary bodies, I have sufficient ground resolution to render planetary satellites.

One challenge to overcome is the fact that satellites are so much smaller than planetary bodies, which has implications both for the near plane of the projection matrix and the resolution of the depth buffer. I opted for rendering satellites to a render target with a different projection matrix and overlaying the result. Figures 1-3 below show a low-poly, untextured model of the International Space Station (ISS)1, approximately 100m in size and at an altitude of 400km. In contrast, the Earth has a diameter of over 12,000km.
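
A minimal sketch of the overlay approach (the near and far plane values, and the DrawSatellites/DrawPlanets calls, are placeholders):

// Render satellites to an off-screen target with a projection suited to
// metre-scale geometry, then composite the result over the planetary scene.
RenderTarget2D satelliteTarget = new RenderTarget2D(GraphicsDevice,
   GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
   false, SurfaceFormat.Color, DepthFormat.Depth24);

GraphicsDevice.SetRenderTarget(satelliteTarget);
GraphicsDevice.Clear(Color.Transparent);
Matrix satelliteProjection = Matrix.CreatePerspectiveFieldOfView(
   fieldOfView, GraphicsDevice.Viewport.AspectRatio, 1f, 10000f);   // metres, not planetary scale
DrawSatellites(view, satelliteProjection);

GraphicsDevice.SetRenderTarget(null);
DrawPlanets(view, planetaryProjection);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
spriteBatch.Draw(satelliteTarget, Vector2.Zero, Color.White);
spriteBatch.End();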

Figures 1-3. International Space Station model1.

Another interesting challenge is the fact that satellites move very quickly with respect to a camera fixed in space. The International Space Station, for example, has an orbital speed of over 27,000km/h. While I can adjust the time control, slowing time to less than 1/1,000 of real time exceeds the precision used by the GameTime timer, and results in orbital positions updating at a lower frequency than drawing.

The level of detail on the planet surface is currently shown to a maximum (0-based) level of 5. For a tiling system using an equirectangular projection and 256px-square tiles, this equates to a 16,384x8,192px ("16k") image in 64x32=2048 tiles. Tiles which tessellate to this level can have a single vertex buffer with an index of type short (0-based L5=23082 vertices), which allows me to use the XNA Reach Profile. I could easily switch to the HiDef Profile, use an index of type int, and support higher levels of detail.

1 3D model of International Space Station provided by NASA

Tile Generation

By Dave · Published 12 Nov 2011 18:30 · Last Modified 13 Jan 2013 17:30

I needed to create image tiles to provide textures for level of detail rendering of planetary bodies. After looking around for a bit, I decided to write a simple tool for the job. For equirectangular1 projections, all I needed was to load big images and chop them up into a number of tiles according to a naming convention.

I decided not to support image resizing, as there are plenty of tools available which can do the job with an appropriate filter for the type of image.

The System.Drawing namespace can load big images using Bitmap.FromFile(), provided the image fits into memory, which in practical terms means 64-bit Windows. The Graphics.DrawImageUnscaled() method can then be used to draw a tile, provided I maintain the DPI of the source image.
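
The core of the tiling loop might look like this sketch (sourcePath, size and directory are assumed variables, and error handling is omitted):

// Chop a source image into size x size tiles using DrawImageUnscaled;
// assumes the source dimensions are exact multiples of the tile size.
using (Bitmap source = (Bitmap)Bitmap.FromFile(sourcePath))
{
   int cols = source.Width / size;
   int rows = source.Height / size;

   for (int row = 0; row < rows; row++)
   {
      for (int col = 0; col < cols; col++)
      {
         using (Bitmap tile = new Bitmap(size, size))
         {
            // match the source resolution so DrawImageUnscaled copies pixels 1:1
            tile.SetResolution(source.HorizontalResolution, source.VerticalResolution);

            using (Graphics graphics = Graphics.FromImage(tile))
               graphics.DrawImageUnscaled(source, -col * size, -row * size);

            tile.Save(Path.Combine(directory, string.Format("{0}_{1}.png", col, row)));
         }
      }
   }
}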

The command-line version is available for download. Usage is as follows:

C:\Tiles>tilegen /?
Generates a directory of tiles from a source image in equirectangular projection

TILEGEN [drive:][path]filename level [/S=size] [/D=directory] [/F=filename]

   [drive:][path]filename   Source image
   level                    Level of detail (0-based)
   /S=size                  Size of tile, default 256px
   /D=directory             Directory format, default level{0}
   /F=filename              Filename format, default {col}_{row}

Note that the source image is not scaled so must be correct level and size,
i.e. width (px) = 2 * 2^level * size, height (px) = 2^level * size

C:\Tiles>

To generate eight 256px tiles at level 1 from an appropriately named 1024x512px image, I use:

C:\Tiles>tilegen 1024x512.jpg 1
Loading source image...done, 0.02s
Creating folder...done
Generating tile 8/8...done, 0.05s

C:\Tiles>

The optional parameters allow generation of different tile sizes, and output of custom directory and file path names.

1 Equirectangular is also known as Simple Cylindrical and Plate Carrée.

Migration to XNA 4.0

By Dave · Published 5 Nov 2011 18:40 · Last Modified 13 Jan 2013 17:31

I decided to update the app from XNA 3.1 to 4.0. As ever, this provided an opportunity to tidy up some code and make use of new features of the updated framework; however, several things required changing.

  • The current starfield shader makes use of Point Sprites, which are no longer available in version 4.0 (see Shawn Hargreaves' post on Point sprites in XNA Game Studio 4.0). I've switched to a set of 4 indexed vertices per star.
  • Removed VertexDeclaration code.
  • Updated VertexBuffer and IndexBuffer constructors, and code to set VertexBuffers.
  • Replaced effect.Begin(), effect.CurrentTechnique.Passes[...].Begin() and .End() with effect.CurrentTechnique.Passes[...].Apply() (see the sketch after this list).
  • Reduced GPU buffer sizes, e.g. switching VertexPositionColor structures to a custom VertexPosition structure (since color information was being set using a shader parameter), and using packed vector classes such as Short2 where appropriate.
  • The new dynamic audio features of XNA 4.0 gave me an opportunity to synthesise the sound effects for which I previously had to find appropriate source files.
  • Added support for the new multi-touch APIs targeting Surface 2.0, which are based on .NET 4.
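
For illustration, the effect.Begin()/Apply() change mentioned above looks roughly like this (a generic indexed draw call is assumed):

// XNA 3.1 style (no longer available in 4.0):
// effect.Begin();
// effect.CurrentTechnique.Passes[0].Begin();
// device.DrawIndexedPrimitives(...);
// effect.CurrentTechnique.Passes[0].End();
// effect.End();

// XNA 4.0 style:
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
   pass.Apply();
   device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertexCount, 0, primitiveCount);
}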

Level of Detail Part 3

By Dave · Published 5 Nov 2011 17:54 · Last Modified 13 Jan 2013 17:39

I described in Part 1 and Part 2 the basis of an approach for a planetary renderer with Level of Detail (LOD) support, and I've been working on integrating this into the project, as shown below in Figure 1.

Figure 1. Level of Detail (LOD) tiles for Earth and Moon.

I previously thought that my background process for loading LOD textures was not locking the rendering loop; however, it turns out this was not the case: it was using Texture2D.FromFile to load each LOD texture, which locks the GraphicsDevice1.

I therefore needed to minimise the time spent loading textures, and tried the following:

  • Pre-processing image textures using the Content Pipeline.
  • Using an HTM mesh and TOAST projection.
  • Pre-loading image data on a background thread.

Content Pipeline

I ran some LOD tiles for Earth through the Content Processor. L0-L5 tiles for texture, specularity, normals and clouds (10,920 files) took just over 48 minutes to process on my machine, which was not a problem given that I only needed to do this once. However, the resulting 10.6GB of .xnb files wasn't a practical approach, nor did it significantly reduce lock time on the GraphicsDevice.

HTM and TOAST

Switching from an equirectangular to a Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projection, as I described previously, provides a more even coverage of texture tiles across the surface of a sphere, thus minimising texture loads. Pressure on IO was further reduced by using smaller tile sizes (256px square).

Background Image Loads

Loading image data on a background thread can be done independently from the GraphicsDevice. The data can then be set on a Texture2D from memory, locking the GraphicsDevice for minimal time. I load the System.Drawing.Bitmap as follows:

// requires the System.Drawing and System.Drawing.Imaging namespaces
int[] pixels = new int[256 * 256];
using (Bitmap bitmap = (Bitmap)Bitmap.FromFile(path))
{
   // lock the 256px-square tile for read-only access as 32bpp ARGB
   BitmapData data = bitmap.LockBits(new Rectangle(0, 0, 256, 256), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

   // copy the pixel data into the managed buffer
   System.Runtime.InteropServices.Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);

   // unlock the bitmap data
   bitmap.UnlockBits(data);
}

I can then use Texture2D.SetData(int[] pixels) to create the texture.

Loading images in this way currently provides an acceptable lock time.

1 See Shawn Hargreaves' blog entry on Lock contention during load screen animations

Spatial Indexing Part 3

By Dave · Published 20 Aug 2011 00:20 · Last Modified 13 Jan 2013 17:38

In part 1 and part 2 I discussed a basic approach to indexing stars, deep-sky objects and their corresponding labels.

I've now changed the spatial indexing algorithm from an approach based on right-ascension and declination to a Hierarchical Triangular Mesh (HTM). This has a far more even distribution of index cells for a given field of view, as can be seen in Figures 1-3. Each image has a 50° field of view and a declination of 90°, 60°, and 0° respectively. In each case the number of index cells is the same. Compare this to the images in part 1, where the number of index cells varied significantly with declination.

Figures 1-3. Spatial indexing using Hierarchical Triangular Mesh.

As before, the images are shown with highlighted index cells for a reduced field of view based on the central reticle, rather than the entire view frustum. I've also included a right-ascension and declination grid in the background, and the cell ids for reference.

I needed to calculate the HTM trixels overlapped by the current field of view. This can be done recursively by using intersection tests between HTM trixels and the view frustum. The previous approach, based on right-ascension and declination, calculated cells based on a circle intersection, with the circle centred on the field of view. However, with increasingly widescreen aspect ratios, this leads to cell selection outside of the field of view. While view-frustum culling is a slower algorithm, rendering fewer cells should be more performant overall.
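
In outline, the recursion might look like the following sketch (the Trixel type, its bounding sphere and Subdivide method are hypothetical):

// Recursively collect the trixels that intersect the view frustum,
// subdividing until the target level is reached.
private void CollectTrixels(Trixel trixel, BoundingFrustum frustum, int level, List<Trixel> visible)
{
   // a bounding sphere around the trixel gives a cheap intersection test
   if (!frustum.Intersects(trixel.BoundingSphere))
      return;

   if (level == maxLevel)
   {
      visible.Add(trixel);
      return;
   }

   // each trixel subdivides into four children via its edge midpoints
   foreach (Trixel child in trixel.Subdivide())
      CollectTrixels(child, frustum, level + 1, visible);
}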

Star Selection Part 3

By Dave · Published 20 Jul 2011 22:29 · Last Modified 13 Jan 2013 17:33

I discussed in parts 1 and 2 an approach for efficiently selecting stars near a reticle. I originally used an index which divided the sphere volume into 50 equal divisions along each axis, giving 25,000 cubes. This gave a distribution of items per key as shown in Figure 1.

Figure 1. Hipparcos Cartesian Spatial Index.

Switching to a Hierarchical Triangular Mesh (HTM), also known as Quaternary Triangular Mesh (QTM), gives a more even distribution with fewer cells containing very few stars, as shown for subdivision level 5 in Figure 2. This results in more consistent behaviour when selecting nearby stars.

Figure 2. Hipparcos HTM L5 Spatial Index.

Using HTM also makes it easy to index at multiple levels of detail, such that an appropriate index can be used at a given field of view.

HTM On TOAST

By Dave · Published 19 Jul 2011 23:12 · Last Modified 8 Dec 2012 16:32

I decided to switch from Equirectangular to Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projections. TOAST is an extension of the Hierarchical Triangular Mesh (HTM) proposed by Jonathan Fay, chief architect and developer of Microsoft's WorldWide Telescope (WWT). HTM is a representation of a sphere proposed by astronomers in the Sloan Digital Sky Survey (SDSS), which recursively subdivides an octahedron to approximate a sphere with a highly tessellated polyhedron. The TOAST projection folds the subdivided octahedron out into a square that is very convenient for use in an image pyramid.

Tessellating an Equirectangular projection into a set of texture tiles gives tiles that correspond to areas on the surface of a sphere bounded by lines of latitude and longitude. The sphere can therefore be approximated using "Slices and Stacks", as shown below in Figure 1. In order to switch to a TOAST projection, the first thing I needed to do was generate the underlying HTM geometry, as shown below in Figure 2. Note that while the first level is an octahedron in both cases, subsequent levels of Slices and Stacks begin clustering tiles around the poles whereas HTM levels retain a more even distribution.

Figure 1. Slices & Stacks L1-L4

Figure 2. HTM L1-L4
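
The subdivision step itself is straightforward; here is a sketch of one level of refinement (the helper name is my own, using XNA's Vector3):

// Split one spherical triangle into four by taking edge midpoints and
// pushing them back onto the unit sphere.
private static IEnumerable<Vector3[]> Subdivide(Vector3 v0, Vector3 v1, Vector3 v2)
{
   Vector3 m01 = Vector3.Normalize((v0 + v1) * 0.5f);
   Vector3 m12 = Vector3.Normalize((v1 + v2) * 0.5f);
   Vector3 m20 = Vector3.Normalize((v2 + v0) * 0.5f);

   yield return new[] { v0, m01, m20 };
   yield return new[] { v1, m12, m01 };
   yield return new[] { v2, m20, m12 };
   yield return new[] { m01, m12, m20 };   // centre triangle
}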

Once this was done, I needed to add the relevant texture coordinates to each indexed vertex to map the corresponding TOAST texture tile. Each texture tile maps to two triangles, or HTM "trixels". The texture mapping for an Equirectangular projection is shown below in Figure 3, with the underlying geometry smoothed to more closely approximate a sphere. Figure 4 shows the texture mapping for the TOAST projection, again with the underlying geometry smoothed to more closely approximate a sphere.

Figure 3. Equirectangular L1-L4 (mapped to Slices & Stacks L5)

Figure 4. TOAST L1-L4 (mapped to HTM L5)

Constellations

By Dave · Published 17 Jun 2011 21:53 · Last Modified 17 Jun 2011 22:00

It's been a while since I added support for constellation boundaries. I've now added support for drawing constellations and asterisms, along with their names.

Since there are no 'official' definitions of constellation patterns, I've followed the constellations defined by the International Astronomical Union.

I've initially placed names at the average position of the component stars. I may add support for a manual position if this results in too many overlapping labels.

Figure 1. Constellations and names.

Figure 2. Constellations, boundaries and names.
