Level of Detail Part 3

By
Dave
Project
Published
5 Nov 2011 17:54
Last Modified
13 Jan 2013 17:39

I described in Part 1 and Part 2 the basis of an approach for a planetary renderer with Level of Detail (LOD) support, and I've been working on integrating this into the project, as shown below in Figure 1.

Figure 1. Level of Detail (LOD) tiles for Earth and Moon.

I previously thought that my background process for loading LOD textures was not locking the rendering loop; however, it turns out this was not the case: it was using Texture2D.FromFile to load each LOD texture, which locks the GraphicsDevice [1].

I therefore needed to minimise the time spent loading textures, and tried the following:

  • Pre-processing image textures using the Content Pipeline.
  • Using an HTM mesh and TOAST projection.
  • Pre-loading image data on a background thread.

Content Pipeline

I ran some LOD tiles for Earth through the Content Processor. The L0-L5 tiles for texture, specularity, normals and clouds (10,920 files) took just over 48 minutes to process on my machine; not a problem, given that I only needed to do this once. However, it produced 10.6 GB of .xnb files, which wasn't a practical approach, nor did it significantly reduce the lock time on the GraphicsDevice.

HTM and TOAST

Switching from an equirectangular to a Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projection, as I described previously, provides a more even coverage of texture tiles across the surface of a sphere, thus minimising texture loads. Pressure on I/O was further reduced by using smaller tile sizes (256px square).

Background Image Loads

Loading image data on a background thread can be done independently from the GraphicsDevice. The data can then be set on a Texture2D from memory, locking the GraphicsDevice for minimal time. I load the System.Drawing.Bitmap as follows:

// requires: using System.Drawing; using System.Drawing.Imaging;
// and using System.Runtime.InteropServices;
int[] pixels = new int[256 * 256];
using (Bitmap bitmap = (Bitmap)Image.FromFile(path))
{
    // lock the bitmap data for read-only access as 32-bit ARGB
    BitmapData data = bitmap.LockBits(new Rectangle(0, 0, 256, 256),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    // copy the bitmap data into the pixel buffer
    Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);

    // unlock the bitmap data
    bitmap.UnlockBits(data);
}

I can then create a Texture2D and use SetData(pixels) to populate it.
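A minimal sketch of that final step on the render thread (assuming a graphicsDevice field; note that GDI+ stores Format32bppArgb as BGRA in memory, so a channel swap may be needed depending on the surface format):

// create the texture and upload the pre-loaded pixels, briefly locking the GraphicsDevice
Texture2D texture = new Texture2D(graphicsDevice, 256, 256, false, SurfaceFormat.Color);
texture.SetData(pixels);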

Loading images in this way currently provides an acceptable lock time.

[1] See Shawn Hargreaves' blog entry on Lock contention during load screen animations.

Spatial Indexing Part 3

By
Dave
Project
Published
20 Aug 2011 00:20
Last Modified
13 Jan 2013 17:38

In part 1 and part 2 I discussed a basic approach to indexing stars, deep-sky objects and their corresponding labels.

I've now changed the spatial indexing algorithm from an approach based on right-ascension and declination to a Hierarchical Triangular Mesh (HTM). This has a far more even distribution of index cells for a given field of view, as can be seen in Figures 1-3. Each image has a 50° field of view, at declinations of 90°, 60° and 0° respectively. In each case the number of index cells is the same; compare this to the images in part 1, where the number of index cells varied significantly with declination.

Figures 1-3. Spatial indexing using a Hierarchical Triangular Mesh.

As before, the images are shown with highlighted index cells for a reduced field of view based on the central reticle, rather than the entire view frustum. I've also included a right-ascension and declination grid in the background, and the cell IDs for reference.

I needed to calculate the HTM trixels overlapped by the current field of view. This can be done recursively by using intersection tests between HTM trixels and the view frustum. The previous approach, based on right-ascension and declination, calculated cells based on a circle intersection, with the circle centered on the field of view. However, with increasing widescreen aspect ratios, this leads to cell selection outside of the field of view. While view-frustum culling is a slower algorithm, rendering fewer cells should be more performant overall.
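A minimal sketch of the recursive selection, assuming a hypothetical Trixel type that exposes its subdivision level, a bounding sphere and its four children, and using XNA's BoundingFrustum:

// recursively collect the HTM trixels which overlap the view frustum
void CollectTrixels(Trixel trixel, BoundingFrustum frustum, int maxLevel, List<Trixel> results)
{
    // cheap rejection test: skip trixels entirely outside the frustum
    if (frustum.Contains(trixel.BoundingSphere) == ContainmentType.Disjoint)
        return;

    if (trixel.Level == maxLevel)
    {
        results.Add(trixel);
        return;
    }

    // recurse into the four child trixels
    foreach (Trixel child in trixel.Children)
        CollectTrixels(child, frustum, maxLevel, results);
}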

Star Selection Part 3

By
Dave
Project
Published
20 Jul 2011 22:29
Last Modified
13 Jan 2013 17:33

In parts 1 and 2 I discussed an approach for efficiently selecting stars near a reticle. I originally used an index which divided the sphere volume into 50 equal divisions along each axis, giving 25,000 cubes. This gave a distribution of items per key as shown in Figure 1.

Figure 1. Hipparcos Cartesian Spatial Index.

Switching to a Hierarchical Triangular Mesh (HTM), also known as a Quaternary Triangular Mesh (QTM), gives a more even distribution, with fewer cells containing very few stars, as shown for subdivision level 5 in Figure 2. This results in more consistent behaviour when selecting nearby stars.

Figure 2. Hipparcos HTM L5 Spatial Index.
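To assign a star to a trixel, its unit direction vector can be located by recursive subdivision; a minimal sketch (assuming XNA's Vector3 and counterclockwise vertex winding) is:

// descend from a root octahedron face to the trixel containing unit vector p
static int GetTrixelId(Vector3 p, Vector3 v0, Vector3 v1, Vector3 v2, int level, int id)
{
    if (level == 0)
        return id;

    // edge midpoints projected back onto the unit sphere
    Vector3 w0 = Vector3.Normalize(v1 + v2);
    Vector3 w1 = Vector3.Normalize(v0 + v2);
    Vector3 w2 = Vector3.Normalize(v0 + v1);

    // recurse into whichever of the four child trixels contains p
    if (Inside(p, v0, w2, w1)) return GetTrixelId(p, v0, w2, w1, level - 1, id * 4);
    if (Inside(p, v1, w0, w2)) return GetTrixelId(p, v1, w0, w2, level - 1, id * 4 + 1);
    if (Inside(p, v2, w1, w0)) return GetTrixelId(p, v2, w1, w0, level - 1, id * 4 + 2);
    return GetTrixelId(p, w0, w1, w2, level - 1, id * 4 + 3);
}

// p is inside a spherical triangle if it lies on the inner side of all three edge planes
static bool Inside(Vector3 p, Vector3 a, Vector3 b, Vector3 c)
{
    return Vector3.Dot(Vector3.Cross(a, b), p) >= 0
        && Vector3.Dot(Vector3.Cross(b, c), p) >= 0
        && Vector3.Dot(Vector3.Cross(c, a), p) >= 0;
}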

Using HTM also makes it easy to index at multiple levels of detail, such that an appropriate index can be used at a given field of view.
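For example, a suitable index level might be chosen with a simple heuristic (entirely hypothetical; it assumes level-0 trixels span roughly 90° and that each subdivision halves the span):

// pick the HTM level whose trixel size roughly matches the field of view
static int LevelForFieldOfView(float fovDegrees, int maxLevel)
{
    int level = 0;
    float trixelSpan = 90f;
    while (trixelSpan > fovDegrees && level < maxLevel)
    {
        trixelSpan *= 0.5f;
        level++;
    }
    return level;
}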

HTM On TOAST

By
Dave
Project
Published
19 Jul 2011 23:12
Last Modified
8 Dec 2012 16:32

I decided to switch from an Equirectangular to a Tessellated Octahedral Adaptive Subdivision Transform (TOAST) projection. TOAST is an extension of the Hierarchical Triangular Mesh (HTM) proposed by Jonathan Fay, chief architect and developer of Microsoft's WorldWide Telescope (WWT). HTM is a representation of a sphere proposed by astronomers in the Sloan Digital Sky Survey (SDSS), which recursively subdivides an octahedron to approximate a sphere with a highly tessellated polyhedron. The TOAST projection folds the subdivided octahedron out into a square, which is very convenient for use in an image pyramid.

Tessellating an Equirectangular projection produces a set of texture tiles corresponding to areas on the surface of a sphere bounded by lines of latitude and longitude. The sphere can therefore be approximated using "Slices and Stacks", as shown below in Figure 1. In order to switch to a TOAST projection, the first thing I needed to do was generate the underlying HTM geometry, as shown below in Figure 2. Note that while the first level is an octahedron in both cases, subsequent levels of Slices and Stacks cluster tiles around the poles, whereas HTM levels retain a more even distribution.

Figure 1. Slices & Stacks L1-L4

Figure 2. HTM L1-L4
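A minimal sketch of generating this geometry (assuming XNA's Vector3; starting from the six octahedron vertices and recursively subdividing each of its eight faces):

// recursively subdivide a spherical triangle, emitting the leaf triangles
static void Subdivide(Vector3 v0, Vector3 v1, Vector3 v2, int level, List<Vector3> triangles)
{
    if (level == 0)
    {
        triangles.Add(v0);
        triangles.Add(v1);
        triangles.Add(v2);
        return;
    }

    // edge midpoints pushed back out onto the unit sphere
    Vector3 w0 = Vector3.Normalize(v1 + v2);
    Vector3 w1 = Vector3.Normalize(v0 + v2);
    Vector3 w2 = Vector3.Normalize(v0 + v1);

    // four children: three corner trixels plus the central trixel
    Subdivide(v0, w2, w1, level - 1, triangles);
    Subdivide(v1, w0, w2, level - 1, triangles);
    Subdivide(v2, w1, w0, level - 1, triangles);
    Subdivide(w0, w1, w2, level - 1, triangles);
}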

Once this was done, I needed to add the relevant texture coordinates to each indexed vertex to map the corresponding TOAST texture tile. Each texture tile maps to two triangles, or HTM "trixels". The texture mapping for an Equirectangular projection is shown below in Figure 3, with the underlying geometry smoothed to more closely approximate a sphere. Figure 4 shows the texture mapping for the TOAST projection, again with the underlying geometry smoothed to more closely approximate a sphere.

Figure 3. Equirectangular L1-L4 (mapped to Slices & Stacks L5)

Figure 4. TOAST L1-L4 (mapped to HTM L5)

Constellations

By
Dave
Project
Published
17 Jun 2011 21:53
Last Modified
17 Jun 2011 22:00

It's been a while since I added support for constellation boundaries. I've now added support for drawing constellations and asterisms, along with their names.

Since there are no 'official' definitions of constellation patterns, I've followed the constellations defined by the International Astronomical Union.

I've initially placed names at the average position of the component stars. I may add support for a manual position if this results in too many overlapping labels.

Figure 1. Constellations and names.

Figure 2. Constellations, boundaries and names.

Motion Blur

By
Dave
Project
Published
17 Jun 2011 21:51
Last Modified
13 Jan 2013 17:33

I wanted to experiment with the use of motion blur for the following:

  • When moving between bodies, since this can result in a fast "fly-by".
  • When observing bodies move past the camera at high speed, such as when using high time-multipliers.
  • For fast orbital motion of satellites or moons. It is quite distracting to see a label jump around an orbit when the motion is too fast to appear smooth; motion blur "smoothes" out the motion, reducing the distraction to the eye.
  • When panning the background stars.

I started with Shawn Hargreaves' post on Motion Blur. Some initial screenshots are shown below.

Figure 1. Moving towards Saturn.

Figure 2. Moving past Saturn.

Figure 3. Earth rotation.

Orientation

By
Dave
Project
Published
17 May 2011 20:48
Last Modified
13 Jan 2013 17:34

Since I wanted to support running the application on a Microsoft Surface device, it was important to cater for multi-directional interaction. User interface (UI) components can either be directional or not. For example, the camera view is non-directional in the sense that it is valid for all users regardless of direction (it is perfectly acceptable to be "upside down" when rolling through 180° in a 3D environment). Text labels, on the other hand, are directional and most beneficial when directly facing a user.

The following screenshots illustrate how the user interface supports multiple directions and multiple users.

Figure 1. User interface shown orientated for a single user.

Figure 2. User interface components are shown with varying orientation to support multiple users.

Figure 2 shows a menu and reticle for each of two users on opposite sides. While there can be many instances of menus and reticles at any one time, a given instance is typically used by one user at a time. It is therefore possible to orient them to the relevant user, either by using the orientation of a Surface tag, or by using the orientation reported by the API for a touch event.

For items such as background labels which are shared between multiple users, it is necessary to pick a consistent orientation for all instances. This "system orientation" can either be determined in code (e.g. by examining the orientation of other directional UI components) or by a user via a menu setting. In Figure 2 the orientation has been chosen to face one of the two users.

While the system orientation is an analogue value (i.e. the background labels, for example, can face any consistent direction), it makes sense to axis-align the orientation of items such as the clock to a side of the screen.
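Axis-aligning is then just a matter of rounding the system orientation to the nearest quarter turn; a minimal sketch (assuming the orientation is held in radians):

// round the system orientation to the nearest 90° so the clock aligns with a screen edge
float quarterTurn = MathHelper.PiOver2;
float clockOrientation = (float)Math.Round(systemOrientation / quarterTurn) * quarterTurn;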

Orbital Motion

By
Dave
Project
Published
15 May 2011 00:09
Last Modified
13 Jan 2013 17:34

Rather than simply using a fixed set of positions for planetary bodies, I wanted to calculate their positions. My initial approach would only calculate approximate positions; however, it was important to consider the following, regardless of the position algorithm used:

  • Calculating positions efficiently.
  • Displaying the system time.
  • Controlling the system time.
  • Camera behaviour when the point of interest changes position.
  • Camera behaviour when a new point of interest is chosen.

Position calculation

My first approach to optimising position calculation is to only process items which may be visible in the current view. For items in an elliptical orbit, the item may be visible if the orbit is greater than a certain size in the current field of view. The semi-major axis is used to estimate orbital size, as sketched below.
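A hedged sketch of this test (assuming a hypothetical Body type with OrbitCenter and SemiMajorAxis fields, and a tunable visibility fraction):

// skip position updates for bodies whose orbits subtend too small an angle on screen
static bool MayBeVisible(Body body, Vector3 cameraPosition, float fovRadians, float minFraction)
{
    float distance = Vector3.Distance(cameraPosition, body.OrbitCenter);

    // approximate angular size of the orbit as seen from the camera
    float angularSize = 2f * (float)Math.Atan2(body.SemiMajorAxis, distance);

    return angularSize > fovRadians * minFraction;
}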

An additional problem is the requirement to correctly draw orbits, which are pre-calculated using a specified number of points. As a body moves around its orbit, drawing the orbit so that a vertex exactly coincides with the selected object requires dynamically recalculating vertices, at least near to the selected object, in each frame.

Time display and control

The date and time are shown using a digital clock. The orientation of the time display is determined from the system orientation, rounded to the nearest 90° (i.e. aligned with an edge of the screen).

A dial menu is used to specify a positive or negative factor for adjusting time. A button on the menu can also be used to reset the time display to real-time (i.e. the current time and a factor of +1).

Figure 1. Time control and display.

Camera behaviour

The first consideration was how to adjust the absolute position of each camera type with respect to a moving body. For orbital cameras, the same relative position between the camera and the selected body is maintained. This also has the effect of keeping the background stars fixed. A free camera can operate with or without tracking the current point of interest (if any). When tracking is enabled, the camera maintains its position in space, but adjusts its orientation to keep the point of interest fixed.

Another implication for camera behaviour concerns how a camera transitions from one object to another. Previously the objects were fixed, and a linear interpolation between initial and target positions could be used. When the target is moving, a velocity-based approach can be used, which accelerates the camera towards the target while tracking its current position, until the camera has reached a given distance from the target and has synchronised velocity. Another option is a distance-based linear interpolation, which reduces the distance to the target over a specified time. While less physically realistic, it is easy to implement and has the benefit of a fixed time to move between objects. I am initially using the latter approach, combined with a simple SLERP of orientation.
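A minimal sketch of the distance-based interpolation (hypothetical field names; t runs from 0 to 1 over the transition time, and the target position is re-read each frame since it moves):

// distance-based transition: fixed duration, tracking the moving target each frame
float t = MathHelper.Clamp(elapsedSeconds / transitionSeconds, 0f, 1f);
cameraPosition = Vector3.Lerp(startPosition, target.Position, t);
cameraOrientation = Quaternion.Slerp(startOrientation, targetOrientation, t);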

Digital Clock

By
Dave
Project
Published
7 May 2011 17:03
Last Modified
13 Jan 2013 18:21

I was thinking about approaches for displaying the date and time in XNA applications. The easiest approach is simply to use the SpriteBatch class and render a string; however, I wanted to support a display of arbitrary size, which can be problematic with rasterized fonts.

I could use a set of vector fonts, however I would then have to calculate the vertices dynamically when the date or time changed.

An interesting alternative is to pre-generate the vertices for a digital clock. In this way, I have all the vertices I need in a VertexBuffer and can decide the colors for each segment on the GPU after passing the date and time as an effect parameter. An initial screenshot is shown in Figure 1.

Figure 1. Digital Clock.

The "on" and "off" colors are set with effect parameters, and in Figure 1 I have used an "off" color to simulate light-bleed in a physical display.
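A hedged sketch of driving the effect (the parameter names DigitMasks, OnColor and OffColor are hypothetical; the bit masks are the standard seven-segment encodings for digits 0-9):

// standard seven-segment bit masks for digits 0-9 (bit n set = segment n lit)
static readonly float[] SegmentMasks =
    { 0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F };

// pass the per-digit segment states and colours to the effect each frame
static void UpdateClock(Effect effect, DateTime now, Color on, Color off)
{
    float[] digits =
    {
        SegmentMasks[now.Hour / 10], SegmentMasks[now.Hour % 10],
        SegmentMasks[now.Minute / 10], SegmentMasks[now.Minute % 10],
        SegmentMasks[now.Second / 10], SegmentMasks[now.Second % 10]
    };

    effect.Parameters["DigitMasks"].SetValue(digits);
    effect.Parameters["OnColor"].SetValue(on.ToVector4());
    effect.Parameters["OffColor"].SetValue(off.ToVector4());
}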

The geometry is configurable with parameters for segment thickness, size of gap between segments, character spacing, separator width and slant.

Star Selection Part 2

By
Dave
Project
Published
16 Apr 2011 11:50
Last Modified
13 Jan 2013 17:35

In part 1 I described an approach for picking stars using a reticle. I wanted to extend this approach to include other objects such as deep-sky objects, planets etc. Background stars have the advantage of being point sources; selecting a nearby background star is shown in Figure 1.

Figure 1. Reticle selecting the nearest star.

When non-point sources are involved, an intersection test is required: the item is selected if there are no other point sources within a minimum distance, or if the reticle is closer to the center of the non-point source than to any other item. The object is highlighted using a rectangular border, as shown below in Figure 2 for M31.

Figure 2. Reticle selecting a deep sky object by intersection.
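A hedged sketch of that selection rule (hypothetical Item type; candidates are assumed to carry a projected screen position, and non-point sources a screen-space bounding rectangle):

// pick the selected item: nearby point sources compete with intersected extended objects
Item Select(Vector2 reticle, List<Item> candidates, float minDistance)
{
    Item selected = null;
    float best = float.MaxValue;

    foreach (Item item in candidates)
    {
        // non-point sources must actually intersect the reticle position
        if (!item.IsPointSource && !item.ScreenBounds.Contains((int)reticle.X, (int)reticle.Y))
            continue;

        // point sources must fall within the minimum selection distance
        float d = Vector2.Distance(reticle, item.ScreenPosition);
        if (item.IsPointSource && d > minDistance)
            continue;

        // keep whichever remaining item is closest to the reticle
        if (d < best)
        {
            selected = item;
            best = d;
        }
    }

    return selected;
}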

Deep-sky objects of less than a minimum size in the current field of view are treated similarly to point sources such as background stars (selected based on a minimum distance and highlighted using a circle and marker line), as shown below in Figure 3 for M32.

Figure 3. Reticle selecting a deep sky object of less than a minimum size.
