Motion Blur

By Dave · Project · Published 17 Jun 2011 21:51 · Last Modified 13 Jan 2013 17:33

I wanted to experiment with the use of motion blur for the following:

  • When moving between bodies, since this can result in a fast "fly-by".
  • When observing bodies move past the camera at high speed, such as when using high time-multipliers.
  • For fast orbital motion of satellites or moons. It is quite distracting to see a label jump around an orbit when the motion is too fast to appear smooth; motion blur smooths out the motion and reduces the distraction to the eye.
  • When panning the background stars.

I started with Shawn Hargreaves' post on Motion Blur. Some initial screenshots are shown below.
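
As a rough illustration, here is a minimal sketch of the accumulation-style approach in XNA 4.0; sceneTarget, accumTarget and blurAmount are my own names, and accumTarget must be created with RenderTargetUsage.PreserveContents so the previous frame survives between draws.

    // Accumulation-style motion blur sketch (names are assumptions).
    GraphicsDevice.SetRenderTarget(sceneTarget);
    GraphicsDevice.Clear(Color.Black);
    DrawScene();

    // Blend the new frame over the previous accumulation; a lower
    // blurAmount keeps more of the old frames, lengthening the trail.
    GraphicsDevice.SetRenderTarget(accumTarget);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White * blurAmount);
    spriteBatch.End();

    // Present the accumulated result.
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin();
    spriteBatch.Draw(accumTarget, Vector2.Zero, Color.White);
    spriteBatch.End();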

Figure 1. Moving towards Saturn.

Figure 2. Moving past Saturn.

Figure 3. Earth rotation.

Orientation

By Dave · Project · Published 17 May 2011 20:48 · Last Modified 13 Jan 2013 17:34

Since I wanted to support running the application on a Microsoft Surface device, it was important to cater for multi-directional interaction. User interface (UI) components can either be directional or not. For example, the camera view is non-directional in the sense that it is valid for all users regardless of direction (it is perfectly acceptable to be "upside down" when rolling through 180° in a 3D environment). Text labels, on the other hand, are directional and most beneficial when directly facing a user.

The following screenshots illustrate how the user interface supports multiple directions and multiple users.

Figure 1. User interface shown oriented for a single user.

Figure 2. User interface components are shown with varying orientation to support multiple users.

Figure 2 shows a menu and reticle for each of two users on opposite sides. While there can be many instances of menus and reticles at any one time, a given instance is typically used by one user at a time. It is therefore possible to orient them to the relevant user, either by using the orientation of a Surface tag, or by using the orientation reported by the API for a touch event.

For items such as background labels which are shared between multiple users, it is necessary to pick a consistent orientation for all instances. This "system orientation" can either be determined in code (e.g. by examining the orientation of other directional UI components) or by a user via a menu setting. In Figure 2 the orientation has been chosen to face one of the two users.

While the system orientation is an analogue value (i.e. the background labels, for example, can face any consistent direction), it makes sense to axis-align the orientation of items such as the clock to a side of the screen.
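
A sketch of that snapping step, where systemOrientation (in radians) and clock.Rotation are assumed names:

    // Snap the analogue system orientation to the nearest multiple of
    // 90 degrees so the clock aligns with an edge of the screen.
    float snapped = (float)Math.Round(systemOrientation / MathHelper.PiOver2)
                    * MathHelper.PiOver2;
    clock.Rotation = snapped;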

Orbital Motion

By Dave · Project · Published 15 May 2011 00:09 · Last Modified 13 Jan 2013 17:34

Rather than simply using a fixed set of positions for planetary bodies, I wanted to calculate their positions. My initial approach would only calculate approximate positions; however, it was important to consider the following, regardless of the position algorithm used:

  • Calculating positions efficiently.
  • Displaying the system time.
  • Controlling the system time.
  • Camera behaviour when the point of interest changes position.
  • Camera behaviour when a new point of interest is chosen.

Position calculation

My first approach to optimising position calculation is to process only items which may be visible in the current view. An item in an elliptical orbit may be visible if its orbit exceeds a certain apparent size in the current field of view; the semi-major axis is used to estimate orbital size.
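
A minimal sketch of this pre-test, where all names are assumptions:

    // Skip position calculation for bodies whose orbit is too small
    // to matter in the current view. MinVisibleFraction is a tunable
    // threshold.
    bool MayBeVisible(OrbitingBody body, Vector3 cameraPosition, float fieldOfView)
    {
        float distance = Vector3.Distance(body.OrbitCenter, cameraPosition);
        // Approximate angle subtended by the orbit's semi-major axis.
        float apparentAngle = (float)Math.Atan2(body.SemiMajorAxis, distance);
        return apparentAngle > fieldOfView * MinVisibleFraction;
    }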

An additional problem is the requirement to draw orbits correctly, since they are pre-calculated using a specified number of points. As a body moves around its orbit, drawing the orbit so that a vertex exactly coincides with the selected object requires calculating vertices dynamically in each frame, at least near the selected object.

Time display and control

The date and time are shown using a digital clock. The orientation of the time display is determined from the system orientation, rounded to the nearest 90° (i.e. aligned with an edge of the screen).

A dial menu is used to specify a positive or negative factor for adjusting time. A button on the menu can also be used to reset the time display to real-time (i.e. the current time and a factor of +1).
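
The underlying clock update is then a simple scaling of wall-clock time; a sketch, assuming a signed timeFactor set from the dial menu:

    // Advance the simulation clock by scaled elapsed time. A negative
    // factor runs time backwards; reset restores the current time and
    // a factor of +1.
    simulationTime += TimeSpan.FromTicks(
        (long)(gameTime.ElapsedGameTime.Ticks * timeFactor));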

Figure 1. Time control and display.

Camera behaviour

The first consideration was how to adjust the absolute position of each camera type with respect to a moving body. For orbital cameras, the same relative position between the camera and the selected body is maintained. This also has the effect of keeping the background stars fixed. A free camera can operate with or without tracking the current point of interest (if any). When tracking is enabled, the camera maintains its position in space, but adjusts its orientation to keep the point of interest fixed.

Another implication for camera behaviour concerns how a camera transitions from one object to another. Previously the objects were fixed and a linear interpolation between initial and target positions could be used. When the target is moving a velocity-based approach can be used, which accelerates the camera towards the target while tracking its current position, until the camera has reached a given distance from the target and has synchronised velocity. Another option is a distance-based linear interpolation, which reduces the distance to the target over a specified time. While less physically realistic, it is easy to implement and has the benefit of a fixed time to move between objects. I am initially using the latter approach, combined with a simple SLERP of orientation.
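
A sketch of the distance-based transition, with assumed names; note that the target position is re-sampled each frame, since the body is moving:

    // Reduce the distance to the moving target over a fixed duration,
    // with a SLERP of orientation. t is the normalised transition time.
    float t = MathHelper.Clamp(elapsedSeconds / transitionSeconds, 0f, 1f);
    camera.Position = Vector3.Lerp(startPosition, target.Position + viewOffset, t);
    camera.Orientation = Quaternion.Slerp(startOrientation, targetOrientation, t);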

Digital Clock

By Dave · Project · Published 7 May 2011 17:03 · Last Modified 13 Jan 2013 18:21

I was thinking about approaches for displaying the date and time in XNA applications. The easiest approach is simply to use the SpriteBatch class and render a string; however, I wanted to support displays of arbitrary size, which can be problematic with rasterized fonts.

I could use a set of vector fonts; however, I would then have to calculate the vertices dynamically whenever the date or time changed.

An interesting alternative is to pre-generate the vertices for a digital clock. In this way, I have all the vertices I need in a VertexBuffer and can decide the colors for each segment on the GPU after passing the date and time as an effect parameter. An initial screenshot is shown in Figure 1.
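
As an illustration of the idea (the parameter names are my own), each digit can be encoded as a seven-segment bit mask and passed to the effect, so the vertex data never changes:

    // Seven-segment masks for digits 0-9 (bit n set = segment n lit).
    static readonly int[] SegmentMasks =
        { 0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F };

    void SetTime(DateTime time)
    {
        string digits = time.ToString("HHmmss");
        var masks = new float[digits.Length];
        for (int i = 0; i < digits.Length; i++)
            masks[i] = SegmentMasks[digits[i] - '0'];

        // The shader tests each segment's bit and picks the "on" or
        // "off" color accordingly.
        effect.Parameters["SegmentMasks"].SetValue(masks);
        effect.Parameters["OnColor"].SetValue(onColor.ToVector4());
        effect.Parameters["OffColor"].SetValue(offColor.ToVector4());
    }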

Figure 1. Digital Clock.

The "on" and "off" colors are set with effect parameters, and in Figure 1 I have used an "off" color to simulate light-bleed in a physical display.

The geometry is configurable with parameters for segment thickness, size of gap between segments, character spacing, separator width and slant.

Star Selection Part 2

By Dave · Project · Published 16 Apr 2011 11:50 · Last Modified 13 Jan 2013 17:35

In Part 1 I described an approach for picking stars using a reticle. I wanted to extend this approach to include other objects such as deep-sky objects, planets, etc. Background stars have the advantage of being point sources, and selecting a nearby background star is shown in Figure 1.

Figure 1. Reticle selecting the nearest star.

When non-point sources are involved, an intersection test is required: the item is selected if there are no point sources within the minimum distance, or if the reticle is closer to the center of the non-point source than to any other item. The object is highlighted using a rectangular border, as shown below in Figure 2 for M31.

Figure 2. Reticle selecting a deep sky object by intersection.
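
A rough sketch of the combined rule (all names are assumptions):

    // Pick between the nearest point source and an intersected
    // extended object.
    Selectable Pick(Vector2 reticle)
    {
        Selectable point = NearestPointSource(reticle);
        Selectable extended = IntersectedExtendedObject(reticle);
        float pointDistance = point != null
            ? Vector2.Distance(reticle, point.ScreenPosition)
            : float.MaxValue;

        if (extended != null)
        {
            float centerDistance = Vector2.Distance(reticle, extended.ScreenCenter);
            // Select the extended object if no point source is close
            // enough, or if the reticle is nearer its center.
            if (pointDistance > MinSelectDistance || centerDistance < pointDistance)
                return extended;
        }
        return pointDistance <= MinSelectDistance ? point : null;
    }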

Deep-sky objects of less than a minimum size in the current field of view are treated similarly to point sources such as background stars (selected based on a minimum distance and highlighted using a circle and marker line), as shown below in Figure 3 for M32.

Figure 3. Reticle selecting a deep sky object of less than a minimum size.

Star Selection

By Dave · Project · Published 9 Apr 2011 16:28 · Last Modified 13 Jan 2013 17:36

I wanted to be able to select individual stars for further information. In order to do fast lookups of stars near to a cursor or reticle I added a further spatial index to my background stars. Due to the inaccuracies of using right-ascension and declination near to the celestial poles, I used an indexing scheme based on cartesian coordinates. A screenshot is shown below in Figure 1.
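
The indexing key is a unit vector rather than the raw angles; a sketch of the conversion (angles in radians, names my own):

    // Convert right ascension and declination to a unit direction
    // vector, avoiding the convergence of RA lines near the poles.
    // Stars can then be bucketed by quantising this vector into a
    // coarse 3D grid.
    Vector3 ToUnitVector(double rightAscension, double declination)
    {
        double cosDec = Math.Cos(declination);
        return new Vector3(
            (float)(cosDec * Math.Cos(rightAscension)),
            (float)(cosDec * Math.Sin(rightAscension)),
            (float)Math.Sin(declination));
    }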

Figure 1. Reticle for star selection. Note that the reticle is shown at 2x scale for clarity.

More information on the star is shown after a configurable delay, and hidden when the reticle is no longer focussed on the object. This avoids rendering data when it is not required, and allows higher-latency data (such as from the internet) to be shown when ready. Going forward, I should consider a hierarchical indexing scheme such as a Hierarchical Triangular Mesh [1], which would give me the ability to find the nearest star at a given level of resolution.

1 "Indexing the Sphere with the Hierarchical Triangular Mesh", Alexander S. Szalay, Jim Gray, George Fekete, Peter Z. Kunszt, Peter Kukol, and Ani Thakar, August 2005

Level of Detail Part 2

By Dave · Project · Published 24 Mar 2011 22:17 · Last Modified 13 Jan 2013 17:40

I described in Part 1 the basis of an approach for a planetary renderer with Level of Detail (LOD) support. I've now added the following:

  • Support for a simple cylindrical tiling scheme, using 512-pixel, square tiles and 2 tiles at LOD 0.
  • A background process to load tiles as required, without locking the rendering loop (a sketch follows this list).
  • Datasets for surface texture, normals, specularity and clouds to LOD 5 (2048 tiles with an overall size of 32,768 x 16,384 pixels).
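
A minimal sketch of the loader's shape, assuming a dedicated worker thread and a thread-safe queue (all names are mine); one option in Draw is then to queue any missing tile and fall back to a lower-LOD ancestor's texture until it arrives.

    // Tile requests are queued from the render thread and serviced by
    // a worker, so Draw() never blocks on disk access or decoding.
    readonly ConcurrentQueue<TileRequest> pendingTiles =
        new ConcurrentQueue<TileRequest>();

    void TileWorker() // runs on a background Thread
    {
        while (running)
        {
            TileRequest request;
            while (pendingTiles.TryDequeue(out request))
            {
                Texture2D texture = LoadTileTexture(request); // disk + decode
                request.Tile.CompleteLoad(texture);           // publish result
            }
            Thread.Sleep(1); // idle until more requests arrive
        }
    }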

Figures 1-3 below show the mapping of surface texture, normals and specularity.

Figures 1-3. Views of the French Alps showing normal and specular mapping at LOD 1, 3 and 5 respectively.

The next step is to adapt the atmospheric shaders for a tile-based approach. Adapting the cloud-shadowing algorithm will be the biggest challenge, since this will require more than one cloud tile in order to calculate the correct shadows for a single ground tile.

Level of Detail

By Dave · Project · Published 20 Mar 2011 18:49 · Last Modified 13 Jan 2013 17:40

The current implementation of the planetary renderer uses a single texture, or level of detail (LOD). In order to provide more realistic views when moving closer to the planet's surface I need to vary the LOD according to the view. This is a well-studied and potentially complex topic, however it is useful to start by considering the following:

  • A tile system for spherical textures.
  • An approach for culling geometry that is not visible.
  • An algorithm for picking LOD based on the current view.
  • Availability of data.

Tile System

There are several tiling systems available; however, I thought I'd start with a simple quad-tree. Bing Maps currently uses 256-pixel, square tiles based on a Mercator projection. The first level has 4 tiles, the second level has 16 tiles, etc. Simple cylindrical datasets are also available which use different tile sizes and arrangements, for example 1024-pixel, square tiles with the first level having 2 tiles for east and west, the second having 8 tiles, etc.

Culling

I thought I'd start with a simple quad-tree approach, based on culling the results of recursive intersection tests between tile bounding boxes and the view frustum. Figure 1 shows an initial screenshot of this approach. Each of the tiles in a quad is a different primary color, and tiles are drawn for LOD 3 (64 tiles in this tiling scheme). Bounding boxes are shown when there is an intersection at a given LOD with the view frustum. Each child within the bounding box is then checked: if the child is outside of the view frustum, it is culled; if it intersects, the process recurses to the next, more detailed LOD.
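
A sketch of the recursion, using XNA's BoundingFrustum and ContainmentType (the tile type and targetLod are assumptions):

    // Recursively cull tiles against the view frustum, refining
    // visible tiles until the LOD required by the view is reached.
    void CullAndDraw(Tile tile, BoundingFrustum frustum)
    {
        if (frustum.Contains(tile.BoundingBox) == ContainmentType.Disjoint)
            return;                      // outside the view: culled

        if (tile.Lod == targetLod)
        {
            Draw(tile);                  // visible and detailed enough
            return;
        }
        foreach (Tile child in tile.Children)
            CullAndDraw(child, frustum); // intersects: refine further
    }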

Figure 1. AABB intersections of tiles with the view frustum.

LOD Calculation

The amount of planet surface visible is a function of the distance from the planet, and the field of view and direction of the camera. I therefore needed to work out the size of each tile (they vary by latitude) so that the appropriate LOD can be rendered. I thought I'd start by measuring the maximum distance along a line of longitude within the tile. Figure 2 shows a rendering of the sphere where the upper tiles are rendered at a higher LOD (since, for a given LOD, they are smaller due to their latitude).

Figure 2. Tiles rendered at varying LOD.

The geometry used to create the tiles uses a constant number of vertices per degree of latitude and degree of longitude. In addition to minimising vertices, this ensures that adjacent tiles of different LOD "line up" correctly to a certain level. For example, if I use 64 vertices per 360 degrees of longitude and 32 vertices per 180 degrees of latitude, the tiles tessellate without gaps up to LOD 6 in this tiling scheme (since these tiles are the first to have only 4 vertices).

Data

The current planetary renderer uses the following data sources:

  • Surface texture
  • Normal map
  • Cloud map

Going forward, I'd also like to include specularity maps (separately, or as part of the alpha channel of the texture) to deal with the different reflectivity of land and water. In supporting a LOD approach, I don't want to sacrifice realism when rendering lower LODs. Since I would therefore also need normal, cloud (and optionally specularity) maps, I will use the simple cylindrical datasets.

Surface 3D Demo Part 3

By Dave · Project · Published 18 Mar 2011 20:32 · Last Modified 13 Jan 2013 18:24

In Part 1 and Part 2 I described an interactive autostereogram viewer. In this post, I thought I'd start by including a link to a video.

The video shows how multitouch gestures on Microsoft Surface can be used to interact with the models. A depth-map shader first processes the model, and a second pass converts the depth map to an autostereogram. As described previously, to avoid distracting the eye when animating a tile-based autostereogram, a random-dot pattern is used which is regenerated each frame. Unfortunately the video doesn't have the resolution required to see much detail while the autostereogram is animating, but the 3D effect seen when the animation stops is maintained when viewing the application.
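
For reference, a much-simplified CPU version of the second pass might look like the following; the real conversion runs in a pixel shader, and maxSeparation and the depth scaling are assumptions:

    // Simplified random-dot autostereogram pass: pixels linked across
    // a depth-dependent separation are constrained to share a color.
    Color[] MakeSirds(float[] depth, int width, int height,
                      int maxSeparation, Random rng)
    {
        var output = new Color[width * height];
        var same = new int[width];
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++) same[x] = x;
            for (int x = 0; x < width; x++)
            {
                // Nearer surfaces (larger depth) give a smaller separation.
                int sep = (int)(maxSeparation * (1f - 0.5f * depth[y * width + x]));
                int left = x - sep / 2, right = left + sep;
                if (left >= 0 && right < width)
                    same[right] = left;                  // link the stereo pair
            }
            for (int x = 0; x < width; x++)
                output[y * width + x] = same[x] == x
                    ? (rng.Next(2) == 0 ? Color.Black : Color.White) // new dot
                    : output[y * width + same[x]];       // copy linked pixel
        }
        return output;
    }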

Here are some further screenshots:

Figure 1. Depth map

Figure 2. SIRDS for animation

Figure 3. Texture for static image

Filters

By Dave · Project · Published 14 Mar 2011 23:19 · Last Modified 13 Jan 2013 17:36

I wanted to be able to view the sky and planets using different wavelengths of light, and a convenient way to achieve this was by the use of "filters", as shown below in Figure 1.

Figure 1. H-Alpha Emission Filter.

This enables me to either adjust the camera orientation while keeping the filter in a fixed position, or move the filter around while keeping the camera orientation fixed (or a combination of both), and see the relevant portion of the sky in a different wavelength. By adjusting the opacity scale it is possible to see how items relate to each other in different wavelengths (for example, an area of H-Alpha emission originating from a particular star or deep-sky object in the visible spectrum).

The approach I am currently taking is to render the view in each wavelength, and create opacity masks which I can use to blend the views together. In this way I can use multiple filters of arbitrary shape and size.
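
As a sketch of the blending step, assuming each wavelength view and each filter's opacity mask is a render target, and maskEffect is a custom effect with the parameter names shown:

    // Composite one wavelength layer over the visible-light view,
    // using the filter's opacity mask to confine it to the filter's
    // shape and to apply the opacity scale.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      null, null, null, maskEffect);
    maskEffect.Parameters["MaskTexture"].SetValue(filterMask);
    maskEffect.Parameters["OpacityScale"].SetValue(filterOpacity);
    spriteBatch.Draw(hAlphaView, Vector2.Zero, Color.White); // e.g. the H-Alpha layer
    spriteBatch.End();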
