Orbital Motion

By Dave | Project | Published 15 May 2011 00:09 | Last Modified 13 Jan 2013 17:34

Rather than simply using a fixed set of positions for planetary bodies, I wanted to calculate their positions. My initial approach would only calculate approximate positions; however, it was important to consider the following, regardless of the position algorithm used:

  • Calculating positions efficiently.
  • Displaying the system time.
  • Controlling the system time.
  • Camera behaviour when the point of interest changes position.
  • Camera behaviour when a new point of interest is chosen.

Position calculation

My first approach to optimising position calculation is to only process items which may be visible in the current view. An item in an elliptical orbit may be visible if the orbit is greater than a certain size in the current field of view; the semi-major axis is used to estimate orbital size.
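
As a rough illustration, here is a minimal Python sketch of such a test; the function name, units and the 1% threshold are illustrative assumptions rather than the actual implementation:

```python
import math

def orbit_may_be_visible(semi_major_axis, camera_distance, fov, min_fraction=0.01):
    """Rough visibility test for an elliptical orbit: compare the angle
    subtended by the semi-major axis against a fraction of the current
    field of view. All angles are in radians."""
    if camera_distance <= semi_major_axis:
        return True  # camera is inside or very close to the orbit
    subtended = 2.0 * math.asin(semi_major_axis / camera_distance)
    return subtended >= min_fraction * fov
```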

An additional problem which arises is the requirement to correctly draw orbits, which are pre-calculated using a specified number of points. As a body moves around its orbit, drawing the orbit so that a vertex exactly coincides with the selected object requires dynamically calculating vertices each frame, at least near to the selected object.
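
One simple way to achieve this is to snap the pre-calculated vertex nearest the body to the body's exact position each frame. A minimal sketch, with illustrative names (the orbit is assumed to be stored as vertices paired with their anomalies):

```python
import math

def orbit_with_body_vertex(vertices, anomalies, body_anomaly, body_position):
    """Return orbit vertices with the vertex nearest the body (by angular
    distance around the orbit) replaced by the body's exact position, so
    the drawn orbit passes through the selected object."""
    def angular_gap(a):
        # shortest angular distance, wrapped into [-pi, pi]
        return abs((a - body_anomaly + math.pi) % (2.0 * math.pi) - math.pi)

    nearest = min(range(len(anomalies)), key=lambda i: angular_gap(anomalies[i]))
    out = list(vertices)
    out[nearest] = body_position
    return out
```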

Time display and control

Date and time are shown using a digital clock. The orientation of the time display is determined from the system orientation, rounded to the nearest 90° (i.e. aligned with an edge of the screen).
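
The rounding itself is a one-liner; a sketch, assuming an angle in degrees:

```python
def clock_orientation(system_angle):
    """Round a system orientation in degrees to the nearest 90 degrees,
    so the clock aligns with a screen edge."""
    return (round(system_angle / 90.0) * 90) % 360
```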

A dial menu is used to specify a positive or negative factor for adjusting time. A button on the menu can also be used to reset the time display to real-time (i.e. the current time and a factor of +1).

Figure 1. Time control and display.

Camera behaviour

The first consideration was how to adjust the absolute position of each camera type with respect to a moving body. For orbital cameras, the same relative position between the camera and the selected body is maintained. This also has the effect of keeping the background stars fixed. A free camera can operate with or without tracking the current point of interest (if any). When tracking is enabled, the camera maintains its position in space, but adjusts its orientation to keep the point of interest fixed.

Another implication for camera behaviour concerns how a camera transitions from one object to another. Previously the objects were fixed, so a linear interpolation between initial and target positions could be used. When the target is moving, a velocity-based approach can be used, which accelerates the camera towards the target while tracking its current position, until the camera has reached a given distance from the target and has synchronised velocity. Another option is a distance-based linear interpolation, which reduces the distance to the target over a specified time. While less physically realistic, it is easy to implement and has the benefit of a fixed time to move between objects. I am initially using the latter approach, combined with a simple SLERP of orientation.
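
A minimal sketch of this distance-based interpolation with SLERP, using numpy arrays and quaternions stored as [w, x, y, z]; the names and signatures are illustrative, not the project's:

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between unit quaternions."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to normalised lerp
        q = q0 + s * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1.0 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def camera_transition(start_pos, start_rot, target_pos, target_rot, elapsed, duration):
    """Fixed-duration transition: linearly close the distance to the target's
    current position over `duration` while SLERPing the orientation.
    Re-sampling `target_pos` each frame makes the path follow a moving body."""
    s = min(elapsed / duration, 1.0)
    pos = np.asarray(start_pos) + s * (np.asarray(target_pos) - np.asarray(start_pos))
    return pos, slerp(start_rot, target_rot, s)
```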

Star Selection Part 2

By Dave | Project | Published 16 Apr 2011 11:50 | Last Modified 13 Jan 2013 17:35

In Part 1 I described an approach for picking stars using a reticle. I wanted to extend this approach to include other objects such as deep-sky objects, planets, etc. Background stars have the advantage of being point sources, and selecting a nearby background star is shown in Figure 1.

Figure 1. Reticle selecting the nearest star.

When non-point sources are involved, an intersection test is required, and the item is selected if there are no point sources within the minimum distance, or if the reticle is closer to the centre of the non-point source than to any other item. The object is highlighted using a rectangular border, as shown below in Figure 2 for M31.

Figure 2. Reticle selecting a deep-sky object by intersection.

Deep-sky objects of less than a minimum size in the current field of view are treated similarly to point sources such as background stars (selected based on a minimum distance and highlighted using a circle and marker line), as shown below in Figure 3 for M32.

Figure 3. Reticle selecting a deep-sky object of less than the minimum size.

Star Selection

By Dave | Project | Published 9 Apr 2011 16:28 | Last Modified 13 Jan 2013 17:36

I wanted to be able to select individual stars for further information. In order to do fast lookups of stars near to a cursor or reticle, I added a further spatial index to my background stars. Due to the inaccuracies of using right ascension and declination near to the celestial poles, I used an indexing scheme based on Cartesian coordinates. A screenshot is shown below in Figure 1.

Figure 1. Reticle for star selection. Note that the reticle is shown at 2x scale for clarity.

More information on the star is shown after a configurable delay, and hidden when the reticle is no longer focussed on the object. This avoids rendering data when it is not required, and allows higher-latency data (such as from the internet) to be shown when ready. Going forward, I should consider a hierarchical indexing scheme such as a Hierarchical Triangular Mesh1, which would give me the ability to find the nearest star at a given level of resolution.

1 "Indexing the Sphere with the Hierarchical Triangular Mesh", Alexander S. Szalay, Jim Gray, George Fekete, Peter Z. Kunszt, Peter Kukol, and Ani Thakar, August 2005

Level of Detail Part 2

By Dave | Project | Published 24 Mar 2011 22:17 | Last Modified 13 Jan 2013 17:40

I described in Part 1 the basis of an approach for a planetary renderer with Level of Detail (LOD) support. I've now added the following:

  • Support for a simple cylindrical tiling scheme, using 512-pixel, square tiles and 2 tiles at LOD 0.
  • A background process to load tiles as required, without locking the rendering loop.
  • Datasets for surface texture, normals, specularity and clouds to LOD 5 (2048 tiles with an overall size of 32,768 x 16,384 pixels).
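
These figures follow directly from the scheme: with 2 root tiles and each level quadrupling the tile count, LOD 5 has 2 × 4⁵ = 2048 tiles, arranged as a 64 × 32 grid of 512-pixel tiles. A quick check of the arithmetic:

```python
TILE_SIZE = 512   # pixels per square tile
ROOT_TILES = 2    # LOD 0 in this simple cylindrical scheme

def tiles_at_lod(lod):
    """Each level subdivides every tile into four."""
    return ROOT_TILES * 4 ** lod

def dataset_size(lod):
    """Overall pixel size (width, height): the grid is twice as wide as it
    is tall, covering 360 degrees of longitude by 180 of latitude."""
    columns, rows = 2 ** (lod + 1), 2 ** lod
    return columns * TILE_SIZE, rows * TILE_SIZE

assert tiles_at_lod(5) == 2048
assert dataset_size(5) == (32768, 16384)
```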

Figures 1-3 below show the mapping of surface texture, normals and specularity.

Figures 1-3. Views of the French Alps showing normal and specular mapping at LOD 1, 3 and 5 respectively.

The next step is to adapt the atmospheric shaders for a tile-based approach. Adapting the cloud-shadowing algorithm will be the biggest challenge, since this will require more than one cloud tile in order to calculate the correct shadows for a single ground tile.

Level of Detail

By Dave | Project | Published 20 Mar 2011 18:49 | Last Modified 13 Jan 2013 17:40

The current implementation of the planetary renderer uses a single texture, or level of detail (LOD). In order to provide more realistic views when moving closer to the planet's surface, I need to vary the LOD according to the view. This is a well-studied and potentially complex topic; however, it is useful to start by considering the following:

  • A tile system for spherical textures.
  • An approach for culling geometry that is not visible.
  • An algorithm for picking LOD based on the current view.
  • Availability of data.

Tile System

There are several tiling systems available; however, I thought I'd start with a simple quad-tree. Bing Maps currently uses 256-pixel, square tiles based on a Mercator projection. The first level has 4 tiles, the second level has 16 tiles, etc. Simple cylindrical datasets are also available which use different tile sizes and arrangements, for example 1024-pixel, square tiles with the first level having 2 tiles for east and west, the second having 8 tiles, etc.

Culling

I thought I'd start with a simple quad-tree approach, based on culling the results of recursive intersection tests between tile bounding boxes and the view frustum. Figure 1 shows an initial screenshot of this approach. Each of the tiles in a quad is a different primary color, and tiles are drawn for LOD 3 (64 tiles in this tiling scheme). Bounding boxes are shown when there is an intersection at a given LOD with the view frustum. Each child within the bounding box is then checked: if the child is outside the view frustum, it is culled; if it intersects, the process recurses to a lower LOD.

Figure 1. AABB intersections of tiles with the view frustum.

LOD Calculation

The amount of planet surface visible is a function of the distance from the planet, and of the field of view and direction of the camera. I therefore needed to work out the size of each tile (they vary by latitude) so that the appropriate LOD can be rendered. I thought I'd start by measuring the maximum distance along a line of longitude within the tile. Figure 2 shows a rendering of the sphere where the upper tiles are rendered at a higher LOD (since, for a given LOD, they are smaller due to their latitude).

Figure 2. Tiles rendered at varying LOD.

The geometry used to create the tiles uses a constant number of vertices per degree of latitude and degree of longitude. In addition to minimising vertices, this also ensures that adjacent tiles of different LOD "line up" correctly to a certain level. For example, if I use 64 vertices per 360 degrees of longitude and 32 vertices per 180 degrees of latitude, the tiles tessellate without gaps up to LOD 6 in this tiling scheme (since these tiles are the first to have only 4 vertices).

Data

The current planetary renderer uses the following data sources:

  • Surface texture
  • Normal map
  • Cloud map

Going forward, I'd also like to include specularity maps (separately, or as part of the alpha channel of the texture) to deal with the different reflectivity of land and water. In supporting an LOD approach, I don't want to sacrifice realism when rendering at lower LOD. Since I would therefore also need normal, cloud (and optionally specularity) maps, I will use the simple cylindrical datasets.

Filters

By Dave | Project | Published 14 Mar 2011 23:19 | Last Modified 13 Jan 2013 17:36

I wanted to be able to view the sky and planets using different wavelengths of light, and a convenient way to achieve this was by the use of "filters", as shown below in Figure 1.

Figure 1. H-Alpha emission filter.

This enables me either to adjust the camera orientation while keeping the filter in a fixed position, or to move the filter around while keeping the camera orientation fixed (or a combination of both), and see the relevant portion of the sky in a different wavelength. By adjusting the opacity scale it is possible to see how items relate to each other in different wavelengths (for example, an area of H-Alpha emission originating from a particular star or deep-sky object in the visible spectrum).

The approach I am currently taking is to render the view in each wavelength, and create opacity masks which I can use to blend the views together. In this way I can use multiple filters of arbitrary shape and size.
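
A minimal sketch of the mask-based blend, assuming numpy image arrays and treating the visible-spectrum render as the base layer:

```python
import numpy as np

def composite(base_view, filter_views, masks):
    """Blend per-wavelength renders over a base view using per-filter
    opacity masks. Views are HxWx3 float arrays; masks are HxW floats
    in [0, 1], non-zero only where the corresponding filter sits."""
    out = np.asarray(base_view, dtype=float).copy()
    for view, mask in zip(filter_views, masks):
        alpha = mask[..., None]          # broadcast over colour channels
        out = out * (1.0 - alpha) + np.asarray(view, dtype=float) * alpha
    return out
```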

Virtual Universe Gallery

By Dave | Project | Published 31 Dec 2010 00:41 | Last Modified 21 Dec 2012 10:43

Now that I've added a gallery feature to my blog engine, I've created a gallery for this project and included a couple of screenshots I had lying around to get started.

There is a link to all galleries in the main menu, and this specific gallery can be found at http://drdave.co.uk/gallery/nuiverse.

A Matter of Controls Part 3

By Dave | Project | Published 20 Dec 2010 00:29 | Last Modified 13 Jan 2013 17:37

In Part 1 and Part 2 I showed some screenshots of a radial dial for controlling settings. I've made the following changes:

  1. Enabled the control for multi-touch. All buttons now support multi-touch input (per item and per category), and the dial can be "rotated" using multiple contacts.
  2. Changed the dial to rotate the scale and keep the pointer fixed, rather than vice versa, to provide better feedback with multi-touch rotation.
  3. Added inertia for rotation.
  4. Moved the category title back to the centre.
  5. Changed the button action to execute on click (manipulator-up) rather than on a manipulator-down event, with highlighting to provide better visual feedback when touching a button.
  6. Added support for enumerations, in addition to boolean and numeric buttons.

Figure 1. Radial control for numeric settings.

Figure 2. Radial control for boolean settings.

Using a single hue for each of the regular, disabled, and highlighted button states gives the impression of a "heads-up display" (HUD) rather than a mechanical dial. "LED"-style colors and a Gaussian blur add to the effect, as shown below in Figure 3.

Figure 3. LED coloring for HUD.

Inertia Processing

By Dave | Project | Published 23 Oct 2010 23:40 | Last Modified 13 Jan 2013 17:38

I previously described an approach to implementing inertia processing by averaging positions to extract velocity from manipulation. However, this approach could result in discontinuous velocities when transitioning from manipulation to inertia.

While implementing multi-touch manipulation processing for this project, I "smoothed out" manipulations using a basic damped-spring algorithm. Rather than extracting velocity from position, this approach conversely relies on calculating velocities to extrapolate positions. This neatly solves the problem of discontinuous velocities, since I always know the correct velocity at the point at which manipulation ends, and it is also a much simpler approach.

Using "stiff" springs ensures gestures such as flicks lead to rapidly increasing velocity.

Automatic Magnitude

By Dave | Project | Published 11 Oct 2010 17:18 | Last Modified 31 Dec 2010 08:19

In addition to indexing stars and deep sky objects by position, as described in Spatial Indexing Part 1 and Part 2, I added an additional index for magnitude to allow objects to be rendered by brightness.

I wanted to provide a way of automatically adjusting the faintest object visible as the field-of-view is changed, so that a sensible number of objects are drawn, particularly for rendering an appropriate number of labels. In order to consider which algorithm to use, I thought it would be useful to analyse the datasets for stars and deep-sky objects. Figure 1 shows the distribution of magnitude values in the Hipparcos catalog, the majority of which (87%) are in the range from 6-10.

Figure 1. Hipparcos magnitude histogram.

In order to automatically select the magnitude for a given field-of-view, I chose first to calculate its fraction of the maximum field-of-view (1), then square this to give the fraction of area of the maximum field-of-view (2).

fov_fraction = current_fov / max_fov    (1)
area_fraction = fov_fraction²    (2)

I assume that the change in brightness is directly proportional to the change in area (3).

brightness_fraction = area_fraction    (3)

Apparent magnitude is a logarithmic scale, with a difference of five magnitudes defined as a factor of one hundred in brightness (so a magnitude 1 star is one hundred times brighter than a magnitude 6 star). Hence each step in magnitude corresponds to a brightness ratio equal to the fifth root of 100 (approximately 2.512, known as Pogson's ratio) (4). I then convert this difference in brightness to a difference in magnitude (5).

brightness_fraction = 2.512 ^ magnitude_delta    (4)
magnitude_delta = log(brightness_fraction) / log(2.512)    (5)

Finally, I subtract the magnitude difference from the magnitude used at the maximum field-of-view (6). The maximum magnitude for labels can be set slightly below this figure (e.g. by a magnitude or so) so that items appear before their labels.

max_magnitude = 5.0 - magnitude_delta    (6)

The maximum magnitude calculated in this manner is shown in Figure 2. Note that the graph tends to the magnitude at maximum field-of-view (magnitude 5.00 at 100° in this case). For example, at a field-of-view of 10°, the area fraction of the maximum field of view is 0.01, hence the magnitude difference is -5.00, giving a calculated value of 5.00 - (-5.00) = 10.00.

Figure 2. Automatic magnitude by field-of-view.
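
Putting equations (1)-(6) together as a sketch in code (the constants match the worked example above):

```python
import math

MAX_FOV = 100.0        # degrees
MAG_AT_MAX_FOV = 5.0   # faintest magnitude drawn at the widest view
POGSON = 100.0 ** 0.2  # fifth root of 100, approximately 2.512

def max_magnitude(current_fov):
    """Faintest magnitude to draw for the current field of view."""
    fov_fraction = current_fov / MAX_FOV                      # (1)
    area_fraction = fov_fraction ** 2                         # (2)
    brightness_fraction = area_fraction                       # (3)
    delta = math.log(brightness_fraction) / math.log(POGSON)  # (4), (5)
    return MAG_AT_MAX_FOV - delta                             # (6)

assert abs(max_magnitude(10.0) - 10.0) < 1e-9   # worked example above
```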
