Star Names

By
Dave
Project
Published
21 Sep 2010 00:07
Last Modified
3 Jan 2011 13:41

I wanted to add labels to the brighter background stars. Stars are commonly identified using multiple designations, including:

  1. Common names
  2. Bayer names
  3. Flamsteed names
  4. Catalogue identifiers such as Hipparcos, Henry Draper etc.

In this way, the same star may be referred to as Betelgeuse, α Orionis, 58 Orionis, HIP27989, and HD39801 respectively. Since each designation scheme covers a different set of stars, a given star may have a varying number of designations.

I chose to start with Bayer names, since there are a reasonable number of these (approximately 1500), and they would also serve as a good reference to add definitions of constellations and asterisms at a later date. The first step was to build a suitable list, which was greatly helped by the availability in the public domain of cross-references between designations.

I previously discussed a couple of approaches to text rendering for deep sky object labels, and I took a similar approach here. Since I was using wireframe labels, I needed to add a further set of vector definitions for the Greek alphabet. Unfortunately, these characters are not included in the "Modern", "Roman", and "Script" Windows fonts. However, other definitions are available in the public domain, such as the Hershey fonts developed in the 1960s by Dr. A. V. Hershey at the US Naval Weapons Laboratory.

Numeric superscripts are also used in the Bayer system. These may be used to distinguish stars sharing the same Greek letter, such as (typically optical) binaries; however, there are some exceptions, such as the chain of stars π¹, π², π³, π⁴ and π⁵ Orionis. I therefore needed to further extend the wireframe font rendering to support superscripts. An initial screenshot is shown in Figure 1.

Bayer labels

Figure 1. Bayer star names, in the region of Orion
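
As an aside on the superscript support, the following is a minimal sketch of how a wireframe glyph might be transformed into superscript position, assuming a y-up coordinate system and glyphs stored as 2D points (the method, proportions, and names are hypothetical, not the actual implementation):

// Sketch: scale a glyph down and raise it above the baseline to render it
// as a superscript (proportions are illustrative only).
private Vector2[] MakeSuperscript(Vector2[] glyphPoints, float lineHeight)
{
    const float scale = 0.6f;                          // superscripts are smaller
    Vector2 raise = new Vector2(0, lineHeight * 0.4f); // and sit above the baseline

    Vector2[] result = new Vector2[glyphPoints.Length];
    for (int i = 0; i < glyphPoints.Length; i++)
        result[i] = glyphPoints[i] * scale + raise;
    return result;
}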

One of the remaining issues, now highlighted by the larger number of names, is dealing with overlapping labels. Culling labels on the CPU, either by magnitude at wide fields of view or by visibility at narrow fields of view, might be a useful approach, and will be investigated at a later date.

Multitouch Game Loop Manipulation Processing

By
Dave
Project
Published
12 Aug 2010 23:05
Last Modified
8 Dec 2012 16:21

Multitouch Surface support for XNA-based applications is provided via the Microsoft.Surface.Core.Manipulations namespace in an event-driven model. I was interested in investigating general multitouch processing within a game-loop model, i.e. not involving events, and thought it would be instructive to develop my own class to process manipulations in this way.

In a similar manner to the Affine2DManipulationProcessor, I wanted to support arbitrary combinations of translation, rotation, and scale. Translation is trivial, since it is simply a matter of tracking the changes in each manipulation point between calls and taking the average.

Rotation and scale are a little more tricky, since each of these values is relative to a centre point, or manipulation origin. Rotation is calculated from the average angle between each point and the origin, and scale is calculated from the average distance. Unlike a mouse cursor, manipulation points come and go, so the key is to track changes only to those points which persist between calls, and then update the positions, origin, distances and angles for the next call.
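
As a sketch of that bookkeeping (all names here are hypothetical, not the actual implementation), the per-frame rotation and scale deltas might be accumulated along these lines:

// Sketch: average the change in angle and distance about the origin, using
// only manipulation points that persist between calls.
private void UpdateRotationAndScale(List<Manipulator> persisting, Vector2 origin)
{
    float angleDelta = 0;
    float scaleRatio = 0;
    foreach (Manipulator manipulator in persisting)
    {
        Vector2 previous = previousPositions[manipulator.Id] - previousOrigin;
        Vector2 current = manipulator.Position - origin;

        angleDelta += (float)(Math.Atan2(current.Y, current.X) - Math.Atan2(previous.Y, previous.X));
        scaleRatio += current.Length() / previous.Length();
    }
    Rotation += angleDelta / persisting.Count; // average angular change
    Scale *= scaleRatio / persisting.Count;    // average change in distance
}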

In order to generalise manipulation points from multiple sources such as mouse, gamepad, Surface Contacts and .NET 4.0 Touch, I created my own Manipulation structure to abstract properties such as ID and position. My input processing can then build a collection of these objects from any relevant source(s) and pass them to my manipulation processor as part of the Update call as follows:

myManipulationProcessor.ProcessManipulators(myManipulators);

The cumulative transforms for translation, rotation and scale (according to the supported manipulation types specified) are then immediately available as properties on the manipulation processor.
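
For reference, a minimal sketch of the Manipulator structure might look like this (the names are illustrative rather than the actual types):

// Sketch: a manipulation point abstracted from any input source
// (mouse, gamepad, Surface Contacts, .NET 4.0 Touch).
public struct Manipulator
{
    public readonly int Id;           // identity persisting between calls
    public readonly Vector2 Position; // current position

    public Manipulator(int id, Vector2 position)
    {
        Id = id;
        Position = position;
    }
}

The stable Id is the important part, since it is what allows the processor to match points between calls.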

In order to test the solution, I wrote a small harness to visualise the manipulation points and their effect on a reference cross-hair. While I also added support for a mouse (right-click to add/remove points, left-drag to move points), multitouch hardware is required to test scenarios in which multiple points are added, moved, or removed simultaneously. A screenshot is shown in Figure 1.

Multitouch manipulation

Figure 1. Multitouch manipulation with averaged manipulation origin.

One of the complexities of working with multiple manipulation points is deciding how to determine the manipulation origin. One option is to simply take the average position of the points, as shown in Figure 1. Another option is to introduce a specific "pivot point" as the origin and use this to calculate scale and rotation. This pivot point can either be fixed or can track the object being manipulated. The latter two options are shown in Figures 2 and 3.

Multitouch manipulation

Figure 2. Multitouch manipulation with fixed pivot point.

Multitouch manipulation

Figure 3. Multitouch manipulation with pivot point centered on manipulation object.
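
As a sketch, selecting between the three origin modes might look something like this (the types and names are hypothetical):

// Sketch: determine the manipulation origin from the configured mode.
private Vector2 GetOrigin(List<Manipulator> manipulators)
{
    switch (originMode)
    {
        case OriginMode.FixedPivot:
            return fixedPivot;                 // e.g. the centre of the screen
        case OriginMode.ObjectPivot:
            return manipulatedObject.Position; // pivot tracks the object
        default:
            // Average the positions of the current manipulation points.
            Vector2 sum = Vector2.Zero;
            foreach (Manipulator manipulator in manipulators)
                sum += manipulator.Position;
            return sum / manipulators.Count;
    }
}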

The solution is demonstrated in the following video. Each approach for determining the manipulation origin is illustrated using translation, rotation, and scaling manipulations, initially in isolation and then in combination.

Video 1. Multitouch game loop manipulation processing.

Surface 3D Demo Part 2

By
Dave
Project
Published
7 Aug 2010 11:03
Last Modified
13 Jan 2013 18:23

In Part 1 I discussed a simple shader which generated autostereograms from depth maps. The next step was to make the application interactive, so that instead of generating an autostereogram from a static depth map, an interactive 3D model could be used to generate dynamic autostereograms.

To create a depth map from a model, I added a pixel shader to return the depth value. A similar example can be found in How To: Create a Depth Texture. In order to obtain a sensible range of depth values, I needed to calculate the bounds of the model using bounding spheres. I could then set the near and far planes in my projection matrix to correspond to the closest and furthest points in the model respectively. The depth texture can be extracted by rendering to a render target. It can then be passed as a shader parameter to allow autostereograms to be rendered in real time as the model is manipulated.
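
As a sketch of the near/far fitting (the names are hypothetical, and bone transforms are ignored for brevity):

// Sketch: fit the projection's near and far planes to the model's bounding
// sphere so the depth map spans a sensible range of values.
BoundingSphere bounds = new BoundingSphere();
foreach (ModelMesh mesh in model.Meshes)
    bounds = BoundingSphere.CreateMerged(bounds, mesh.BoundingSphere);

float distance = Vector3.Distance(cameraPosition, bounds.Center);
float nearPlane = Math.Max(distance - bounds.Radius, 0.01f); // closest point of the model
float farPlane = distance + bounds.Radius;                   // furthest point of the model

Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, aspectRatio, nearPlane, farPlane);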

One of the main issues with animating an autostereogram is that manipulating the model results in changes across areas of the image which do not correspond to the location of the model itself.¹ One way around this distracting side-effect is to use a SIRDS, or Single Image Random Dot Stereogram, and alter the pattern each frame in which the model is manipulated.

Instead of passing a pre-defined pattern texture, I generate a random texture on the CPU as follows:

private Texture2D CreateStaticMap(int resolution)
{
    Random random = new Random();

    // Fill every texel with a random grayscale value.
    Color[] colors = new Color[resolution * resolution];
    for (int x = 0; x < resolution; x++)
        for (int y = 0; y < resolution; y++)
            colors[x + y * resolution] = new Color(new Vector3((float)random.NextDouble()));

    Texture2D texture = new Texture2D(GraphicsDevice, resolution, resolution, 1, TextureUsage.None, SurfaceFormat.Color);
    texture.SetData(colors);
    return texture;
}

The resultant grayscale image is shown below in Figure 1.

SIRDS

Figure 1. SIRDS for animation

¹ This is exacerbated by my current simplistic stereogram shader, since it doesn't yet implement features such as hidden-surface removal (where a part of the model should be visible to only one eye), which in turn leads to additional "echoes" in the image.

Surface 3D Demo Part 1

By
Dave
Project
Published
1 Aug 2010 19:19
Last Modified
13 Jan 2013 18:22

I've always been fascinated by stereoscopic images and their ability to convey depth from just two dimensions, and I was interested to explore their effectiveness in implementing a Surface application for 3D visualisation.

As a multiuser and multidirectional platform, Microsoft Surface is ideal for viewing 2D content. Since stereoscopic images can be viewed¹ from angles orthogonal to an axis designed to align with the plane of the viewer's eyes, they enable depth perception from two opposing viewing positions on either side of a Surface device.

The first step was to generate a stereoscopic image from a depth-map by implementing a pixel shader to render an autostereogram. My initial algorithm was very basic, and produced images as per the example shown below in Figure 1.

Initial rendering

Figure 1. Initial autostereogram rendering from depth map.

The initial shader takes the following parameters:

  • Depth texture
  • Tile texture
  • Size of tile (number of vertical strips)
  • Depth factor
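
For illustration, the basic algorithm the shader implements can be sketched on the CPU as follows (this is a simplification using the parameters above, not the actual shader, and assumes the depth offset is always smaller than the tile size):

// Sketch: each pixel repeats the pixel one strip to its left, shifted by an
// offset proportional to the local depth; the first strip seeds from the tile.
private Color[] CreateAutostereogram(float[] depth, Color[] tile, int width, int height, int tileSize, float depthFactor)
{
    Color[] output = new Color[width * height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int offset = (int)(depth[x + y * width] * depthFactor);
            int source = x - tileSize + offset;
            output[x + y * width] = source < 0
                ? tile[(x % tileSize) + (y % tileSize) * tileSize]
                : output[source + y * width];
        }
    }
    return output;
}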

¹ In order to perceive a 3D image, the viewer must decouple the convergence and focusing of their eyes. In order to aid convergence, a disc and ring have been placed at the correct separation. Looking "through" the image results in four shapes (a disc and ring from each eye). The eyes are correctly converged when the two centre shapes "overlap" to give a disc within a ring. At this point the eyes must be refocused without changing their convergence. Bright light can help, since a contracted iris gives an increased depth of field.

Control Integration

By
Dave
Project
Published
19 Jul 2010 11:13
Last Modified
13 Jan 2013 17:45

There's still some tweaking to do on my radial controls, discussed in Part 1 and Part 2 of a previous post, but I wanted to integrate them into the project. I've included a couple of screenshots below. Note that the size of the control has been enlarged for clarity and that the menu structure is purely for test purposes.

Radial control for boolean settings

Figure 1. Radial control for boolean settings.

Radial control for numeric settings

Figure 2. Radial control for numeric settings.

I decided to render all of the "focus arcs" around the edge of the control, highlighting the relevant arc as appropriate, so that the pointer for numeric values didn't "float" in space when it wasn't adjacent to the currently focussed segment.

The control allows me to define a hierarchical menu structure and update properties using reflection, so it's trivial to add another entry. I can also define min/max ranges, format strings, rates of change when using the keyboard, etc. For example, the menu item for "zoom" shown in Figure 2 looks like this:

new MenuItem
{
    FormatString = "Zoom\n{0}",
    PropertyInfo = typeof(Camera).GetProperty("Zoom"),
    Min = 0.0f,
    Max = 45.0f,
    Rate = 1.0f,
    TickFormatString = "0.0",
    ValueFormatString = "0.00",
}

A Matter of Controls Part 2

By
Dave
Project
Published
14 Jul 2010 23:53
Last Modified
13 Jan 2013 17:44

In Part 1 I showed some screenshots of a radial dial for controlling settings. I've made the following visual changes:

  • Removed the central buttons to allow a Surface tag to control the dial position and orientation
  • Adjusted the size and fonts to suit a tag 39 mm in diameter (a poker chip)
  • Added a label indicating the current category above the dial, which can also be used to drag the dial with the mouse
  • Added a back button to the last (clockwise) segment to navigate back to the previous category
  • Added a ring to the centre of the dial to indicate dial focus when more than one dial is present. The arc segments around the outside indicate item segment focus for a given dial
  • Added highlighting to show selected or pressed item states

Bool dial

Figure 1. Radial control for boolean settings

Planetary Rings

By
Dave
Project
Published
31 May 2010 11:59
Last Modified
13 Jan 2013 17:46

The planetary bodies wouldn't be complete without their rings. Whilst Saturn has the most spectacular system, Jupiter, Uranus and Neptune also possess systems of their own. I chose to start with Saturn, and Figure 1 shows an initial screenshot.

Saturn Ring System

Figure 1. Initial rendering of Saturn's ring system.

I had to consider the following points, and will discuss each in turn:

  1. Mesh
  2. Texture & Lighting
  3. Size & Orientation
  4. Shadows

Mesh

I implemented a simple algorithmic disc mesh. For the planetary bodies I can use a single mesh, scaled appropriately. However, for each instance of a ring system I generate a separate mesh, since both the inner and outer radii can change. In the future I may look at using a single mesh and using vertex displacement in the vertex shader to size the ring appropriately.
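
A sketch of the disc generation, as a triangle strip between the inner and outer radii (the method name and vertex format are illustrative):

// Sketch: build an annulus in the XZ plane as a triangle strip, with the
// texture u coordinate running radially across the ring slice.
private VertexPositionTexture[] CreateRingMesh(float innerRadius, float outerRadius, int segments)
{
    VertexPositionTexture[] vertices = new VertexPositionTexture[(segments + 1) * 2];
    for (int i = 0; i <= segments; i++)
    {
        float angle = MathHelper.TwoPi * i / segments;
        Vector3 direction = new Vector3((float)Math.Cos(angle), 0, (float)Math.Sin(angle));

        vertices[i * 2] = new VertexPositionTexture(direction * innerRadius, new Vector2(0, (float)i / segments));
        vertices[i * 2 + 1] = new VertexPositionTexture(direction * outerRadius, new Vector2(1, (float)i / segments));
    }
    return vertices;
}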

Texture & Lighting

I chose to start by lighting the ring equally from each side, though in reality the rings look different from above and below. Using a diffuse, per-pixel shader gives a term such as the following, where the absolute value gives the same lighting from above and below.

float diffuse = saturate(abs(dot(direction, normal)));

In order to be able to see through the gaps in the ring system, I blended the rings using the following render states:

SourceBlend = Blend.SourceAlpha;
DestinationBlend = Blend.InverseSourceAlpha;

There are numerous textures available for Saturn's rings. I took a thin slice for a texture map, and added an alpha channel corresponding to the ring transmittance.

Size & Orientation

I added a further XML file for the physical data on a ring system, and used it to size the ring mesh and apply the appropriate texture.

In order to actually see any light on the ring system, I also had to add the correct obliquity (axial tilt) to the planet, available from the NASA JPL Horizons system. This parameter also needed to be applied to the positions and orbits of the planet's moons.
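
As a sketch, the tilt might be applied to the planet's world matrix along these lines (the axis choice and names are hypothetical):

// Sketch: tilt the planet (and hence its ring system) by the obliquity
// before positioning it; the same rotation is applied to moon orbits.
Matrix tilt = Matrix.CreateRotationX(MathHelper.ToRadians(obliquity));
Matrix world = Matrix.CreateScale(radius) * tilt * Matrix.CreateTranslation(position);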

Shadows

The final piece for my first iteration on the ring system was to add shadows. Shadows are cast from the planet onto the rings, and also from the rings onto the planet. For the former, when rendering the ring I implemented a ray-sphere intersection algorithm to add shadow whenever a ray fired toward the sun intersected the planet. For the latter, when rendering the planet, I implemented a simple ray-plane algorithm to add shadow whenever a ray fired from the planet intersected the ring.
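
A minimal sketch of the ray-sphere test used for the planet's shadow on the rings (the standard quadratic formulation; the method name is illustrative, and the ray direction is assumed to be normalised):

// Sketch: a ring point is in shadow if a ray fired toward the sun hits the
// planet. Solve |origin + t * direction - centre|^2 = radius^2 for t >= 0.
private bool IntersectsSphere(Vector3 origin, Vector3 direction, Vector3 centre, float radius)
{
    Vector3 m = origin - centre;
    float b = Vector3.Dot(m, direction);
    float c = Vector3.Dot(m, m) - radius * radius;
    if (c > 0 && b > 0)
        return false;      // ray starts outside and points away from the sphere
    return b * b - c >= 0; // real roots imply an intersection
}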

A Matter of Controls

By
Dave
Project
Published
19 May 2010 23:46
Last Modified
13 Jan 2013 17:45

Up to this point, I've had numerous settings which I can control using a keyboard-driven menu. One of the key things I've been looking at recently is to "touch-enable" these settings, so that a keyboard is not necessary. This will enable interaction via touchscreen, Microsoft Surface etc.

There are several different types of settings, including:

  1. Boolean values
  2. Floating-point values
  3. Integer values

The control had to support the following:

  1. Different data types
  2. Multidirectionality
  3. Multitouch manipulations, at least for translation and rotation
  4. Multiple simultaneous instances of the control
  5. Keyboard support with "tab stops"

Boolean Values

For boolean values, I've used a "radial" control which renders an arbitrary number of segments which can be selected/deselected. An early version is shown below in Figure 1.

Boolean settings

Figure 1. Radial control for boolean settings

Floating-Point Values

When floating-point values are selected, I add a radial scale and a dial which can be rotated to set the appropriate value. An early version is shown below in Figure 2.

Float settings

Figure 2. Radial control for float settings

Navigation

In order to navigate a "tree-view" of settings, the current "category" is displayed, along with a "Back" button to navigate back to the parent category. Child categories are shown as segments on the menu, along with settings for the current category. The currently-selected setting is indicated by a "focus" arc, and "pressed" buttons can be highlighted using different colors and opacities.

Precision Matters

By
Dave
Project
Published
28 Feb 2010 13:26
Last Modified
13 Jan 2013 17:46

The Vector3 struct uses 32-bit floating-point values. On a solar-system scale, this does not have sufficient precision to accurately position objects such as the far outer planets.

A single-precision (32-bit) floating-point value stores approximately 7 decimal digits of precision. Neptune, for example, has an approximate distance from the sun of 4.5×10⁹ km. Expressed with 7 significant figures, this has an error of approximately 500 km (one half of the value of the last significant place). For Pluto (although no longer classified by the IAU as a planet), at a distance of approximately 5.9×10⁹ km, 500 km is approximately one quarter of its diameter! Clearly single-precision floating-point numbers are not sufficient to store these positions to the required level of accuracy.

One option is to use double-precision (64-bit) values, which store approximately 15 decimal digits of precision. Most modern GPUs remain incapable of performing double-precision arithmetic; however, this level of precision is only required to calculate the position of an object relative to the camera. This can be done on the CPU, after which point single-precision calculations are just fine, as shown below in Figures 1-3.

Precision

Figure 1. Uranus and moon orbits

Precision

Figure 2. Neptune and moon orbits

Precision

Figure 3. Pluto and moon orbits

An example struct for dealing with double-precision vectors is as follows:

public struct Vector3Double
{
    double _x, _y, _z;

    public double X { get { return _x; } set { _x = value; } }
    public double Y { get { return _y; } set { _y = value; } }
    public double Z { get { return _z; } set { _z = value; } }

    public Vector3Double(double x, double y, double z)
    {
        _x = x;
        _y = y;
        _z = z;
    }
}

Using operator overloading, it is possible to work with Vector3Double in the same way as Vector3, for example:

public static Vector3Double operator -(Vector3Double vector1, Vector3Double vector2)
{
    return new Vector3Double(
        vector1._x - vector2._x,
        vector1._y - vector2._y,
        vector1._z - vector2._z);
}

Conversions to and from Vector3 can also easily be supported, for example:

public static implicit operator Vector3(Vector3Double vector)
{
    return new Vector3(
        (float)vector._x,
        (float)vector._y,
        (float)vector._z);
}
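
Putting this together, the camera-relative positioning described above might be sketched as follows (the helper names are hypothetical):

// Sketch: subtract in double precision on the CPU, then let the implicit
// conversion narrow the (now small) relative position for the GPU.
Vector3Double planetPosition = GetPlanetPosition(); // e.g. heliocentric position
Vector3Double cameraPosition = GetCameraPosition();

Vector3 relativePosition = planetPosition - cameraPosition;
Matrix world = Matrix.CreateTranslation(relativePosition);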

Note that while double precision solves the problem to the levels of accuracy described here, a better long-term option will be to use fixed-point types such as 64-bit integers.

Constellation Boundaries

By
Dave
Project
Published
27 Feb 2010 22:38
Last Modified
13 Jan 2013 17:48

The International Astronomical Union (IAU) defines an "official" set of boundaries for the 88 constellations. I've now added these.

What I initially saw as a bug turned out to result from the fact that Serpens is actually divided into two separate regions, Serpens Caput and Serpens Cauda, and therefore has two boundaries (all other constellations have a single boundary).

Constellation Boundaries

Constellation Boundaries

Constellation Boundaries

Figures 1-3. Constellation Boundaries
