Surface Physics Download

By
Dave
Project
Published
13 Feb 2010 12:15
Last Modified
13 Jan 2013 18:05

I've had a lot of requests to share my physics-enabled, WPF layout control for Microsoft Surface. This was demonstrated in the previous series of posts, Surface Physics Demo Part 1, Part 2, and Part 3.

So here it is! The following downloads are available:

  1. Physics Library (binary), .dll (zip'd), 19Kb. The physics library and layout control.
  2. Surface Physics Sample (install), .msi (zip'd), 1,046Kb. The sample application for demonstrating the physics library and layout control.
  3. Readme for Surface Physics Sample, .pdf, 1,626Kb. Readme for the sample application.
  4. Surface Physics Sample (source code), Visual Studio 2008 Project (zip'd), 907Kb. Source code for the sample application.

Note that the Physics Library itself is currently only available as a binary.

The Surface Physics Sample demonstrates many of the features supported by this library. However, to get you started with using the library in your own projects, I'll discuss how to enable basic physics for a simple ScatterView sample.

Migrating ScatterView to PhysicsView

This example demonstrates how to take a simple ScatterView sample and migrate it to make use of the physics-enabled layout control. You'll need the Microsoft Surface SDK, available from the MSDN site here, and access to a Microsoft Surface, or at least the Surface Simulator included in the SDK. You'll also need the physics library.

First, create a new Surface Application (WPF) project using the Visual Studio template included in the SDK. In SurfaceWindow1.xaml, add a ScatterView to the default Grid control as follows:

<s:ScatterView Name="ScatterView1">
    <s:ScatterView.ItemTemplate>
        <DataTemplate>
            <Border BorderBrush="White" BorderThickness="2">
                <Image Source="{Binding}" Width="128" Height="96" />
            </Border>
        </DataTemplate>
    </s:ScatterView.ItemTemplate>
</s:ScatterView>

In SurfaceWindow1.xaml.cs add an event handler for the Loaded event and add some items to the layout control:

public SurfaceWindow1()
{
     InitializeComponent();

     // Add handlers for Application activation events
     AddActivationHandlers();

     this.Loaded += new RoutedEventHandler(SurfaceWindow1_Loaded);
}

void SurfaceWindow1_Loaded(object sender, RoutedEventArgs e)
{
    ScatterView1.ItemsSource = System.IO.Directory.GetFiles(@"C:\Users\Public\Pictures\Sample Pictures");
}

Run the project and, if you're using the Surface Simulator, you should see the following:

ScatterView

Figure 1. Simple ScatterView example. Note that the blank image is from a hidden file (desktop.ini) in the Sample Pictures folder.

We'll use this as a baseline to migrate from ScatterView to PhysicsView. First of all, copy the physics library (Physics.SurfaceControls.dll) into the project, and add a reference to it. Then add a namespace declaration at the top of SurfaceWindow1.xaml as follows:

xmlns:p="clr-namespace:Physics.SurfaceControls;assembly=Physics.SurfaceControls"

In SurfaceWindow1.xaml do a find and replace on s:ScatterView with p:PhysicsView and change the name from ScatterView1 to PhysicsView1. The rest of the markup remains unchanged, and the layout control should now look like this:

<p:PhysicsView Name="PhysicsView1">
    <p:PhysicsView.ItemTemplate>
        <DataTemplate>
            <Border BorderBrush="White" BorderThickness="2">
                <Image Source="{Binding}" Width="128" Height="96" />
            </Border>
        </DataTemplate>
    </p:PhysicsView.ItemTemplate>
</p:PhysicsView>

We can now set the ItemsSource on the new layout control. The next thing we need to do is inform the physics library of the physical properties of these items. Before we do this, however, we should set up some walls defining the bounding area. The library doesn't make any assumptions here, since this area doesn't have to be rectangular, nor aligned with the x and y axes. In this case we'll simply add four walls, inset from the extent of the screen by 16px (remembering that a Surface application runs at 1024 x 768px). Update SurfaceWindow1_Loaded to look like this:

void SurfaceWindow1_Loaded(object sender, RoutedEventArgs e)
{
    //ScatterView1.ItemsSource = System.IO.Directory.GetFiles(@"C:\Users\Public\Pictures\Sample Pictures");

    // add walls
    PhysicsView1.Bounds = GenerateWalls(0.5, new Rect(new Point(16, 16), new Point(1008,752)));

    // set items source
    PhysicsView1.ItemsSource = System.IO.Directory.GetFiles(@"C:\Users\Public\Pictures\Sample Pictures");

    // add behaviours
    AddBehaviours(this.PhysicsView1);
}

The methods for setting the bounds and item properties are as follows:

List<WallBody> GenerateWalls(double restitution, Rect rect)
{
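    // Note: the wall normals point inward (toward the bounded area),
    // so collisions push items back inside the walls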
    return new List<WallBody> {
        new WallBody { Normal = new Vector(1, 0), StartPoint = rect.TopLeft,
                EndPoint = rect.BottomLeft, Restitution = restitution }, // left
        new WallBody { Normal = new Vector(0, 1), StartPoint = rect.TopLeft, 
                EndPoint = rect.TopRight, Restitution = restitution }, // top
        new WallBody { Normal = new Vector(-1, 0), StartPoint = rect.TopRight,
                EndPoint = rect.BottomRight, Restitution = restitution },  // right
        new WallBody { Normal = new Vector(0, -1), StartPoint = rect.BottomLeft,
                EndPoint = rect.BottomRight, Restitution = restitution }}; // bottom
}

void AddBehaviours(PhysicsView physicsView)
{
    Random random = new Random();

    // ensure layout
    physicsView.UpdateLayout();

    for (int i = 0; i < physicsView.Items.Count; i++)
    {
        // get item
        PhysicsViewItem item = physicsView.ItemContainerGenerator.ContainerFromIndex(i) as PhysicsViewItem;

        // set properties
        Body body = new RectangularBody
        {
            Width = item.DesiredSize.Width,
            Height = item.DesiredSize.Height,
            Density = 0.01,
            InertiaConstant = 0.5,
            Restitution = 0.5,
            Orientation = random.NextDouble() * Math.PI * 2,
            Location = new Point(random.NextDouble() * physicsView.ActualWidth,
                random.NextDouble() * physicsView.ActualHeight),
        };

        // attach the physics body to the item
        PhysicsCanvas.SetChildBody(item, body);
    }
}

That's it! The items will now collide with one another. Run the project and you should now see the following:

PhysicsView

Figure 2. PhysicsView Items now collide with one another and do not overlap.

Regrettably I've had no time to extend this work, so a lot of features remain unimplemented at this time. Examples include multi-touch manipulations on the individual items themselves, a better damping algorithm, etc. However, I hope it may still prove useful in some cases.

The source code for this example can be downloaded here.

HDR Effects

By
Dave
Project
Published
9 Feb 2010 22:02
Last Modified
13 Jan 2013 17:54

One of the things I needed to start thinking about sooner or later was supporting High Dynamic Range (HDR) render targets, initially for the following reasons:

  1. I wanted to use a realistic bloom effect for the sun, without affecting other rendered planetary bodies.
  2. Atmospheric scattering algorithms lead to a wide range of luminance values.
  3. Background stars span a wide range of luminance values.

The use of HDR in combination with Tone Mapping to normalise luminance values into a standard render target would allow me to deal with each of these issues. This post will focus on just the first issue: rendering the sun.

Non-HDR Approach

There are a number of approaches to rendering bright objects. One example is the lens flare sample on the XNA Creators Club site, which uses occlusion queries on the GPU to render glow and flare sprites when the sun is visible. Figure 1 shows the use of this technique.

Occlusion Queries Sprites

Figure 1. Non-HDR Sun effect using Occlusion Queries with Glow and Flare Sprites

HDR Approach

Another approach to rendering bright objects is based on image post-processing. The bloom sample on the XNA Creators Club is a good model for implementing bloom post-processing effects, along with additional control over color saturation. I combined this approach with an HDR render target (I used SurfaceFormat.HalfVector4), and added some HDR-textured spheres to represent bright lights. I rendered my scene as per normal and, using a post-processing bright-pass filter, extracted the pixels whose color values fell outside the normal (0-1) range. I then used several passes of a Gaussian blur filter to create "star" and "bloom" effects and combined this with the original image, as shown in Figure 2.

HDR Test

Figure 2. HDR Post-Processing Test. Note that the "lights" are not being used in any lighting calculation for the central "planet" (a simple ambient value is applied to its texture), since this test was simply to demonstrate post-processing effects.
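
To make the flow concrete, here's a rough sketch of how the pieces fit together using the XNA 3.1 API. The bright-pass, blur, and combine calls are placeholder names for the post-processing steps, not real methods:

// Inside the Game subclass
RenderTarget2D hdrTarget;

protected override void LoadContent()
{
    PresentationParameters pp = GraphicsDevice.PresentationParameters;

    // HalfVector4 stores 16 bits per channel, so color values can exceed 1.0
    hdrTarget = new RenderTarget2D(GraphicsDevice,
        pp.BackBufferWidth, pp.BackBufferHeight, 1, SurfaceFormat.HalfVector4);
}

protected override void Draw(GameTime gameTime)
{
    // 1. Render the scene into the HDR target instead of the back buffer
    GraphicsDevice.SetRenderTarget(0, hdrTarget);
    GraphicsDevice.Clear(Color.Black);
    // ... draw the scene, including the HDR-textured spheres ...

    // 2. Resolve the target and run the post-processing chain
    GraphicsDevice.SetRenderTarget(0, null);
    Texture2D sceneTexture = hdrTarget.GetTexture();
    // BrightPass(sceneTexture);   extract pixels with values above 1.0
    // GaussianBlur(...);          several passes for "star" and "bloom"
    // Combine(...);               blend the result with the original image

    base.Draw(gameTime);
}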

I then applied this post-processing approach as an alternative method to rendering the sun, as shown in Figure 3 below.

HDR Post-Processing

Figure 3. Sun effect using HDR Post-Processing effects

Combinations of both approaches can also be used, as per Figure 4 below.

HDR Post-Processing and Occlusion Queries Sprites

Figure 4. Sun effect using both HDR Post-Processing and Occlusion Query Sprites

Using HDR render targets is an expensive process, even for many current GPUs. However, it has a number of advantages over the Occlusion Query approach, such as:

  1. Rendering one or many HDR-textured items in a post-processing pixel shader has the same GPU cost, unlike running multiple occlusion queries.
  2. Since post-processing effects are pixel-based, this approach leads to more realistic results when HDR-textured items are partially occluded.

Planetary Body Shader Part 2

By
Dave
Project
Published
9 Feb 2010 19:30
Last Modified
13 Jan 2013 17:56

In Part 1 I showed some screenshots of a planet rendered using multiple effects. I'll discuss each of these in turn, and begin with a composite image to show how each effect contributes to the overall image.

Earth Shader Composite

Figure 1. Composite image showing per-pixel, single-direction lighting (top left), addition of bump-mapping and specular (Phong) reflection (top right), addition of atmospheric scattering (bottom left), and addition of clouds and cloud shadows (bottom right).

The first thing I needed to do was create a model. I could have used a predefined sphere, but chose instead to generate the mesh algorithmically so that I could easily control the number of vertices.

Texture

Once I had a model, the next step was to apply a texture. NASA has an extensive image library, and the Visible Earth site has a collection of land maps for Earth. These maps are Equidistant Cylindrical projections, so my texture coordinates were simply:

x = λ
y = θ

where

λ = longitude,
θ = latitude
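
Since I was generating the mesh myself, assigning these coordinates was straightforward. As a sketch, with angles in radians and coordinates normalised to the 0-1 texture range:

Vector2 GetTextureCoordinate(double longitude, double latitude)
{
    // Equidistant cylindrical projection: u and v are linear in
    // longitude and latitude respectively
    float u = (float)((longitude + Math.PI) / (2 * Math.PI));
    float v = (float)((Math.PI / 2 - latitude) / Math.PI);
    return new Vector2(u, v);
}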

Lighting

The Shader Series on the XNA Creators Club site is a great introduction to lighting. My initial lighting model was a simple per-pixel shader, with a single directional light source from the sun. I subsequently added specular reflection using a Phong shading algorithm.
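
Written as C# vector math rather than the actual HLSL, the combined diffuse and specular terms look roughly like this (lightDir and viewDir point from the surface toward the light and camera respectively, and all vectors are normalised):

float ComputeLighting(Vector3 normal, Vector3 lightDir, Vector3 viewDir,
    float shininess)
{
    // Lambertian diffuse term
    float diffuse = Math.Max(Vector3.Dot(normal, lightDir), 0f);

    // Phong specular term: reflect the incoming light about the normal
    // and compare with the view direction
    Vector3 reflection = Vector3.Reflect(-lightDir, normal);
    float specular = (float)Math.Pow(
        Math.Max(Vector3.Dot(reflection, viewDir), 0f), shininess);

    return diffuse + specular;
}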

Relief

In order to show surface features without significantly increasing the number of vertices in the planet model, bump (normal) mapping can be used. There are numerous sources of normal maps on the Internet, available in various formats (I'm using DDS), and a good sample of how to implement normal mapping in a shader can be found on the XNA Creators Club site.

Atmospheres

There are many discussions on the subject of atmospheric scattering, a number of which reference the work by Nishita et al. The article "Accurate Atmospheric Scattering" (GPU Gems 2, Chapter 16, by Sean O'Neil) served as a good starting point and is available here.

Clouds

The Visible Earth site also has a collection of cloud maps. The cloud texture is rendered on a second sphere model just above the surface of the planet.

Shadows

Casting shadows from the clouds onto the surface was an important effect, particularly toward the terminator, where the shadows are longer and not directly below the clouds themselves. My first approach was to implement a sphere-ray intersection algorithm in a pixel shader to determine the surface position of a shadow cast from my cloud sphere, and subtract the result from the existing surface texture.
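
The intersection test itself is standard. Here's a sketch, in C# rather than the pixel shader (and not necessarily the exact form I used), for a ray cast from a surface point toward the sun against the cloud sphere:

bool RaySphereIntersect(Vector3 origin, Vector3 direction, Vector3 center,
    float radius, out float t)
{
    // direction must be normalised; for an origin inside the sphere the
    // relevant root is the exit point, -b + sqrt(discriminant)
    Vector3 oc = origin - center;
    float b = Vector3.Dot(oc, direction);
    float c = Vector3.Dot(oc, oc) - radius * radius;
    float discriminant = b * b - c;

    t = 0f;
    if (discriminant < 0f)
        return false; // the ray misses the sphere entirely

    t = -b + (float)Math.Sqrt(discriminant);
    return t >= 0f;
}

The cloud texture is then sampled at origin + t * direction, and the sampled value darkens the surface color at that pixel.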

Planetary Body Shader

By
Dave
Project
Published
9 Feb 2010 00:58
Last Modified
13 Jan 2013 17:57

In order to render my planetary bodies, I had to consider the following effects:

  1. Texture
  2. Lighting
  3. Relief
  4. Atmospheres
  5. Clouds
  6. Shadows

I'll discuss each of these in later posts, but for now here are some screenshots of the first attempt at rendering a planetary body using these effects.

Earth Shader

Earth Shader

Earth Shader

Earth Shader

Earth Shader

Figures 1-5. Earth Shader

Background Stars

By
Dave
Project
Published
7 Feb 2010 21:18
Last Modified
13 Jan 2013 17:57

In order to render a realistic star background, I could either use an image texture (mapped onto either a sphere or a cube) or a set of dynamic points. I opted for the latter so that I could more easily support changing my field-of-view (i.e. "zooming in") without losing detail in my background.

Numerous star catalogues are available in the public domain. I opted for the Hipparcos Catalogue, which lists the positions of approximately 100,000 stars. I converted the catalogue to XML and then used the Content Pipeline in XNA Game Studio 3.1 to compress the XML to an XNB file. The data can then be loaded at runtime simply by using:

BackgroundStar[] stars = Content.Load<BackgroundStar[]>("hipparcos");

BackgroundStar is a simple class containing information such as position, brightness, spectral type etc. for each star in the catalogue.
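
As an indication, the class looks something like the following sketch (the exact fields beyond those mentioned above are an assumption):

public class BackgroundStar
{
    public Vector3 Position;     // unit direction on the celestial sphere
    public float Magnitude;      // apparent magnitude (brightness)
    public string SpectralType;  // e.g. "G2V", used to tint the sprite
}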

I was really surprised at the level of performance I got when rendering these items, initially as point primitives and subsequently as point sprites. For the latter, I created a simple blurred sprite, sized according to the brightness and tinted according to the spectral type of each star. As an example, here are screenshots of Orion taken with both a wide and a narrow field-of-view.

Orion Wide-Angle

Figure 1. Orion Wide Angle

Orion Tele-Photo

Figure 2. Orion Tele-Photo

One of the issues here is that the apparent magnitude of a star is measured on a logarithmic scale. This means that the faintest stars visible to the naked eye (around magnitude 7-8) are approximately five thousand times fainter than the brightest star in the sky (Sirius, at magnitude -1.46). The Hipparcos Catalogue lists stars down to around magnitude 14, so in order to render this range of magnitudes with only 255 levels of luminance I had to flatten the brightness curve.
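
The general idea is to convert the magnitude to a linear brightness (each magnitude step is a factor of roughly 2.512) and then compress it with a gamma-style exponent. A sketch, where the 0.25 exponent is illustrative rather than the value I settled on:

byte MagnitudeToLuminance(float magnitude)
{
    // Each magnitude step is a brightness factor of ~2.512 (Pogson's ratio),
    // measured here relative to a magnitude-0 star
    double linear = Math.Pow(2.512, -magnitude);

    // Flatten the curve so that faint stars remain visible
    double flattened = Math.Pow(linear, 0.25);

    return (byte)Math.Min(255.0, Math.Round(flattened * 255.0));
}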

Celestial Grid

By
Dave
Project
Published
7 Feb 2010 00:01
Last Modified
13 Jan 2013 17:58

To make things much easier, I wanted to render a grid to help with orientation. In planetary terms this would be a latitude/longitude grid; the celestial equivalents are declination and right ascension respectively.

This was simply a matter of drawing line primitives on the surface of a sphere centered on the camera viewpoint. I chose not to converge all my lines of right ascension at the "poles", as shown below. The only drawback was that I had to draw multiple lines, since a single continuous line is only possible when the lines converge.

Celestial Grid Equator

Figure 1. Celestial Grid Equator

Celestial Grid Poles

Figure 2. Celestial Grid Poles

One advantage of using multiple lines, however, was that I had the option of varying the color for particular lines. For example, I might choose to make the celestial "equator" more opaque.
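
As a sketch of the line generation, here's one way to build a single circle of declination as a line strip (drawn with DrawUserPrimitives and PrimitiveType.LineStrip); lines of right ascension are built similarly:

VertexPositionColor[] CreateDeclinationCircle(float declination, int segments,
    Color color)
{
    VertexPositionColor[] vertices = new VertexPositionColor[segments + 1];
    float r = (float)Math.Cos(declination); // circle radius at this declination
    float y = (float)Math.Sin(declination); // height above the celestial equator

    for (int i = 0; i <= segments; i++)
    {
        double ra = 2 * Math.PI * i / segments; // right ascension around the circle
        vertices[i] = new VertexPositionColor(
            new Vector3(r * (float)Math.Cos(ra), y, r * (float)Math.Sin(ra)),
            color);
    }
    return vertices;
}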

Starting Positions

By
Dave
Project
Published
6 Feb 2010 22:46
Last Modified
13 Jan 2013 17:59

Errm, where do I start in building a Virtual Solar System?

How about the orbits of the planets? At the start of the 17th century, Kepler formulated his laws of planetary motion. His first law defines the orbit of a planet as an ellipse with the sun at one focus, and an orbit, together with a body's position on it, can be defined by six pieces of data:

  1. Semi-Major Axis (a)
  2. Eccentricity (e)
  3. Inclination (i)
  4. Argument of Perifocus (ω)
  5. Longitude of Ascending Node (Ω)
  6. True Anomaly (θ)

These "Orbital Elements" for each body are available on the NASA JPL Horizons system.
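
Converting the elements into a position is then a matter of evaluating the ellipse in the orbital plane and rotating it into the reference frame. A sketch, with angles in radians (note that XNA uses a y-up coordinate system, so in practice the axes may need remapping):

Vector3 PositionFromElements(double a, double e, double i,
    double argPerifocus, double ascendingNode, double trueAnomaly)
{
    // Distance from the focus at the current true anomaly
    double r = a * (1 - e * e) / (1 + e * Math.Cos(trueAnomaly));

    // Position in the orbital plane, with x toward perifocus
    Vector3 planePosition = new Vector3(
        (float)(r * Math.Cos(trueAnomaly)),
        (float)(r * Math.Sin(trueAnomaly)), 0f);

    // Rotate by argument of perifocus, inclination, then longitude of
    // ascending node to move into the reference frame
    Matrix rotation = Matrix.CreateRotationZ((float)argPerifocus)
        * Matrix.CreateRotationX((float)i)
        * Matrix.CreateRotationZ((float)ascendingNode);

    return Vector3.Transform(planePosition, rotation);
}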

The XNA Creators Club is a fantastic resource, with heaps of examples to get me started on a simple app to render orbital positions. Plugging in the data gave me the following results:

Inner Planet Orbits

Figure 1. Inner Planet Orbits

Outer Planet Orbits

Figure 2. Outer Planet Orbits

Once I had this basic framework for rendering orbits and positions, I could add additional bodies, such as comets and moons, as shown below.

Outer Planet & Comet Orbits

Figure 3. Outer Planet & Comet Orbits

Outer Jupiter Moon Orbits

Figure 4. Outer Jupiter Moon Orbits

Virtual Universe

By
Dave
Project
Published
6 Feb 2010 22:45
Last Modified
13 Jan 2013 18:01

I finally decided that I should try to get my head around the XNA Framework. Why, you ask? Well, I found myself tinkering with another WPF project that required a reasonable amount of 3D, and spending a considerable amount of time performance-tuning the application. I started to wonder: "if I spent the time I would otherwise spend performance-tuning this app on learning how to implement the same application using XNA, could I end up at the same level of performance in the same overall time?"

So I thought I'd blow the cobwebs off this blog and use it as a journal as I try to find my path into the scary world of Games Programming.

I've had in my mind for some time the desire to learn more about the dynamics of the solar system. Maybe I spent too much time playing Elite. I really struggle with names, so for the time being, until I think of a better one, I'll refer to this project as "Virtual Universe".

Virtual Earth Geospatial Telemetry Visualisation

By
Dave
Project
Published
6 Mar 2009 00:14
Last Modified
13 Jan 2013 18:24

This post discusses a sample I put together to allow geospatial telemetry data to be visualised using Virtual Earth. The data itself was collected by driving an Aston Martin DB8 Vantage around a track with a GPS receiver. In addition to the location of the car, basic engine telemetry was captured and synchronised with the position data.

The basic idea was to take the data and "play back" the drive of the car around the track, layering information on the map such as vehicle position, speed, braking position, etc. Multiple data sets can be overlaid on the map for comparison. In order to show the vehicle position, a basic 3D car model was chosen. Virtual Earth supports both 2D and 3D map views, the latter of which gave an opportunity to implement a "virtual helicopter" camera that could follow the vehicle around the track.
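
Playback essentially reduces to interpolating between timestamped samples. A minimal sketch, where the class shape is an assumption rather than the actual data format used in the sample:

class TelemetrySample
{
    public TimeSpan Time;
    public double Latitude;
    public double Longitude;
    public double Speed;
}

static TelemetrySample Interpolate(TelemetrySample a, TelemetrySample b,
    TimeSpan t)
{
    // Linear interpolation between the two samples bracketing time t
    double f = (t - a.Time).TotalSeconds / (b.Time - a.Time).TotalSeconds;
    return new TelemetrySample
    {
        Time = t,
        Latitude = a.Latitude + (b.Latitude - a.Latitude) * f,
        Longitude = a.Longitude + (b.Longitude - a.Longitude) * f,
        Speed = a.Speed + (b.Speed - a.Speed) * f,
    };
}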

Video 1. Virtual Earth geospatial telemetry visualisation.

This video shows a couple of laps of telemetry data. The path taken on each lap is drawn on the map (each in a different colour), and each has its own 3D car model (labeled "A" and "B" respectively). The buttons along the bottom of the screen control the "virtual helicopter" camera position and which car the camera is pointing at, and can be seen in more detail in Figure 1 below.

Camera Positions

Figure 1. "Virtual Helicopter" Camera Positions

Front/Rear

These angles follow the car a short distance above and in front of/behind it respectively.

Above/Blimp

This angle is directly above the car at a low/high altitude respectively.

Fixed

This setting fixes the camera at its current point in space, but points to the selected car.

In Car

This setting places the camera "inside" the car, and points the camera in the direction that the car is traveling.

Free

This setting allows the user to freely move the camera.

As an aside, during development of this sample I initially only had access to a couple of models in .x format. Until I managed to find a suitable model for the car, I had to use the following:

Initial Actors

Figure 2. Initial Actors

Initially it was helpful to add some axes to the models so I could ensure they were oriented correctly - you can see these in Figure 2. I also experimented with transparency for "ghosting" the model(s) which didn't have focus:

Ghost Actors

Figure 3. Ghost Actor(s)

The cube shown in Figure 3 was used as a visual marker (also with axes) to show the camera position when I was in a "Free" camera mode. This was really helpful in ensuring the camera was positioned and tracking objects correctly.

Surface Physics Demo Part 3

By
Dave
Project
Published
19 Feb 2009 23:52
Last Modified
13 Jan 2013 18:13

In Part 1 and Part 2 I focussed on 2D features. This makes a lot of sense for Surface applications, as items fundamentally move in two dimensions; however, there are particular scenarios that lend themselves to 3D, one of which I'll describe later in this post.

Video 1. Surface physics demo.

The video is divided into the following sections:

Layout

In some cases it is desirable to arrange the items into preset patterns, for example as part of an "Attract Mode" application, or when interacting with physical objects placed on the Surface. This screen defines some basic patterns and "locks" the items to the nearest position in the pattern from their original location. Selecting an item releases the "lock".
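
The locking step in isolation is just a nearest-point search over the preset pattern positions. A sketch (ignoring occupancy, which the real demo would need to handle):

Point FindNearestPatternPosition(Point itemLocation, IList<Point> pattern)
{
    Point nearest = pattern[0];
    double bestDistanceSquared = double.MaxValue;

    foreach (Point candidate in pattern)
    {
        // Point subtraction yields a Vector; compare squared lengths
        double d = (candidate - itemLocation).LengthSquared;
        if (d < bestDistanceSquared)
        {
            bestDistanceSquared = d;
            nearest = candidate;
        }
    }
    return nearest;
}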

3D Rolling Spheres

Spheres lend themselves to an intuitive motion on a 2D plane, such as when playing marbles, pool, etc. When a texture is added to the sphere, it is important to ensure that the sphere "rolls" correctly when moved. Several examples of textures are shown in the video.

Here are some further screenshots.

Marbles

Figure 1. Marbles

Pool

Figure 2. Pool balls

Page