I wanted to support multitouch in an XNA application on Windows, without using the Microsoft Surface 2.0 SDK and runtime. Unlike on Windows Phone, however, touch input on Windows is not natively supported by the XNA framework. I therefore followed the recommended approach and added a new input source for WM_TOUCH messages to my manipulation processor. As a result, it works on both Windows 7 and Windows 8.
I hook the Windows message loop by obtaining a managed function pointer with GetFunctionPointerForDelegate and installing it via Get/SetWindowLongPtr. I then register the window for multitouch using RegisterTouchWindow, and process WM_TOUCH messages, handling the touch-down, touch-move and touch-up events they report.
One option for injecting WM_TOUCH messages on non-touch hardware is the Input Simulator from the Microsoft Surface 2.0 SDK, as shown below in Figure 1. Although the application itself no longer requires the Surface 2.0 runtime, note that this method of simulating input only works on Windows 7.
Figure 1. WM_TOUCH manipulation, using the Surface 2.0 SDK Input Simulator.
Curiously, WM_TOUCH messages are only delivered while the mouse is not in use. As soon as the mouse is used, touch-up events are fired for all current touches, so mixed-mode interaction using mouse and touch simultaneously is not possible.
There are multiple potential sources of input for an XNA game component: keyboard, gamepad, mouse, Surface multitouch, .NET 4 Touch, and so on. Dealing with multiple approaches to gathering input in the Update loop of each component can become complex, and adding support for a new input type adds further to this complexity.
I divided my input handling into two different categories:
Controller-based input such as keyboard and gamepad.
Manipulation-based input such as mouse, Surface multitouch, and .NET 4 touch.
For the latter, I wanted to abstract input handling such that a given game component only had to deal with a custom Manipulator type. In this way, I would be free to change or add input source(s) without affecting the implementation in a game component.
Clearly, different input sources provide different types and degrees of information. Surface multitouch Contacts, for example, provide information on size, orientation, type, etc., whereas a mouse provides only position. In many cases only position information is necessary; however, additional properties can easily be added to the Manipulator type and supported by a game component when available. In this case I decided to sub-class my Manipulator type into the following:
Mouse and finger-touch
Surface Tag contacts
Surface blob contacts
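The shape of this hierarchy can be sketched as follows. This is an illustrative Python rendering of the idea rather than the original C# types; the class and property names are my own, chosen to match the three categories above.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Manipulator:
    """Minimal information common to all input sources: an ID and a position."""
    id: int
    x: float
    y: float


@dataclass(frozen=True)
class TouchManipulator(Manipulator):
    """Mouse and finger-touch: position only, nothing extra."""


@dataclass(frozen=True)
class TagManipulator(Manipulator):
    """Surface Tag contact: adds the tag value and its orientation."""
    tag_value: int = 0
    orientation: float = 0.0


@dataclass(frozen=True)
class BlobManipulator(Manipulator):
    """Surface blob contact: adds the bounding size of the blob."""
    width: float = 0.0
    height: float = 0.0
```

A game component that only needs position works against the base Manipulator type; one that understands tags or blobs can test for the richer subtypes.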
In order to deal with multitouch manipulations, I could process input using my previously-discussed Manipulation classes and processor.
The following video demonstrates the use of multiple input sources:
Video 1. Input processing from multiple sources.
Since a mouse can only deliver single-touch input, when demonstrating mouse input in Video 1 I switch the pivot type to "Tracked" in order to be able to demonstrate rotation and scaling, as shown in Figure 1.
Figure 1. Input processing from multiple sources.
Of course, a scenario mixing both mouse and Surface input is unlikely ever to be used in practice; however, it serves to illustrate how a game component can handle input in a consistent way, without being aware of the input source. For example, the buttons on the left of the screen are "pressed" using either a Surface v1 Contact or a mouse click, yet the buttons simply track Manipulator state.
A useful application of this approach is the ability to write XNA applications which work with both Surface v1 multitouch input and .NET 4 multitouch (and mouse for single-touch if multi-touch hardware is not available) without any code changes, i.e. multi-platform targeting.
In Part 1 I described a lightweight class to process general multitouch manipulations using a game-loop model in XNA.
A key aspect in bringing realism to a multitouch experience is the use of momentum and inertia. The Microsoft.Surface.Core.Manipulations.Affine2DInertiaProcessor provides this support in an event-driven model. In order to continue the approach of using a game-loop model, I needed to add linear and angular velocities, and expansion rates to my manipulation processor.
Changes in manipulator position occur relatively rarely compared with the game-loop frequency, so calculating rates of change requires averaging these changes over time. The choice of sample count and window duration is somewhat subjective; I settled on 5 samples averaged over a maximum of 200ms. Clearly this introduces a small latency between measured and actual velocities, but the smoothing effect is beneficial.
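The sampling scheme can be sketched as follows. This is a Python illustration of the approach, not the original C# code; the class name and defaults are mine, with the defaults matching the 5-sample/200ms settings described above.

```python
from collections import deque


class VelocityTracker:
    """Estimates velocity by averaging position deltas over a sliding window.

    Samples older than max_age (seconds) are discarded, and at most
    max_samples are kept, so the estimate is an average over a short,
    recent window rather than an instantaneous (and noisy) delta.
    """

    def __init__(self, max_samples=5, max_age=0.2):
        self.max_age = max_age
        self.samples = deque(maxlen=max_samples)  # (time, dx, dy) tuples

    def add_sample(self, t, dx, dy):
        """Record a position delta observed at time t."""
        self.samples.append((t, dx, dy))

    def velocity(self, now):
        """Average velocity over the retained samples, in units per second."""
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.max_age:
            self.samples.popleft()
        if not self.samples:
            return (0.0, 0.0)
        elapsed = now - self.samples[0][0]
        if elapsed <= 0.0:
            return (0.0, 0.0)
        total_dx = sum(s[1] for s in self.samples)
        total_dy = sum(s[2] for s in self.samples)
        return (total_dx / elapsed, total_dy / elapsed)
```

The same windowing idea applies unchanged to angular velocity and expansion rate, with the angle or distance delta in place of (dx, dy).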
In order to visualise these rates, both for debugging and to settle on suitable sampling settings, I added visual indicators of direction and magnitude, as shown below in Figure 1.
Figure 1. Visualising linear and angular velocities and expansion rates during multitouch manipulation.
With the addition of linear and angular velocities and expansion rates, I could now add a simple inertial processing component which uses these properties as input parameters. This inertial processor uses a simple deceleration algorithm, with configurable rates for translation, rotation, and expansion. The solution is demonstrated in the following video.
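The deceleration scheme can be sketched as follows. This is a Python illustration of the idea rather than the original C# component; all names and default rates are hypothetical.

```python
def decay(velocity, deceleration, dt):
    """Reduce a signed velocity toward zero by deceleration * dt, clamping at zero."""
    if velocity > 0.0:
        return max(0.0, velocity - deceleration * dt)
    return min(0.0, velocity + deceleration * dt)


class InertiaProcessor:
    """Decays initial velocities with configurable per-second deceleration rates
    for translation, rotation, and expansion."""

    def __init__(self, linear_decel=1000.0, angular_decel=3.0, expansion_decel=300.0):
        self.linear_decel = linear_decel
        self.angular_decel = angular_decel
        self.expansion_decel = expansion_decel
        self.vx = self.vy = self.angular = self.expansion = 0.0

    def start(self, vx, vy, angular_velocity, expansion_rate):
        """Seed the processor from the manipulation processor's measured rates."""
        self.vx, self.vy = vx, vy
        self.angular = angular_velocity
        self.expansion = expansion_rate

    @property
    def is_running(self):
        return any((self.vx, self.vy, self.angular, self.expansion))

    def update(self, dt):
        """Advance one frame; returns this frame's translation, rotation,
        and expansion deltas, then decays the velocities."""
        dx, dy = self.vx * dt, self.vy * dt
        dangle = self.angular * dt
        dexpansion = self.expansion * dt
        self.vx = decay(self.vx, self.linear_decel, dt)
        self.vy = decay(self.vy, self.linear_decel, dt)
        self.angular = decay(self.angular, self.angular_decel, dt)
        self.expansion = decay(self.expansion, self.expansion_decel, dt)
        return dx, dy, dangle, dexpansion
```

A game component calls start when the last manipulator is released, then applies update's deltas each frame until is_running becomes false.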
Video 1. Multitouch game loop inertia processing.
Note that for gestures such as a flick, where velocity increases rapidly, a time-weighted average may be more suitable for calculating the inertia processor's input parameters, and I'll investigate this at a later date.
Multitouch Surface support for XNA-based applications is provided via the Microsoft.Surface.Core.Manipulations namespace in an event-driven model. I was interested in investigating general multitouch processing within a game-loop model, i.e. not involving events, and thought it would be instructive to develop my own class to process manipulations in this way.
In a similar manner to the Affine2DManipulationProcessor, I wanted to support arbitrary combinations of translation, rotation, and scale. Translation is trivial, since it is simply a matter of tracking the changes in each manipulation point between calls and taking the average.
Rotation and scale are a little trickier, since each of these values is relative to a centre-point, or manipulation origin. Rotation is calculated from the average change in angle between each point and the origin, and scale from the average change in distance. Unlike a mouse cursor, manipulation points come and go, so the key is to track changes only for those points which persist between calls, and then update the positions, origin, distances and angles ready for the next call.
In order to generalise manipulation points from multiple sources such as mouse, gamepad, Surface Contacts and .NET 4.0 Touch, I created my own Manipulation structure to abstract properties such as ID and position. My input processing can then build a collection of these objects from any relevant source(s) and pass them to my manipulation processor as part of the Update call as follows:
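As a sketch of that Update flow (a Python rendering of the idea, not the original C# listing; the class, method and field names here are all mine):

```python
import math


class ManipulationProcessor:
    """Game-loop manipulation processor. Each update receives the current
    manipulation points as {id: (x, y)}; only points that persist between
    calls contribute. Translation is the movement of the average position,
    rotation the average change in angle about the origin, and scale the
    average change in distance from it."""

    def __init__(self):
        self.previous = {}              # id -> (x, y) from the last call
        self.translation = (0.0, 0.0)   # cumulative
        self.rotation = 0.0             # cumulative, radians
        self.scale = 1.0                # cumulative

    def update(self, points):
        persisting = [pid for pid in points if pid in self.previous]
        if persisting:
            n = len(persisting)
            # Origin: average position of the persisting points, now and before.
            ox = sum(points[p][0] for p in persisting) / n
            oy = sum(points[p][1] for p in persisting) / n
            pox = sum(self.previous[p][0] for p in persisting) / n
            poy = sum(self.previous[p][1] for p in persisting) / n

            # Translation: movement of the average position.
            tx, ty = self.translation
            self.translation = (tx + ox - pox, ty + oy - poy)

            if n > 1:
                angle_delta, scale_ratio = 0.0, 0.0
                for p in persisting:
                    x, y = points[p]
                    px, py = self.previous[p]
                    # Change in angle about the origin, wrapped to [-pi, pi].
                    d = math.atan2(y - oy, x - ox) - math.atan2(py - poy, px - pox)
                    angle_delta += (d + math.pi) % (2.0 * math.pi) - math.pi
                    # Change in distance from the origin.
                    before = math.hypot(px - pox, py - poy)
                    after = math.hypot(x - ox, y - oy)
                    if before > 0.0:
                        scale_ratio += after / before
                self.rotation += angle_delta / n
                self.scale *= scale_ratio / n
        self.previous = dict(points)
```

The cumulative translation, rotation and scale properties are then read directly after each call, matching the property-based access described below.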
The cumulative transforms for translation, rotation and scale (according to the supported manipulation types specified) are then immediately available as properties on the manipulation processor.
In order to test the solution, I wrote a small harness to visualise the manipulation points and their effect on a reference cross-hair. While I also added support for a mouse (right-click to add/remove points, left-drag to move points), multi-touch hardware is required to test scenarios in which multiple points are added, moved, or removed simultaneously. A screenshot is shown in Figure 1.
Figure 1. Multitouch manipulation with averaged manipulation origin.
One of the complexities of working with multiple manipulation points is deciding how to determine the manipulation origin. One option is to simply take the average position of each point, as shown in Figure 1. Another option is to introduce a specific "pivot point" as the origin and use this to calculate scale and rotation. This pivot point can either be fixed or can track the object being manipulated. The latter two options are shown in Figures 2 and 3.
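The three options can be captured in a single origin-selection function. This is an illustrative Python sketch with hypothetical names, where "average", "fixed" and "tracked" correspond to Figures 1, 2 and 3 respectively:

```python
def manipulation_origin(points, mode="average", pivot=None, target=None):
    """Choose the origin used for rotation and scale calculations.

    mode: "average" - mean position of the manipulation points
          "fixed"   - a fixed pivot point, supplied via `pivot`
          "tracked" - the centre of the manipulated object, via `target`
    """
    if mode == "fixed":
        return pivot
    if mode == "tracked":
        return target
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Whichever mode is chosen, the origin simply replaces the averaged centre-point in the rotation and scale calculations.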
Figure 2. Multitouch manipulation with fixed pivot point.
Figure 3. Multitouch manipulation with pivot point centered on manipulation object.
The solution is demonstrated in the following video. Each approach for determining the manipulation origin is illustrated using translation, rotation, and scaling manipulations, initially in isolation and then in combination.
Video 1. Multitouch game loop manipulation processing.