Multitouch Surface support for XNA-based applications is provided via the Microsoft.Surface.Core.Manipulations namespace in an event-driven model. I was interested in investigating general multitouch processing within a game-loop model, i.e. not involving events, and thought it would be instructive to develop my own class to process manipulations in this way.
In a similar manner to the Affine2DManipulationProcessor, I wanted to support arbitrary combinations of translation, rotation, and scale. Translation is trivial, since it is simply a matter of tracking the change in each manipulation point between calls and taking the average.
Rotation and scale are a little trickier, since each of these values is relative to a centre point, or manipulation origin. Rotation is calculated from the average change in angle between each point and the origin, and scale from the average change in distance. Unlike a mouse cursor, manipulation points come and go, so the key is to track only those points which persist between calls, and then update the positions, origin, distances and angles ready for the next call.
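As a sketch of that averaging step (the dictionary-of-points representation here is my own assumption, not the actual class):

```csharp
using System;
using System.Collections.Generic;

// Sketch: translation is the average movement, between two updates, of
// every manipulation point that is present in both updates.
public static class TranslationSketch
{
    // previous and current map point ID -> (X, Y) position
    public static (float X, float Y) AverageTranslation(
        IReadOnlyDictionary<int, (float X, float Y)> previous,
        IReadOnlyDictionary<int, (float X, float Y)> current)
    {
        float dx = 0f, dy = 0f;
        int count = 0;
        foreach (var point in current)
        {
            // Only points that persist between the two calls contribute;
            // newly added or removed points are ignored.
            if (previous.TryGetValue(point.Key, out var prev))
            {
                dx += point.Value.X - prev.X;
                dy += point.Value.Y - prev.Y;
                count++;
            }
        }
        return count == 0 ? (0f, 0f) : (dx / count, dy / count);
    }
}
```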
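One way to realise that calculation, again a sketch under my own point representation (angle wrap-around handling is simplified to a loop):

```csharp
using System;
using System.Collections.Generic;

// Sketch: for each point that persists between calls, measure its angle
// and distance relative to the manipulation origin; the rotation delta
// is the average change in angle and the scale delta the average ratio
// of distances.
public static class RotateScaleSketch
{
    public static (float Rotation, float Scale) Compute(
        IReadOnlyDictionary<int, (float X, float Y)> previous,
        IReadOnlyDictionary<int, (float X, float Y)> current,
        (float X, float Y) prevOrigin,
        (float X, float Y) currOrigin)
    {
        float angleSum = 0f, scaleSum = 0f;
        int count = 0;
        foreach (var point in current)
        {
            if (!previous.TryGetValue(point.Key, out var prev)) continue;

            float prevAngle = (float)Math.Atan2(prev.Y - prevOrigin.Y, prev.X - prevOrigin.X);
            float currAngle = (float)Math.Atan2(point.Value.Y - currOrigin.Y, point.Value.X - currOrigin.X);

            // Wrap the angle delta into (-pi, pi] so a small rotation across
            // the +/-pi boundary is not read as a near-full turn.
            float delta = currAngle - prevAngle;
            while (delta > Math.PI) delta -= 2f * (float)Math.PI;
            while (delta <= -Math.PI) delta += 2f * (float)Math.PI;

            float prevDist = Distance(prev, prevOrigin);
            float currDist = Distance(point.Value, currOrigin);
            if (prevDist < 1e-6f) continue; // point on the origin: no angle or scale information

            angleSum += delta;
            scaleSum += currDist / prevDist;
            count++;
        }
        // With no persistent points there is no rotation and the scale is unchanged.
        return count == 0 ? (0f, 1f) : (angleSum / count, scaleSum / count);
    }

    static float Distance((float X, float Y) a, (float X, float Y) b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }
}
```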
In order to generalise manipulation points from multiple sources such as mouse, gamepad, Surface Contacts and .NET 4.0 Touch, I created my own Manipulation structure to abstract properties such as ID and position. My input processing can then build a collection of these objects from any relevant source(s) and pass them to my manipulation processor as part of the Update call as follows:
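A hypothetical reconstruction of that shape (the `Manipulation` members and the processor's `Update` signature are my illustration, not the original source; only translation is stubbed in here to keep the sketch short):

```csharp
using System;
using System.Collections.Generic;

// Illustrative names only: a source-agnostic manipulation point, and a
// stub processor showing the shape of the per-frame Update call.
public struct Manipulation
{
    public readonly int Id;     // persistent identifier for the point
    public readonly float X, Y; // current position
    public Manipulation(int id, float x, float y) { Id = id; X = x; Y = y; }
}

public class ManipulationProcessorSketch
{
    readonly Dictionary<int, (float X, float Y)> tracked =
        new Dictionary<int, (float X, float Y)>();

    // Cumulative translation, available immediately after Update.
    public float TranslationX { get; private set; }
    public float TranslationY { get; private set; }

    public void Update(IEnumerable<Manipulation> manipulations)
    {
        float dx = 0f, dy = 0f;
        int count = 0;
        var next = new Dictionary<int, (float X, float Y)>();
        foreach (var m in manipulations)
        {
            // Only points seen in the previous call contribute to the delta.
            if (tracked.TryGetValue(m.Id, out var prev))
            {
                dx += m.X - prev.X;
                dy += m.Y - prev.Y;
                count++;
            }
            next[m.Id] = (m.X, m.Y);
        }
        if (count > 0) { TranslationX += dx / count; TranslationY += dy / count; }

        // Remember positions for the next game-loop tick.
        tracked.Clear();
        foreach (var kvp in next) tracked[kvp.Key] = kvp.Value;
    }
}
```

The input-processing code can then build a `List<Manipulation>` each frame from the mouse, Surface Contacts, or .NET 4.0 Touch, and pass it to `Update` once per tick.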
The cumulative transforms for translation, rotation and scale (according to the supported manipulation types specified) are then immediately available as properties on the manipulation processor.
In order to test the solution, I wrote a small harness to visualise the manipulation points, and their effect on a reference cross-hair. While I also added support for a mouse (right-click to add/remove points, left-drag to move points), multi-touch hardware is required to test scenarios in which multiple points are added, moved, or removed simultaneously. A screenshot is shown in Figure 1.
One of the complexities of working with multiple manipulation points is deciding how to determine the manipulation origin. One option is to simply take the average position of the points, as shown in Figure 1. Another option is to introduce a specific "pivot point" as the origin and use this to calculate scale and rotation. This pivot point can be either fixed, or attached to the object being manipulated. The latter two options are shown in Figures 2 and 3.
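The three strategies can be sketched as follows (the enum and parameter names are mine, chosen for illustration):

```csharp
using System.Collections.Generic;

// Illustrative sketch of the three origin strategies described above.
public enum OriginMode { AveragePosition, FixedPivot, TrackingPivot }

public static class OriginSketch
{
    public static (float X, float Y) GetOrigin(
        OriginMode mode,
        IReadOnlyList<(float X, float Y)> points,
        (float X, float Y) fixedPivot,
        (float X, float Y) objectPosition)
    {
        switch (mode)
        {
            case OriginMode.FixedPivot:
                return fixedPivot;       // e.g. a fixed point on the screen
            case OriginMode.TrackingPivot:
                return objectPosition;   // follows the manipulated object
            default:
                // Average position of all current manipulation points;
                // fall back to the pivot when there are no points.
                if (points.Count == 0) return fixedPivot;
                float x = 0f, y = 0f;
                foreach (var p in points) { x += p.X; y += p.Y; }
                return (x / points.Count, y / points.Count);
        }
    }
}
```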
The solution is demonstrated in the following video. Each approach for determining the manipulation origin is illustrated using translation, rotation, and scaling manipulations, initially in isolation and then in combination.