One of the new features in Silverlight 3 is the ability to add multi-touch capabilities to your application. When I asked on Twitter, I got some responses with ideas of what people would use this for. Honestly, most of them could be accomplished today with mouse events and X/Y calculations; those are essentially single-touch applications. But I did get some genuinely multi-touch ideas that I think I'll try to explore. First, though, let's look at the basics of what Silverlight provides for multi-touch application development.
Hopefully I'm stating the obvious here, but your hardware has to support multi-touch. And I'm not talking about that fake kind. I'm talking about hardware that sends WM_TOUCH messages to Windows. If you (or your customers) don't have multi-touch hardware, then writing against this API isn't going to help! I'm currently using the HP TouchSmart TX2 laptop running Windows 7. I find it to be a good machine, and fairly cheap as laptops go these days given the features it provides.
The first thing to understand is how the touch events get from the hardware to Silverlight. Understanding this at the beginning of your application development can be a critical step. The key reason is that, unlike other input events (e.g., MouseLeftButtonDown) which can be attached to individual UIElements in an application, touch input is surfaced through an application-wide event.
There is one primary event: FrameReported. This event fires when the hardware sends touch messages to the runtime. The Touch class is a static class that exists solely to expose this FrameReported API. To wire it up in your application you can use code like this:
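The original snippet isn't reproduced here, but wiring up the static event typically looks like this (a minimal sketch; the `MainPage` name is just the default UserControl):

```csharp
using System.Windows.Controls;
using System.Windows.Input;

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();

        // FrameReported is a static, application-wide event on System.Windows.Input.Touch
        Touch.FrameReported += new TouchFrameEventHandler(Touch_FrameReported);
    }
}
```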
And now you have to write your event handler.
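The handler signature follows the usual event pattern, receiving a TouchFrameEventArgs:

```csharp
private void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    // touch handling logic goes here
}
```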
The Event Handler Arguments
Once the runtime receives a touch message, the FrameReported event fires (and will do so many times in a row; more on that below). The members of TouchFrameEventArgs you'll primarily concern yourself with in most circumstances are GetPrimaryTouchPoint and GetTouchPoints.
The primary touch point can be thought of as the first touch point the runtime received in the current sequence. So if your application only uses single-touch gestures, this is likely what you'd use. Otherwise, GetTouchPoints gives you all the touch points the hardware has reported to the runtime.
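Inside the handler, retrieving the points might look like this (a sketch; `LayoutRoot` is assumed to be the root element of the page, and passing it makes positions relative to it):

```csharp
private void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    // Positions come back relative to the element you pass in
    TouchPoint primary = e.GetPrimaryTouchPoint(this.LayoutRoot);
    TouchPointCollection points = e.GetTouchPoints(this.LayoutRoot);

    foreach (TouchPoint point in points)
    {
        // inspect point.Position, point.Action, point.TouchDevice here
    }
}
```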
For me, understanding the Move action is critical. If you watch the data coming into my diagnostic app below, you'll see that even simply resting your finger in one place fires constant Move actions.
What you get in a TouchPoint
Both the primary touch point and the collection of touch points listed above return TouchPoint objects, which contain valuable information. Namely, each gives you Position, a point relative to the element you passed to GetPrimaryTouchPoint/GetTouchPoints (or relative to the Silverlight content area if you pass in null).
You also get the Action of the touch. There are three actions: Down, Move and Up. It is important to understand the firing sequence here. A given touch point will first report Down, then continue to report Move until the touch is removed, at which point Up occurs. The key piece in the middle is Move. This action fires even if the finger isn't moving at all; it is essentially reporting that you have a TouchPoint in the Down state (i.e., touched). Move is helpful if you need to move elements along with the updated touch position.
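Dispatching on the action typically looks like this (where `point` is a TouchPoint retrieved from the event args):

```csharp
switch (point.Action)
{
    case TouchAction.Down:
        // finger first made contact
        break;
    case TouchAction.Move:
        // fires repeatedly while the finger is down, even if it is stationary
        break;
    case TouchAction.Up:
        // finger was lifted
        break;
}
```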
You also get the TouchDevice, which contains some helpful information. It provides an Id value, a unique id assigned by the operating system to the device that reported the TouchPoint, as well as DirectlyOver, the topmost UIElement the position was over at the time the touch was produced.
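Accessing those two members is straightforward:

```csharp
int deviceId = point.TouchDevice.Id;               // OS-assigned unique id for this touch
UIElement target = point.TouchDevice.DirectlyOver; // topmost element under the touch point
```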
What about my mouse events?
Ah, good point! TouchFrameEventArgs has a method you can call: SuspendMousePromotionUntilTouchUp. You would want to use this if you knew *for sure* that the end user had multi-touch hardware. It prevents the touch points from being promoted to mouse events. This method can only be called while the Action of the touch point is Down. Once all the TouchPoints report Up, normal mouse event promotion resumes.
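Guarding the call on the Down action is the important part; a sketch of that pattern:

```csharp
private void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPoint primary = e.GetPrimaryTouchPoint(null);

    // SuspendMousePromotionUntilTouchUp can only be called while the action is Down
    if (primary != null && primary.Action == TouchAction.Down)
    {
        e.SuspendMousePromotionUntilTouchUp();
    }
}
```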
Putting it all together
For these basics, I decided just to create a quick diagnostic application that would show the registering of the TouchPoint elements, as well as identifying the primary touch point. My application has registered the FrameReported event handler and then I’ve added some logic:
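The post's original sample code isn't reproduced here, but the logic described can be sketched roughly as follows. The `TouchPointInfo` type and `_points` collection are illustrative names, not from the original sample:

```csharp
using System.Collections.ObjectModel;
using System.Linq;
using System.Windows.Input;

// Illustrative row type bound to the DataGrid
public class TouchPointInfo
{
    public int DeviceId { get; set; }
    public double X { get; set; }
    public double Y { get; set; }
    public bool IsPrimary { get; set; }
}

// Bound to the DataGrid's ItemsSource
private readonly ObservableCollection<TouchPointInfo> _points =
    new ObservableCollection<TouchPointInfo>();

private void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPoint primary = e.GetPrimaryTouchPoint(null);

    foreach (TouchPoint point in e.GetTouchPoints(null))
    {
        int id = point.TouchDevice.Id;

        if (point.Action == TouchAction.Down)
        {
            // new touch: add a row, flagging whether it is the primary point
            _points.Add(new TouchPointInfo
            {
                DeviceId = id,
                X = point.Position.X,
                Y = point.Position.Y,
                IsPrimary = primary != null && primary.TouchDevice.Id == id
            });
        }
        else if (point.Action == TouchAction.Up)
        {
            // touch ended: remove the row for this device id
            TouchPointInfo existing = _points.FirstOrDefault(p => p.DeviceId == id);
            if (existing != null) _points.Remove(existing);
        }
    }
}
```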
The end result is that when the user touches the application surface, we add the TouchPoint to an ObservableCollection that is bound to a DataGrid, showing the points currently registered and which device reported them. When the user lifts their fingers, the points go away.
Obviously it is hard to demonstrate touch capabilities in a screenshot, and a still image really does it no justice. I'll do my best here to show you a picture-in-picture view of the application running while I interact with it via touch. You'll need Silverlight to view this demonstration.
There you have it: the basics of multi-touch in Silverlight 3. The core mechanics of the API are fairly simple to understand. What gets tricky is interacting with your application beyond just showing the points :-). In a future post I'll show an application that makes use of this multi-touch feature, working out where a touch occurred and how you can find the element that was touched (even though it's an application-wide event). If you aren't subscribed, please consider subscribing to my blog for regular Silverlight updates.
Hope this helps!