tabletop devices with different physical sizes. So, P3 had to manually test on both tables whenever there was a significant change in the application.

Limitations of the Simulator. Tabletop hardware vendors often ship device simulators. Although these simulators can mimic the hardware on a standard PC to some extent, the developers still run into issues as a result of differences between the simulator and the actual tabletop. For example, in response to a question on the difference between testing alone at the workstation and with multiple users at the tabletop hardware, participant P1 mentioned the following:

“... if you are trying to create a new window, you can’t do it more than once at the same time because you have only two hands (two mice at the simulator). So if two people are trying to test at the same time (on the actual hardware) maybe they will check occurrences like doing this at exactly the same time.”

Feature                 Simulator                              Actual Table
# of touches            # of mice                              52+
Physical objects        Limited                                Almost any shape
Sensitivity             Mouse is very precise (300-800 DPI)    Finger is less precise (“fat-finger”)
# of testers            # of mice                              More than one
Physical orientation    Vertical                               Horizontal

Table 2: Feature comparison between a Simulator and an Actual Tabletop
Table 2 summarizes the key differences between these two environments (in this case the Microsoft Surface Computer and the Microsoft Surface Simulator). From Table 2 we see that the simulator supports only a limited multi-touch and multi-user environment compared to the target tabletop. As a result, a significant amount of testing and debugging work needs to be carried out on the actual table, especially when complex concurrent interactions need to be considered.

Testing Multi-User Scenarios. Multi-user scenarios typically involve a large number of possible concurrent interactions by different users on the same interface. Manually testing such interfaces requires multiple users, who are often difficult to find every time a feature needs to be tested. For example, P3 mentioned a multi-user scenario that he developed where multiple users could vote by placing a tap gesture on a specific interface element. He prepared the test plan to cover the following scenarios: 1) a single user votes, 2) multiple users vote sequentially, and 3) multiple users vote concurrently. However, multi-user interactions can go beyond a single interaction on a single element. In that situation, manual testing becomes even harder as there is an explosion of possible states. Multi-user scenarios often introduce unseen performance issues as well. P2 and P3 mentioned that at times they experienced severe performance degradation when multiple users were concurrently using their systems. But a single developer or tester doing manual testing can only explore a limited set of the possible concurrent scenarios.

Bringing Code to the Tabletop. In most development teams that our participants worked in, digital tables are shared by multiple developers. As a result, developers typically need to move code between their PC and the shared tabletop so that they can test the features in the target environment. Our study participants use source code repositories or USB memory sticks as intermediate storage between the two environments. This process of going through an intermediate medium slows down the familiar develop-debug-develop workflow. Also, it requires developers to commit untested code to the shared repository, which often breaks a working build. As P1 mentioned:

“(The process of transferring code to the table) is not comfortable because sometimes you make some changes but you are not confident to commit it, as it’s not a final change”

Participants P1 and P3 mentioned that developing on the tabletop with an additional vertical display was faster as the outcome of the work could be loaded and debugged immediately. To boost productivity, we recognize that it is important to provide developers with tools so that they can get immediate feedback about their work-in-progress code.
TOUCHTOOLKIT ARCHITECTURE
To address some of the challenges faced by the participants in our exploratory study, we developed TouchToolkit. Specifically, TouchToolkit aims to address four development and testing challenges: (1) automated unit testing, (2) debugging, (3) testing multi-user scenarios, and (4) device independence.

TouchToolkit provides a touch interaction record and playback system that helps simplify testing and debugging during development as well as automating tests to validate multi-touch interactions. It also provides a device independent API that can be used to add new device support and a gesture recognition system. The tool can be used in applications that use Windows Presentation Foundation (WPF) and also in Silverlight-based web applications. The entire source code and documentation for the tool is available on the project web site (http://touchtoolkit.codeplex.com). The video with this paper shows how the tool is used.

[Figure 2: Components of the TouchToolkit Framework. The Application and Test Runner sit on top of the toolkit’s Event Controller, Test Framework, Gesture Processor, Core, Touch Recorder and Online Storage; the toolkit communicates with devices (MS Surface, TUIO, …, Virtual Device) through the Hardware Abstraction Layer.]

The component diagram of the TouchToolkit framework is shown in Figure 2. The four key components of the toolkit are: (1) the hardware abstraction layer, which exposes a hardware agnostic API to the application, (2) the gesture processor, which recognizes gestures from the raw touch data, (3) the touch recorder, which stores the raw data from the hardware abstraction layer, and (4) the core, which acts as a bridge among the components. The test framework executes the automated test scripts. A test script validates the gesture recognition code against one or more recorded touch interactions. Gestures are defined using a gesture definition language [15] provided by the toolkit, which allows developers to define multi-touch gestures that may include touch interactions in multiple steps. We describe the key components related to automated testing in the subsequent sections.
Hardware Abstraction Layer
Like other tools (e.g., [16]), TouchToolkit decouples the actual hardware from the application by providing a hardware abstraction layer. This layer includes a hardware agnostic interface for capturing multi-touch inputs. This interface can be implemented for most multi-touch enabled hardware platforms and we currently have implementations for Microsoft Surface, SMART Tabletop, Windows 7, AnotoPen and the TUIO protocol [10]. TouchToolkit also has an implementation of this interface for a virtual hardware device that can be used to play back recorded interactions and run automated tests. Multiple devices can be active at the same time and the framework supports changing devices at runtime. This is useful in an environment where additional devices (e.g., AnotoPens) need to be connected while the application is running.

While all multi-touch devices provide a set of common inputs like coordinates for touch points, additional inputs are also available that are often unique to a particular device. For example, the Microsoft Surface provides finger direction data, DiamondTouch can identify users, and so on. To maintain hardware independence, the hardware abstraction layer ensures that basic touch information is processed in a common format. However, hardware specific data can also be provided; that data is passed, through the core component, to other parts of the toolkit and to applications. When an application uses a gesture that needs a device specific input, it should also provide an alternate option (i.e., another gesture) as a fallback. For example, an application can have a single finger “rotate” gesture. This gesture requires the “touch direction” data that is available on Microsoft Surface but not on many other multi-touch devices. As an alternative, the application may also provide a two finger rotate gesture that uses only the basic inputs and therefore works across devices. In most cases, the TouchToolkit framework can be used to determine whether an application that uses the toolkit has any device specific dependencies.
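To make this decoupling concrete, the following is a minimal sketch of what such a hardware agnostic input interface could look like in C# (the toolkit targets WPF and Silverlight). The type and member names (ITouchInputProvider, TouchFrame, TouchPoint, FrameReceived) are illustrative assumptions rather than TouchToolkit’s actual API; the point is the split between common touch data and a tag collection for device specific extras.

// Illustrative sketch only -- type and member names are hypothetical,
// not TouchToolkit's real API.
using System;
using System.Collections.Generic;

public class TouchPoint
{
    public int Id;                       // touch identifier (common to all devices)
    public double X;                     // basic position data
    public double Y;
    public Dictionary<string, object> Tags =
        new Dictionary<string, object>();  // device specific extras, e.g. finger direction
}

public class TouchFrame
{
    public long TimeStamp;
    public List<TouchPoint> Touches = new List<TouchPoint>();
}

public interface ITouchInputProvider
{
    // Raised once per frame of touch data, regardless of the underlying hardware.
    event Action<TouchFrame> FrameReceived;
    void Start();
    void Stop();
}

A device specific implementation (for example, one wrapping the Surface SDK or a TUIO client) would translate its native events into such frames, while the rest of the toolkit only ever sees this interface.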
Touch Interaction Recorder
To record interactions, the Touch Recorder subscribes to lower level input from the hardware abstraction layer through the core component, saves the data into online storage and also caches it locally to improve performance. This allows automatic synchronization of data between developer machines and actual devices. The data is stored in an XML format (see Figure 3). The recorder can record and store interactions from any device that is supported by the hardware abstraction layer, including basic touch information (i.e., coordinates, touch ID) and any additional device specific data provided by the hardware.

<FrameInfo>
  <TimeStamp>10926403</TimeStamp>
  <Touches>
    <TouchInfo>
      <ActionType>1</ActionType>
      <Position>
        <X>451.14</X>
        <Y>107.29</Y>
      </Position>
      <TouchDeviceId>10</TouchDeviceId>
      <Tags>
        <Tag>
          <Key>Size</Key>
          <Value>10</Value>
        </Tag>
      </Tags>
    </TouchInfo>
    ...
  </Touches>
</FrameInfo>

Figure 3: An XML code fragment representing a part of a touch interaction. Some details are omitted for clarity. Each interaction is recorded as a frame which contains one or more touches.
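As a rough illustration of the record side, the sketch below serializes one frame into the XML shape shown in Figure 3. The element names come from Figure 3; the FrameSerializer class and the TouchFrame/TouchPoint types are the hypothetical ones from the earlier sketch, not TouchToolkit’s real implementation.

// Sketch: serializing one frame into the XML shape shown in Figure 3.
// FrameSerializer is hypothetical; TouchFrame/TouchPoint come from the earlier sketch.
using System.Linq;
using System.Xml.Linq;

public static class FrameSerializer
{
    public static XElement ToXml(TouchFrame frame)
    {
        return new XElement("FrameInfo",
            new XElement("TimeStamp", frame.TimeStamp),
            new XElement("Touches",
                frame.Touches.Select(t =>
                    new XElement("TouchInfo",
                        // Action code as in Figure 3; its encoding is not described in this excerpt.
                        new XElement("ActionType", 1),
                        new XElement("Position",
                            new XElement("X", t.X),
                            new XElement("Y", t.Y)),
                        new XElement("TouchDeviceId", t.Id),
                        new XElement("Tags",
                            t.Tags.Select(kv =>
                                new XElement("Tag",
                                    new XElement("Key", kv.Key),
                                    new XElement("Value", kv.Value))))))));
    }
}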
During playback this module reconstructs the touch data object from the XML content and sends the data to the system through a virtual device so that it appears to the rest of the system as if it is coming from the actual device. This allows developers to test applications that require multi-touch interactions on their development machine.
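The playback side can be sketched in the same hypothetical terms: a virtual device implements the input interface from the earlier sketch, rebuilds frames from the recorded XML and raises them exactly as a hardware driver would, so downstream components cannot tell the difference. Again, VirtualTouchDevice and its members are illustrative assumptions, not the toolkit’s actual classes.

// Sketch: a virtual device that replays recorded frames.
// VirtualTouchDevice and ITouchInputProvider are illustrative, not TouchToolkit's API.
using System;
using System.Collections.Generic;
using System.Xml.Linq;

public class VirtualTouchDevice : ITouchInputProvider
{
    public event Action<TouchFrame> FrameReceived;
    public void Start() { }
    public void Stop() { }

    // Reconstruct frames from recorded XML and replay them as if they came from hardware.
    public void Replay(IEnumerable<XElement> recordedFrames)
    {
        foreach (XElement f in recordedFrames)
        {
            var frame = new TouchFrame { TimeStamp = (long)f.Element("TimeStamp") };
            foreach (XElement t in f.Element("Touches").Elements("TouchInfo"))
            {
                frame.Touches.Add(new TouchPoint
                {
                    Id = (int)t.Element("TouchDeviceId"),
                    X = (double)t.Element("Position").Element("X"),
                    Y = (double)t.Element("Position").Element("Y")
                });
            }
            if (FrameReceived != null) FrameReceived(frame);
        }
    }
}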
Test Framework
Record and playback can be used for both manual and automated testing. While manual testing may involve gesture detection as well as other UI related functionality testing, the automated test framework focuses specifically on validating gesture detection code. Most automated unit test systems do not have the option to use an active UI during a test. However, gestures are directly related to the UI and testing them often requires UI specific functionality. To mimic a realistic application scenario, the test framework creates an in-memory virtual UI layer and subscribes to gesture events in the same way that an application would. The test framework can be used to test any type of gesture that can be defined in the gesture definition language, including complex multi-touch gestures that involve touch interactions with multiple steps. More details about the gesture processor and the gesture definition language can be found in [15].

[Figure 4: Work flow of the automated test framework. The Test Framework creates the virtual application, registers gesture events and hosts the user’s validation code; the Touch Recorder loads recorded data from storage, merges timelines, initializes the virtual device and simulates touches; the Core, Gesture Processor and Event Controller raise the “Gesture detected” and “Playback Completed” events. Solid lines denote continuous flow, the others asynchronous communication.]

Figure 4 shows the work flow of an automated test in TouchToolkit. To start, it creates the virtual application and registers the necessary gesture events during the initialization process. Then the Touch Recorder loads the data from storage and starts the simulation process. If a test involves simulating multi-user scenarios using multiple touch interactions then the system merges frames from the individual recordings into one timeline. The virtual device continues to send simulated device messages to the framework. As soon as the desired gesture is detected, it invokes the user defined validation code. Depending on the type of gesture the user defined code can be invoked multiple times. Regardless of the status of gesture detection, the framework also invokes the “Playback Completed” test code defined by the user at the end. An example test using this framework is given later in this paper.
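Merging frames into one timeline can be done by ordering the union of the recordings by time stamp. The helper below is a simplified illustration of that idea and not TouchToolkit’s actual code; a real implementation would also have to keep touch identifiers from different recordings from colliding.

// Sketch: merging independently recorded interactions into a single timeline,
// ordered by time stamp, so they can be replayed as one concurrent session.
using System.Collections.Generic;
using System.Linq;

public static class TimelineMerger
{
    public static List<TouchFrame> Merge(params IEnumerable<TouchFrame>[] recordings)
    {
        return recordings
            .SelectMany(r => r)            // flatten all recordings
            .OrderBy(f => f.TimeStamp)     // interleave frames by recorded time
            .ToList();
    }
}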
TESTING USING TOUCHTOOLKIT
Automated unit testing is a well known way to increase the effectiveness, efficiency and coverage of software testing [14]. It is one of the industry standard methods for repeatedly verifying and validating individual units of an application in regression testing. Though there are some simulators available to manually test tabletop applications, tool support for unit testing multi-touch gestures is limited.

Writing Unit Tests for Gestures
Traditional unit tests execute sequentially; however, multi-touch gesture based user interactions require asynchronous processing. For example, a “Flick” gesture requires a certain period of time to complete, so a test to validate the recognition of that gesture must wait until the gesture is completed. Such tests require asynchronous execution, which is supported by the TouchToolkit test framework API.

Multi-touch gesture interactions can trigger continuous events. For example, a photo viewer application needs to keep responding to the “Zoom” gesture in “real time” as long as the zoom interaction continues. So a test for this scenario needs to validate the zoom interaction continuously instead of just once it is completed. The TouchToolkit test framework allows a developer to write unit test cases for such continuous touch interactions.

To support asynchronous and continuous interaction testing, our test framework is event driven. The core of the API is the Validate method, which takes the following parameters:

1. expectedGestureName: The name of the gesture to detect (e.g., Zoom).

2. savedInteraction: The identifier of the recorded interaction that should produce the expected gesture.
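The paper’s own example appears in Figure 5, which is not reproduced here. As a hedged sketch of how a test built around Validate might look, the code below uses NUnit and a ManualResetEvent to wait for the asynchronous playback to finish. Only the parameter names expectedGestureName and savedInteraction come from the text above; the TestFramework class, its GestureDetected and PlaybackCompleted events, and the recorded interaction name are assumptions for illustration.

// Sketch of a gesture unit test. TestFramework, its events and the recorded
// interaction name "TwoFingerZoomOnPhoto" are hypothetical; only the Validate
// parameter names come from the paper.
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class ZoomGestureTests
{
    [Test]
    public void RecordedInteractionProducesZoomGesture()
    {
        var completed = new ManualResetEvent(false);
        bool zoomDetected = false;

        var test = new TestFramework();            // hypothetical entry point
        test.GestureDetected += gestureName =>     // may be invoked repeatedly during playback
        {
            if (gestureName == "Zoom") zoomDetected = true;
        };
        test.PlaybackCompleted += () => completed.Set();

        test.Validate("Zoom", "TwoFingerZoomOnPhoto");  // expectedGestureName, savedInteraction

        Assert.IsTrue(completed.WaitOne(10000), "Playback did not finish in time.");
        Assert.IsTrue(zoomDetected, "The Zoom gesture was not recognized.");
    }
}

The same pattern extends to continuous gestures such as Zoom: because the detection callback can fire repeatedly during playback, the validation logic can check every intermediate update rather than only the final state.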
[Figure 5: Example unit test code using TouchToolkit]

TouchToolkit allows developers to write unit test code for validating multi-touch interactions.

Figure 7 shows a screenshot of the TouchToolkit record and playback window for a multi-touch photo viewer application. The “Add New” tab in the debug window allows developers to record new touch interactions. The “Existing Gestures” tab allows developers to play back recorded touch interactions individually or in parallel. In step 1, a “Lasso” gesture is recorded. Next, in step 2 the “Zoom” gesture is recorded. Once these two are recorded separately, as shown in step 3, TouchToolkit allows one to run both of them in parallel to simulate a multi-user scenario. To differentiate the constituent interactions of a combined playback, TouchToolkit applies different color codes to the individual interactions, as shown in step 3. A speed controller on the playback panel allows the playback speed to be adjusted.

We used an audio recorder to record the conversations and a screen capturing tool to record a user’s activities on the screen during the coding tasks. The evaluation was done three months after the initial exploratory study using the same participants.

Data Collection
Each session lasted 50 minutes and consisted of three sections as follows:

First, we showed the participant a seven minute introductory video describing the main features of our framework and spent 1-2 minutes on follow up discussion. Then, we asked the participant to perform the following three tasks on a partial implementation of a tabletop photo viewer application:
3. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-defined gestures for surface computing. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, 2009, pp. 1083-1092.

4. Fails, J. and Olsen, D. A design tool for camera-based interaction. In Proc. CHI, 2003.

5. Witten, I.H. and Frank, E. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, San Francisco, 2nd edition, 2005.

6. Westeyn, T., Brashear, H., Atrash, A. and Starner, T. Georgia Tech Gesture Toolkit: supporting experiments in gesture recognition. In Proc. ICMI, 2003.

7. Ashbrook, D. and Starner, T. MAGIC: a motion gesture design tool. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, 2010, pp. 2159-2168.

8. Long, A.C., Landay, J.A. and Rowe, L.A. Quill: a gesture design tool for pen-based user interfaces. EECS Department, Computer Science Division, UC Berkeley, Berkeley, CA, 2001.

9. Gray, P., Ramsay, A. and Serrano, M. A demonstration of the OpenInterface Interaction Development Environment. UIST ’07 Adjunct Proceedings.

10. Kaltenbrunner, M., Bovermann, T., Bencina, R., and Costanza, E. TUIO: a protocol for table-top tangible user interfaces. In Proceedings of Gesture Workshop 2005, 2005.

16. Echtler, F. and Klinker, G. A multitouch software architecture. In Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges (NordiCHI ’08), Lund, Sweden, October 20-22, 2008. ACM, New York, NY, 463-466.

17. Hansen, T.E., Hourcade, J.P., Virbel, M., Patali, S., and Serra, T. PyMT: a post-WIMP multi-touch user interface toolkit. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’09), Banff, Alberta, Canada, November 23-25, 2009. ACM, New York, NY, 17-24.

18. Lee, J.C. Hacking the Nintendo Wii remote. IEEE Pervasive Computing, vol. 7, no. 3, July-September 2008, 39-45.

19. Pawar, U.S., Pal, J., and Toyama, K. Multiple mice for computers in education in developing countries. In International Conference on Information Technologies and Development, 2006.

20. Strauss, A.L. and Corbin, J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Sage Publications, 1998.

21. Villar, N., Izadi, S., Rosenfeld, D., Benko, H., Helmes, J., Westhues, J., Hodges, S., Ofek, E., Butler, A., Cao, X., and Chen, B. Mouse 2.0: multi-touch meets the mouse. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST ’09), Victoria, BC, Canada, October 4-7, 2009. ACM, New York, NY, 33-42.