All posts for the month March, 2010

I know this is a bit late, because my co-author has already reported it, but I am very happy to announce that some ears have been listening. The CHI ’10 Workshop, “Natural User Interfaces: The prospect and challenge of Touch and Gestural Computing” has granted an audience to the new metaphor for design.

The workshop is going to be an all-day event on Saturday, where each of the authors will present and discuss their position papers. The real benefit is that within such a narrow area of expertise, you get amazing peer review and leave with amazing new ideas. These kinds of tightly scoped gatherings are a wonderful way to foster innovation and creativity. It’s like a specialty conference inside of a specialty conference.

One of the most interesting things, as I read the accepted position papers, was a citation of my writings. In the paper, “Natural User Interfaces: Why We Need Better Model-Worlds, Not Better Gestures” (PDF), the authors argue for the need to separate “symbolic gestures” from “manipulations.”

Manipulations are not gestures.

We believe in a fundamental dichotomy of multi-touch gestures on interactive surfaces. This dichotomy differentiates between two classes of multi-touch interactions: symbolic gestures and manipulations.

They go on to define each specifically.

For us, symbolic gestures are close to the keyboard shortcuts of WIMP systems. They are not continuous but are executed by the user at a certain point of time to trigger an automated system procedure. There is no user control or feedback after triggering.

The opposite class of multi-touch interactions is manipulations. Unlike symbolic gestures, manipulations are continuous between manipulation initiation (e.g. user fingers down) and completion (e.g. user fingers up). During this time span, user actions lead to smooth continuous changes of the system state with immediate feedback.
It is a lovely way to differentiate the two, and I couldn’t agree more. As the crevasse between the two interactions widens, we begin to see many more differences. I have gotten a few emails from bewildered Interaction Designers, both young and old, asking, “why do we need this separation?”

The answer is not as apparent now as it will be in the next few years. We need this distinction because designers and developers need to think about experiences differently. We need to plan and design for interactions in a fluid and responsive way. Arbitrarily attaching a manipulation to a destructive action could have dire consequences for the user. Using a complex gesture for a simple help menu would create a pause in your user’s experience.
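The contract difference between the two classes can even be sketched in code. This is a minimal toy sketch, assuming a canvas with one draggable object; the class and method names are hypothetical, not from any real touch framework:

```python
# Toy sketch of the manipulation/gesture dichotomy.
# Canvas, begin_drag, drag_to, undo, and on_double_tap are all
# hypothetical names, not from any real framework.

class Canvas:
    """Toy surface state: one draggable object and an undo history."""

    def __init__(self):
        self.pos = (0, 0)
        self.history = []

    # Manipulation: continuous between initiation and completion,
    # every move updates state immediately, and it is easily reversible.
    def begin_drag(self):
        self.history.append(self.pos)  # checkpoint so the drag can be undone

    def drag_to(self, x, y):
        self.pos = (x, y)  # smooth, immediate change of system state

    def undo(self):
        if self.history:
            self.pos = self.history.pop()

    # Symbolic gesture: a discrete trigger, like a keyboard shortcut.
    # It fires an automated procedure once, with no user control
    # or feedback after triggering.
    def on_double_tap(self):
        self.pos = (0, 0)     # e.g. a "reset view" command
        self.history.clear()  # destructive: the undo trail is gone


canvas = Canvas()
canvas.begin_drag()
canvas.drag_to(5, 3)    # continuous manipulation, undoable at any point
canvas.undo()           # position is back where it started
canvas.on_double_tap()  # gesture: one-shot and irreversible
```

The point of the sketch is the asymmetry: the manipulation path keeps the user in control and reversible at every step, while the gesture path is a single, committed trigger.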

Let me give you an example of the two with a nice, crisp physical metaphor demonstrating OCGM (Objects, Containers, Gestures, Manipulations).

You have a pile of sticks.

Just a pile of sticks

Think of the sticks as Objects. Think of the table as a Container. Now let’s examine the relationship between the two. The container holds many objects, so an operation executed on the container has an effect on each of the individual objects inside it.
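That one-to-many relationship can be sketched in code; the class names here are hypothetical and simply mirror the metaphor:

```python
# Sketch of the Container/Object relationship: one operation on the
# container fans out to every object it holds. Stick and Table are
# hypothetical names that mirror the metaphor.

class Stick:
    """An Object."""

    def __init__(self):
        self.on_table = True


class Table:
    """A Container: operations on it affect all contained Objects."""

    def __init__(self, sticks):
        self.sticks = list(sticks)

    def clear(self):
        # A single container-level operation with many object-level effects.
        for stick in self.sticks:
            stick.on_table = False
        self.sticks = []


table = Table([Stick() for _ in range(3)])
table.clear()  # one operation, three sticks affected
```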

Now, let’s give you a few things to do those operations. First, let’s give you a hand, which we will call a Manipulation. Then let’s give you a chainsaw, which we will call a Gesture.

To move the sticks, to rearrange them, or to take them in and out of the container (on or off the table), we can use a hand. It might not be the most efficient tool, but it gets the job done, and we can work methodically. All the while, we can always undo our actions easily.

If someone comes up and talks to us, we don’t hesitate to respond, because nothing is really in jeopardy here. In fact, it might be nice for someone to come up and interact with us while we are doing these chores.

This also puts us at ease. We can do these things, and if we get interrupted, there is no stress. Why? Because everything is undoable. Things are easily moved back and forth with no real consequences. Even if we push the entire stack onto the ground, we can easily pick the sticks up and put them back. One-handed.

This is the physical realization of a manipulation.

a Chainsaw "gesture"

Now we pick up the chainsaw. This is a complex piece of machinery. Using it takes focus, intent, and possibly a plan. The things we can do with it are destructive and irreversible, and should be undertaken with care, because they could damage other things around the single container or its many objects.

We consult the manual, don safety gear, check the gasoline in the chainsaw, pull the starter cord, and fire it up. Each one of these actions was simple and harmless until they were combined. Combining all of these harmless manipulations results in a gesture that could do harm.

Let’s think about what we just did though. We had to do things in a certain order, we had to do each of them, and they were predetermined by something other than us.
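This ordered, predetermined sequence is exactly how a symbolic gesture recognizer is often modeled: a small state machine that only fires once every required step has occurred in order. A minimal sketch, with hypothetical event names:

```python
# Sketch of a symbolic gesture as an ordered state machine. Each
# low-level event is harmless on its own; only the full, predetermined
# sequence triggers the action. The event names are hypothetical.

REQUIRED_SEQUENCE = ["touch_down", "hold", "swipe_right", "touch_up"]

def recognize(events):
    """Return True only if the required steps occur in order
    (unmatched events in between are ignored)."""
    step = 0
    for event in events:
        if step < len(REQUIRED_SEQUENCE) and event == REQUIRED_SEQUENCE[step]:
            step += 1
    return step == len(REQUIRED_SEQUENCE)

recognize(["touch_down", "hold", "swipe_right", "touch_up"])  # True: fires
recognize(["touch_down", "swipe_right"])                      # False: incomplete
```

Like the chainsaw, the sequence is fixed by something other than the user: skip a step, or do it out of order, and nothing happens.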

Now, could we take our gesture chainsaw and destroy the Container and all of the objects inside? Yes. Could we also affect other containers and things in the vicinity? Yes. There is no undo; when it’s done, it’s done.

This is the physical realization of a Gesture.

But I digress…

Why is it important in interface design to distinguish between the two? To promote a better experience through expected interactions and results. Do you want your users concentrating on the most mundane of tasks? Let your users relax when they can. Do not make them concentrate on details like where to move an object when they don’t have to.

Allowing your users to free up precious concentration allows your interface to become more complex. Right now, the main impediment to complexity in interface design is the user’s limited attention. By distinguishing between the two, we can begin to create more complex interfaces while demanding as little of that focus as possible, whenever possible.