WIMP is the long-standing acronym for the windowed desktop user experience. It stands for Windows, Icons, Menus, Pointing Devices.
In human–computer interaction, WIMP stands for “window, icon, menu, pointing device”, denoting a style of interaction using these elements. It was coined by Merzouga Wilberts in 1980.[1] Although its usage has fallen out of favor, it is often used as an approximate synonym of “GUI”. WIMP interaction was developed at Xerox PARC (see Xerox Alto, developed in 1973) and “popularized by the Macintosh computer in 1984”, where the concepts of the “menu bar” and extended window management were added.[2] [via Wikipedia]
The WIMP interface is a slowly dying breed as our demands on user experience, and the demands of users, keep inflating. It’s time to start thinking in a new direction: a direction that sheds many of the harnesses of the old acronym and begins to explain the building blocks of the future. It will be simple, concise, and cover all of the bases we need. There is no need to rely on pointing devices, menus, or windows anymore. It’s time to let the experience be the interface and the user be in total control. The interface will begin to blend in with the experience, and the experience will be the interface.
I have spent several months thinking about this and trying to solidify something presentable. This is the fruit of my labor. I present to you:
OCGM
Objects
Objects are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface.
Containers
Containers will be the “grouping” of the objects. This can manifest in whatever form the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any method of presentation or relationship-gathering the designer sees fit.
Gestures
I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it.
Manipulations
Manipulations are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. They are easily performed, and accidental activations will be frequent and should be expected.
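To make the split concrete, here is a rough C# sketch (all type and method names are made up for illustration, not from any real SDK): the manipulation path reacts to every movement immediately, while the gesture path does nothing until the completed motion is recognized.

// Hypothetical sketch of the manipulation/gesture split described above.
// All names are illustrative; this is not any real SDK.
using System;
using System.Collections.Generic;

public class InputPoint { public double X, Y; }

public class SceneObject
{
    public void MoveTo(double x, double y) { /* render the object at (x, y) */ }
}

public class InteractionRouter
{
    private readonly List<InputPoint> _stroke = new List<InputPoint>();

    // Manipulation: direct and immediate. Every move event translates
    // straight into feedback on the object being touched.
    public void OnMove(InputPoint p, SceneObject target)
    {
        _stroke.Add(p);
        target.MoveTo(p.X, p.Y);   // the object tracks the finger in real time
    }

    // Gesture: indirect. Nothing happens until the motion is complete
    // and the whole stroke is recognized as meaningful.
    public void OnRelease()
    {
        if (Recognize(_stroke) == "scratch-out")
            Console.WriteLine("Delete command fired only after recognition.");
        _stroke.Clear();
    }

    private string Recognize(List<InputPoint> stroke)
    {
        // Placeholder for real pattern matching over the stroke's path.
        return stroke.Count > 20 ? "scratch-out" : "none";
    }
}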
This acronym is short, concise, and to the point. It contains all the elements the modern designer will ever need. In discussing this acronym with someone yesterday, he asked, “Why do you separate out manipulations and gestures?” This is a good question, and it lies at the very core of modern design. These are the two basic interactions needed for a NUI, touch, or even a Windows-based system. The first is easy, intuitive, and usually wrapped in a metaphor of some sort. The second is complex, learned, non-physical, and super-natural. Understanding these two types of interactions is core to designing something for the modern world.
We have objects, which can be grouped into containers. We have manipulations, which can be contained inside of a gesture. The simplicity is liberating.
By a lucky coincidence, the acronym also bears very similar pronunciation and essence to Occam’s Razor. The simplest answer tends to be the right one.
Occam’s razor (or Ockham’s razor[1]), entia non sunt multiplicanda praeter necessitatem, is the principle that “entities must not be multiplied beyond necessity” and the conclusion thereof that the simplest explanation or strategy tends to be the best one. The principle is attributed to the 14th-century English logician, theologian, and Franciscan friar William of Ockham. Occam’s razor may be alternatively phrased as pluralitas non est ponenda sine necessitate (“plurality should not be posited without necessity”).[2] [via Wikipedia]
I hope you love this acronym as much as I do. Thanks for reading and feel free to comment.
I think a paradigm shift in the way we interact with computers is probably very overdue (and the graphical capabilities of machines these days would allow for very nice new interfaces). I admire your attitude towards the shift and I love your acronym; I think it sums up the way modern interfaces should be thought of. I also think it’s sad that the old interfaces should die, as the WIMP system is the way it has always been since the birth of GUIs and is quite nostalgic for many… I’m a big fan of the Windows GUI (and love the way the taskbar has been brought into modern times with the superbar in Win7).
The way GUI development is going, things seem to be being made simpler for a wider range of audiences (possibly at the cost of ultimate control of the system, but that is my opinion), and for the casual user this is fine. But power users, such as myself, may feel that over-simplifying things could leave a lot of functionality ultimately locked out. So I remembered that, in a GUI, there is always an option to make things more advanced/accessible, such as changing the Control Panel view to classic (Win), changing the appearance of the GUI to the style of older OSes, and (going back a few years) even running in DOS rather than a graphical OS. Perhaps a more appropriate example is Windows Media Center: if a user simply wants to play their music and videos without any complications, they can use this simple tailored interface designed for that purpose, but if this is not enough and the user wants more functionality, they can use Windows Explorer to look for the file, change its name and metadata, organise music into folders, etc.
So I had the idea of possibly having the OPTION to change between a WIMP interface and an OCGM interface, depending on the user or on what they need to do at a certain time. Such a change could be controlled by a simple gesture like WinKey+Tab, which is used to look through open windows in the Aero interface. This leaves power with the user while also giving them a clean and simple interface (OCGM) when they want to do simpler tasks. Just a thought, what do you think?
We agree on several levels.
First, I too hate “simpleton” interfaces. In fact, I really think they are some of the worst designs on the market today. When you keep trying to design for a wider audience AND maintain your expert user, you WILL begin to alienate. There is no way around it beyond something like you said above. The problem is, no one does that. They try to satisfy all customers all of the time and this is not possible. What happens is the interface gets so simple, it actually gets difficult.
Here is a prime example. Getting your IP address in Windows XP was 2 clicks with no submenus. In Windows 7, it’s now 5-6 clicks depending on your settings, and 2 submenus minimum. Why did they make it more difficult? Well, I actually talked to the designers, and they said it was so they could move the Network Properties back into a category with its peers. This makes sense on the outside, but to the user, it’s more difficult.
These are the types of things I am proposing. I am not proposing a simple interface with no room for the user to “grow.” In fact, I want an interface that grows with the user and his experience level. I want the interface to “follow” you from device to device. Your Name is your experience.
As we begin to think about the system as a whole, we start to think of how each component of that system will grow along with us and be in tune with how we PREFER to interact with it. Not how it prefers to interact with us.
There’s no need to choose between a simple interface for a new user and a complex interface for a power user. What if you build the interface you want to use for that particular system? There’s no need for a predetermined set of buttons to be in a line; why not have no line and let you put buttons, gestures, whatever you like there. Customization is the key.
Fair enough! I agree entirely – the interface should fit the user. You certainly seem to be the right guy for the job 😉
I like this a lot; moving to a more object-driven definition of the UI is definitely on the right track.
One thing I find lacking in your OCGM is something that defines relationships. Each of the OCGM elements will be impacted by “R”elationships within and between themselves.
The question is whether “Relationship” is too low-level, as the OCGM elements, from what I can gather, are higher-level definitions.
Great Comment.
Relationships are covered under “Container.” The important thing to remember is that a container does not necessarily reflect a physical object. A container is just a method to group, show, or somehow perform a relationship on items. This allows the designer wide latitude on design decisions that are RIGHT for the experience, rather than keeping them chained up with ‘design patterns’ or being consistent for consistency’s sake.
An example of a container could be a tag. Using the tag joins items in a relationship, but it does not need to physically contain them.
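A tiny sketch of that idea (hypothetical types only): a container here is just a named relationship, so an object can sit in many containers at once without being physically “inside” any of them.

// Hypothetical sketch: a Container as a relationship (here, a tag),
// not a window. Membership implies no spatial containment.
using System.Collections.Generic;

public class Item { public string Name; }

public class Container
{
    public string Tag;                                    // e.g. "vacation", "work"
    private readonly HashSet<Item> _members = new HashSet<Item>();

    public void Add(Item item) { _members.Add(item); }    // join the relationship
    public IEnumerable<Item> Members { get { return _members; } }
}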
I want to empower modern designers and enable them to use all the tools at their beck and call to create amazing experiences. Limiting designers to formulas is counter-productive.
Apologies for commenting again, but this post definitely intrigues me.
Another important feature of UIs that designers need to consider, but normally don’t until well into development when it’s too late, is something I’m calling “Reactions.”
Basically, how a gesture or manipulation causes a “Reaction.” I think this is important enough to be promoted to a first-class citizen in your definition 🙂
Far too many times, the UIs I’ve built have lacked a definition of how the UI behaves after someone clicks on it or manipulates some object/container.
What’s your opinion on “Reactions”? Is it too low level a definition? Is it not important enough to be included in OCGM?
Affordances are very important for several reasons. They are not as important as, say, “object,” but they are an integral part of the design for a manipulation and gesture.
Manipulations need immediate feedback. Inform the user of the state, the action, and the result as soon as possible. You could also give visual affordances around “possibilities” to show which objects can be manipulated, etc.
It is also very important for gestures to have visual affordances. A great paper out of the Advanced Design Team for CHI is about that very thing. Learning gestures, recognition, auras: all of those are things the designer needs to understand and implement.
All in all, the individual manipulations or gestures will be constructed for their specific purpose and their specific environment. We need to give that freedom to the designer to make the best judgment for his product.
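As one possible illustration (a hypothetical sketch, not any particular framework’s API), an affordance pass could simply highlight every manipulable object the moment contact begins, communicating the “possibilities” before the user commits:

// Hypothetical affordance pass: when input begins, visually mark
// everything that can be manipulated; clear the cues when it ends.
using System.Collections.Generic;

public class VisualObject
{
    public bool CanManipulate;
    public void ShowHighlight() { /* e.g. a glow or outline */ }
    public void HideHighlight() { /* remove the cue */ }
}

public static class Affordances
{
    public static void OnContactDown(IEnumerable<VisualObject> scene)
    {
        foreach (var o in scene)
            if (o.CanManipulate) o.ShowHighlight();   // advertise the possibility
    }

    public static void OnContactUp(IEnumerable<VisualObject> scene)
    {
        foreach (var o in scene) o.HideHighlight();
    }
}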
I welcome change that makes computers more visually intuitive to use. The less I have to keep in my head the better – which is why I always preferred Windows.
But as a developer, I hope that all of these advancements will remain as accessible to develop as the old ways of doing things. WIMP is easy – you have a single pointer that can click on things, which triggers actions that happen one at a time. There’s no need for much mental flexing to wrap your head around this, and it’s easy enough to abstract away, making development accessible.
With things moving into natural interfaces, multitouch, parallel processing, rich UI/Xs, all these ideas that we’ve gotten so used to, like the simple concept of things happening one after the other, are starting to shatter. I feel this is going to be painful because we haven’t had a reinvention of the basics since… well, ever. The idea that I might soon be able to do things like press two buttons on the screen at the same time absolutely thrills me as a user, but honestly scares me a bit as a developer.
It’s good that these abstractions have finally matured enough to make their debut into the mainstream, but here’s hoping that something as ubiquitous as UI won’t need a terribly huge level of specialization on the developer’s part to make a modern application.
Ilia,
The good news about multi-touch development is that it is incredibly easy with WPF 4 (and the Surface SDK before it). You don’t have to worry about parallel processing or similar things for the multi-touch aspects. You still need to use worker processes for long-running tasks, of course, but for the most part the event model is the same as in standard WPF applications.
There will be some head wrapping to get used to OCGM but the actual development difficulty is on par with GUI development with the same level of quality.
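To give a flavor of what I mean, here is a condensed sketch of the usual WPF 4 manipulation pattern (event wiring only; application bootstrapping and XAML are omitted, and details may vary by version):

// Condensed sketch of the common WPF 4 manipulation pattern.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

public class TouchDemoWindow : Window
{
    private readonly Border _card = new Border
    {
        Width = 200,
        Height = 120,
        Background = Brushes.SteelBlue,
        RenderTransform = new MatrixTransform(),
        IsManipulationEnabled = true            // opt the element into manipulation events
    };

    public TouchDemoWindow()
    {
        Content = new Canvas { Children = { _card } };
        _card.ManipulationStarting += (s, e) => e.ManipulationContainer = this;
        _card.ManipulationDelta += OnManipulationDelta;
    }

    private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // Direct manipulation: apply each translation/scale/rotation delta
        // immediately, so the element tracks the fingers in real time.
        var transform = (MatrixTransform)_card.RenderTransform;
        Matrix m = transform.Matrix;
        Point c = e.ManipulationOrigin;

        m.RotateAt(e.DeltaManipulation.Rotation, c.X, c.Y);
        m.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y, c.X, c.Y);
        m.Translate(e.DeltaManipulation.Translation.X, e.DeltaManipulation.Translation.Y);

        transform.Matrix = m;
    }
}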
I’m writing a book about Multi-touch and NUI development. It will cover development using the WPF 4 Touch API as well as the basic design concepts the developers need to understand to communicate better with designers and implement great NUI designs. Watch my blog http://nui.joshland.org for more information, coming soon!
Hi Ron,
Congratulations on starting your own company! I wish you success and happiness.
I read your post on OCGM with interest after reading Josh Blake’s post – he emailed me to let me know you guys were working on this.
While I applaud every effort to try to quantify and define NUI I don’t agree with the OCGM. Rather than go into detail here I’ve posted a blog entry about it on my own site. I’m happy to discuss in more detail in any venue you find appropriate.
Here is my blog entry on OCGM
http://theclevermonkey.blogspot.com/2009/12/what-is-nuis-wimp.html
All the best!
Richard
Thanks for that post. I read it and came away with several points. I got about 5 paragraphs into a rebuttal and decided that I will provide some examples and give some direct answers to better explain the concepts in a new blog post, so it’s easier to find and read. I will hopefully be doing that in the next hour or two.
Thanks again for the detailed post.
Hi, nice article.
Two comments:
A container contains objects, so you first explain what an object is, then a container: logical.
A gesture can be composed of multiple manipulations, but you explain gestures first: illogical and inconsistent with the container/object explanation.
I know it’s a very picky detail, but I think UIs and NUIs are all about details, you know, like “if you can do it with one click, don’t use 2.”
Plus, gestures and manipulations are so closely related that you really need to explain this right and clearly from the start.
My second comment is about the disappearance of the WIMP paradigm: I don’t believe in it. It is not thinkable for programs with very intensive menu usage like Word, Photoshop, or 3DS Max, for example.
I think there is only a very limited set of actions/gestures/manipulations that can be used naturally by most users, and there is no point in learning a specific and difficult gesture to start a “Blur Edge Effect” in Photoshop if I can simply click on a menu: “Filters > Blur > Blur Edge.”
You don’t have enough faith!
A gesture at its core is just a prolonged action that ends with a result.
What if I had a small menu at the bottom left? When the user presses it, a submenu comes out, and then another. Where your finger lands is what action takes place.
Now.. remove the menus. 🙂 Welcome to the OCGM Generation!
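In code terms, a sketch of that last step might look like this (purely hypothetical names): the route your finger would have traced through the nested menus survives as a direction sequence, and the menus themselves disappear.

// Hypothetical marking-menu-style sketch: the path through what used
// to be nested menus becomes the gesture itself.
using System.Collections.Generic;

public enum Dir { Up, Down, Left, Right }

public static class StrokeCommands
{
    // Each direction sequence is the route the finger would have taken
    // through the old menu/submenu hierarchy -- with the menus removed.
    private static readonly Dictionary<string, string> Commands =
        new Dictionary<string, string>
        {
            { "Up,Right",  "New Document" },   // was: Menu > File > New
            { "Up,Down",   "Save" },           // was: Menu > File > Save
            { "Left,Left", "Undo" }            // was: Menu > Edit > Undo
        };

    public static string Interpret(IEnumerable<Dir> stroke)
    {
        string key = string.Join(",", stroke);
        string command;
        return Commands.TryGetValue(key, out command) ? command : "(unrecognized)";
    }
}

// Usage: StrokeCommands.Interpret(new[] { Dir.Up, Dir.Right }) returns "New Document".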
I don’t understand your answer.
Are you saying menus will still be used, but hidden after use?
If so, you don’t really change the ergonomics.
If not, my question was how to handle very menu-heavy apps like Photoshop or 3DS Max.
I am not as technical as most on this blog, so I cannot comment on specifics in a coding sense. What I can say is that I went from being able to navigate Windows Explorer in Windows XP, and being able to see all of my installed programs in the START menu as well, to having to “jump through hoops” in Vista. The navigation and functionality of Windows Explorer in Vista is, to say the least, confusing. And the fact that clicking on START –> Programs doesn’t give me the ability to access some programs (intuitively) makes me fume.
I would like a computer with an operating system that has an easy flow (e.g., being able to type on the desktop the name, or a portion thereof, of a file or program and have it instantly give me the executable files to choose from, instead of having to go to a folder and try to figure out which executable I have to use to start it) and not so many buttons and settings.
I hope Windows 8 will approach that level of simplicity for the end user.
I think you will find Windows Vista and Windows 7, with desktop search indexing, have already accomplished this, either from the run command on the Start Menu or from the Search field on the Task/Superbar.
Windows 7 already does this: you press the Windows key, which brings up the Start menu with focus already in the search box. From there you can type “Excel” or “Firefox” or any other name (even partial names) of any app which has an entry in the Start menu (as 99.99% of apps you install have) and press Enter, et voilà!
The same thing is possible under XP using the free software “Launchy”
enjoy 😉
@Scott
It’s a bit off-topic, but the Vista/Win7 Start menu does exactly what you ask for in your second paragraph. Click Start, type ‘calc’, hit Enter. Calculator starts. That’s more efficient than searching through the menus for the Calculator entry.
In windows explorer, things like Libraries and Favorites make me FAR more efficient with file management than I used to be in XP.
I posted a “PART I” response to this post on my blog. It is a bit long, because I’m in the middle of reading “Acting With Technology: Activity Theory and Interaction Design” and also thinking about concepts that relate to universal design that could be considered as we move forward in the “post-WIMP” world.
The Post-WIMP Conversation: Some Thoughts about Conceptualizing the NUI, Part I
http://bit.ly/7Xcm0P
Thanks Lynn. I plan on writing some more details in a new post hopefully tonight. I wanted to get it out there naked first to promote interest. Now comes the boring technical details. 🙂
I think OCGM is a pretty cool acronym, but I may fall into the camp that believes we don’t necessarily need to separate Gestures and Manipulations at this foundational level. I’m all for having separate definitions, but I tend to agree with Richard when he says having both is redundant. Not a deal-breaker for me though.
I think the lack of specifically calling out affordance, or something else that addresses the need for intuitiveness in NUIs, might be a bigger issue. Answering this by saying it is built into the requirements of defining a gesture or manipulation seems weak.
I keep coming back to a fundamental question though: why do we need a new WIMP? Were people really using WIMP as a guiding principle when designing GUIs? More broadly, you could argue that coining WIMP was a direct result of the inherent limitations of available hardware at the time. Isn’t NUI supposed to lead us into a world where interacting with computers has little to no limitations and works ‘naturally’? What is Mother Nature’s version of WIMP?
My take at Nature’s version of WIMP:
Exist
Sense
Intend
Act
Result
ESIAR pronounced like ‘easier’
🙂
Thanks for the post.
As I was saying to Richard, the magnitude of difference between a gesture and a manipulation is far undervalued in the non-professional community.
One of the biggest debates at larger companies is the use of the word “gesture.” Marketing departments want to use it because it sells. Designers do not want to use it because it is inaccurate. The main problem is that misuse of the word by those of us in the field confuses those who are not in it.
One of the largest complaints I have about the external-NUI field is the lack of a common language. We all need to agree on a common language so we can come up with solutions. If people call each thing a different word how can we properly communicate directions or need? Then, take that a step farther and say to a developer, how can we communicate with him or her if we cannot even explain it to each other?
On that same note, how can I explain to a developer how crucial and important the difference between a manipulation and a gesture is when people consistently try to devalue it?
As an example, in Dan Saffer’s book, Designing Gestural Interfaces… he is wrong. Throughout the entire book he uses the wrong terms. He confuses the definitions of the two interactions and misuses them throughout. How can that be used as a reference?
That is one of the reasons I am now writing a book for O’Reilly. To clear the air of some of these arguments that are gaining cobwebs with no discussion.
I know this thought has probably never occurred to you, but perhaps it is YOU that is wrong? I’m tired of hearing you say that, repeatedly, about me and my work when it is a difference in terminology, and I’m using the term that EVERYONE aside from you (and perhaps some of your pals at Microsoft) is using. I didn’t come up with it; the industry did. Bill Buxton, whom you are so quick to cite and who knows more about this field than you and me combined, uses “gesture” to describe directed physical movement as well, as evidenced by his latest piece called “Gesture Recognition.” http://www.billbuxton.com/input14.Gesture.pdf Why not go to his office up there at MSFT and tell him he’s wrong too?
Best of luck getting the industry to change terminology that’s been in place for at least a decade now. I’ll check back in a year and see how that works out for you. Until then, how about some professional courtesy and note that you have a different take on an emerging field, and perhaps even calmly, logically, less arrogantly, lay out your case? Just saying I’m wrong over and over does not make it so.
As for this OCGM gargle-sounding acronym (which might be more silly than “modern experience design,” but just barely), I’m simply going to point to Joel Spolsky’s post on “Secret Language”:
“Microsoft..[has] become so insular that their job postings are full of incomprehensible jargon and acronyms which nobody outside the company can understand. With 93,000 employees, nobody ever talks to anyone outside the company, so it’s no surprise they’ve become a bizarre borg of “KT”, “Steve B”, “v-team”, “high WHI,” CSI, GM, BG, BMO (bowel movements?) and whatnot.
When I worked at Microsoft almost two decades ago we made fun of IBM for having a different word for everything. Everybody said, “Hard Drive,” IBM said “Fixed Disk.” Everybody said, “PC,” IBM said “Workstation.” IBM must have had whole departments of people just to FACT CHECK the pages in their manuals which said, “This page intentionally left blank.”
When you talk to anyone who has been at Microsoft for more than a week you can’t understand a word they’re saying. Which is OK, you can never understand geeks. But at Microsoft you can’t even understand the marketing people, and, what’s worse, they don’t seem to know that they’re speaking in their own special language, understood only to them.”
http://www.joelonsoftware.com/items/2009/12/30.html
Come to think of it, this applies to your trying to rename gestures to manipulations as well. Thanks Joel!
Hi Dan! Thanks for your comment.
I’ll be honest. I thought this was a joke after I read what you wrote and saw what you cited. It might have been better for you to do this in private, but I’m all for public as well. I think it benefits the community to see rational arguments in this space.
I have spoken to Bill several times about this very subject. We are in 100% complete agreement. In fact, without some peer review I would never have imagined releasing this for public debate. Needless to say, I am always astonished at his wealth of knowledge. I cite him continuously because he is a rational, well-thought-out guy. I respect him very highly.
On to the technical……..
Dan, have you actually read Bill’s paper, cover to cover? The one you cited, because that is one of my cites for this acronym. I mean, if you had read it, you would understand that again…. you are wrong. You are 100% completely wrong about the terminology. This isn’t a Microsoft thing, this is a knowledge thing.
You are using the terms incorrectly. Your “industry” is using the terms incorrectly. No one uses them the way you do except for non-professionals in the field and marketing. Marketing has flat out told me personally that “Gestures sell… period, what can we do?” So if you want to use them in a salesman’s fashion, that is your prerogative, but please do not try to pass that off as knowledge.
Let me spell it out for you, and I will only use YOUR cite:
Page 1!
“A gesture is a motion of the body that contains information. Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger on its way to hitting a key is neither observed nor significant. All that matters is which key was pressed.”
This is much more eloquent than my definition, but exactly what I am saying. Manipulations are single, thoughtless interactions. Gestures are more involved.
Page 3!
“In contrast to this rich gestural taxonomy, current interaction with computers is almost entirely free of gestures. The dominant paradigm is direct manipulation, however we may wonder how direct are direct manipulation systems when they are so restricted in the ways that they engage our everyday skills.”
He goes on to discuss Chunking and Phrasing; I feel it is an incredibly insightful paper. This all goes to the true definition of manipulation and gesture.
So Dan, before you blindly dismiss this acronym for personal or whatever reasons, please spend some time to understand it. If you understand it and don’t agree with it, then I can respect that. Please come up with something better, and if I agree with it, I will beat the drums for you.
Dan, also thank you for your twitter update:
“Before making up Yet Another Acronym (YAA), ask if it’s really going to help people work better or succinctly encapsulate a complex concept.”
That’s rather harsh to say when you don’t even understand the foundation of the acronym.
PS: Thank you so much for the Joel link, that is absolutely hilarious. That made my night, I shall begin the retweets asap. lol.
Nothing’s really new under the sun. To me, all those words sound like they come from a young guy who feels he could revolutionize the world. I myself am more interested in substance than in propaganda… Sorry, not convinced at all.
Thank you for the comment.
I am neither young (I’ll be 40 in a week) nor unaccomplished. In fact, I have done everything I have wanted to do in the design world. Ever used MySpace, Yahoo Mail? ToonTown? Sony Music? Visited a Clear Channel radio station website? When I started doing this, about 16 years ago, I had one goal: work on Windows. I have worked at Microsoft for 5 years and got my paws in Windows 7.
It may sound like propaganda, but these core concepts are crucial to the design world, and even more so when dealing with non-designers.
For more substance you could read my latest blog entry, Part 2, which has more substance, examples, and details…or you could dismiss that one as well and head back to Reddit, whatever floats your natural user boat. 🙂
The field of “gesture recognition” as part of computer science goes back at least 15+ years. Papers such as “Television control by hand gestures” by William Freeman and Craig Weissman from Mitsubishi Electric Research Lab came out in 1995. I refer you to Wikipedia: http://en.wikipedia.org/wiki/Gesture_recognition
Clearly, I did not make up the term or use of the word gesture to designate motions of the human body as input device. But let’s look at my definition.
“A gesture is any physical movement that can be sensed and responded to by a digital system without the aid of a traditional pointing device such as a mouse or stylus.” I think we can put keyboard in there as well.
Bill defers to the 1990 article by Kurtenbach and Hulteen “Gestures in Human-Computer Communication” for their definition: “A motion of the body that contains information.”
I see no disagreement between those two definitions. In my discussions with Bill, he’s challenged me on points of history (which I have conceded), but never on the use of the term gesture. My book also had multiple technical editors, none of whom challenged it either.
You, however, seem to want to make “manipulation” rather than “gesture” be the overarching category of human movement. This is your prerogative, but it does not make me wrong in that I’m using the standard, industry-wide definition that Bill uses and has been around since at least 1990 in the HCI/CS fields. These technical papers are far from marketing.
What irks me is that you are using your own definition as a yardstick and, since you have a different, non-standard definition, you disparage me and my book because we don’t follow it. Multiple times. Consider, just consider, that you might be trying to change a standard, of which I’m a teeny, tiny fraction.
Dan,
I’m comparing your definition of gesture with Bill’s definition. You say you don’t see any disagreement between the definitions. I can see why you say this, since everything that fits Bill’s definition also fits your definition.
I think the reason Ron takes issue with your definition is that it is too broad. Your definition is a superset of Bill’s definition and thus includes things that are not true Gestures. Consider Bill’s (i.e. Kurtenbach and Hulteen’s) definition of Gesture:
“A motion of the body that contains information.”
The form of this definition partitions the set of all body motions into motions that contain information and motions that do not contain information. Kurtenbach clarifies how to determine whether a motion contains information:
“Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger on its way to hitting a key is neither observed nor significant. All that matters is which key was pressed.”
Thus, if the motion itself is observed and significant, it is considered to contain information and is therefore a gesture. On the other hand, if the motion is not significant, then it is not a gesture. To put it another way, a body motion is a gesture if and only if achieving the desired result is conditioned upon specific qualities (such as the path, speed, and/or direction) of the motion.
Your definition does not differentiate between motions with information and motions without information. It includes any motion that a computer can detect and does not exclude non-gestures, so it is really just a definition of motion detection. Gesture does not mean “motions of the human body as input device.”
I hope you can see how using your broad definition as the definition for gestures is inaccurate and can cause conceptual confusion.
The question of whether a motion is a manipulation is orthogonal to whether a motion is a gesture or not. A motion could be classified as both a gesture and a manipulation, just a gesture, just a manipulation, or neither.
My understanding is that a motion is a manipulation if there is a direct and immediate relationship between at least one degree-of-freedom of the body motion and at least one degree-of-freedom of an interface element.
Examples:
1) Motion is neither: finger moves around and no action occurs
2) Motion is a Gesture only: finger moves back and forth four times quickly to scratch out/delete a word under the gesture
3) Motion is a Manipulation only: finger moves an object around the screen
4) Motion is a G & M: finger drags an object back and forth four times quickly to delete the object
Gestures and Manipulations co-exist and the distinction is important.
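A hypothetical classifier makes the orthogonality explicit; the two tests are independent, which is why all four combinations above can occur.

// Hypothetical sketch of the orthogonal classification above.
public class Motion
{
    // Gesture test: is the result conditioned on qualities of the motion
    // itself (path, speed, direction)?
    public bool ResultDependsOnMotionQualities;

    // Manipulation test: does a degree of freedom of the motion directly
    // and immediately drive a degree of freedom of an interface element?
    public bool DirectlyDrivesElement;
}

public static class MotionClassifier
{
    public static string Classify(Motion m)
    {
        bool gesture = m.ResultDependsOnMotionQualities;
        bool manipulation = m.DirectlyDrivesElement;

        if (gesture && manipulation) return "Gesture and Manipulation"; // example 4
        if (gesture)                 return "Gesture only";             // example 2
        if (manipulation)            return "Manipulation only";        // example 3
        return "Neither";                                               // example 1
    }
}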
Dan, you are thinking too deeply about the responses rather than the heart of the matter.
I don’t think this is some grand evil plan that you just invented out of thin air. On the contrary, I think just the opposite: you are a product of the drifting divide between technical design and artistic design.
There has been a very clear divide between the types of design. The technical designers have drifted into their own realm and the artistic designers as well. The line is clear but it shouldn’t be.
You and I disagreeing about this point also does not make this a drastic affair. Disagreeing with colleagues is a way of scholarly thought! We should disagree on things, it promotes creativity and critical thinking! Our disagreement on this definition has nothing to do with you personally or the quality of your designs.
Also, don’t think that I dislike your book. That is not the case at all. I actually have 3 copies of it sitting on my shelf that I loan out to inquisitive students. I will admit though, I do have some “edits” on the first few pages, 😉
The simple fact is, we need more books on the subject. We all need to buy more, read more, and write more. This field of study is so young that it lacks in this area and it shows.
Hi Ron,
Interesting thoughts, and I’m looking forward to your technical details. I’ve been hoping to redefine the UI/UX for my company’s product as well, but I am much further from where you are at this point.
In any case, I’d like to share some thoughts and would appreciate comments.
As Win7 and multi-touch slowly move into the consumer space, and even before that, I’ve had opportunities to try out several touchscreens. Together with previous experience with pen-based tablets (coupled with my own conjecture on why they did not take off as someone predicted), I would venture to say that a gesture-based HCI is not likely to receive wide adoption in most productivity spaces until the UI is no longer WIMPy.
From those demos which I have seen, multi-touch truly shines under certain scenarios, but I cannot imagine myself employing gestures for my day-to-day work at this juncture.
The reason I cannot imagine gesturing in day-to-day work is the same reason I cannot play Nintendo Wii for more than an hour without breaking a sweat and feeling tired. Physiology? Ergonomics?
I can see myself using gestures for day-to-day work if I have my touchscreen recessed nicely into my desk (at an angle for comfortable viewing) which at the same time allows my arms to rest on the table top while I work.
On the other hand, I’m not too keen to use Win7 and the touchscreen as my primary means of input just yet.
So-called “Win7 touch-based PCs” are already commercially available from HP, Dell, Acer, etc. Using them would probably mean that I have to move my entire arm to do the “same” thing I now do with my wrist and fingers (mouse button). Of course, the catch is in “same”, which is where I believe OCGM needs to come in rather radically.
One incident was particularly memorable. I recall a teenage promoter at a recent IT fair trying to launch MSPaint more than 5 times using a multi-touch screen with Win7.
I could see that he was seriously ‘trying’, but that is as far as he got, because he did not succeed in launching MSPaint before I politely asked him to let me ‘try’.
Touching the Start button, then ‘guiding’ the cursor with a finger across the Start menu items seems like a simple task, which I am sure he would have no problem with if that finger were resting on a mouse. Unfortunately, he consistently managed to have the Start menu close on him before arriving at the MSPaint menu item, or launched the items above/below MSPaint.
I thus believe that, until the arrival of OCGM in the consumer space or whatever radical changes are necessary, something like this: http://www.apple.com/magicmouse/ or this: http://www.geek.com/articles/chips/redmond-goes-after-apple-with-five-touch-enabled-wild-mice-of-the-future-2009106/ would likely serve the public interest better.
Hope OCGM can take this into consideration as it evolves.
I think you have a great grasp of the new technologies. The only point I would caution you about is this:
How do you define gestures? Don’t think of gestures as complex motions performed on a touchscreen. They can be any set of motions simple or complex performed on anything. Thinking beyond the touchscreen on your desk, which has obvious ergonomic problems, think about putting a touchpad in front of you.
Check out this video, http://www.youtube.com/watch?v=9qg8IB64yu8
This is a promotional video about Fingerworks and the line of products he had. The touchpad that Westerman created was incredible.
For a bit of history, Westerman was the guy who wrote the seminal paper in 1999 about multi-touch pads and who Apple hired to do all the touch work for the iPhone.
The other point you bring up is valid. The UI should respond to the type of input that you choose. So if you begin to touch with a fingertip, it should change and offer better affordances to the fingertip. It’s useless to try to create a UI that will encompass all inputs.
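As a small sketch of that point (TouchDown and MouseDown are standard WPF events; the resizing policy is just an illustration), the UI can notice which input type is actually in play and grow its targets for a fingertip:

// Sketch: adapt an element's affordances to the input device in use.
// TouchDown/MouseDown are standard WPF events; the resizing policy here
// is only an illustration.
using System.Windows.Controls;

public class AdaptiveButtonHost
{
    private readonly Button _button =
        new Button { Content = "Open", Width = 90, Height = 24 };

    public AdaptiveButtonHost()
    {
        // A fingertip is a much coarser pointer than a mouse cursor,
        // so grow the target when touch arrives.
        _button.TouchDown += (s, e) => SetTouchFriendly(true);

        _button.MouseDown += (s, e) =>
        {
            // Mouse events synthesized from touch carry a StylusDevice;
            // only shrink back for a genuine mouse.
            if (e.StylusDevice == null) SetTouchFriendly(false);
        };
    }

    private void SetTouchFriendly(bool touch)
    {
        _button.Width = touch ? 160 : 90;
        _button.Height = touch ? 48 : 24;
    }
}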