In this post I’m going to explain some of the concepts and give a few examples of each. For the most part I will be responding to other posts I have seen on the topic. I will not explain everything, because I want you to let your mind roam and explore this area on your own. Think of it as the Socratic method for promoting and understanding the concept.
OCGM
This concept for promoting the design and development of modern interfaces is not an end-all, be-all solution. It is just another step in the conversation about design.
When discussing interface or system design, we need a way to convey the general concepts to people who are not designers. An acronym like this comes into play when you are deciding what type of interface your system will have: it lays out the cornerstones of development quickly and captures the general ‘feel’ of the end result. Typically, when you hear what type of interface you are going to design, you hear “WIMP,” which stands for Windows, Icons, Menus, and Pointing devices. That tells the developers, in short and quick fashion, exactly what to expect, and it is still in use to this day.
In our generation, systems and interfaces are growing by leaps and bounds. The easiest way to communicate that shift to a developer, a stakeholder, or another designer is with the new acronym: OCGM.
“What type of interface is it going to be, Ron? WIMP?”
“Actually, no, it’s going to be OCGM!”
“Ummm, huh? Like… we already have these templates made up with buttons and sizes… wait… what the hell is OCGM?”
OCGM breaks the basis of all future interfaces into two categories, one for items and one for actions, and each of those breaks down into two subcategories. Everything on an interface that you interact with will be one or the other. Within each category we have a base unit and a complex unit. With those four pieces, you can begin to discuss exactly how they will come into play in your future interface.
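To make that two-by-two split concrete, here is a rough sketch in TypeScript. The type names are mine, invented purely for illustration; there is no official OCGM spec behind them:

```typescript
// The OCGM two-by-two: items (Objects, Containers) and actions
// (Manipulations, Gestures), each with a base and a complex unit.
// Illustrative names only -- not an official spec.

// ITEMS
interface OcgmObject {            // base unit: the smallest quantifiable bit
  id: string;
  represents: string;             // the thing or action it stands for
}

interface Container {             // complex unit: a relationship between objects
  relation: string;               // e.g. "tagged:vacation" or "orbit-menu"
  members: OcgmObject[];
}

// ACTIONS
interface Manipulation {          // base unit: direct action, immediate feedback
  target: OcgmObject;
  apply(dx: number, dy: number): void;
}

interface Gesture {               // complex unit: indirect, recognized as a whole
  name: string;                   // e.g. "draw-question-mark"
  recognize(sequence: Manipulation[]): boolean;
}
```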
OBJECTS – Objects are any type of unit, or part of a unit, on your interface. This is just a way to define your smallest quantifiable bit. It could take the shape of a piece of album art, a picture, an icon, a ball, or an aura of some kind. The important part is that each of these objects represents something, or some action, in the system. The definition is meant to be all-encompassing because we do not want to limit designers or developers when they sit down to brainstorm ideas. If you tell them Icons and Windows… they will design icons and windows. Let them think outside the box when they develop.
CONTAINERS – Containers are a way to discuss the relationships between objects. A container does not have to take the form of a physical box or window. It takes the shape of a relationship between objects, managed through your interface in whatever way you see fit. It could be five balls circling a larger ball, forming a sort of menu. It could be a simple tagging system, where a gesture reveals the tagged objects and therefore reveals the container. Relationships are key to managing objects, and understanding how objects will interact with each other is key to your design.
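Here is a quick sketch of that tagging idea: a container that is nothing but a relationship, with no box or window anywhere. The class and field names are hypothetical:

```typescript
// A container as a relationship, not a physical box: the "container"
// is simply the set of objects that share a tag. Hypothetical sketch;
// the class and field names are mine.
type Obj = { id: string; tags: Set<string> };

class TagContainer {
  constructor(private tag: string) {}

  // Revealing the container (say, via a gesture) just means filtering
  // the scene for objects that are in this relationship.
  reveal(scene: Obj[]): Obj[] {
    return scene.filter(o => o.tags.has(this.tag));
  }
}

// Three photos, two of them tagged "vacation" -- that shared tag forms
// a container, with no window or frame anywhere on screen.
const photos: Obj[] = [
  { id: "beach", tags: new Set(["vacation"]) },
  { id: "receipt", tags: new Set(["work"]) },
  { id: "sunset", tags: new Set(["vacation"]) },
];
console.log(new TagContainer("vacation").reveal(photos).map(o => o.id));
// -> ["beach", "sunset"]
```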
MANIPULATIONS and GESTURES – The distinction between these two is absolutely crucial when designing the user experience; understanding the difference will make or break it. Manipulations are direct action and reaction on your interface: the user manipulates something, gets immediate feedback, and understands the result of the action. They are simple, easy to understand, somewhat intuitive, and graceful. Gestures are complex actions that are indirect. They can be harmful (format a drive), they are usually not intuitive (draw a ? for help), and they are not geared toward the first-run user experience. So let’s break this down a step further.
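In code, the split looks something like this: a manipulation resolves on every input event with immediate feedback, while a gesture resolves only after a finished stroke is examined as a whole. A minimal sketch, with the event and function names assumed for illustration:

```typescript
type Point = { x: number; y: number };

// MANIPULATION: direct and immediate. Every move event produces
// visible feedback right away -- the object tracks the finger.
function onDragMove(objectPos: Point, delta: Point): Point {
  return { x: objectPos.x + delta.x, y: objectPos.y + delta.y };
}

// GESTURE: indirect and deferred. Nothing happens until the whole
// stroke is examined and recognized as, say, a question mark.
function onStrokeEnd(stroke: Point[]): string | null {
  return looksLikeQuestionMark(stroke) ? "show-help" : null;
}

function looksLikeQuestionMark(stroke: Point[]): boolean {
  // Placeholder for a real template matcher; actual shape
  // recognition is beyond this sketch.
  return false;
}
```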
Why does the designer or developer need to understand the difference and design accordingly? Because manipulations are the easy way out. They can be your absolute best friend, and they can cover most of the common daily tasks a user will need. They are designed for beginners, for intermediate users, and for accidental activations. Accidental activations!! When designing your interface, always design for accidental activations, and always make sure an accident maps to a manipulation. Never let an accident register as a gesture! On a Surface unit, when I brush my sleeve across the screen (which happens constantly), a “left swipe” should never delete a file. This is the core of understanding the difference.
If you want to start the self-destruct sequence on a ship, you don’t merely press a button. You perform a gesture: several manipulations in a sequence that is recognized only at the end. Only when the order is maintained and completed does the gesture get recognized and the action performed.
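The self-destruct example maps naturally onto a small state machine: each manipulation in the sequence advances the state, any wrong step resets it, and the gesture (and its dangerous action) fires only when the final state is reached. A sketch, with the step names invented for illustration:

```typescript
// A gesture as an ordered sequence of manipulations. Nothing dangerous
// can happen on a single accidental touch: a sleeve brushing the screen
// produces one stray step at most, which the next reset simply discards.
class SequenceGesture {
  private step = 0;

  constructor(
    private sequence: string[],          // required order of manipulations
    private onRecognized: () => void,    // fired only at the very end
  ) {}

  feed(manipulation: string): void {
    if (manipulation === this.sequence[this.step]) {
      this.step++;
      if (this.step === this.sequence.length) {
        this.step = 0;
        this.onRecognized();             // order maintained and accomplished
      }
    } else {
      this.step = 0;                     // any wrong step cancels the gesture
    }
  }
}

// Usage: a made-up self-destruct sequence.
const selfDestruct = new SequenceGesture(
  ["turn-key", "enter-code", "hold-both-buttons"],
  () => console.log("Self-destruct armed."),
);
selfDestruct.feed("turn-key");
selfDestruct.feed("enter-code");
selfDestruct.feed("hold-both-buttons"); // only now does the action fire
```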
OK, that’s enough explaining for now. Let me respond to a few blog posts on the subject. I will dissect the arguments a little to pull out the main points.
Some great critical thinking over at clevermonkey (we need more of this):
… I’m sorry to say that OCGM fails both of my tests. It is at once non-inclusive of the three primary technologies I outlined as well as being too ambiguous to be useful. In addition, the terms used in the acronym overlap so much as to be redundant. …
The first test is…
- Touch UI
- Voice UI
- Gestural UI
- Tangible UI
- Organic UI
- Augmented Reality
- Automatic Identification [via clevermonkey]
Richard is saying that OCGM does not encompass the first three of his seven technologies. The first problem I have with this is that his list is not a list of NUI devices. It is a mixture of interface types (OUI), interaction types (GUI), experience types (Augmented Reality), and identification methods (Automatic Identification). I don’t see a relationship between these items other than that they are new and could perhaps be governed by a non-standard UI. That is the case with most devices, though, isn’t it? Let me give a quick sentence on some of the farther-reaching entries.
OUI – a non-symmetrical, bendable, or wearable interface. The determining factor is how it is displayed to the user. The actual interface takes the shape of its viewable area; the term is just a way to describe non-monitor interfaces. [Examples: bracelets with an LCD around the band, shirts that display your vital signs, a small LCD that bends around a table leg and shows scores or radio for your favorite show or game.]
Automatic Identification – a method of identifying a user, an action, or another system by any means necessary. It could be authentication, recognition for home entertainment, or DNA-keyed weapons [District 9 killed it!].
Augmented Reality – superimposing the output of a system onto your life through vision, motion, or some means not yet developed. [Yelp on your phone while looking through the camera; a HUD on a fighter jet superimposing targets on the screen.]
My Answer: The first 3 all fit very well into the OCGM acronym.
Voice – Voice is a complex system. Of the few dozen or so pure voice systems out there, I have played with most. The latest and most advanced one came from MSN Auto: a purely voice-driven menu system for a car. It contains OBJECTS [people, phone numbers, favorites, places, presets], CONTAINERS [groups of contacts such as Work or Home, groups of places such as frequently shopped locations], MANIPULATIONS [“Volume up!” “Call…”], and GESTURES [“Emergency!” automatically performs a complex manipulation {“dial… 9… 1… 1…”}, or presets: “Becky!” automatically performs whatever action you set for the Becky command].
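Translated into code, that voice mapping might look like this. The command strings mirror the examples above, but the structure is my own guess at a sketch, not MSN Auto’s actual implementation:

```typescript
// OCGM in a voice-only interface: objects and containers are data,
// manipulations act immediately, gestures expand into sequences.
// All numbers and commands here are made up for illustration.
const objects: Record<string, string> = {       // OBJECTS: people, numbers
  becky: "555-0142",
  home: "555-0100",
};
const containers: Record<string, string[]> = {  // CONTAINERS: groups of objects
  work: ["becky"],
  family: ["home"],
};

function handleUtterance(utterance: string): string[] {
  switch (utterance) {
    // MANIPULATIONS: one utterance, one direct and immediate action.
    case "volume up":
      return ["volume +1"];
    case "call becky":
      return [`dial ${objects["becky"]}`];
    // A container in action: one name standing for a group of objects.
    case "read work messages":
      return containers["work"].map(who => `read messages from ${who}`);
    // GESTURES: one utterance triggering a complex, indirect sequence.
    case "emergency":
      return ["dial 9", "dial 1", "dial 1", "connect"];
    default:
      return ["prompt: say that again?"];
  }
}

console.log(handleUtterance("emergency"));
// -> ["dial 9", "dial 1", "dial 1", "connect"]
```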
The OCGM system also works very well with most languages, and especially well with Bill Buxton’s paper on the Three-State Model of graphical input. If you haven’t read that, read it now or put down your pen forever!
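For anyone who wants the gist before reading the paper: Buxton models input devices as moving between three states, roughly as sketched below. This is my paraphrase of the model, not code from the paper:

```typescript
// Buxton's Three-State Model of graphical input, paraphrased.
// State 0: out of range; State 1: tracking; State 2: dragging.
enum InputState { OutOfRange = 0, Tracking = 1, Dragging = 2 }

function transition(state: InputState, event: string): InputState {
  switch (state) {
    case InputState.OutOfRange:
      // A stylus entering sensing range starts tracking; a bare touch
      // screen jumps straight to "dragging" on contact, skipping state 1.
      if (event === "enter-range") return InputState.Tracking;
      if (event === "touch-down") return InputState.Dragging;
      break;
    case InputState.Tracking:
      if (event === "button-down") return InputState.Dragging;
      if (event === "leave-range") return InputState.OutOfRange;
      break;
    case InputState.Dragging:
      if (event === "button-up") return InputState.Tracking;
      if (event === "touch-up") return InputState.OutOfRange;
      break;
  }
  return state; // unrecognized events leave the state unchanged
}
```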
Touch UI – This absolutely fits the model, because touch is inherently part of its birth. It contains OBJECTS [pictures, icons, floating buttons, small song notes that represent songs], CONTAINERS [groups of pictures in a Pod or Bar {PS: I was published at the 2008 IEEE Tabletop conference for my creation of a selector system for the Pod on Surface}, playlists of notes, tagging multiple photos], MANIPULATIONS [touch a ball and move it across the screen], and GESTURES [right now this is slim on Surface, but there are several in the SDK, such as draw a ? for help or draw an X to delete].
GESTURAL UI – I’m not sure what you mean by this one. Do you mean spatial? If so, I’m not really sure what I can disclose about NATAL, but I can assure you that all four items are covered.
The second point I see from Richard is this one:
Windows, Icons, Menus, and Pointer are all pretty clear. An acronym for NUI should be equally as clear or it’s not useful. [via clevermonkey]
I wholeheartedly disagree with this. In fact, we want to go in the opposite direction. We do not want to spell out all the details of the interface; we want to empower designers to design for their experience. We want to arm the designers of the future with the cornerstones of good design and let them go wild! It’s no secret that I am not a big fan of UI design patterns. For the most part, I think they are a waste of talent. When designers could and should be thinking outside the typical experience, they lean on a “crutch” called a UI pattern. Those patterns were developed by city engineers because there were only so many ways to put three buildings on a city block. That’s where they came from, and that’s where they need to stay!
This acronym is intentionally vague, describing only the bare mechanics of a future-driven interface. The reasoning is simple: it empowers the designers! Designers need room to breathe when they sit down to solve their next problem. By giving them only the mechanics, we let them design the experience.
That’s all for round 1! I welcome emails or comments for tomorrow’s battle. With this, I leave you one last question: