Archives

All posts for the month December, 2009

In this post I’m going to explain some of the concepts and give a few examples of each. For the most part I will be responding to other posts I have seen on the issue. I will not be explaining it 100% because I want you to let your mind roam and explore this area. Think of this as the Socratic Method for promoting and understanding this concept.

OCGM

This concept for promoting the design and development of modern interfaces is not an end-all, be-all solution. It is just another step in the discussion of design.

When discussing interface or system design, we need a way to explain it to the non-design person so they understand the general concepts. An acronym like this is useful when discussing what type of interface your system is going to have. It lays out the cornerstones of development quickly and conveys the general ‘feel’ of the end result. Typically, when you hear what type of interface you are going to design, you hear “WIMP,” which stands for Windows, Icons, Menus, and Pointing Devices. This tells the developers in short, quick fashion exactly what to expect, and it is still being used to this day.

In our generation, systems and interfaces are growing by leaps and bounds. The easiest way to communicate that to a developer, stakeholder, or another designer is by using the new acronym, OCGM.

“What type of interface is it going to be Ron? WIMP?”

“Actually, no, it’s going to be OCGM!”

“Ummm, huh? Like… we already have these templates made out with buttons and sizes… wait… what the hell is OCGM?”

OCGM breaks down the basis of all future interfaces into two categories: one for items and one for actions. Everything on an interface that you will interact with is going to be one or the other. Each of those two categories is then broken down further into a base unit and a complex unit. With those four items, you can begin to discuss exactly how they will come into play in your future interface.

OBJECTS – Objects are any type of unit, or part of a unit, on your interface. This is just a way to define your smallest quantifiable bit. It could take the shape of a piece of album art, a picture, an icon, a ball, or an aura of some kind. The important part is that each of these objects represents something, or some action, in the system. This is meant to be all-encompassing because we do not want to limit designers or developers when they sit down to brainstorm ideas. If you tell them Icons and Windows… they will design Icons and Windows. Let them think outside the box when they develop.

CONTAINERS – Containers are a way to discuss the relationships between objects. Containers do not have to take the form of an actual physical box or window. They take the shape of a relationship between objects that you manage through your interface in whatever way you see fit. They could be 5 balls circled around a larger ball, which forms a sort of menu. They could be a simple tagging system, where a gesture reveals the tagged objects and therefore reveals the container. Relationships are how you manage objects, and understanding how they will interact with each other is key to your design.

For further thought, if you dare… Containers do not necessarily have to contain just objects, unless you consider gestures and manipulations to be objects as well. Taking that to the next step, we say that the key to managing gestures is the way you handle their relationships with each other… Yes! Exactly. Now we say that the interface is made up of objects that are manipulations and gestures, and that managing the containers that envelop them is the key to it all! Now you are on to something! If you understand this concept, then you are well on your way to understanding the key to OCGM and why it’s so important.

MANIPULATIONS and GESTURES are absolutely crucial, both in how they differ from each other and in their significance when designing the user experience. Understanding the difference between these two interactions will make or break the user experience. Manipulations are direct action and reaction on your interface. The user manipulates something, gets immediate feedback, and understands the result of their action. These are simple, easy to understand, somewhat intuitive, and graceful. Gestures are complex actions that are indirect. They can be harmful (format a drive), they are usually not intuitive (draw a ? for help), and they are not geared towards the first user experience. So let’s break this down a step further.

Why does the designer or developer need to understand the difference and design accordingly? Because manipulations are the easy way out. They can be your absolute best friend, and they can perform most of the common daily tasks the user will need. They are designed for beginners, intermediate users, and for accidental activations. Accidental activations!! When designing your interface, always design for accidental activations and always gear them towards a manipulation. Never allow them to trigger a gesture by accident! While using a Surface unit, I sometimes brush my sleeve across the screen (which happens frequently); you should never design a “left swipe” to delete a file. This is the core of understanding the difference.

If you want to start the self-destruct on a ship, you don’t merely press a button. You have to perform a gesture: several manipulations in a sequence that are recognized only at the end of the sequence. Only after the sequence is completed in order does the gesture get recognized and the action performed.
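To make that concrete, here is a minimal sketch of the split. This is my own illustration, not anything from a shipping SDK; the event names and the “self destruct” sequence are hypothetical. A manipulation acts immediately, while the gesture fires only after its full sequence of manipulations completes in order.

```python
# A minimal sketch (hypothetical names throughout) of the manipulation/gesture
# split: a gesture is recognized only after a full, in-order sequence of
# manipulations; a stray input merely resets the sequence.

class GestureRecognizer:
    """Recognizes a gesture only when its manipulation sequence completes in order."""

    def __init__(self, name, sequence, on_complete):
        self.name = name
        self.sequence = sequence        # the ordered manipulations that form the gesture
        self.on_complete = on_complete  # fired only when the whole sequence matches
        self.position = 0               # progress through the sequence

    def feed(self, manipulation):
        if manipulation == self.sequence[self.position]:
            self.position += 1
            if self.position == len(self.sequence):
                self.position = 0
                self.on_complete()      # the gesture is recognized only here
        else:
            self.position = 0           # a stray input just resets; nothing fires


recognizer = GestureRecognizer(
    "self_destruct",
    ["turn_key", "press_red_button", "confirm_hold"],
    lambda: print("Self destruct initiated!"),
)

# A sleeve brushing the screen is a stray manipulation: the sequence resets
# and nothing destructive happens.
recognizer.feed("swipe_left")

# Only the full, in-order sequence triggers the action.
for step in ["turn_key", "press_red_button", "confirm_hold"]:
    recognizer.feed(step)
```

Notice how the accidental activation rule falls out of the structure: a single stray manipulation can never complete a gesture, so it can never do harm.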

Ok, that’s enough explaining for now. Let me respond to a few blog posts on the subject. I will dissect the arguments a little to pull out the key points.

Some great critical thinking over at the clevermonkey. (we need more of this)

… I’m sorry to say that OCGM fails both of my tests. It is at once non-inclusive of the three primary technologies I outlined as well as being to ambiguous to be useful. In addition, the terms used in the acronym overlap so much as to be redundant. ..

The first test is…

  • Touch UI
  • Voice UI
  • Gestural UI
  • Tangible UI
  • Organic UI
  • Augmented Reality
  • Automatic Identification [via clevermonkey]

Organic UI on the side of a Coke Can, but can it remove the sugar? That's my question.

Richard is saying that OCGM does not encompass the first three of his 7 technologies. The first problem I have with this is that the list is not a list of NUI devices. It is a mixture of interface types (OUI), interaction types (GUI), experience types (Augmented Reality), and identification methods (Automatic Identification). I don’t see a relationship between these items other than that they are new and could perhaps be governed by a non-standard UI. That is the case with most devices though, isn’t it? Let me give a quick sentence on some of the farther-reaching items.

OUI – A non-symmetrical, bendable, or wearable interface. The determining factor is how it is displayed to the user. The actual interface will take the shape of its viewable area, but this is just a way to describe non-monitor types of interfaces. [Examples: bracelets that have an LCD around the band, shirts that display your vital signs, a small LCD that bends around a table leg and gives you scores/radio for your favorite show or game.]

Automatic Identification – This is a method to identify a user, an action, or another system by any means necessary. It could be authentication, recognition for home entertainment, or DNA for weapons [District 9 killed!]

The F-35 Demon Helmet is Augmented Reality to the extreme

Augmented Reality – superimposing the results of a system onto your life through vision, motion, or some other means not developed yet. [Yelp on your phone while looking through the camera, a HUD on a fighter jet superimposing targets on the screen]

My Answer: The first 3 all fit very well into the OCGM acronym.

Voice – Voice is a complex system. Of the few dozen or so pure voice systems out there, I have played with most of them. The latest and most advanced one comes from MSN Auto: a purely voice-driven menu system for a car. It contains OBJECTS [people, phone numbers, favorites, places, presets], CONTAINERS [groups of contacts such as Work or Home, groups of places such as frequently shopped locations], MANIPULATIONS [“Volume Up!” “Call ….. “], and GESTURES [“Emergency!” automatically performs a complex manipulation {“dial…. 9…..1…….1……. “}, or presets: “Becky!” automatically performs whatever action you set for the Becky command].
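As a toy sketch of that decomposition (this is NOT the actual MSN Auto system; every name, command, and number below is invented for illustration), a voice interface maps cleanly onto the four units:

```python
# Hypothetical voice interface broken down into OCGM. All data is made up.

objects = {"Becky": "555-0101", "Work": "555-0199"}   # OBJECTS: contacts and numbers
containers = {"Home": ["Becky"], "Work": ["Work"]}    # CONTAINERS: groups of contacts

def manipulate(command):
    """MANIPULATIONS: direct commands with immediate, expected feedback."""
    if command == "Volume Up!":
        print("volume raised")
    elif command.startswith("Call "):
        name = command[len("Call "):]
        print(f"dialing {objects[name]}")

def open_container(group):
    """CONTAINERS in action: reveal the related objects in a group."""
    for name in containers[group]:
        print(f"{name}: {objects[name]}")

def gesture(command):
    """GESTURES: one learned utterance that expands into a complex sequence."""
    if command == "Emergency!":
        for digit in "911":   # the gesture performs the manipulations for you
            print(f"dial... {digit}")

manipulate("Volume Up!")
manipulate("Call Becky")
open_container("Home")
gesture("Emergency!")
```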

The OCGM system works very well with most languages, and especially well with Bill Buxton’s paper on the three-state model of graphical input. If anyone hasn’t read that, you should read it now or put down your pen forever!

Touch UI – This absolutely fits the model because touch was inherently part of the model’s birth. It contains OBJECTS [pictures, icons, floating buttons, small song notes that represent songs], CONTAINERS [groups of pictures in a Pod or Bar {PS: I was published on my creation of a selector system for the Pod in Surface at the 2008 IEEE Tabletop Conference}, playlists of notes, tagging multiple photos], MANIPULATIONS [touch the ball and move it across the screen], and GESTURES [right now these are slim on the Surface, but there are several in the SDK, such as draw a ? for help or draw an X for delete].

Project Natal. "Falcon Punch!, Body Blow, Body Blow... FINISH HIM!"

Project Natal. "Falcon Punch! Body Blow! Body Blow! FINISH HIM!"

GESTURAL UI – I’m not sure what you mean by this one. Do you mean SPATIAL? If so, I’m not really sure what I can disclose about NATAL, but I can assure you that all four items are covered.

The second point I see from Richard is this one:

Windows, Icons, Menus, and Pointer are all pretty clear. An acronym for NUI should be equally as clear or its not useful. [via clevermonkey]

I wholeheartedly disagree with this. In fact, we want to go in the opposite direction. We do not want to spell out all the details of interfaces; we want to empower designers to design for their experience. We want to arm the designers of the future with the cornerstones of good design and let them go wild! It’s no secret that I am not a big fan of UI DESIGN PATTERNS. I think that, for the most part, they are a waste of talent. When designers could and should be thinking outside the typical experience, they rely on a “crutch” called a UI pattern. Those patterns were developed by city engineers because there were only so many ways you can put 3 buildings on a city block. That’s where they came from and that’s where they need to stay!

This acronym is intentionally vague, discussing only the bare mechanics of a future-driven interface. The reasoning for this is simple: it’s to empower the designers! Designers need room to breathe when sitting down to solve their next problem. By giving only the mechanics, we allow the designers to design the experience.

That’s all for round 1! I welcome emails or comments for tomorrow’s battle. With this, I leave you one last question:

WIMPy vs OCCAM. Is there really a choice? I mean, OCCAM got to wear a wreath on his head every day. That is awesome!

WIMP is the current acronym for the Windows user experience. It stands for Windows, Icons, Menus, Pointing Devices.

In human–computer interaction, WIMP stands for “window, icon, menu, pointing device”, denoting a style of interaction using these elements. It was coined by Merzouga Wilberts in 1980.[1] Although its usage has fallen out of favor, it is often used as an approximate synonym of “GUI”. WIMP interaction was developed at Xerox PARC (see Xerox Alto, developed in 1973) and “popularized by the Macintosh computer in 1984”, where the concepts of the “menu bar” and extended window management were added.[2] [via Wikipedia]

The WIMP interface is a slowly dying breed as our demands on user experience and the demands of users keep inflating. It’s time to start thinking in a new direction, a direction that sheds many of the harnesses of the old acronym and begins to explain the building blocks of the future. It will be simple, concise, and cover all of the bases we need. There is no need to rely on pointing devices, menus, or windows anymore. It’s time to let the experience be the interface and the user be in total control. The interface will begin to blend in with the experience, and the experience will be the interface.

I have spent several months thinking about this and trying to solidify something presentable. This is the fruit of my labor. I present to you:

OCGM

Objects

Objects are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface.

Containers

Containers will be the “grouping” of the objects. This can manifest itself in whatever way the system sees fit, to better organize objects or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of presentation method or relationship grouping, as seen fit.

Gestures

I went into detail about the differences between Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function only after their completion and recognition by the system. This is an indirect action on the system because the gesture needs to be completed before the system will react to it.

Manipulations

Manipulations are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent.

This acronym is short, concise, and to the point. It contains all the elements the modern designer will ever need. In discussing this acronym with someone yesterday, he asked, “Why do you separate out manipulations and gestures?” This is a good question, and it lies at the very core of modern design. These are the two basic interactions needed for a NUI, Touch, or even a Windows-based system. The first is easy, intuitive, and usually wrapped in a metaphor of some sort. The second is complex, learned, non-physical, and supernatural. Understanding these two types of interactions is core to designing something for the modern world.

We have objects, which can be grouped into containers. We have manipulations, which can be contained inside of a gesture. The simplicity is liberating.
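If it helps to see the four units side by side, here is a minimal sketch of OCGM as types, assuming nothing beyond the definitions above. The names and fields are illustrative, not a spec.

```python
# A minimal, hypothetical sketch of the four OCGM units as types.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Obj:
    """OBJECTS: the smallest quantifiable unit (album art, an icon, a ball, an aura)."""
    name: str

@dataclass
class Container:
    """CONTAINERS: a relationship between objects (a tag, an orbit of balls, a playlist)."""
    relationship: str
    members: List[Obj] = field(default_factory=list)

@dataclass
class Manipulation:
    """MANIPULATIONS: direct, immediate, non-destructive action on an object or container."""
    name: str
    action: Callable[[], None]

@dataclass
class Gesture:
    """GESTURES: a sequence of manipulations, recognized only on completion."""
    steps: List[Manipulation]

# Objects grouped into a container; manipulations contained inside a gesture.
vacation = Container("tagged: vacation", [Obj("photo1"), Obj("photo2")])
confirm = Manipulation("press_and_hold", lambda: print("held"))
delete_all = Gesture([Manipulation("draw_x", lambda: print("X drawn")), confirm])
```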

By a lucky coincidence, the acronym also bears a very similar pronunciation and essence to Occam’s Razor: the simplest answer tends to be the right one.

Occam’s razor (or Ockham’s razor[1]), entia non sunt multiplicanda praeter necessitatem, is the principle that “entities must not be multiplied beyond necessity” and the conclusion thereof, that the simplest explanation or strategy tends to be the best one. The principle is attributed to 14th-century English logician, theologian and Franciscan friar, William of Ockham. Occam’s razor may be alternatively phrased as pluralitas non est ponenda sine necessitate (“plurality should not be posited without necessity”).[2] [via Wikipedia]

I hope you love this acronym as much as I do. Thanks for reading and feel free to comment.

Special thanks to Josh Blake over at Deconstructing the NUI for helping me hammer this out.

I’m going to start a new regular feature here on my blog that I have been banging around in my head for a few months. What I’m going to do is give you a breakdown and some discussion points about interfaces that I see in movies. I’m a big movie buff and am always looking for details that have to do with design.

One thing I always enjoy when seeing a movie with other designers is the discussion afterward. Talking about the different things we each captured and then having some in-depth critique is always fun. The format will probably change with each new movie, but I want to keep it digestible. Also, feel free to point out any interfaces that I miss!

Movie: Code 46 (2003)

[It contains Organic User Interfaces, transparent monitors, and a futuristic workstation with a touch pad]

IMDB Link

This is a futuristic movie, so there are several experiences that could be captured. I want to capture the two most interesting ones.

The first is the Digital Photo Album. It’s a normal pocket-sized photo album, but instead of 4×6 pictures, it has 4×6 bendable LCD screens. On these screens it plays home movies that have been recorded. This is a great concept because it uses the already current mental model of pocket-sized albums to store memories. This would be a great leap into the household.

The other interesting thing is that the interface is nothing more than an onscreen “jog” mechanism. The user rewinds and fast-forwards by moving the thumb north and south on the jog. Pressing the center pauses the movie. Great device and very understated, which as you know I like. 🙂

Summary

For further reading, and for classification, this interface would be called an Organic User Interface, mainly because the interface bends into shapes other than flat. There are some very interesting studies and prototypes around this model. If you are feeling particularly brave, you should head over to the Organic User Interface site (a spinoff of the ACM magazine), which has a ton of information, videos, and papers published on the subject. Of particular note in the actual implementation are the speed of the video and the lack of any visible battery pack. These futuristic things are what will really start pushing the need for modern user interfaces.

As we begin to blend the hardware and mechanics of devices into the background and out of view, we also need to start hiding interfaces as well.

The second piece is a wonderful setup for a futuristic workstation. This is their vision of the modern workstation. It consists of multiple monitors, above eye level on opposite walls, and a controlling device near the hand-rest area. Of course it’s a natural interface due to the lack of a mouse and traditional keyboard, but I also like what they did with the monitor position (above eye level, which prevents tiring of the eyes). I also like that they blended the controller and monitors in with the environment. The monitors are transparent when they are not on, and the small keyboard-like controller is small, clear, and flat, almost concealing itself when not in use.

Transparent Monitors are just around the corner! The recent work over at Purdue into optically transparent electronics shows a lot of promise.

The development of mechanically flexible and/or optically transparent electronics could enable next-generation electronics technologies, which would be easy-to-read, light-weight, unbreakable, transparent, and flexible. Potential applications could include transparent monitors, heads-up displays, and conformable products. Recent reports have demonstrated transparent thin film transistors (TFTs) using channels consisting of semiconductor nanowires (ZnO, SnO2, or In2O3) and random networks of single-walled carbon nanotubes (SWNTs).[1,2] [Source]

Interesting update: With everyone heading in a more “green” design direction, most never took into account that new LED traffic lights do not generate enough heat to melt snow (source). This is such an obvious problem that I really doubt they had a professional experience designer involved. This sounds like the kind of problem that arises when they try to “cut” costs by eliminating a designer.

I’m taking a break from writing my book to write a bit more about current happenings, so expect to see more blog entries. In this entry I’m going to do a cursory overview of a design winner and the thought process you should take when undertaking a design.

A few days ago, the Red Dot Design Award winners were announced. This is always such a great competition because the participants are so varied and different. The sky is the limit; it’s wonderful! This year there were 12,000 entries from 60 countries. Of the winners, one entry has gained some traction: progress-indicating traffic lights.

Progress Indicator Lights

I like this design! Anyone who knows me knows that “Wait UI” (e.g., Press and Hold) is the bane of my existence [a constant source of irritation]. Making the user wait for any period of time is a bad experience. We should challenge designers to come up with things that are not Wait UI. On the other hand, there are examples like this, where waiting IS the UI. The users have to wait; now it’s time to make it more intuitive. Let’s break this down into the psychology of the problem and the mechanical part of the problem.

Occupied time feels shorter than unoccupied time or Queuing Psychology 101 (the UX)

“…a day full of waiting, of unsatisfied desire for change, will seem a small eternity.” —William James, 1891

MIT’s Engineering Systems Division has an ace in the hole, so to speak, when talking about queuing psychology. Dr. Larson, affectionately referred to as “Dr. Queue,” has been studying the effects of queuing for more than twenty years. The team over at ESD came up with a few things that were very interesting and solved a few pain points for Disney and theme parks in general. If you have ever been to Disney and gone on any of the rides, you know the lines are insane. The lines can be anywhere from 15 minutes to 2 hours per ride. The challenge was to find a way to make this necessary evil more fun. They had a few great ideas that involved a wonderful use of a “touch wall” and other short interactive games.

Short interactive games while waiting in line at Disney

Progress-indicating lights have existed for more than 70 years (history)

When researching a design, we have to lean on what Bill Buxton always says about “new” designs: there rarely are any! They are just recirculations of old designs that we repurpose for our current needs. This design is no exception. Marshalite traffic signals have been around in Australia since 1936 and still exist in a few places.

Marshalite - an analog version!

These lights already exist in the world, so what research can we gather? (current UX research)

So let’s look around and try to find some pain points in the current design. Progress indicator lights already exist in a few countries, and obviously people are going to have some thoughts on them. In my very informal search to see what people think about them now, I found a few quotes.

They already have traffic lights and padestrian crossings in Manila with timers on them. As far as I can tell they don’t really help there.

… Delhi/Mumbai. … the last 5 seconds before the light turns green resemble the start of a NASCAR race. -both via Neatorama

The point here is that they also resemble racing trees and will therefore push drivers to anticipate the light, which may cause accidents. This is a problem we should be aware of.

Racing Tree Lights

The first thing that comes to mind, after everything we have seen so far, is that one solution will not solve all the problems. We are going to need a way for it to be configured at installation. We need to let the city engineers do the final stage of the design so they can customize it to fit their needs.

Why do we need to do this? (the greater design tenet with UX in mind)

The problem is that gasoline is getting more expensive and more scarce. If we continue as we are now, we will destroy the environment around us. We need to think green. You should always think low impact in your design solutions because it means they are less expensive in the long run. The more the design saves the company, the more apt they are to institute it. So let’s look at some of the current research on hybrids and gasoline.

Question: Is it better to turn your car off for a 30 second stop or to leave it running?

Answer: Turning it off saves gas but puts more wear and tear on your vehicle (starter, crankshaft, etc.). Leaving it on burns more gas but is easier on your vehicle. –(1995) paraphrased from The Car Guys.

How does that compare to the average stop?

How long does the average American spend waiting at a red light?

Answer: 3 min. and 18 sec. via – WikiAnswers

There seems to be a gap. What about current technology with Hybrids?

Comparison of what Hybrids do

So it seems that all manner of hybrids shut their engines off at stoplights.

Given all that we have learned, what changes would we make? (UX Design)

It seems that, really, the only glaring thing we need to take into account is the final 10 seconds, when drivers would start to rev their engines and get ready for the green light. This revving would eliminate any of the benefits of shutting the engines off in the first place. The other piece we need to keep in mind is to make it configurable at the time of installation. This would be very helpful for tuning and further refinement after the install.
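As a sketch of what “configurable at installation” might look like (every field and number here is hypothetical, with the red duration seeded from the 3 min. 18 sec. average above), the indicator could simply go dark for the final seconds so it never becomes a racing tree:

```python
# A hypothetical installation-time configuration; nothing here is taken from
# the actual winning entry.

from dataclasses import dataclass

@dataclass
class ProgressLightConfig:
    show_countdown: bool = True
    hide_final_seconds: int = 10   # blank the indicator before green
    red_duration_s: int = 198      # ~3 min. 18 sec. average, tunable per site

def indicator_visible(config: ProgressLightConfig, seconds_remaining: int) -> bool:
    """Show progress through the red, but go dark near the end so drivers
    can't treat the light like a racing tree."""
    return config.show_countdown and seconds_remaining > config.hide_final_seconds

cfg = ProgressLightConfig()
print(indicator_visible(cfg, 60))  # True: plenty of red left, show progress
print(indicator_visible(cfg, 5))   # False: final stretch, no NASCAR start
```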

Here is the current design that won the competition.

Design Winner

And here is a blank slate for you to test out your designs.

Blank Traffic Light

You can download this Illustrator CS4 file here. If you happen to be using another type of program, I also uploaded the EPS file, and you can get it here. I created the outer circle in Live Paint, so all you need to do is grab the paint bucket tool and drop whatever color you want in there.

Let’s see your designs! Send me your concoctions and I’ll post them here. Also write a bit about your rationale and reasoning for designing it your way.