Recently I have learned that several non-designers read my blog out of interest in the field, to get better at the experience end of things, or simply to learn more about the process we go through. In this article I want to give a skimming overview of one of the great ideas in designing for social experiences: one thing you should understand before you begin the design.

I usually begin with definitions to establish a common vocabulary. This article, however, begins with two stories.

1. You have a ton of monkeys

You are a monkey pet owner. You have several monkeys. When you begin adopting monkeys, you give them names and remember them by their attributes. As you amass your monkey army, some of the monkeys do not interact with you that often and therefore fade to the back of your mind. The ones who interact with you the most are the closest to you. You recognize them from a distance, and you know their names as well as you know what day it is. Those monkeys that are close to you, your intimate circle, are what we will call your “Monkeysphere.”

The psychology question is: at what point will you start forgetting their names? When will you begin to think of them as “the other monkeys” or “the monkeys that don’t come to me when it’s feeding time”? What’s happening is that the monkeys are being established in your mind in two initial sets. You have a monkeysphere and an extended circle of monkeys. The extended monkeys, even though they are nearby, are more distant in your emotional response and interactions with them. As for the question “When will you start forgetting their names?”, several studies have been based on that exact question, and we think we know the answer.

2. You are a prehistoric man that lives in a tribe

As a cave-person you understand that living and moving about the bountiful plains in a group is much safer. You are a member of a tribe for support and safety. In this tribe you also have your immediate family and close friends. You have daily interactions with your family and friends. That group we will call your intimate network.

The intimate network you have around you is a small subset of your tribe. You have daily close interactions with them. You might groom each other, talk about the day’s events, share food, protect them with your life if need be, and care for them when they are sick. This group is small and tightly knit. One of the reasons this group is small is that it is made up of people who are like you in some manner, or who are relatives, people you feel a connection with. What keeps them in your intimate network is the constant interaction with them. You are always reminded of their presence and well-being. Much like the monkey pet owner, the more constant your interactions with them, the closer they get to your emotional core.

You also look to them as your own personal support network. You will ask for their opinions and affirmations, and look to them for guidance. You may borrow things from them knowing they will say yes, and that you will return them in good order. You take a less critical eye to what they say, and they do likewise with you.

Your extended network are those in your tribe. You have interactions with them, but not necessarily daily. You conduct or attend ceremonies with them, you might support them from a distance by donating food to the group, and you may attend tribal council meetings to help shape or establish rules for all to follow. Your extended network is important to you, but they are not crucial. You may look at them with a slightly more critical eye than you do to your intimate network, but not like a stranger.

The interesting thing is that we believe there is a set number of people you can have in your intimate network and your extended network. That number is based on brain size and species. How big is your tribe, and how big is your intimate network? We think we have a good idea of that answer as well, because it is the same as in the first story. The answer is 12 for intimate networks, and 150 for extended networks.

Dunbar’s Number

Dunbar’s number is a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is, and how each person relates to every other person.[1] Proponents assert that numbers larger than this generally require more restricted rules, laws, and enforced norms to maintain a stable, cohesive group. No precise value has been proposed for Dunbar’s number, but a commonly cited approximation is 150.

Primatologists have noted that, due to their highly social nature, non-human primates have to maintain personal contact with the other members of their social group, usually through grooming. Such social groups function as protective cliques within the physical groups in which the primates live. The number of social group members a primate can track appears to be limited by the volume of the neocortex region of their brain. This suggests that there is a species-specific index of the social group size, computable from the species’ mean neocortex volume.

In a 1992 article, Dunbar used the correlation observed for non-human primates to predict a social group size for humans. Using a regression equation on data for 38 primate genera, Dunbar predicted a human “mean group size” of 148 (casually rounded to 150), a result he considered exploratory due to the large error measure (a 95% confidence interval of 100 to 230). [via Wikipedia]
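To make the regression concrete, here is a small sketch of Dunbar's calculation. The intercept (0.093), the slope (3.389) on log10 neocortex ratio, and the human neocortex ratio of roughly 4.1 are the commonly cited approximations rather than values taken from this article, so treat the numbers as illustrative.

```python
import math

# Sketch of Dunbar's 1992 regression: log10(group size) as a linear
# function of log10(neocortex ratio). The coefficients and the human
# neocortex ratio (~4.1) are commonly cited approximations.
INTERCEPT = 0.093
SLOPE = 3.389

def predicted_group_size(neocortex_ratio: float) -> float:
    """Mean social group size predicted from a species' neocortex ratio."""
    return 10 ** (INTERCEPT + SLOPE * math.log10(neocortex_ratio))

print(round(predicted_group_size(4.1)))  # ~148, the human prediction
```

Plugging in ~4.1 for humans reproduces the 148 figure quoted above; a species with a smaller neocortex ratio predicts a smaller group.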

This is important because, using this theory, we understand that there will be only about 12 very intimate people in someone’s network and about 150 people in total. When someone asks, “How many friends do you think our customers will have?”, that is a great starting point.

If this is true, why do people have thousands of Facebook friends and hundreds of people they consider “great friends”? What we see through research is that people have augmented their extended network because, in modern civilization, people have become more secluded. Yes, more secluded through technology. When you wake up, you don’t have to interact with anyone if you don’t want to. Communities have gotten smaller and more niche. We do not have to interact with people like we used to. You probably feel closer to some online friends than you do to the neighbor who lives four houses down. This is the crux of modern society: we have become more lonely.

Because we find that people really do need an extended network, we use other avenues to look for affirmation and guidance in maintaining it. These can take the form of online groups, message boards, or other places you frequent, such as a grocery store. You may go to the gym to work out and afterwards sit and talk to the people who work there, or to others working out, because you share similar interests, and you therefore consider them part of your extended network. This is the same reason people gossip about celebrities with each other. You and I don’t know any celebrities, but they give us common ground to discuss something, and through them we reaffirm our morality and our views with each other. Through this dialog we maintain our values, our principles, and our extended network.

What does this mean for design?

Well, the full meaning of this for design is well beyond the scope of this article, but it can be boiled down to a few things. First, we know that most people will have a very close circle, a monkeysphere, of about 12 people, comprised mainly of close friends and family. We also know that most people will have an extended network of about 150 people. With those two numbers you have a great start at designing the features and functions of a social network. You can limit features and functions based on those numbers to save on development and iterations. You will not need 50 functions for 500 people, but you may need 50 for about a dozen; likewise, you will not need 25 functions for 500 people, but you may need them for 150 or so.
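As a sketch of that idea, here is how the 12/150 limits might drive a feature budget. The 12 and 150 come from the numbers above; the tier names, the feature counts, and the fallback budget are invented for illustration.

```python
# Hypothetical sketch: budgeting features per relationship tier using
# the 12/150 limits discussed above. Tier names and the fallback
# budget of 10 are invented for illustration.
TIERS = [
    ("intimate", 12, 50),   # up to ~12 people, rich feature set
    ("extended", 150, 25),  # up to ~150 people, lighter feature set
]

def feature_budget(network_size: int) -> int:
    """Pick how many features to surface for a network of this size."""
    for _name, max_people, max_features in TIERS:
        if network_size <= max_people:
            return max_features
    return 10  # beyond the extended network, keep interactions minimal

print(feature_budget(8), feature_budget(120), feature_budget(500))
```

The point is only that the feature set shrinks as the audience grows past each Dunbar boundary, not that these particular counts are right.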

When you design for a social experience, always keep in mind “What is their monkeysphere?” and “Where will this play a part in it?” Most social designers fail to realize this and try to design a small set of functions for everyone, thereby leaving out the intimate network and leaving users wanting more.

This is just a light skimming of the theory and I hope it has motivated you to go out and read more on it.

Further Reading

The original Monkeysphere article, which is a great read and where the first story came from.

The ultimate brain teaser from the University of Liverpool, which discusses the more technical reasons for Dunbar’s studies.

In this post I’m going to explain some of the concepts and give a few examples of each. For the most part I will be responding to other posts I have seen on the issue. I will not be explaining it 100% because I want you to let your mind roam and explore this area. Think of this as the Socratic Method for promoting and understanding this concept.


This concept for promoting the design and development of modern interfaces is not an end-all, be-all solution. It is just another step in the discussion of design.

When discussing interface or system design, we need a way to present it to the non-design person so they understand the general concepts: a shorthand to use when discussing what type of interface you are going to build on your system. It lays out the quick cornerstones of development and captures the general ‘feel’ of the end result. Typically, when you hear what type of interface you are going to design, you hear “WIMP,” which stands for Windows, Icons, Menus, and Pointing devices. This tells the developers, in short and quick fashion, exactly what to expect. It is still in use to this day.

In our generation, systems and interfaces are growing by leaps and bounds. The easiest way to communicate that to a developer, stakeholder, or another designer is by using the new acronym, OCGM.

“What type of interface is it going to be Ron? WIMP?”

“Actually, no, it’s going to be OCGM!”

“Ummm, huh? Like… we already have these templates made out with buttons and sizes… wait.. what the hell is OCGM?”

OCGM breaks the basis of all future interfaces down into two categories, one for items and one for actions, and each of those is broken down into two subcategories. Everything on an interface that you will interact with is going to be one or the other. Each category is then broken down further into a base unit and a complex unit. With those four items, you can begin to discuss exactly how they will come into play in your future interface.

OBJECTS – Objects are any type of unit, or part of a unit, on your interface. This is just a way to define your smallest quantifiable bit. It could take the shape of a piece of album art, a picture, an icon, a ball, or an aura of some kind. The important part is that each of these objects represents something, or some action, in the system. The definition is meant to be all-encompassing because we do not want to limit designers or developers when they sit down to brainstorm ideas. If you tell them Icons and Windows… they will design Icons and Windows. Let them think outside the box when they develop.

CONTAINERS – Containers are a way to discuss the relationships of objects. Containers do not have to take the form of an actual physical box or window. They take the shape of a relationship between objects that you manage through your interface in whatever means you see fit. They could be five balls circled around a larger ball, forming a sort of menu. They could be a simple tagging system, where by the use of a gesture you reveal the tagged objects and therefore reveal the container. Relationships are key to managing objects, and understanding how objects will interact with each other is central to your design.
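As a sketch of the tagging example, a container can be nothing more than a shared relationship between objects; a reveal gesture then surfaces everything in it. All names and objects here are invented for illustration.

```python
from collections import defaultdict

# Illustrative sketch: a "container" as a relationship (a tag) rather
# than a physical window. A reveal gesture would surface every object
# sharing the tag.
class TagContainer:
    def __init__(self):
        self._tagged = defaultdict(set)

    def tag(self, obj: str, label: str) -> None:
        """Place an object into the container named by the tag."""
        self._tagged[label].add(obj)

    def reveal(self, label: str) -> set:
        """What a reveal gesture would surface for this container."""
        return set(self._tagged[label])

photos = TagContainer()
photos.tag("beach.jpg", "vacation")
photos.tag("reef.jpg", "vacation")
photos.tag("dog.jpg", "pets")
print(sorted(photos.reveal("vacation")))  # ['beach.jpg', 'reef.jpg']
```

Nothing about the container is visual; it exists only as the relationship, which is the point of the definition above.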

For further thought, if you dare… Containers do not necessarily have to contain just objects, if you consider gestures and manipulations to be objects as well. Taking that to the next step, the key to managing gestures is the way you handle their relationships with each other… Yes, exactly: the interface is made up of objects, including manipulations and gestures, and managing the containers that envelop them is the key to it all. Now you are on to something! If you understand this concept, then you are well on your way to understanding the key to OCGM and why it’s so important.

MANIPULATIONS and GESTURES are absolutely crucial, both in how they differ from each other and in their significance when designing the user experience. Understanding the difference between these two interactions will make or break the user experience. Manipulations are direct action and reaction on your interface. The user manipulates something, gets immediate feedback, and understands the result of their action. They are simple, easy to understand, somewhat intuitive, and graceful. Gestures are complex actions that are indirect. They can be harmful (format a drive), they are usually not intuitive (draw a ? for help), and they are not geared towards the first user experience. So let’s break this down a step further.

Why does the designer or developer need to understand the difference and design accordingly? Because manipulations are the easy way out. They can be your absolute best friend, and they can perform most of the common daily tasks the user will need. They are designed for beginners, intermediate users, and for accidental activations. Accidental activations!! When designing your interface, always design for accidental activations and always gear them towards a manipulation. Never let an accident trigger a gesture! On a Surface unit, when I brush my sleeve across the screen (which happens frequently), you should never have designed a “left swipe” to delete a file. This is the core of understanding the difference.

If you want to start the self-destruct on a ship, you don’t merely press a button. You have to perform a gesture: several manipulations in a sequence that are recognized at the end of the sequence. Only after the sequence is completed in order does the gesture get recognized and the action performed.
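The self-destruct example can be sketched as a tiny recognizer: each step gives immediate feedback (a manipulation), but the destructive action fires only once the whole sequence is completed in order (the gesture). The class, the sequence, and all names are invented for illustration.

```python
# Hypothetical sketch of a gesture as an ordered sequence of
# manipulations: nothing destructive happens until the full sequence
# is recognized, and any wrong step resets it.
class GestureRecognizer:
    def __init__(self, sequence, on_recognized):
        self.sequence = sequence          # required manipulation order
        self.on_recognized = on_recognized
        self.progress = 0

    def manipulate(self, action: str) -> str:
        """Feed one manipulation; immediate feedback either way."""
        if action == self.sequence[self.progress]:
            self.progress += 1
            if self.progress == len(self.sequence):
                self.progress = 0
                self.on_recognized()
                return "gesture recognized"
            return "step accepted"
        self.progress = 0                 # wrong step: sequence resets
        return "sequence reset"

fired = []
self_destruct = GestureRecognizer(
    ["turn key", "flip cover", "press button"],
    on_recognized=lambda: fired.append("BOOM"),
)
print(self_destruct.manipulate("press button"))  # sequence reset
print(self_destruct.manipulate("turn key"))      # step accepted
print(self_destruct.manipulate("flip cover"))    # step accepted
print(self_destruct.manipulate("press button"))  # gesture recognized
```

An accidental "press button" on its own only resets the recognizer; it can never trigger the gesture, which is exactly the accidental-activation rule above.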

Ok, that’s enough explaining for now. Let me respond to a few blog posts about the subject. I will dissect the arguments a little to pull out the key points.

Some great critical thinking over at the clevermonkey. (we need more of this)

… I’m sorry to say that OCGM fails both of my tests. It is at once non-inclusive of the three primary technologies I outlined as well as being too ambiguous to be useful. In addition, the terms used in the acronym overlap so much as to be redundant. …

The first test is…

  • Touch UI
  • Voice UI
  • Gestural UI
  • Tangible UI
  • Organic UI
  • Augmented Reality
  • Automatic Identification [via clevermonkey]

Organic UI on the side of a Coke Can, but can it remove the sugar? That's my question.

Richard is saying that OCGM does not encompass the first three of his seven technologies. The first problem I have with this is that the list is not a list of NUI devices. It is a mixture of interface types (OUI), interaction types (GUI), experience types (Augmented Reality), and identification methods (Automatic Identification). I don’t see a relationship between these items other than that they are new and could perhaps be governed by a non-standard UI. That is the case with most devices though, isn’t it? Let me give a quick sentence on some of the farther-reaching items.

OUI – a non-symmetrical, bendable, or wearable interface. The determining factor is how it’s displayed to the user. The actual interface will take the shape of its viewable area; it is just a way to describe non-monitor types of interfaces. [Examples: bracelets with an LCD around the band, shirts that display your vital signs, a small LCD that bends around a table leg and gives you scores/radio for your favorite show or game.]

Automatic Identification – a method to identify a user, an action, or another system by any means necessary. It could be authentication, recognition for home entertainment, or DNA for weapons [District 9 killed!].

The F-35 Demon Helmet is Augmented Reality to the extreme

Augmented Reality – superimposing the results of a system onto your life through vision, motion, or some other means not developed yet. [Yelp on your phone while looking through the camera, a HUD on a fighter jet superimposing targets on the screen]

My Answer: The first 3 all fit very well into the OCGM acronym.

Voice – Voice is a complex system, and I have played with most of the few dozen or so pure voice systems out there. The latest and most advanced one comes from MSN Auto: a purely voice-driven menu system for a car. It contains OBJECTS [people, phone numbers, favorites, places, presets], CONTAINERS [groups of contacts such as Work or Home, groups of places such as frequently shopped locations], MANIPULATIONS [“Volume up!”, “Call ….. “], and GESTURES [“Emergency!” automatically performs a complex manipulation {“dial…. 9…..1…….1……. “}, and presets like “Becky!” automatically perform whatever action you set for the Becky command].

The OCGM system works very well with most languages, and especially well with Bill Buxton’s paper on the three-state model of input. If anyone hasn’t read that, you should read it now or put down your pen forever!

Touch UI – This absolutely fits the model because it is inherently part of the birth of it. It contains OBJECTS [pictures, icons, floating buttons, small song notes that represent songs], CONTAINERS [groups of pictures in a Pod or Bar {PS: I was published on my creation of a selector system for the POD in Surface at the 2008 IEEE Tabletop Conference}, playlists of notes, tagging multiple photos], Manipulations [touch the ball and move it across the screen] and GESTURES [right now this is slim on the Surface, but there are several in the SDK, such as draw a ? for help, draw an X for delete].

Project Natal. "Falcon Punch! Body Blow! Body Blow! FINISH HIM!"

GESTURAL UI – I’m not sure what you mean by this one. Do you mean spatial? If you mean spatial, I’m not really sure what I can disclose about NATAL, but I can assure you that all four items are covered.

The second point I see from Richard is this one:

Windows, Icons, Menus, and Pointer are all pretty clear. An acronym for NUI should be equally as clear or its not useful. [via clevermonkey]

I wholeheartedly disagree with this. In fact, we want to go in the opposite direction. We do not want to spell out all the details of interfaces; we want to empower designers to design for their experience. We want to arm the designers of the future with the cornerstones of good design and let them go wild! It’s no secret that I am not a big fan of UI DESIGN PATTERNS. I think that, for the most part, they are a waste of talent. When designers could and should be thinking outside the typical experience, they rely on a “crutch” called a UI pattern. Those patterns were developed by city engineers, because there were only so many different ways you could put three buildings on a city block. That’s where they came from and that’s where they need to stay!

This acronym is intentionally vague by only discussing the bare mechanics of a future driven interface. The reasoning for this is simple. It’s to empower the designers! Designers need that room to breathe when sitting down to solve their next problem. By only giving the mechanics we allow the designers to design the experience.

That’s all for round 1! I welcome emails or comments for tomorrow’s battle. With this, I leave you one last question:

WIMPy vs OCCAM. Is there really a choice? I mean, OCCAM got to wear a wreath on his head every day. That is awesome!

WIMP is the current acronym for the Windows user experience. It stands for Windows, Icons, Menus, and Pointing devices.

In human–computer interaction, WIMP stands for “window, icon, menu, pointing device,” denoting a style of interaction using these elements. It was coined by Merzouga Wilberts in 1980.[1] Although its usage has fallen out of favor, it is often used as an approximate synonym of “GUI.” WIMP interaction was developed at Xerox PARC (see Xerox Alto, developed in 1973) and “popularized by the Macintosh computer in 1984,” where the concepts of the “menu bar” and extended window management were added.[2] [via Wikipedia]

The WIMP interface is a slowly dying breed as our demands on user experience, and the demands of users, keep inflating. It’s time to start thinking in a new direction: a direction that sheds many of the harnesses of the old acronym and begins to describe the building blocks of the future. It will be simple and concise, and it will cover all the bases we need. There is no need to rely on pointing devices, menus, or windows anymore. It’s time to let the experience be the interface and the user be in total control. The interface will begin to blend in with the experience, and the experience will be the interface.

I have spent several months thinking about this and trying to solidify something presentable. This is the fruit of my labor. I present to you:



OBJECTS – Objects are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface.


CONTAINERS – Containers will be the “grouping” of the objects. This can manifest itself however the system sees fit, to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any method of presentation or relationship-gathering seen fit.


GESTURES – I went into detail about the differences between gestures and manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after their completion and recognition by the system. A gesture is an indirect action on the system because it needs to be completed before the system will react to it.


MANIPULATIONS – Manipulations are the direct influences on an object or a container by the user. They are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. They are easily performed, and accidental activations should be expected and frequent.

This acronym is short, concise, and to the point. It contains all the elements the modern designer will ever need. In discussing the acronym with someone yesterday, he asked, “Why do you separate out manipulations and gestures?” This is a good question, and it lies at the very core of modern design. These are the two basic interactions needed for a NUI, touch, or even a Windows-based system. The first is easy, intuitive, and usually wrapped in a metaphor of some sort. The second is complex, learned, non-physical, and super-natural. Understanding these two types of interaction is core to designing something for the modern world.

We have objects, which can be grouped into containers. We have manipulations, which can be contained inside of a gesture. The simplicity is liberating.

By a lucky coincidence, the acronym also bears a very similar pronunciation and essence to Occam’s razor: the simplest answer tends to be the right one.

Occam’s razor (or Ockham’s razor[1]), entia non sunt multiplicanda praeter necessitatem, is the principle that “entities must not be multiplied beyond necessity” and the conclusion thereof, that the simplest explanation or strategy tends to be the best one. The principle is attributed to the 14th-century English logician, theologian and Franciscan friar William of Ockham. Occam’s razor may be alternatively phrased as pluralitas non est ponenda sine necessitate (“plurality should not be posited without necessity”).[2] [via Wikipedia]

I hope you love this acronym as much as I do. Thanks for reading and feel free to comment.

Special thanks to Josh Blake over at Deconstructing the NUI for helping me hammer this out.
I’m going to start a new regular feature here on my blog, one I have been banging around in my head for a few months. I’m going to give you a breakdown and some discussion points about interfaces that I see in movies. I’m a big movie buff and am always looking for details that have to do with design.

One thing I always enjoy when seeing a movie with other designers is the discussion afterward. Talking about the different things we each captured, and then having some in-depth critique, is always fun. The format will probably change with each new movie, but I want to keep it digestible. Also, feel free to point out any interfaces that I miss!

Movie: Code 46 (2003)

[it contains Organic User Interfaces, transparent monitors, and a futuristic workstation with a touch pad]


This is a futuristic movie, so there are several experiences that could be captured. I want to cover the two most interesting ones.

The first is the digital photo album. It’s a normal pocket-sized photo album, but instead of 4×6 pictures, it has 4×6 bendable LCD screens. On these screens it plays home movies that have been recorded. This is a great concept because it uses the already current mental model of pocket-sized albums for storing memories. It would be a great leap into the household.

The other interesting thing is that the interface is nothing more than an onscreen “jog” mechanism. The user rewinds and fast-forwards by moving a thumb north and south on the jog. Pressing the center pauses the movie. A great device, and very understated, which as you know I like. 🙂


For further reading, and for classification, this interface would be called an Organic User Interface, mainly because the interface bends into shapes other than flat. There are some very interesting studies and prototypes around this model. If you are feeling particularly brave, you should head over to the Organic User Interface site (a spinoff of the ACM magazine), which has a ton of information, videos, and papers published on the subject. Of particular note in the actual implementation are the speed of the video and the absence of any visible battery pack. These futuristic things are what will really start pushing the need for modern user interfaces.

As we blend the hardware and mechanics of devices into the background and out of view, we need to start hiding interfaces as well.

The second piece is a wonderful setup for a futuristic workstation. This is their vision of the modern workstation. It consists of multiple monitors, above eye level on opposite walls, and a controlling device near the hand-rest area. Of course it’s a natural interface, given the lack of a mouse and traditional keyboard, but I also like what they did with the monitor position (above eye level, which prevents tiring of the eyes). I also like that they blended the controller and monitors in with the environment. The monitors are transparent when they are off, and the small keyboard-like controller is small, clear, and flat, almost concealing itself when not in use.

Transparent Monitors are just around the corner! The recent work over at Purdue into optically transparent electronics shows a lot of promise.

The development of mechanically flexible and/or optically transparent electronics could enable next-generation electronics technologies, which would be easy-to-read, light-weight, unbreakable, transparent, and flexible. Potential applications could include transparent monitors, heads-up displays, and conformable products. Recent reports have demonstrated transparent thin film transistors (TFTs) using channels consisting of semiconductor nanowires (ZnO, SnO2, or In2O3) and random networks of single-walled carbon nanotubes (SWNTs).[1,2] [Source]

Interesting update: With everyone heading in a more “green” design direction, most never took into account that new LED traffic lights will not generate enough heat to melt snow (source). This is such an obvious problem that I really doubt a professional experience designer was involved. It sounds like the kind of problem that arises when costs are “cut” by eliminating a designer.
I’m taking a break from writing my book to write a bit more about current happenings, so expect to see more blog entries. In this entry I’m going to do a cursory overview of a design winner and the thought process you should take when partaking in a design.

A few days ago, the Red Dot Design Award winners were announced. This is always a great competition because the participants are so varied. The sky is the limit; it’s wonderful! This year there were 12,000 entries from 60 countries. Of the winners, one entry has gained some traction: progress-indicating traffic lights.

Progress Indicator Lights


I like this design! Anyone who knows me knows that “Wait UI” (e.g., press-and-hold) is the bane of my existence [a constant source of irritation]. Making the user wait for any period of time is a bad experience, and we should challenge designers to come up with things that are not Wait UI. On the other hand, there are examples like this, where waiting IS the UI. The users have to wait; now it’s time to make the wait more intuitive. Let’s break this down into the psychology of the problem and the mechanical part of the problem.

Occupied time feels shorter than unoccupied time or Queuing Psychology 101 (the UX)

“…a day full of waiting, of unsatisfied desire for change, will seem a small eternity.” —William James, 1891

MIT’s Engineering Systems Division has an ace in the hole, so to speak, when talking about queuing psychology. Dr. Larson, affectionately referred to as “Dr. Queue,” has been studying the effects of queuing for more than twenty years. The team over at ESD came up with a few very interesting ideas and solved a few pain points for Disney and theme parks in general. If you have ever been to Disney and gone on any of the rides, you know the lines are insane, anywhere from 15 minutes to 2 hours per ride. The challenge was to find a way to make this necessary evil more fun. They had a few great ideas, including a wonderful use of a “touch wall” and other short interactive games.

Short interactive games while waiting in line at Disney


Progress-indicating lights have existed for decades (history)

When researching a design, we have to lean on what Bill Buxton always says about “new” designs: there rarely are any! They are just recirculations of old designs that we re-purpose for our current needs. This design is no exception. Marshalite traffic signals have been around in Australia since 1936 and still exist in a few places.

Marshalite - an analog version!

These lights already exist in the world, so what research can we gather? (current UX research)

So let’s look around and try to find some pain points in the current design. Progress-indicator lights already exist in a few countries, and obviously people are going to have some thoughts on them. In a very informal search to see what people think of them now, I found a few quotes.

They already have traffic lights and padestrian crossings in Manila with timers on them. As far as I can tell they don’t really help there.

… Delhi/Mumbai. … the last 5 seconds before the light turns green resemble the start of a NASCAR race. -both via Neatorama

The point here is that they also resemble racing trees, and therefore may push drivers to anticipate the light and cause accidents. This is a problem we should be aware of.

Racing Tree Lights

The first thing that comes to mind, given everything we have seen so far, is that one solution will not solve all the problems. We are going to need a way for the signal to be configured at installation. We need to let the city engineers do the final stage of the design so they can customize it to fit their needs.

Why do we need to do this? (the greater design tenet with UX in mind)

The problem is that gasoline is getting more expensive and more scarce. If we continue as we are now, we will destroy the environment around us. We need to think green. You should always think low impact in your design solutions, because low-impact designs are less expensive in the long run, and the more a design saves the company, the more apt they are to institute it. So let’s look at some of the current research on hybrids and gasoline.

Question: Is it better to turn your car off for a 30 second stop or to leave it running?

Answer: Turn it off. It saves gas but causes more wear and tear on your vehicle (starter, crankshaft, etc.). Leaving it on burns more gas but is easier on your vehicle.  –(1995) paraphrased from The Car Guys.

How does that compare to what the average is?

How long does the average American spend waiting at a red light?

Answer: 3min. and 18sec. via – WikiAnswers

There seems to be a gap. What about current technology with Hybrids?

Comparison of what Hybrids do

So it seems that all manner of hybrids shut their engines off at stop lights.
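To see why this matters, here is a back-of-envelope sketch of the fuel saved by shutting an engine off during the average red light cited above. The idle burn rate and restart cost are assumed illustrative figures, not measured data.

```python
# Rough estimate of fuel saved by shutting off the engine at a red light.
# IDLE_BURN_LPH and RESTART_COST_SEC are assumptions for illustration only.

IDLE_BURN_LPH = 0.8       # assumed idle fuel burn, liters per hour
RESTART_COST_SEC = 10     # assumed restart cost, as equivalent seconds of idling
AVG_RED_LIGHT_SEC = 198   # 3 min 18 sec, the average wait cited above

def fuel_saved_liters(stop_seconds):
    """Liters saved by shutting off versus idling for the whole stop."""
    idle_burn = IDLE_BURN_LPH * stop_seconds / 3600
    restart_burn = IDLE_BURN_LPH * RESTART_COST_SEC / 3600
    return idle_burn - restart_burn

saved = fuel_saved_liters(AVG_RED_LIGHT_SEC)
print(f"Saved per average red light: {saved * 1000:.1f} mL")
```

Even with these conservative assumptions, the average stop is long enough that shutting off comes out ahead; any stop longer than the restart cost saves fuel.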

Given all that we have learned, what changes would we make? (UX Design)

It seems that the only glaring thing we need to take into account is the final 10 seconds, when drivers would start to rev their engines and get ready for the green light. This revving would eliminate any of the benefits of shutting the engines off in the first place. The other piece we need to keep in mind is making the signal configurable at time of installation, which would be very helpful for tuning and further refinement after install.
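Those two requirements can be sketched as a tiny configuration model: a per-intersection config set at install time, plus a rule that blanks the countdown in the final seconds so drivers cannot anticipate the exact change. The names and numbers here are hypothetical illustrations, not a real signal-controller API.

```python
# Sketch of an installation-time config for a progress-indicating traffic
# light. Parameter names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class SignalConfig:
    red_duration_sec: int = 90   # full red-phase length, tuned per intersection
    hide_final_sec: int = 10     # blank the countdown here to avoid "racing tree" revving

def indicator_state(config, elapsed_sec):
    """What the progress indicator shows at a given point in the red phase."""
    remaining = config.red_duration_sec - elapsed_sec
    if remaining <= 0:
        return "green"
    if remaining <= config.hide_final_sec:
        return "hidden"          # no countdown: drivers can't time the change
    return f"{remaining}s remaining"

cfg = SignalConfig(red_duration_sec=60, hide_final_sec=10)
print(indicator_state(cfg, 20))   # mid-phase: countdown visible
print(indicator_state(cfg, 55))   # final seconds: countdown hidden
```

Because both values live in the config, city engineers can tune (or disable) the hidden window per intersection, which is exactly the “final stage of design at install” idea.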

Here is the current design that won the competition.

Design Winner

And here is a blank slate for you to test out your designs.

Blank Traffic Light

You can download the Illustrator CS4 file here. If you happen to be using another type of program, I also uploaded the EPS file, which you can get here. I created the outer circle in Live Paint, so all you need to do is grab the Paint Bucket tool and drop in whatever color you want.

Let’s see your designs! Send me your concoctions and I’ll post them here. Also write a bit about your rationale and reasoning for designing it your way.

UPDATE: The story hit Engadget as well, though that write-up isn’t quite as positive. The comments are, though; here’s the story.

I saw an interesting article on Gizmodo ( ) discussing an apparently new top-secret laptop/tablet at Microsoft called Courier. The funniest thing is they mentioned which team it is, E&D, who is developing it, and who the head of the team is. Quite specific, I think. I doubt even half of it is true, but I must tell you: it is beautiful, … from the pictures, of course.

As a Natural User Interface Designer working at Microsoft, I can tell you this has piqued my interest. Things to note: they mention multi-touch and stylus support.

I also particularly loved the comments. Quite surprising actually.

If this works exactly as shown and lives up to the video, I’ll buy it. And if they can beat Apple to the market, I’ll forget the iSlab (even as a Mac user).

add mp3 capability, and why would i need a laptop at all?

I can see this being HUGE in schools. I know Drexel just replaced all their medical textbooks with iPod Touches (and I’m surprised to see nothing on Giz about it) but I’m sure if they knew this was on its way from Microsoft, they would have waited to see some prices.

Two Words, BAD ASS. I’ve been wanting to get a new laptop, but if this thing is truly on the horizon I’ll be saving up.

I love Apple and their products so much that I (Ed. pee)  apple juice, but this… this would have my money damn near instantly.

(All quotes taken from the Gizmodo comments.)

Does anyone else have any comments they would love to share? Or how about a feature wish list? Things that you absolutely MUST have or you will die a slow and painful death. I know I have my list of things I would want in a product like this, but I would like to compare mine to yours. So, Asus, if you are reading, here are a few tidbits for you. 🙂

User Interface Technology Adoption

This is an interesting graphic from Gartner ( ). I also like the small excerpt.

Gesture recognition dominates the hype in human-computer interaction in 2009, as virtual worlds hit the Trough of Disillusionment. A wide range of emerging technologies are moving from the trigger toward the peak, indicating that innovation continues almost unabated during the current recession.

All over the web I see the word gesture used to describe every type of interaction on a natural user interface. Just because you use your finger, a stylus, or an accelerometer does not make it a “gesture.” Is this crucial? Not really, to users, consumers, marketing, et al. But being a good scholar and interaction designer means getting your terminology straight. It also helps, when speaking with other developers, to have your vocabulary correct so they do not misinterpret your meaning or solutions. Let’s start with the classical, dictionary definitions:

Main Entry: 1ges·ture
Pronunciation: ˈjes-chər, ˈjesh-
Function: noun
Etymology: Middle English, from Anglo-French, from Medieval Latin gestura mode of action, from Latin gestus, past participle of gerere
Date: 15th century

1 archaic : carriage, bearing
2 : a movement usually of the body or limbs that expresses or emphasizes an idea, sentiment, or attitude
3 : the use of motions of the limbs or body as a means of expression
4 : something said or done by way of formality or courtesy, as a symbol or token, or for its effect on the attitudes of others <a political gesture to draw popular support — V. L. Parrington>

Main Entry: ma·nip·u·late
Pronunciation: mə-ˈni-pyə-ˌlāt
Function: transitive verb
Inflected Form(s): ma·nip·u·lat·ed; ma·nip·u·lat·ing
Etymology: back-formation from manipulation, from French, from manipuler to handle an apparatus in chemistry, ultimately from Latin manipulus
Date: 1834

1 : to treat or operate with or as if with the hands or by mechanical means especially in a skillful manner
2 a : to manage or utilize skillfully b : to control or play upon by artful, unfair, or insidious means especially to one’s own advantage
3 : to change by artful or unfair means so as to serve one’s purpose : doctor

You can already start to see the differences for our purposes. One is emotional, symbolic, and indirect; the other is direct and mechanical. There are 4 primary differences between the two, and interactions are easily classified once you know them.


Manipulations:

  1. contextual – they only happen at specific location(s) or on specific object(s)
  2. react immediately – there is a direct correlation in cause and effect between your interaction and the system (this does not include visual affordance)
  3. can be single state, but are usually 3 or more states ( see Bill Buxton’s paper on Chunking and Phrasing )
  4. direct (could possibly be considered indirect by way of augmenting your actual interactions with the reaction of the system) – your actions directly affect the system, object, or experience in some way


Gestures:

  1. not contextual – they can be anywhere in the system in location and time
  2. the system waits for the series of events to complete to decide on how to react (again, this does not include visual affordance)
  3. they contain at least 2 states
  4. indirect – they do not affect the system directly according to your action. Your action is symbolic in some way that issues a command, statement, or state.
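The four tests above can be sketched as a simple checklist. Per the rule later in this article, unless an interaction passes all four tests for a gesture, it should be classified as a manipulation. The field names here are my own shorthand for the criteria, not established terminology.

```python
# A sketch of the four gesture/manipulation tests as a checklist.
# Field names are shorthand for the criteria listed above.

from dataclasses import dataclass

@dataclass
class Interaction:
    contextual: bool   # tied to a specific location or object?
    immediate: bool    # does the system react as the motion happens?
    num_states: int    # states in the interaction (see Buxton on chunking and phrasing)
    direct: bool       # does the action directly affect the system?

def classify(ix):
    is_gesture = (
        not ix.contextual        # gestures can happen anywhere in the system
        and not ix.immediate     # the system waits for the series of events to complete
        and ix.num_states >= 2   # gestures contain at least 2 states
        and not ix.direct        # symbolic: issues a command rather than acting directly
    )
    return "gesture" if is_gesture else "manipulation"

# Dragging a photo with a finger: contextual, immediate, direct -> manipulation
print(classify(Interaction(contextual=True, immediate=True, num_states=3, direct=True)))
# Drawing a shape anywhere on screen to issue a command -> gesture
print(classify(Interaction(contextual=False, immediate=False, num_states=2, direct=False)))
```

Note how the default falls to “manipulation”: a single failed test is enough, which matches the claim that true gestures are rare in current systems.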

In Dan Saffer’s book, Designing Gestural Interfaces (O’Reilly, 2009), on page 2 he states that a gesture, “for the purposes of this book, is any physical movement that a digital system can sense and respond to without the aid of traditional pointing devices such as a mouse or stylus.” That may be a simple way to define the types of interaction for his book, but generalizing them in that manner is incorrect. I think Professor Shneiderman’s seminal paper in 1983 was absolutely correct: direct manipulation is just that, direct manipulation. When we start to discuss more complex chained movements that are commands, we need a new set of terminology.

Manipulations are the lowest common denominator and the “catch-all.” They are the most prevalent and the most widely patterned because they are easy to design for, easy to understand, and very intuitive, with expected results. Gestures are more complex and are what all designers strive to achieve. When trying to decipher whether something is a manipulation or a gesture, remember: unless it passes all 4 tests for a gesture, it is a manipulation. There are very few true gestures in systems currently.

These have also been called direct gestures (manipulations) and indirect gestures (gestures). Calling them this conflates the terms and can lead to errors in design or implementation. I leave you with a graphical representation of gestures vs. manipulations.

Manipulation vs Gesture

I’m eager to hear any dissenting opinions. Please comment or drop me an email. I’ll also send a copy of this to Dan as well.

I was recently reading an article from Daniel Pink, a self-confessed sign nut. He was discussing a great topic that I believe gets too little coverage in the design world: the power and language of empathy in messaging and signage, and in user experience as a whole. In particular, he had two specific points about signage:

  • Demonstrate empathy
  • Encourage empathy

Wonderful example of an empathetic sign - does this make you want to slow down?

Form an emotional bond with your user through the language and spirit of your messaging and experience. You do not have to limit yourself to the archaic and silly messaging of days past. Create helpful and meaningful messaging that gives your users an understanding of A) why they got here and B) how they can fix it. If you cannot give them those two things specifically, then give them hints to try fixing it themselves.

A little side note: Did you know that most of the error messages you see in Windows are actually relics from debugging days of old? The overly cryptic messages, like “Error 34807,” were actually used by the programmers when they had to debug the software in the testing phase. The problem is that no one knew what to do with them from Windows 95 to XP, so they were left in, because if a customer was stuck, they would at least be helpful to Technical Support in fixing things over the phone. This has evolved into a rather colorful way of dealing with problems yourself, since the codes give you specific search terms to find errors. When is the last time you called Technical Support on the phone? That being said, no one in their right mind likes them or wants them to stay. 🙂 It’s just a great case of users adapting to overcome the limits of the software for their own personal needs.

Things do not have to be clinical to be correct. They don’t have to speak to every use case under the sun to be correct. To try to weave a web of user assistance that covers everyone who could possibly be involved in your product is to doom your experience to the lowest common denominator. When designing aspects of your product, think of how you can design the best possible experience for the best possible user. The definition of the best possible user is a whole other argument.

More importantly, speak to them as people, as you would want to be spoken to. I get irritated when I am simply handed user experience testing results in a neat and tidy Word document. It contains charts and graphs and these little recommendations on what would get better results. That’s the important part: results, not experience. The problem with user experience recommendations is that they are data driven and therefore not experience driven. It makes the experience as a whole so clinical. I want to see the videos, to see the context of these results. Why did they do this? Why did they do that? Why couldn’t they find _____ ? Let me see for myself what happened and when, to see where the breakdown or the eureka came into play. Results can be so misleading if you try to boil them down into a nice tidy package. I encourage everyone out there to challenge results and to push user experience. Don’t let someone hand you something that doesn’t make sense and accept it so that you now need to change your design. If it doesn’t make sense, start a conversation about it, but most importantly, watch the tapes if they are there so you can understand for yourself.

Empathy is the capability to share and understand another’s emotions and feelings. It is often characterized as the ability to “put oneself into another’s shoes.” Empathy does not necessarily imply compassion, sympathy, or empathic concern, because this capacity can be present in the context of compassionate or cruel behavior.

Don’t change your design unless you understand fully and completely the reasons you need to change it, and for whom and why. To do otherwise is to wander aimlessly. Gather clear direction and purpose from a business need or a user’s needs to discover where your designs need to improve for the user. The reason I discuss this under a post about empathy is that first and foremost you need to remember who your product is for: users, and they are just people like you and me. I always sort of laugh at the phrase “go back to the drawing board.” If that were literally the case, it would underestimate the actual knowledge you have already gained from a failure.

This is a common argument heard around non-design tables: “I am a user, and this is what I like…” used to support a design direction or decision that someone wants to make. This argument is futile because what it shows is a complete lack of empathy for the user. You are not designing a particular thing for one person, but for many. To reach a decision based on one person shows no grasp of the concept of “user-centered design.”

Empathize with your users. Eat your own dogfood.

To say that a company “eats its own dog food” means that it uses the products that it makes. For example, Microsoft emphasizes the use of its own software products inside the company. “Dogfooding” is a means of conveying the company’s confidence in its own products.[1]

The idea originated from television commercials for Alpo brand dog food;[citation needed] actor Lorne Greene would tout the benefits of the dog food, and then would say it’s so good that he feeds it to his own dogs. In 1988, Microsoft manager Paul Maritz sent Brian Valentine, test manager for Microsoft LAN Manager, an email titled “Eating our own Dogfood” challenging him to increase internal usage of the product; from there, the usage of the term spread through Microsoft, as chronicled in the book Inside Out: Microsoft—In Our Own Words (ISBN 0446527394). The phrase became slang during the dot-com craze of the late ’90s, and is used most commonly in reference to technology companies.[citation needed]

Using one’s own products has four primary benefits:

  1. The product’s developers are familiar with using the products they develop.
  2. The company’s members have direct knowledge and experience with its products.
  3. Users see that the company has confidence in its own products.
  4. Technically savvy users in the company, with perhaps a very wide set of business requirements and deployments, are able to discover and report bugs in the products before they are released to the general public.

All from Wikipedia

If you want to design a product for someone, put yourself in their shoes for as long as it takes to be empathetic. Daily I am surprised at the people who are designing products that they themselves have not used, or will never use. If you would not use your own product, how could you possibly design an experience that would be rich or rewarding to someone else? As a natural interface designer, I push myself by taking away things like a mouse and a keyboard. I force myself to use exclusively the methods that users will use. I want to empathize with the user who will be using it, so I want to be in their shoes. The easiest way is to limit yourself to the designs you are creating. There is so much knowledge and experience to be gained from using a product, form factor, or interface yourself.

Here is a YouTube video of a Pecha Kucha presentation by Daniel Pink on Emotionally Intelligent Signs that is also worth watching.