I saw this article pop up on Forbes yesterday: http://www.forbes.com/sites/stevecooper/2013/11/30/designing-a-website-for-2014/

I read a few of the called-out comments, and this one stood out.

Continuous scrolling is one of the worst features to use on many websites, including websites like LinkedIn, because the website visitor never gets to the bottom of the webpage where they might see important links or other information. For example, on LinkedIn, the member homepage, the “Contacts” webpage, and company pages now incorporate continuous scrolling, and in all cases the feature is a real *bother* because the member can never easily get to help links, a particular letter for contacts, or, in the case of the company page, the all-important information about the company, which is why most people would want to visit the company page.

A much better solution is to provide a “more” link, which enables the website visitor to see more as a matter of their choice.

Too many so-called website designers still flaunt website “tricks” at those simply looking for an easy-to-navigate website.

This is a great example of a learned behavior that happens to not be beneficial to the user experience. Designers must resist designing FOR these bad use cases. I commented as well.

Hi Carocc, I am a User Experience Architect; I design software interfaces for a living, a field commonly called Human-Computer Interaction. I study, research, and analyze the psychology of how humans interact with interfaces, and I design accordingly. I did web experiences for several years, but “graduated” to major software many years ago. I was asked to redesign Walmart.com and I refused, if that gives you an idea. Let me shine some light on the decisions that you don’t like. Hopefully I can change your mind about your expectations in the future.

I am making this overly simple, but hopefully it helps with the explanation.

Any interface can be divided into sections. Let’s just say that the interface (a webpage) has three sections: the header (where you are), the navigation (go somewhere else), and the results (the actual contents of the page). When designing the interface for a user, your main goal is consumption of the content. The content resides inside the results section. With that as your main goal for each page, everything else is secondary. Granted, the other things are important as well, as they must be contained on the page, but they are secondary to the actual purpose of the interface itself. The ability to give those sections their proper weight and placement is key to providing a streamlined experience.
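
To make that division concrete, here is a rough TypeScript sketch of the three-section model. The type names, weights, and element names are my own shorthand for illustration, not anything from the comment thread or a real framework.

```typescript
// A rough sketch of the three-section model described above.
// "Region", "weight", and the element names are illustrative assumptions.
type Region = "header" | "navigation" | "results";

interface PageSection {
  region: Region;
  weight: number;     // relative priority on the page
  elements: string[]; // all peers of a type live together in one place
}

const page: PageSection[] = [
  { region: "header",     weight: 1, elements: ["logo", "where-you-are"] },
  { region: "navigation", weight: 2, elements: ["go-somewhere-else"] },
  // The results section holds the actual content, so it gets the most weight.
  { region: "results",    weight: 3, elements: ["the-content"] },
];
```

The point of the sketch is simply that each section exists exactly once, and the results region carries the most weight.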

Dividing sections of an interface is confusing and gives mixed expectations. If the contents of the page were at the top right and at the bottom left, users would not be able to find everything, or they would find the incorrect thing (even worse). The same goes for navigation. If you have some of your navigation at the top and some at the bottom, users may not find all of the navigation that they need. This problem was circumvented by adding ‘next’ links to the top and the bottom of content, which is incorrect. The bottom of the content should be the bottom of the page. When a user finds a piece of navigation or content, they should be in the correct location to find all of its peers, regardless of type (navigation, content, contact information).

Forced navigation is redundant and hinders the ability to consume. The user made a choice, either by clicking a link or by searching, and should get all of the results of that query. There is no logical reason (from the User Experience point of view) to limit the content on a page as long as it’s filtered correctly. Making someone click another navigation item that interrupts them is not conducive to a seamless experience. Users should be able to consume whatever they wish without hesitation or interruption. Stopping the flow or the task at hand is not the job of the interface (unless an error happens). Users enjoy consuming quickly and without interface interruption.
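
As a concrete, and entirely hypothetical, illustration of continuous results, here is a minimal TypeScript sketch using the standard IntersectionObserver API. The element names and the /api/results endpoint are assumptions of mine, not anything from LinkedIn or the Forbes article.

```typescript
// Minimal sketch: append the rest of a query's results as the reader nears
// the bottom, so consumption is never interrupted by forced "next" clicks.
// resultsEl, sentinelEl, and /api/results are hypothetical names.
async function fetchResults(offset: number): Promise<string[]> {
  const res = await fetch(`/api/results?offset=${offset}`);
  return res.json(); // assume the endpoint returns an array of HTML snippets
}

function setupContinuousResults(resultsEl: HTMLElement, sentinelEl: HTMLElement): void {
  let offset = 0;
  let loading = false;

  const observer = new IntersectionObserver(async (entries) => {
    if (!entries[0].isIntersecting || loading) return;
    loading = true;
    const items = await fetchResults(offset);
    for (const html of items) {
      const li = document.createElement("li");
      li.innerHTML = html;
      resultsEl.appendChild(li);
    }
    offset += items.length;
    loading = false;
    // Once the query is exhausted, stop loading: the bottom of the content
    // becomes the true bottom of the page.
    if (items.length === 0) observer.disconnect();
  });

  observer.observe(sentinelEl);
}
```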

Footers are dynamic. They appear at the bottom of content, and their existence is not known until it is sought. For those reasons, the footer of a webpage should be arbitrary or redundant for ease of use. Nothing important should be at the end of a webpage. The footer is a great area for ‘upselling’ or furthering the visitor’s experience, but those things are not mandatory; they are for enjoyment.

In closing, I am not saying you are wrong. I am saying that you have come to expect something that has been built incorrectly for years. You should not expect, or need, to find a footer on a webpage. Nothing important should be there at all. Web designers need to stop building footers as integral parts of the site.

I personally do not follow trends. I push the user experience further, no matter where that may lead me.

All designers know how to improve their designs. They read, they study, and they attend workshops. Articulating failures and successes, and the reasons for each, is also a great way of pushing something forward. Improving each and every time is a surefire way of creating a successful design.

One topic I think does not get enough exposure is how to sell your ideas properly. I wrote an article for UXmag about it and tried to give as much advice as I could. When I was first faced with the challenge of presenting good ideas, I was taken aback that no one else could see them; to me they were so obvious.

http://uxmag.com/articles/winning-approval-in-design-presentations

A few years ago I was approached by Bestica, a UX company, with a question: what advice would I give to UX designers looking to improve their careers? The advice I gave was, of course, about selling your ideas. Being an advocate for good design, as well as for your own, is a great way of moving yourself to the next level. Here is that interview.

 

I have always been a huge fan of Noam Chomsky. He revolutionized linguistics with a way of explaining the building blocks of all language. He broke linguistics down to its basics in a way that could be understood, built upon, and extrapolated from to form a system of thought and construction. From this system, other scholars were able to begin building rules and intricate patterns, and those rules and patterns can be used to build a new language: a language in any form, whether sound, motion, math, or any other available medium.

  • Using the base definition of an object and container, you can easily classify all objects of an interface.
  • Using the base definition of a manipulation and a gesture, you can easily classify all interactions of an interface.

The approach I am taking may seem simple-minded and unsophisticated, but it is nevertheless correct.

So what is the purpose? The purpose of classifying all objects and actions for an interface is to help build a language of interaction. We rely on this language when we create patterns or rules. Then rules become laws. This brings me to my second point.

“Creativity is only possible within a system of rules.” -Noam Chomsky

Free creation, without the arbitrary limitations of computers and interaction. Humans are genetically pre-programmed to interact with things in certain ways: the way we pick up a stone to throw, or food to eat. If we take those simple interactions and put them into two categories, we start to see evidence of a ruleset. We sort simple actions, things that contain only one element of motion, and complex actions, things that contain more than one element, each into their own pile. These two piles can be used to classify all interactions. It’s a very simple way of thinking about interaction: we have simple actions and we have complex actions. This same way of thinking can be used in defining objects and containers.

If we have these four categories, two for each type, we can begin to understand the framework for creativity. Using those simple classifications, we try to look at the ways humans will naturally interact with computers and devices. This is the basis for Natural User Interfaces.
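
For readers who think in code, here is one way the four categories could be sketched in TypeScript. The names are my own shorthand for the OCGM ideas above, not a published API.

```typescript
// Objects and Containers: the things an interface is made of.
interface UIObject {
  id: string;
}
interface UIContainer extends UIObject {
  children: UIObject[]; // a container holds objects (or other containers)
}

// Manipulations and Gestures: the things a user does.
// A manipulation is a simple action with a single element of motion;
// a gesture is a complex action composed of more than one element.
interface Manipulation {
  kind: "manipulation";
  motion: "press" | "drag" | "release";
}
interface Gesture {
  kind: "gesture";
  steps: Manipulation[]; // executed in a prescribed order
}

type Interaction = Manipulation | Gesture;

// Sorting any interaction into one of the two piles described above.
function isSimple(interaction: Interaction): boolean {
  return interaction.kind === "manipulation";
}
```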

Innovation does not necessarily mean invention.

I’ve been so busy at Bloomberg I haven’t had the chance to write anything new, even though I have a few things cooking already. I got this email from a reader and asked her permission to post. Enjoy.

Jocelyn writes…

Hello,
I’ve been reading your articles about OCGM and found them quite interesting, thanks for sharing your thoughts. I was intrigued by your statement, saying that you were

“not a fan of UI DESIGN PATTERNS”.

“When designers could and should be thinking outside of the typical experience, they rely on a ‘crutch’ called a UI pattern.” Say you are to implement a login feature for your application/site, couldn’t you rely, at least partially, on what’s already been done? And so on for search, breadcrumbs, etc.

“Those patterns were developed by City Engineers because there were only so many different ways you can put 3 buildings on a city block.”

Are there many more possible solutions in HCI? Isn’t one of those solutions better than the others (i.e. “the pattern for this problem”)?

In my opinion, design patterns are like having an HCI expert team at your side (I don’t remember where I read that). You are not compelled to use them every time, but it’s nice having them for some tasks.

I’m genuinely interested in hearing your opinion on the matter. I hope my bad English doesn’t sound angry, I assure you that’s not the feeling.

Regards,

Jocelyn

Ron writes…

The problem with patterns is that they do not exercise the mind or further the experience. Having a book of patterns at your side is very unlike having an HCI expert on your team, because patterns are just cookie-cutter solutions. HCI is not math. There is not one simple solution to every problem. My main point is to reach further than what has been seen so far. Just because something is the most popular, or the most successful at the time, does not mean it’s correct.

The primary difference between math and HCI is that HCI contains people, and people change in expectations, considerations, and needs, among other things.

Design patterns will never substitute for a person who has been trained in the field and is willing to challenge the norm to find something unique and innovative. Design patterns are the antithesis of innovation.

If it’s OK with you, I would like to post this on my blog with my answer. I’ve been meaning to write something new. 🙂

Jocelyn writes…

“The problem with patterns is that they do not exercise the mind or further the experience.”

This is a very valid concern.

“Just because something is the most popular, or the most successful at the time, does not mean it’s correct.”

So very true (isn’t it even called “the Smashing Magazine Effect”?).

Yet I cannot help but notice that the conventions people are used to, the physiological stability of the user, and the recurrence of problems (again, a login form) make for a quite repeatable set of constraints; thus there must be some repeatable solutions, be they patterns or another artifact.

You say that patterns numb creativity. After reading your answer, I agree, but only partially: pattern overuse (e.g. relying on others’ work to solve every problem) is nefarious. But that doesn’t mean you shouldn’t have a look at what’s currently the best practice; you cannot push the envelope on every component of a given project. Or maybe sometimes you just have to get it done for yesterday. Or you need an overview of available solutions before diving in. Or you need to share knowledge. In those cases, patterns are well suited (IMHO).

After a bit of pondering on your message, I’ll keep this: there’s a place and a time to use patterns; they are not a solution ex machina. But that doesn’t mean knowledge reuse is never appropriate. What’s your opinion? Is there any form of reuse that suits you better, or do you consider them all dumbing-down practices?

In any case, thank you very much for taking the time to answer; it is much appreciated. It’s totally OK to post the whole thing as you wish. Last of all, and on a totally unrelated matter: would you recommend any resource pertaining to touch/multi-touch interfaces (I’m talking rugged tablet PCs rather than iPads)? Thanks in advance!

Regards,
Jocelyn.

Thanks so much for writing in, Jocelyn!

Well, it’s been a chaotic few weeks here in wonderful Microsoft Land. I have been getting tons of questions about some things I said at CHI. Apparently, unbeknownst to me, a few people overheard me telling some SOFTie colleagues that I was planning an external move soon. In other words, I was planning on getting another job, but not at Microsoft.

ProTip: a “move” just means you are shifting teams; an “external move” means you are leaving the company and going back into the world. It’s rare that a Microsoftie goes back out into the world. It really is an incredible place to work.

Here is a small “guide” to Ron.

  1. I have Asperger’s Syndrome. It is clinically significant, but not so much that I freak out or am crazy awkward in public situations. I consider it an amazing gift, to be honest. There are a few things about the “syndrome” that really come out in me. I am very honest and upfront, and I am incredibly gifted in some things (mainly design and using logic to break down and solve problems) and very bad at other things (remembering dates, balancing a checkbook, etc.).
  2. I am obsessive about finding solutions to problems (see #1). The more difficult the problem, the more obsessive I get. The real challenge is that in design you rarely find a “solution”; you find an option that is better than what you have now. That can suffice, but usually I will obsess over a problem until I make a giant leap in the space. Good enough isn’t enough.
  3. I LOVE a good challenge. These are what make me get up in the morning and clap my hands.
  4. When it comes to work, I rarely choose the “easy” route. If I have the choice between a difficult job and an easy job, both paying the same amount of money… I always choose the difficult job because it will cause me to grow.

So where does that leave us? Well, I’m trying to lead up to where I am going to next. My last day at Microsoft was Friday and I am busy preparing to move across the country.

So what is my next challenge? I think it is the most complex problem in the User Experience world at the present time and just thinking about it gets my brain pumping.

Bloomberg.

The article at UX Mag.

This interface is complex, rich, and mind-blowing in size and scope. I think this article really sums up a few of the problems, but it also makes some incorrect assertions.

http://uxmag.com/design/the-impossible-bloomberg-makeover

I think the best line in the article is this:

“Redesigning the Bloomberg Terminal would be any interface designer’s dream.”

You are correct, and if I have said it once, I’ll say it again… I am living the dream. See you in New York!

Further Reading:

  • Wikipedia Article
  • Google image search of examples of the terminal
  • An interesting visual history/lineup of past terminals on display at Bloomberg
  • An example of a typical Terminal in use

I know this is a bit late, because my co-author has already reported it, but I am very happy to announce that some ears have been listening. The CHI ’10 Workshop, “Natural User Interfaces: The prospect and challenge of Touch and Gestural Computing,” has granted an audience to the new metaphor for design.

The workshop is going to be an all-day event on Saturday, where each of the authors will present and discuss their position papers. The real benefit is that within such a narrow area of expertise you get incredibly focused peer review and leave with amazing new ideas. These scoped interaction gatherings are a wonderful way to foster innovation and creativity. It’s like a specialty conference inside of a specialty conference.

One of the most interesting things, as I read the accepted position papers, was a citation of my writings. In the paper “Natural User Interfaces: Why We Need Better Model-Worlds, Not Better Gestures” (PDF), the authors argue for a separation between “symbolic gestures” and “manipulations.”

Manipulations are not gestures.
We believe in a fundamental dichotomy of multi-touch gestures on interactive surfaces. This dichotomy differentiates between two classes of multi-touch interactions: symbolic gestures and manipulations.

They go on to define each specifically.

For us, symbolic gestures are close to the keyboard shortcuts of WIMP systems. They are not continuous but are executed by the user at a certain point of time to trigger an automated system procedure. There is no user control or feedback after triggering.

The opposite class of multi-touch interactions is manipulations. Unlike symbolic gestures, manipulations are continuous between manipulation initiation (e.g. user fingers down) and completion (e.g. user fingers up). During this time span, user actions lead to smooth continuous changes of the system state with immediate output.

It is a lovely way to differentiate the two, and I couldn’t agree more. As the crevasse between the two interactions widens, we begin to see many more differences. I have gotten a few emails from bewildered interaction designers, both young and old, asking, “Why do we need this separation?”

The answer is not as apparent now as it will be in the next few years. We need this distinction because designers and developers need to think about experiences differently. We need to plan and design for interactions in a fluid and responsive way. Arbitrarily throwing a manipulation on a destructive action could have dire consequences for the user. Using a complex gesture for a simple help menu would create a pause in your user’s experience.
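
To ground the distinction, here is a small TypeScript sketch using standard DOM pointer events. The element and handler names are hypothetical, the threshold is arbitrary, and this is only one way the two classes could be wired up, not the paper’s implementation.

```typescript
// Manipulation: continuous between initiation (pointer down) and completion
// (pointer up), with immediate output on every move.
function attachDragManipulation(el: HTMLElement): void {
  let startX = 0;
  let startY = 0;
  el.addEventListener("pointerdown", (e) => {
    startX = e.clientX;
    startY = e.clientY;
    el.setPointerCapture(e.pointerId);
  });
  el.addEventListener("pointermove", (e) => {
    if (!el.hasPointerCapture(e.pointerId)) return;
    // Smooth, continuous change of state with immediate feedback.
    el.style.transform = `translate(${e.clientX - startX}px, ${e.clientY - startY}px)`;
  });
  el.addEventListener("pointerup", (e) => el.releasePointerCapture(e.pointerId));
}

// Symbolic gesture: recognized as a whole, then a procedure fires once,
// with no user control or feedback after triggering.
function attachSwipeUpGesture(el: HTMLElement, onTrigger: () => void): void {
  let startY = 0;
  el.addEventListener("pointerdown", (e) => {
    startY = e.clientY;
  });
  el.addEventListener("pointerup", (e) => {
    if (startY - e.clientY > 100) onTrigger(); // 100px threshold is arbitrary
  });
}
```

Notice that you would think twice before attaching the gesture version to anything destructive, for exactly the reason above.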

Let me give you an example of the two with a nice, crisp physical metaphor demonstrating OCGM.

You have a pile of sticks.

Just a pile of sticks

Think of the sticks as Objects. Think of the table as a Container. Now let’s examine the relationship between the two. The container holds many objects; therefore, operations executed on the container will affect each of the individual objects inside it.

Now, let’s give you a few things to do those operations. First, let’s give you a hand, which we will call a Manipulation. Then let’s give you a chainsaw, which we will call a Gesture.

To move the sticks, to rearrange them, to take them in and out of the container (on or off the table), we can use a hand. It might not be the most efficient, but it gets the job done, and we can do it methodically. All the while, we can easily undo our actions.

If someone comes up and talks to us, we don’t hesitate to respond, because nothing is really in jeopardy here. In fact, it might be nice for someone to come up and interact with us while we are doing these chores.

This also puts us at ease. We can do these things, and if we get interrupted, we have no stress. Why? Because everything is undoable. Things are easily moved back and forth with no real consequences. Even if we push the entire stack to the ground, we can easily pick the sticks up and put them back. One-handed.

This is the physical realization of a manipulation.

a Chainsaw "gesture"

Now we take the chainsaw. This is a complex piece of machinery. Using it is going to take focus, intent, and possibly a plan. The things we can do with it are destructive and irreversible, and they should be undertaken with care, because they could damage other things around the single container or the multitude of objects.

We consult the manual, don safety gear, check the gasoline in the chainsaw, pull the handle, and then fire it up. Each one of these actions was simple and harmless until they were combined. Combining all of these harmless manipulations results in a gesture that can do harm.

Let’s think about what we just did though. We had to do things in a certain order, we had to do each of them, and they were predetermined by something other than us.

Now, could we take our gesture chainsaw and destroy the Container and all of the objects inside? Yes. Could we also affect other containers and things in the vicinity? Yes. There is no undo; when it’s done, it’s done.

This is the physical realization of a Gesture.

But I digress…

Why is it important in interface design to distinguish between the two? To promote a better experience through expected interactions and results. Do you want your users concentrating on the most mundane of tasks? Let your users relax when they can. Do not make them concentrate on the details of where to move an object when they don’t have to.

Allowing your users to free up precious concentration allows your interface to become more complex. Right now, the main impediment to complexity in interface design is the user. By distinguishing between the two, we can begin to create more complex interfaces, because we demand the user’s focus only when it is truly needed.

I hope some of you are as excited as I am about this recent discovery.

This is spray-on liquid glass on a microfibre. The fissure was created deliberately to demonstrate its properties.

One of the biggest drawbacks to a publicly mounted touch screen is the transfer of germs from bystander to bystander. This is not so much a real boundary as a psychological one. In exit interviews from user experience tests, we consistently get feedback about the cleanliness of the surface and the other participants’ hands. We had to put antiseptic baby wipes near the Surface units to help alleviate this problem.

Cleaning touch screens is an odd process. Depending on the material used, be it rough or smooth, there are usually special instructions for cleaning. The typical monitor has a special non-glare coating, and the manufacturer recommends using soap and a damp cloth. Non-glare finishes on touch screens come with similar recommendations: do not use cleaners or antiseptic solutions, because they will damage the finish and possibly remove the protective coating.

What this amazing discovery gives us is something that can genuinely advance the market in the eyes of the public.

Here is the main article about the glass.

Spray-on liquid glass is transparent, non-toxic, and can protect virtually any surface against almost any damage from hazards such as water, UV radiation, dirt, heat, and bacterial infections. The coating is also flexible and breathable, which makes it suitable for use on an enormous array of products. (via physorg)

This is amazing. It gives us the ability to spray a coating on a touch screen and then clean it with antiseptic, germ-killing chemicals without the harmful side effect of destroying the surface or the experience. The coating is also so thin that it still allows touch to register on the unit.

Long live physics!

Updated on March 20: Todd Sieling was kind enough to put the Gesturcons into an OmniGraffle stencil, so those are included in the package as well. Thanks!
Updated the link on March 4: thanks to all who emailed me.
Current version v1.51: I have just made the second revision, corrected some spelling, and updated a definition.
I purposely left out some of the icons at first launch because I wanted to hear some feedback before people thought I had a fully-thought-out solution. So I have updated this post with the additional icons and an explanation of their purpose at the bottom. Enjoy!

One of the prevailing themes of my writing is the ability for everyone to find common ground when discussing interactions. I believe one of the keys to this is a common metaphor, OCGM (Objects, Containers, Gestures, and Manipulations), as well as a set of icons for use in design. When sketching out the user experience, it’s important to note the interactions. This is especially true in state diagrams, specs, and other interaction design documents. In my first installment of Gesturcons, I present to you Gesturcons: Touch Pack 1.0. These are being released under a Creative Commons license, and I hope that you all find some good use for them in your designs and experiences.

This is the first batch, for touch. I also have Spatial, Voice, and a few others in the works.

Updates

I’m using a simple graphic design language to represent the actions of a user. If we use OCGM to boil down each action, we get just a few basic actions from which everything else can be constructed. To combine these actions, I use only two different states: they either happen at the same time, or they happen consecutively.

After we have established exactly when the action takes place, we can then talk about the specific actions. I use only a few different types of basic actions as well. The only addition you see here is the Location Specific icon. It means that the exact placement of that particular input is predetermined by the system for the manipulation or the gesture to be successful. As an example, the red X at the top right of a window in Windows is a location-specific manipulation.

The Path icon is pretty straightforward. It means that the path the user has to take to accomplish the goal is specific and bound by guidelines. The guidelines are whatever you have devised, but the path itself is specific.

The Rotation icons are dual-purpose. They can mean an actual spin of the input or a spin of the action. This could be boiled down to a Path, because the input has to follow a certain pathway to achieve success. I find it easy to treat a rotation as a simple path, but others find it difficult, so I added it here for ease of use. Notice the use of Twin when there are dual simultaneous inputs.

To sum up the use of the Gesturcons, I present an example of how you could build your own gestures using this language. In this example, I demonstrate the visual identifiers that show a question-mark gesture.
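
If it helps to see the notation as data, here is a hypothetical TypeScript model of the language. The Gesturcons themselves are icons, not code, so this is only an analogy, and the waypoints in the question-mark example are made up.

```typescript
// Basic actions that individual icons stand for.
type BasicAction =
  | { type: "tap" }
  | { type: "hold" }
  | { type: "locationSpecific"; target: string }     // placement predetermined by the system
  | { type: "path"; waypoints: [number, number][] }  // a specific, bounded path
  | { type: "rotate"; degrees: number };             // a spin of the input or of the action

// The two ways of combining actions: at the same time, or one after another.
interface Combined {
  combine: "simultaneous" | "consecutive"; // "simultaneous" covers twin inputs
  actions: BasicAction[];
}

// Example: a question-mark gesture, sketched as a curved path followed by a
// tap for the dot. The coordinates are purely illustrative.
const questionMark: Combined = {
  combine: "consecutive",
  actions: [
    { type: "path", waypoints: [[0, 0], [10, -5], [5, -15], [0, -20]] },
    { type: "tap" },
  ],
};
```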

I’ve also updated the Zip file with all these new gestures. Enjoy and happy designing.


Here is the ZIP, which contains all the PNGs, the Illustrator file, and an EPS as well. These are being released under Creative Commons, which means you can use them internally as much as you want, but you cannot package them, redistribute them, or include them in any professional product.

License here
Creative Commons License
Gesturcons by Ron George is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Based on a work at blog.rongeorge.com. Permissions beyond the scope of this license may be available at http://blog.rongeorge.com/design/gesturcons/.

I get this question quite frequently, so I thought it best to address it in its own post. Here is the question.

Any advice for someone with tons of experience as a designer and developer, but stuck in upstate NY with a dearth of telecommute opportunities?

Answer

The first thing I would tell you to do is to watch and read everything from Daniel Pink you can get your hands on. If you are like me and just want the lazy route, at least watch this video of a talk he gave at TED (embedded below).

I’m a big fan of Daniel and the things he has to say. Basically, he sums up the threat of telecommuting and how innovation and decision making will solve many problems. Anything that can be done by telecommute WILL be done by that method. If it does not require decision making, it will be done by telecommute. It’s cheaper, easier, and faster. Many of the offshore development houses have an unlimited amount of resources they can throw at a project, so scalability is never an issue.

The key to success in this day and age is in design and decision making. Put yourself in a position to make decisions that directly affect the product’s success. Being a designer who can actually shape the product is the key to accelerating your career path. Make an impact and ensure its success.

Development is a great skill, but you only need to know enough to make good design decisions. The ability to work out a specific worker algorithm to accomplish a task is beyond the scope of your needs. If you are talking about web design, though, development knowledge matters much more. The ability to understand and incorporate web development into your designs will save you, your team, and the development team tons of wasted cycles.

Summary

Know enough development to propel your designs to the front of the pack. Concentrate on a specific part of design or interaction and own it. Become it.