August, 2014

Avoid ‘Sinkholes’ by Using Closable Panels Instead of Accordions

Accordions suffer from a phenomenon I have coined 'Sinkhole', which in its worst cases can cause both confusion and disorientation for the user. Sinkhole is, however, avoidable by using Closable Panels, a pattern that looks very similar to an Accordion but behaves differently.

What is an Accordion?

The Accordion component is made up of two distinct elements: the panel title and the panel content. This pair of elements is repeated as needed to house different sets of information.

Accordion example - Sinkhole

Only one instance of the panel content is viewable at a time, and a user switches between content views by invoking the panel title. It is this event of changing content views that brings about the Sinkhole phenomenon.

Sinkhole

When an Accordion grows in height, the content below it is pushed downwards. Should an Accordion's height shrink, the elements below shift upwards to occupy the newly vacated space. An Accordion can grow or shrink each time the user invokes one of its panel titles. This interaction brings about the phenomenon I have coined 'Sinkhole'.

Microsoft's website uses a Mega Menu on large screen devices for its global navigation, and changes it into an Accordion for smaller screens. The small screen version of their navigation suffers from Sinkhole, exhibiting the undesirable traits described above, as demonstrated below:

Video: https://www.youtube.com/watch?v=-809-1O0Vv8

In the video the user first opened 'Products', and after scrolling down the product list, they then chose to look at 'Downloads'. When 'Downloads' was tapped it caused 'Products' to collapse, which shrank the height of the Accordion substantially. At that moment (8 seconds in) the user can no longer see the navigation menu. It has disappeared from view, as it is now above their viewport, and this can cause confusion and disorientation.

Remedying Sinkhole

Sinkhole, as the video above demonstrated, is quickly remedied by switching from an Accordion to Closable Panels. With Closable Panels, each panel is independent of the others: a panel only closes when the user clicks/taps its title again (a toggle), and open panels remain open should the user choose to open another panel. This is the opposite of how an Accordion behaves.

In order to close a panel, the user must first scroll up to see the panel title, so that they are able to click/tap it. The page content then flows upwards towards the user's viewport, which means the viewport is not changed by the page reflow, keeping the user where they were before invoking the panel title.
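
To make the difference concrete, a minimal sketch of the two behaviours might look something like this (the .panel, .panel-title, and .panel-content class names are assumptions, not any particular library's markup):

```typescript
// Assumed markup: each ".panel" contains a ".panel-title" button followed by
// a ".panel-content" element.
const panels = Array.from(document.querySelectorAll<HTMLElement>('.panel'));

function setOpen(panel: HTMLElement, open: boolean): void {
  const content = panel.querySelector<HTMLElement>('.panel-content');
  if (content) content.hidden = !open;
  panel.classList.toggle('open', open);
}

panels.forEach(panel => {
  const title = panel.querySelector<HTMLElement>('.panel-title');
  title?.addEventListener('click', () => {
    const isOpen = panel.classList.contains('open');

    // Closable Panels: toggle only the invoked panel; other panels are untouched.
    setOpen(panel, !isOpen);

    // Accordion behaviour, for contrast: closing every other panel is what
    // collapses the component's height and causes the Sinkhole effect.
    // panels.filter(p => p !== panel).forEach(p => setOpen(p, false));
  });
});
```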

Inline Scroll

An Accordion's height can be fixed using CSS. When the Accordion's content becomes too much for it to display, a scrollbar is provided so that all of the content can be viewed. The trouble then is deciding how big to make it. Too small when displaying lots of content, and that content becomes troublesome to consume. Too large, and it starts taking up more page space than necessary, which contradicts its very purpose of existence.

Inline scroll has its very own phenomenon too, which I have called 'Treadmilling'. This happens when you scroll down a page using a mouse-wheel or trackpad gesture and your cursor passes over the area where the inline scroll is. Your scrolling then moves through that content instead of the page, so the inner area moves while the main page remains still. This is a topic I plan to cover shortly, so I won't go into it any further here.

Considering this, I personally cannot find an argument against using Closable Panels instead of an Accordion, with or without inline scroll. Please let me know your thoughts using the comment section below!

July, 2014

6 Tips to Make Applications Feel Faster

Speed is an important factor and, if not accounted for properly, can make or break an application or service. Often the focus is on measuring system performance: identifying where the most time is being spent, and then optimising the offending area.

Complementary to this, there are several strategies you can employ to make an application feel faster than it actually is. What the user sees on their screen shapes their perception of how fast your application or service is.

Why Faster?

It is well documented that users do not like slow websites, or software for that matter. In business terms that means:

  • Amazon: 100ms delay results in 1% sales loss
    (potential $191m lost in revenue in 2008)
  • Google: 500ms delay drops search traffic by 20%
    The cost of slower performance increases over time
  • Yahoo: 400ms delay results in 5-9% drop in full-page traffic
  • Bing: 1s delay results in 4% drop in revenue
  • AOL: Fastest 10% of users stay 50% longer than slowest 10%

- Stats taken from How to Make Apps Feel Faster by Luke Wroblewski

Next we will cover different techniques to make applications feel faster to the end user.

1. Progress Indicators

Users need reassurance that the system is dealing with their request, and that it has not frozen or is waiting for information from them. Progress indicators are used for this very purpose, to signify it is working, and to set expectations as to when it will be complete and ready for use again.

The style of a progress indicator can influence the perception of speed, so much so that it can appear to be 11% faster. These results are achieved by applying ‘a backwards moving and decelerating ribbed’ treatment.

You can view a video of this research from New Scientist below [requires Flash player]:

You can also read the paper about this research.

2. Optimistically Perform Actions

You can allow users to be more effective with their time, and give the impression that an application or service is fast, by not requiring them to wait for actions to complete. This frees them to move on to the next action they need to take.

Instagram, for example, begins uploading an image early, once the user has passed the filter stage, even though they have not yet added a caption or location, or even committed to posting the image.

Instagram performing actions optimistically.
Taken from Secrets to Lightning Fast Mobile Design by Mike Krieger

Once the user has finished the upload flow, the image appears in the user’s feed, even if the upload is still in progress. It just happens to be a local copy of the image. But to them it appears as though it's already on Instagram's service.

Instagram reaps three benefits by making the image upload flow optimistic:

  1. Starting the image upload early in the flow gives them a head start, meaning it will be available to other users sooner.
  2. Showing the uploaded image in the user's feed, even though it may not be uploaded yet, gives the user task closure by appearing finished.
  3. Users think the service is quick, even though uploading images, especially from a mobile, is typically a slow process.

For the second point, the same applies to commenting on Instagram. Once you submit your comment, it appears beneath the picture immediately. Yet that update is not actually instantaneous; it just appears as though it is.
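
As a rough sketch of that optimistic flow (the endpoint and markup here are invented for illustration, not Instagram's actual implementation), the comment is rendered first and only then sent to the server:

```typescript
interface Comment { author: string; text: string; }

async function submitComment(listEl: HTMLElement, comment: Comment): Promise<void> {
  // 1. Render the comment immediately so the update feels instantaneous.
  const item = document.createElement('li');
  item.textContent = `${comment.author}: ${comment.text}`;
  item.classList.add('pending');
  listEl.appendChild(item);

  try {
    // 2. Send it to the server in the background.
    const response = await fetch('/api/comments', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(comment),
    });
    if (!response.ok) throw new Error(`Server returned ${response.status}`);
    item.classList.remove('pending');  // 3. Quietly confirm on success.
  } catch {
    item.remove();                     // 4. Roll back (and ideally explain) on failure.
  }
}
```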

Mike Krieger, co-founder of Instagram, goes into more detail about how Instagram makes their app feel ‘lightning fast’ in his presentation.

3. Distract

"A watched pot never boils"

Sometimes it is unavoidable: an application has to become temporarily unavailable to the user whilst it works on their request.

One of Bruce Tognazzini's principles for interaction design states:

Offer engaging text messages to [keep] users informed and entertained while they are waiting for long processes, such as server saves, to be completed.

- Latency Reduction in First Principles of Interaction Design by Bruce “Tog” Tognazzini

A prime example of this is the simulation game Football Manager, which regularly involves long processes:

Football Manager - Loading Distraction

Hints and tips are displayed whilst users wait for results to be generated or for games to be created. The messages are used to bring attention to new features, or to educate users so that they can become more effective at playing the game.

Another example is the web-based version of Balsamiq, a wireframing tool, which shows the user quotes whilst it is loading:

Balsamiq - Loading Distraction

When providing distractions to the user, it is important to bear in mind that they have a limited time to view the content shown to them. Once the process is completed, that information is taken away from their screen.

4. Progressive Rendering

When designing the pages of an application, the components you place onto a page belong to zones, such as a header or footer. Progressive rendering sends those zones to the user in a prioritised order.

The priority of the page zones is determined by the following factors:

  1. Page placement: is it near the top, or is it 'below the fold'* of the screen?
  2. Importance to the user
  3. How much slower is the page asset to return?

*I mention 'below the fold' here only because such content is not presently visible, and is therefore less important than content above the fold.

Below is an example of how different zones of a page may be prioritised for an e-commerce website:

Wireframe of a fictitious e-commerce website. Page zones are numbered, giving an example of how progressive rendering prioritises different parts of the page over each other; for example, a header is more important than a footer.

Progressive rendering gets key information back to the user more quickly, rather than making them wait for the entire page to be ready. This was exemplified by a study conducted by UIE, which found:

About.com, rated slowest by our users, was actually the fastest site (average: 8 seconds). Amazon.com, rated as one of the fastest sites by users, was really the slowest (average: 36 seconds).

- The Truth About Download Time by Christine Perfetti and Lori Landesman

In this example, Amazon appeared quicker because information was displayed sooner: it prioritised what the user could see first, and what was most important to them.
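
One crude way to sketch the idea is to request each zone independently, in priority order, and insert it as soon as it arrives (the zone names and endpoints below are invented for illustration):

```typescript
// Hypothetical zone list, ordered by priority; the endpoints and element ids
// are invented for illustration.
const zones = [
  { id: 'header', url: '/fragments/header' },
  { id: 'product-detail', url: '/fragments/product-detail' },
  { id: 'recommendations', url: '/fragments/recommendations' },
  { id: 'footer', url: '/fragments/footer' },
];

// Requests are started in priority order, and each zone is rendered the moment
// its markup arrives, rather than waiting for the whole page to be ready.
zones.forEach(async zone => {
  const response = await fetch(zone.url);
  const html = await response.text();
  const target = document.getElementById(zone.id);
  if (target) target.innerHTML = html;
});
```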

5. UI Skeleton

This technique is closely related to progressive rendering, which was just covered, and is also known as a 'ghost screen'. The first thing it displays to the user is the page framework, rather like a blank template. This 'blank template' is what distinguishes it from plain progressive rendering. Polar, a polling service, uses this very technique:

Parts of the template are filled in over time, once the information is made available by the server.
Image modified from Mobile Design Details: Avoid The Spinner by Luke Wroblewski.

The left-most screen is made up of a set of placeholders. Those placeholders are then gradually filled once the content is made available to the user interface, as the screens to the right depict.
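
A minimal sketch of the pattern could look like the following, assuming each placeholder is marked up with a data-field attribute and a skeleton class (both assumptions made for this example):

```typescript
// Assumed markup: each placeholder carries a data-field attribute naming the
// piece of content it is waiting for, and a "skeleton" class for the grey boxes.
async function fillSkeleton(container: HTMLElement, url: string): Promise<void> {
  const response = await fetch(url);
  const data: Record<string, string> = await response.json();

  container.querySelectorAll<HTMLElement>('[data-field]').forEach(placeholder => {
    const field = placeholder.dataset.field;
    if (field && field in data) {
      placeholder.textContent = data[field];     // swap the placeholder for content
      placeholder.classList.remove('skeleton');  // and drop the ghost styling
    }
  });
}
```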

6. Acknowledging Clicks

When pressing the play button on a cassette player, very explicit feedback was given to the user: the button sank lower, and a click was heard. Because the mechanism was mechanical, this all happened within a very short period of time.

In software design, the artefacts users interact with are virtual; they are pixels. Good design traits, such as acknowledging interaction, must be programmed into those entities.

Acknowledge all button clicks by visual or aural feedback within 50 milliseconds.
- First Principles of Interaction Design by Bruce Tognazzini

Tabs on the web are a key example of where prompt visual feedback needs to come into play. Even if the tab's content is not yet present, ensure the click/tap is acknowledged within 50ms. You achieve this by styling the invoked tab as active and the previously selected tab as inactive. If the content is not yet available, display a progress indicator, or perhaps offer a distraction if the request is notoriously long.

You can also apply styling to buttons to indicate something is happening. A good design practice is also to trap repeat clicks of the button, so that the request it triggers only gets registered once. This will help with speed too.
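
As a rough sketch of both ideas (the 'active' and 'working' class conventions and the loadTabContent helper are assumptions, not a particular framework's API):

```typescript
// Acknowledge the tap instantly, then fetch the content.
declare function loadTabContent(tabId: string): Promise<string>;

function wireTabs(tabs: HTMLElement[], panel: HTMLElement): void {
  tabs.forEach(tab => {
    tab.addEventListener('click', async () => {
      // Visual acknowledgement first, well within the 50ms budget.
      tabs.forEach(t => t.classList.toggle('active', t === tab));
      panel.setAttribute('aria-busy', 'true');   // styled as a progress indicator

      // Only then fetch the content, which may take noticeably longer.
      panel.innerHTML = await loadTabContent(tab.id);
      panel.removeAttribute('aria-busy');
    });
  });
}

// Trapping repeat clicks on a button so the request only gets registered once.
function wireSubmit(button: HTMLButtonElement, submit: () => Promise<void>): void {
  button.addEventListener('click', async () => {
    if (button.disabled) return;
    button.disabled = true;                      // acknowledge and block re-clicks
    button.classList.add('working');
    try { await submit(); }
    finally {
      button.disabled = false;
      button.classList.remove('working');
    }
  });
}
```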

Jakob Nielsen has written about the three different response time limits, which is well worth a read.

February, 2014

Spotify Controller Using Leap Motion and Tobii EyeX

Two representatives from Tobii visited Avanade, where I work. They gave a presentation on the background of Tobii, the way their technologies are leveraged, and how developers can make use of their technology through the EyeX SDK.

Later that day we had a brainstorming session, and consequently developed a proof of concept using Tobii EyeX and Leap Motion to control a Spotify player.

Trying out EyeX

During the day we were able to try out Tobii's EyeX controller on Windows 8.1. We used Modern UI apps such as Bing Maps, Windows Store, and Twitter. Since these kinds of apps have been designed with touch in mind, eye interaction benefitted: the 'hit targets', so to speak, were much larger than in UIs designed for a mouse pointer. Larger hit targets allowed for improved accuracy when invoking UI elements such as tiles. Eye interaction was facilitated by holding down a key with a special binding, which allowed us to switch between modes, such as zooming in or out, or panning across a map.

Designing for Eye Interaction

Tobii shared their principles on what to consider when designing for eye interaction:

  • Eyes are made for looking around
  • Eyes and hands work well together
  • Eyes are curious
  • Eye movements provide information

Read more about these principles over at Tobii's blog.

Using these principles, we began a whiteboard session to explore how we use our eyes when using computers. We agreed that our eyes are “passive”, and that the clues our eyes give should supplement another method of interaction.

NUI - Whiteboard session

Whiteboard session on how to leverage eye tracking and motion detection.

We grounded this theory in a study by UIE, which looked at how users interact with flyout menus and rollovers, and discovered:

“We found users follow a pattern: they decide what they are going to click on before they move the mouse.”
- Users Decide First; Move Second by Erik Ojakaar, UIE

In keeping with our Natural User Interface (NUI) theme, we wanted to try and combine Tobii EyeX with another gestural technology. We were fortunate to have both Microsoft Kinect v2 and Leap Motion available to us, which gave us some interesting capabilities to try and combine.

Prototype

The concept we developed that day was a Spotify controller using Tobii EyeX and Leap Motion. EyeX detected when the user was looking at the Spotify icon in the task bar. Leap Motion provided an interface through which the user could give hand gestures to control Spotify. Gestures recognised by Leap Motion were not honoured unless the user was looking at the Spotify icon at the same time as performing the gesture. The proof of concept application supported the following gestures:

  • Poke to play or pause
  • Wave right to play next track
  • Wave left to play previous track
  • Circle clockwise to increase volume
  • Circle anticlockwise to decrease volume
Spotify controller prototype using Tobii EyeX and Leap Motion
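
The gating logic itself is straightforward. The sketch below captures the idea using hypothetical wrappers; the EyeTracker, GestureSource, and SpotifyRemote interfaces stand in for the EyeX, Leap Motion, and Spotify integrations and are not the real SDK APIs:

```typescript
// Hypothetical wrapper types; the real EyeX and Leap Motion SDKs expose their
// own APIs, which are not reproduced here.
type Gesture = 'poke' | 'waveRight' | 'waveLeft' | 'circleClockwise' | 'circleAnticlockwise';

interface EyeTracker { isLookingAtSpotifyIcon(): boolean; }
interface GestureSource { onGesture(handler: (g: Gesture) => void): void; }
interface SpotifyRemote {
  playPause(): void;
  nextTrack(): void;
  previousTrack(): void;
  changeVolume(delta: number): void;
}

function wireController(eyes: EyeTracker, hands: GestureSource, spotify: SpotifyRemote): void {
  hands.onGesture(gesture => {
    // A gesture is only honoured while the user is looking at the Spotify icon.
    if (!eyes.isLookingAtSpotifyIcon()) return;

    switch (gesture) {
      case 'poke':                spotify.playPause();      break;
      case 'waveRight':           spotify.nextTrack();      break;
      case 'waveLeft':            spotify.previousTrack();  break;
      case 'circleClockwise':     spotify.changeVolume(+5); break;
      case 'circleAnticlockwise': spotify.changeVolume(-5); break;
    }
  });
}
```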

Why Leap Motion Instead of Kinect v2?

We chose Leap Motion over Kinect for our Spotify controller for the following reasons:

  • The user needs to be beside the computer, as the EyeX controller has a limited range of view.
  • Leap Motion has a much smaller desktop footprint, which suits close range interaction.
  • Leap Motion specialises in hand gestures, detecting each finger and thumb.

Practicality

Having the eye tracking and motion capture capabilities as separate pieces of hardware quickly clutters your workspace. Because they are separate, the setup is not really suitable for a laptop: it requires a desk, and it makes moving from one place to another quite cumbersome.

Many computers, like the one we used for the prototype, have media keys. Those keys allow you to change the volume, skip to the next or return to the previous track, play, and pause. In terms of interaction speed, those media keys, although we did not measure this formally, appeared to be considerably faster than using gestures.

Nevertheless, that day was a very thought-provoking experience. The capabilities on show were very impressive, and it will be interesting to see how they develop and are leveraged in the future.

February, 2013

Object Oriented Approach to Responsive Design & Localisation

This article describes a method to decouple responsive design and localisation rules from numerous page instances, and instead apply them to a small set of screen classifications. By doing this, it should provide you with the following benefits:

  1. A predictable, consistent UI layout;
  2. A UI layout ready for responsive design;
  3. A UI layout ready for optimisation for different markets, such as transitioning an LTR page to RTL;
  4. Less documentation needed to explain how the page layout should change for a particular context (market, device), allowing for a 'leaner' delivery process;
  5. Greater code reuse and improved maintainability.

Object Oriented Analysis and Design principles are used in this post, which should resonate with the developers implementing the framework.

Method

Each product screen should be looked at in terms of the following:

  • What is shared across the majority of screens;
  • How does the layout differ across screens;
  • What is the relationship between areas of the screen;

Items often found on a page, such as a composite like a Footer, are not very interesting, as the majority of pages, if not all, have one. Pieces like these should be ignored, as they get defined once within the Abstract Screen (discussed later).

What is most interesting are the last two bullets. Whilst ignoring the common page assets, look at the screens for what makes them unique. Focusing on this uniqueness, start classifying them based on the kind of relationships they have with each other.

As you continue to move through the product's screens, you should begin to notice that some pages have things in common with regard to layout and page-zone relationships. This should be an iterative process, and on each pass you should try to normalise the classifications you have found. A tell-tale sign of a weak classification is that it has very few members.

The resulting set of screens you identify then makes up your set of Meta Screens, which contain the zones against which responsive and localisation rules are created.

Abstract Screen

All application screens start from the abstract screen. It is heavily generalised, specifying only the essential structure that each page must possess; for a web application, this is the header, some kind of content area, and a footer.

The abstract screen is used as a foundation for all product screen types.

The "Abstract Screen" allows rules shared by all screen instances to be specified in one place, and then inherited by all subsequent screens. The content of the screen zones does change based on factors such as component state and device medium; this article focuses only on layout, but component variance is a topic I plan to blog about at a later date.
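
To make the object oriented analogy concrete, a rough sketch of the hierarchy (with names invented for illustration) might look like this:

```typescript
// Illustrative only: the names are invented to show the inheritance relationship.
interface Zone { name: string; }

// The Abstract Screen specifies the structure every page must possess.
abstract class AbstractScreen {
  readonly header: Zone = { name: 'header' };
  readonly footer: Zone = { name: 'footer' };

  // Each Meta Screen describes only how its content area is divided.
  protected abstract contentZones(): Zone[];

  // Shared rules (layout, localisation) can be written once against this list.
  zones(): Zone[] {
    return [this.header, ...this.contentZones(), this.footer];
  }
}

// One Meta Screen: the Master / Detail classification discussed below.
class MasterDetailScreen extends AbstractScreen {
  protected contentZones(): Zone[] {
    return [{ name: 'master' }, { name: 'detail' }];
  }
}
```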

Meta Screen

The sole focus of "Meta Screens" is to define content area variance; we no longer need to focus on the Header or Footer zones, as that is handled by the Abstract Screen. What is within the content area is the reason the user is looking at that screen in the first place.

When viewing a web page on a large screen device, such as a desktop PC, it is common to see the content area divided into two or more sections. This makes full use of the available screen real estate. These sections typically have a relationship, but those relationships are not always the same. Below is an example of SkyDrive, which uses a very common screen layout:

SkyDrive has a Master / Detail layout

In the annotated screenshot above, the content area has been divided into a Master and a Detail zone. Choosing an option from the Master menu, such as "Shared", will make the Detail zone update to show all shared files.

Mobile

Small screen devices like a smartphone do not have the luxury of a large amount of screen real estate, so the desktop screen layout will not suit. However, now that we have defined zones for a screen layout, we can create rules for other devices.

Master detail screen layout in the context of a mobile device.

In the SkyDrive example, one would expect that having the Master displayed before the Detail would be most helpful; it tells the user where they are, and where the files listed belong. In this approach, the component responsible for listing folders would need to change. But what this layout does give is clear signposting as to where the user is, and a quick and direct means to change that location.

Localisation

The layout used by SkyDrive is very much driven towards a left-to-right (LTR) language such as English. Right-to-left (RTL) languages, such as Arabic, often require a different layout.

BBC Arabic (top) layout compared with BBC UK (bottom).

Both of the above examples are of the BBC's news site, and on close inspection they are very similar. Both have a logo, main heading, navigation, and news ticker, to name a few. What differs is their placement: for the most part, it is a mirror image.

The Meta Screens you identified earlier can also be utilised for these kinds of layout changes. Just like the responsive rules you write for their zones, the same can be done for a different writing system.

Master Detail screen layout for a right to left language.

Relating this back to SkyDrive, when a user who is used to an RTL writing system looks at the application using the above layout, they will first see what folder they are in, and then the files belonging to it. Please note I have not tested this hypothesis, and it ought to be validated.

In terms of implementation, CSS3 Flexbox looks to be the ideal solution. However, its browser support is still quite sketchy at the time of writing.
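
As a rough sketch of how Flexbox could express the Master / Detail rules (the .master and .detail class names are assumptions), the desktop, mobile, and RTL variants can all be driven from the same zones:

```typescript
// Sketch only: the .master and .detail class names are assumptions.
// Flexbox lets the same markup satisfy the desktop, mobile, and RTL variants.
function applyMasterDetailLayout(content: HTMLElement, dir: 'ltr' | 'rtl'): void {
  const narrow = window.matchMedia('(max-width: 600px)').matches;

  content.style.display = 'flex';
  // Mobile: stack Master above Detail. Desktop: place them side by side,
  // mirrored for right-to-left writing systems.
  content.style.flexDirection = narrow ? 'column' : dir === 'rtl' ? 'row-reverse' : 'row';

  const master = content.querySelector<HTMLElement>('.master');
  const detail = content.querySelector<HTMLElement>('.detail');
  if (master) master.style.flex = narrow ? '0 0 auto' : '0 0 30%';
  if (detail) detail.style.flex = '1 1 auto';
}
```

In practice these rules would more likely live in a stylesheet as media queries and a direction-specific selector; the script above simply makes the mapping from Meta Screen zones to flex properties explicit.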

This method is still very much in its infancy. I have not found anything quite like this yet within the User Interface domain. Please feel free to contact me either through the comments feature of this post, or via email.

October, 2012

Improving the Readability of a Restaurant Menu

To mark my recent birthday, my wife and I went to Vassa Eggen; great food and good service, and I would absolutely recommend it. Before going to a restaurant I have a habit of checking the menu ahead of time. When doing so on this particular occasion, I found myself overriding their default style to make the menu easier to read. Below is an example of their unaltered menu:

Screenshot of Vassa Eggen's menu.

They are using a receipt-based skeuomorphism, a common trend at present, particularly in Apple's iOS 6 products.

There are various sites on the web that list readability guidelines. This post describes the following changes based on those guidelines:

  • Making the text left aligned by removing the center alignment declaration;
  • Removing 'Courier', leaving Arial to be used;
  • Changing the line-height to 1.5;
  • Increasing the font size to 10 point.
These changes result in the following concept:

An improved menu? Illustrating the listed changes.

The proposed changes took a matter of seconds to implement, which goes to show that a few quick changes can really make a difference.
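
For reference, the overrides themselves are tiny. A sketch of applying them from the browser console might look like this (the .menu selector is an assumption about the menu markup):

```typescript
// Quick-and-dirty override sketch; ".menu" is an assumed selector for the menu text.
document.querySelectorAll<HTMLElement>('.menu').forEach(menu => {
  menu.style.textAlign = 'left';                // drop the centre alignment
  menu.style.fontFamily = 'Arial, sans-serif';  // remove Courier, leaving Arial
  menu.style.lineHeight = '1.5';                // up from 1.4
  menu.style.fontSize = '10pt';                 // up from 12px (roughly 9pt)
});
```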

Alignment

Center alignment does not allow for easy reading: it creates an inconsistent starting place for the reader, due to its "ragged" edges. In the example below, the orange rectangles represent the position the reader's eye must return to before starting each line.

Contrasting center and left alignment.

It is quickly apparent that left alignment provides a uniform vertical position for the eye to go back to. For right-to-left languages such as Arabic, right alignment would be best.

The change of alignment does not impact the skeuomorphic approach they are using; after checking some receipts I had, I found they all appear to be left aligned.

Font

Courier, a serif font, is declared first in the stylesheet, followed by Arial, a sans-serif font. For a long time, sans-serif has been the recommended font type for web content, as it renders more clearly on the majority of (low-definition) screens.

Below compares Courier and Arial:

Comparing Courier (top) and Arial (bottom).

Granted, the improvement here is not as dramatic as the change of alignment, but if you look at the first two characters, 'Ox', Arial does seem to be clearer, at least on my screen.

Serif fonts better suit large font sizes, such as headings. The font size for the menu is 12px (roughly 9pt), and at that size sans-serif really should be used, although it matters most for content with long passages of text.

Speaking of font size, at least 10pt should be used, which roughly equals 13px; this assumes senior citizens are not the target audience, otherwise at least 12pt is needed. I made the 10pt call based on the kind of clientele present when I was there; however, that assumption would need to be verified more thoroughly.

As a side note, px should not be used for font sizes in CSS. Instead use percent or em, as these allow the user to resize the text.

Line Height

When lines of text are placed too closely together, it becomes difficult to focus on a given line. Conversely, too far apart and they will look disassociated, like separate statements. A line height of 1.4 is currently being used, with the recommendation from classic typography books being 1.5, and the average being 1.48.

1.4 (left) compared to 1.5 (right).

For this particular situation, line height becomes very valuable. Each item on the menu is first displayed in Swedish and then followed by its English translation. A larger line height makes each line easier to focus on, whilst still making them appear related.