Accordions suffer from a phenomenon I call 'Sinkhole', which in its worst cases can confuse and disorient the user. Sinkhole is, however, avoidable by using Closable Panels, a pattern that looks very similar but behaves differently to Accordions.
What is an Accordion?
The Accordion component is made up of two distinct elements: the panel title and the panel content. This pair of elements is repeated as needed to house different sets of information.
Only one instance of the panel content is viewable at a time, and a user switches between content views by invoking the panel title. It is this event of changing content views that brings about the Sinkhole phenomenon.
When an Accordion grows in height, the content below it is pushed downwards. Should an Accordion shrink, elements below shift upwards to occupy the newly vacated space. An Accordion can grow or shrink each time the user invokes one of its Panel Titles, and it is this movement that produces Sinkhole.
Microsoft's website uses a Mega Menu on large screen devices for its global navigation, and changes it into an Accordion for smaller screens. The small screen version of their navigation suffers from Sinkhole, exhibiting the undesirable traits I described, which is demonstrated below:
In the video the user first opened 'Products' and, after scrolling down the product list, chose to look at 'Downloads'. Tapping 'Downloads' caused 'Products' to collapse, which shrank the height of the Accordion substantially. At that moment (eight seconds in) the user can no longer see the navigation menu: it has disappeared above their viewport, which can cause confusion and disorientation.
Sinkhole, as the video above demonstrates, is quickly remedied by switching from an Accordion to Closable Panels. Panels are then independent of each other: a panel closes only when the user clicks/taps it again (a toggle), and open panels remain open should the user choose to open another. The opposite of what an Accordion does.
To close a panel, the user must first scroll up to its panel title so that they can click/tap it. The content below then flows upwards to fill the vacated space, and because that shift happens beneath the title they just invoked, the reflow does not move their viewport: they stay exactly where they were before invoking the panel title.
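The behavioural difference between the two patterns can be sketched as state logic. This is a minimal model with hypothetical function names, not a full widget implementation:

```typescript
// An Accordion allows at most one open panel; opening one closes the rest.
function accordionToggle(open: string | null, clicked: string): string | null {
  return open === clicked ? null : clicked; // opening a panel collapses the other
}

// Closable Panels are independent: each click simply toggles that panel,
// leaving every other panel exactly as it was.
function closablePanelToggle(open: Set<string>, clicked: string): Set<string> {
  const next = new Set(open);
  next.has(clicked) ? next.delete(clicked) : next.add(clicked);
  return next;
}
```

With Closable Panels, opening 'Downloads' leaves 'Products' open, so nothing above the user's viewport collapses and the page cannot sink.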
An Accordion's height can be fixed using CSS. When the content becomes too much for the Accordion to display, a scrollbar is provided so that all of it can be viewed. The trouble then is deciding how big to make it. Too small, and consuming lots of content becomes troublesome. Too large, and it takes up more page space than necessary, which contradicts its very reason for existing.
Inline scroll has its very own phenomenon too, which I have called 'Treadmilling'. This happens when you scroll down a page using a mouse wheel or trackpad gesture and your cursor passes over the inline scroll area: your scrolling then moves through that content instead of the page, so you keep moving while the main page stays still. This is a topic I plan to cover shortly, so I won't go into it any further here.
Considering this, I personally cannot find an argument against using Closable Panels instead of an Accordion, with or without inline scroll. Please let me know your thoughts using the comment section below!
Speed is an important factor and, if not accounted for properly, can make or break an application or service. Often the focus is on measuring system performance: identifying where the most time is being spent, and then optimising the offending area.
Complementary to this, there are several strategies you can employ to make an application feel faster than it actually is. What users see on their screen shapes their perception of how fast your application or service is.
It is well documented that users do not like slow websites, or software for that matter. In business terms that means:
Amazon: a 100ms delay results in a 1% sales loss (a potential $191m in lost revenue in 2008)
Google: a 500ms delay drops search traffic by 20%
Yahoo: a 400ms delay results in a 5–9% drop in full-page traffic
Bing: a 1s delay results in a 4% drop in revenue
AOL: the fastest 10% of users stay 50% longer than the slowest 10%
The cost of slower performance increases over time.
Next we will cover different techniques to make applications feel faster to the end user.
1. Progress Indicators
Users need reassurance that the system is dealing with their request, and that it has not frozen and is not waiting for information from them. Progress indicators serve this very purpose: they signify that the system is working, and set expectations as to when it will be complete and ready for use again.
The style of a progress indicator can influence the perception of speed, so much so that it can appear to be 11% faster. These results are achieved by applying ‘a backwards moving and decelerating ribbed’ treatment.
You can view a video of this research from New Scientist below [requires Flash player]:
2. Optimistic UI

You can make users more effective with their time, and give the impression that an application or service is fast, by not requiring them to wait for actions to take place. This frees them to move on to the next action they need to take.
Instagram, for example, begins uploading an image early, once the user has passed the filter stage, even though they have not yet added a caption or location, or even committed to posting the image.
Once the user has finished the upload flow, the image appears in their feed even if the upload is still in progress. It is just a local copy of the image, but to the user it appears as though it is already on Instagram's service.
Instagram reaps three benefits by making the image upload flow optimistic:
Starting the image upload early in the flow gives them a head start, meaning it will be available to other users sooner.
Showing the uploaded image in their feed, even though it may not be uploaded yet, gives the user task closure by appearing finished.
Users think their service is quick, even though typically uploading images, especially from a mobile, can be a slow process.
For the second point, the same applies to commenting on Instagram. Once you submit your comment, it appears beneath the picture immediately. Yet, that update is not actually instantaneous, it just appears as though it is.
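An optimistic update like Instagram's comment flow can be sketched as follows. The names are hypothetical and this simplified model ignores error handling, such as rolling the comment back if the server rejects it:

```typescript
type Comment = { text: string; pending: boolean };

// Add the comment to the local list immediately, marked as pending,
// so the user sees it beneath the picture without any wait.
function addOptimistically(comments: Comment[], text: string): Comment[] {
  return [...comments, { text, pending: true }];
}

// When the server later confirms, clear the pending flag on the match.
function confirmComment(comments: Comment[], text: string): Comment[] {
  return comments.map(c => (c.text === text ? { ...c, pending: false } : c));
}
```

The pending flag is an internal detail; the UI need not surface it, which is precisely why the update feels instantaneous.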
3. Distractions

A prime example is the simulation game Football Manager, which regularly involves long processes:
Hints and tips are displayed whilst users wait for results to be generated, or for games to be created. Messages are used to draw attention to new features, or to educate users so that they can become more effective at playing the game.
Another example is the web-based version of Balsamiq, a wireframing software, which shows the user quotes whilst it is loading:
When providing distractions to the user, it is important to bear in mind that they have a limited time to view the content shown to them. Once the process is completed, that information is taken away from their screen.
4. Progressive Rendering
When designing the pages of an application, the components you place onto a page belong to zones, such as a header or footer. Progressive rendering sends those zones to the user in a prioritised order.
The priority of the page zones is determined by the following factors:
Page placement: is it near the top, or 'below the fold*' of the screen?
Importance to the user
How much slower the page asset is to return
*I mention 'below the fold' here only because such content is not presently visible, and is therefore less important than content above the fold.
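The prioritisation described by these factors can be sketched as a simple sort. The zone names and weighting scheme below are hypothetical, chosen purely for illustration:

```typescript
type Zone = { name: string; aboveFold: boolean; importance: number; slow: boolean };

// Rank zones for delivery: above-the-fold content first, then by
// importance to the user; known-slow assets are deprioritised so
// they do not hold up the rest of the page.
function renderOrder(zones: Zone[]): string[] {
  return [...zones]
    .sort((a, b) =>
      (Number(b.aboveFold) - Number(a.aboveFold)) ||
      (b.importance - a.importance) ||
      (Number(a.slow) - Number(b.slow)))
    .map(z => z.name);
}
```

A server or client framework would then flush each zone to the user in this order rather than waiting for the whole page.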
Below is an example of how different zones of a page may be prioritised for an e-commerce website:
Page zones are numbered, giving an example of how progressive rendering prioritises different parts of the page over each other; a header, for example, is more important than a footer.
Progressive rendering gets key information back to the user quicker, rather than making them wait for the entire page to be ready. This was exemplified by a study conducted by UIE, which found:
About.com, rated slowest by our users, was actually the fastest site (average: 8 seconds). Amazon.com, rated as one of the fastest sites by users, was really the slowest (average: 36 seconds).
In this example, Amazon appeared quicker because it displayed information sooner, prioritising what the user could see first and what was most important to them.
5. UI Skeleton
This technique is closely related to what was just covered, progressive rendering, and is also known as a ‘ghost screen’. The first thing it displays to the user is the page framework; rather like a blank template. This ‘blank template’ is what distinguishes it from plain progressive rendering. Polar, a polling service, uses this very technique:
The left-most screen is made up of a set of placeholders. Those placeholders are then gradually filled once the content is made available to the user interface, as the screens to the right depict.
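Filling placeholders as content arrives can be sketched with a simple slot model. The names here are hypothetical, not Polar's actual implementation:

```typescript
// A slot with null content renders as a grey placeholder shape.
type Slot = { id: string; content: string | null };

// Swap a placeholder for real content once it arrives from the network.
function fill(slots: Slot[], id: string, content: string): Slot[] {
  return slots.map(s => (s.id === id ? { ...s, content } : s));
}

// Render the framework immediately; unfilled slots show placeholders,
// so the user sees the page's shape before any data has loaded.
function render(slots: Slot[]): string[] {
  return slots.map(s => s.content ?? `[placeholder:${s.id}]`);
}
```

The key point is that `render` can run before any content exists at all, which is what distinguishes a UI skeleton from plain progressive rendering.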
6. Acknowledging Clicks
When pressing the play button on a cassette player, the user was given very explicit feedback: the button sank lower and, thanks to its mechanical nature, a click sound was heard, all within a very short period of time.
In software design, the artefacts users interact with are virtual: they are pixels. Good design traits, such as acknowledging interaction, must be programmed into those entities.
Tabs on the web are a key example of where prompt visual feedback needs to come into play. Even if the tab's content is not yet present, ensure the click/tap is acknowledged within 50ms. You achieve this by styling the invoked tab as active and the previously active tab as inactive. Should the content not yet be available, display a progress indicator, or perhaps offer a distraction should the request be notoriously long.
You can also apply styling to buttons to indicate that something is happening. It is good practice to also trap multiple clicks of a button, so that the request it triggers is only registered once. This helps with speed too.
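Trapping repeat clicks can be sketched as a guard around the submit handler. This is a minimal sketch with hypothetical names; a production version would also handle timeouts and reset on error:

```typescript
// Wrap an async action so that repeat clicks are ignored while the
// first request is still in flight. The same inFlight flag can drive
// a "busy" style on the button to acknowledge the click.
function once<T>(action: () => Promise<T>): () => Promise<T | undefined> {
  let inFlight = false;
  return async () => {
    if (inFlight) return undefined; // swallow duplicate clicks
    inFlight = true;
    try {
      return await action();
    } finally {
      inFlight = false; // allow the button to be used again afterwards
    }
  };
}
```

Attaching the wrapped handler to the button means double-clicks and impatient re-taps cost nothing: only one request ever reaches the server.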
Two representatives from Tobii visited Avanade, where I work. They gave a presentation on the background of Tobii, the way their technologies are leveraged, and how developers can make use of their technology through the EyeX SDK.
Later that day we had a brainstorming session, and consequently developed a proof of concept using Tobii EyeX and Leap Motion to control a Spotify player.
Trying out EyeX
During the day we were able to try out Tobii's EyeX controller on Windows 8.1, using Modern UI apps such as Bing Maps, Windows Store, and Twitter. Since these kinds of apps are designed with touch in mind, the 'hit targets', so to speak, were much larger than in UIs designed for a mouse pointer, which benefitted Eye Interaction: larger hit targets allowed for improved accuracy when invoking UI elements such as tiles. Eye Interaction was facilitated by holding down a specially bound key, which allowed us to switch between modes such as zooming in and out, or panning across a map.
Designing for Eye Interaction
Tobii shared their principles on what to consider when designing for eye interaction:
Using these principles, we began a whiteboard session to explore how we use our eyes when using computers. We agreed that our eyes are “passive”, and that the clues our eyes give should supplement another method of interaction.
Whiteboard session on how to leverage eye tracking and motion detection.
We grounded this theory in a study by UIE, which looked at how users found flyout menus and rollovers, and discovered:
In keeping with our Natural User Interface (NUI) theme, we wanted to try and combine Tobii EyeX with another gestural technology. We were fortunate to have both Microsoft Kinect v2 and Leap Motion available to us, which gave us some interesting capabilities to try and combine.
The concept we developed that day was a Spotify controller using Tobii EyeX and Leap Motion. EyeX detected when the user was looking at the Spotify icon in the task bar, while Leap Motion provided an interface through which the user could give hand gestures to control Spotify. Gestures recognised by Leap Motion were not honoured unless the user was looking at the Spotify icon while performing the gesture. The proof of concept application supported the following gestures:
Poke to play or pause
Wave right to play next track
Wave left to play previous track
Circle clockwise to increase volume
Circle anticlockwise to decrease volume
Spotify controller prototype using Tobii EyeX and Leap Motion
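The gaze-gating behaviour at the heart of the prototype can be sketched as a predicate combining the two inputs. The names below are hypothetical; the real EyeX and Leap Motion SDKs expose richer, event-based APIs:

```typescript
type Gesture = "poke" | "waveRight" | "waveLeft" | "circleCW" | "circleCCW";

// Map each recognised gesture to a player command.
const commands: Record<Gesture, string> = {
  poke: "playPause",
  waveRight: "nextTrack",
  waveLeft: "previousTrack",
  circleCW: "volumeUp",
  circleCCW: "volumeDown",
};

// A gesture is only honoured while the user's gaze rests on the
// Spotify icon; otherwise it is discarded, preventing accidental
// input while the user's hands move for other reasons.
function handleGesture(gazeOnSpotifyIcon: boolean, gesture: Gesture): string | null {
  return gazeOnSpotifyIcon ? commands[gesture] : null;
}
```

Requiring both signals to agree is what makes the interaction deliberate: neither gaze nor gesture alone triggers anything.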
Why Leap Motion Instead of Kinect v2?
We chose Leap Motion over Kinect for our Spotify controller for the following reasons:
The user needs to be beside the computer, as the EyeX controller has a limited range of view.
Leap Motion has a much smaller desktop footprint, which suits close range interaction.
Leap Motion specialises in hand gestures, detecting each finger and thumb.
Having eye tracking and motion capture as separate pieces of hardware quickly clutters your workspace. Their being separate also makes the setup unsuitable for a laptop: it requires a desk, and moving from one place to another is quite cumbersome.
Many computers, like the one we used for the prototype, have media keys. Those keys allow you to change the volume, skip or return to a previous track, play, and pause. In terms of interaction speed, those media keys, although not formally measured, appeared to be considerably faster than using gestures.
Nevertheless, that day was a very thought-provoking experience. The capabilities on show were very impressive, and it will be interesting to see how they develop and are leveraged in the future.