On 20 January 2014 I found the time to attend a public lecture in London that was all about usability and user error. The lecture was presented by Tony Mann, from the University of Greenwich. The event was in a group of buildings just down the street from Chancery Lane underground station. Since I was keen on this topic, I arrived twenty minutes early, only to find that the Gresham College lecture theatre was already full to capacity. User error (and interaction design), it seems, is a very popular subject!
One phrase that I made a note of was ‘we blame ourselves if we cannot work something’: we can quickly acquire feelings of embarrassment and incompetence when we do things wrong or make mistakes. Tony gave us the example that we can become very confused by the simplest of devices, such as doors.
Doors that are well designed should tell us how they should be used: we rely on visual cues to tell us whether they should be pushed or pulled (which is called affordance), and if we see a handle, then we regularly assume that the door should be pulled (which is our application of the design rule of ‘consistency’). During this part of Tony’s talk, I could see him drawing heavily on Donald Norman’s book ‘The Psychology of Everyday Things’ (Norman’s work is also featured within the Open University module, M364 Fundamentals of Interaction Design).
I made a note of Tony saying that when we interact with systems we take information from many different sources, not just the most obvious. An interesting example that was given was the Kegworth air disaster (Wikipedia), which occurred because the pilots had shut down the wrong engine, drawing on experience gained from a different but similar aircraft.
Another really interesting example was the case of a pharmacy system that was designed in such a way that drug names were limited to 24 characters in length and no more. This created a situation where different drugs (which had very similar names, but had different effects) could be prescribed by a doctor in combinations which could potentially cause fatal harm to patients. Both of these examples connect perfectly to the safety argument for good interaction design. Another argument (that is used in M364) is an economic one, i.e. poor interaction design costs users and businesses both time and money.
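To see how such a limit can cause trouble, here is a minimal sketch of the truncation problem. The drug names below are invented purely for illustration (the lecture did not name the actual drugs), but the mechanism is the same: once names are cut to a fixed field length, two distinct products can become indistinguishable on screen.

```python
# Hypothetical illustration of a fixed-length name field, as in the
# pharmacy system described in the lecture. Drug names are invented.
MAX_LEN = 24

def stored_name(drug_name: str) -> str:
    """Truncate a drug name to fit the system's 24-character field."""
    return drug_name[:MAX_LEN]

a = "exampledrug slow release tablets 10 mg"
b = "exampledrug slow release capsules 20 mg"

print(stored_name(a))                    # exampledrug slow release
print(stored_name(b))                    # exampledrug slow release
print(stored_name(a) == stored_name(b))  # True: the distinction is lost
```

The stored names are identical, so anyone reading the screen has no way to tell which product was meant. A safer design would either reject names that collide after truncation or display an unambiguous identifier alongside the shortened name.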
Tony touched upon further issues that are also covered in M364. He said, ‘we interact best [with a system] when we have a helpful mental model of a system’. Our mental models determine our behaviour; humans generally have good intuition when interacting with physical objects, and it is hard to discard the mental models that we form.
Tony argued that it is the job of an interaction designer to help us to create a useful mental model of how a system works, and if there’s a conflict (between what a design tells us and how we think something may work), we can very easily get into trouble very quickly. One way to help with this is to make use of metaphor. Tony Mann: ‘a strategy is to show something that we understand’, such as a desktop metaphor or a file metaphor on a computer. I’ve also paraphrased the following interesting idea, that a ‘designer needs to both think like a computer and think like a user’.
One point was clearly emphasised: we can easily choose not to report mistakes. This means that designers might not always receive important feedback from their users. Users may too easily think, ‘that’s just a stupid error that I’ve made…’ Good design, it was argued, prevents errors (which is another important point that is addressed in M364). Tony also introduced the notion of resilience strategies: things that we do to help us avoid making mistakes, such as hanging our scarf in a visible place so we remember to take it home after we’ve been somewhere.
The three concluding points were: we’re always too ready to blame ourselves when we make a blunder, that we don’t help designers as often as we ought to, and that good interaction design is difficult (because we need to consider different perspectives).
Tony’s talk touched upon wider (and related) subjects, such as the characteristics of human error and the ways that systems could be designed to minimise the risk of mistakes arising. If I were to be very mean and offer a criticism, it would be that there was perhaps more of an opportunity to talk about the ‘human’ side of error – but here we begin to step into the domain of cognitive psychology (as well as engineering and mathematics). This said, his talk was a useful and concise introduction to the importance of good interaction design.