Gresham College: Designing IT to make healthcare safer
Tuesday, 18 Feb 2014, 17:13
On 11 February, I was back at the Museum of London. This time, I wasn't there to see juggling mathematicians (Gresham College) talking about theoretical anti-balls. Instead, I was there for a lecture about the usability and design of medical devices by Harold Thimbleby, who I understand is from Swansea University.
Before the lecture started, we were subjected to a looped video of a car crash test in which a modern car from 2009 was crashed into a car built in the 1960s. The result (and later point) was obvious: continual testing and development makes a difference, and modern cars are substantially safer than older ones. Despite improvements like these, Harold made a really interesting point. He said, 'if bad design was a disease, it would be our 3rd biggest killer'.
Computers are everywhere in healthcare, so perhaps introducing more computers (or mobile devices) might help? This might well be the case, but there is also the risk that hospital staff end up spending more time trying to get the technology to do the right thing than dealing with more important patient issues. There is an underlying question of whether a technology is appropriate at all.
This blog post is drawn directly from the notes I made during the lecture. If you're interested, a link to the transcript of the talk can be found at the end.
Infusion pumps
Harold showed us pictures of a series of infusion pumps. I didn't know what an infusion pump was. Apparently it's a device that is a bit like an intravenous drip, except that you program it to dispense a fluid (or drug) into the bloodstream at a certain rate. I was very surprised by the pictures: every infusion pump looked different from the others, and the differences were quite shocking. They had different screens and displays, different sizes, and different keypad layouts. It was clear that there was little in the way of internal or external consistency. Harold made an important point: they were 'not designed to be readable, they were designed to be cheap' (please forgive my paraphrasing here).
We were regaled with further examples of interaction design terror. On one device, a decimal point button was placed on an arrow key; there was clearly no appropriate mapping between the button and its intended task. Pushing a help button gave the user little in the way of actual help.
We were told of a human factors study in which six nurses were asked to use an infusion pump over a period of two hours (I think I've noted this down correctly). The conclusion was that all of the nurses were confused at some point: sixty percent needed hints on how to use the device, sixty percent were confused by how the decimal point worked (in this particular example) and, most strikingly, sixty percent entered the wrong settings.
We're not talking about trivial mistakes here; we're talking about mistakes where users are fundamentally confused by the appearance and location of a decimal point. Since these devices dispense drugs, small errors can become catastrophic, even life-threatening.
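To make this concrete, here is a minimal sketch of the kind of strict number entry that Harold's point implies (my own illustration in Python, not code from the lecture): an entry routine that rejects ambiguous key sequences outright, rather than silently guessing what the user meant.

```python
def parse_dose(keys: str) -> float:
    """Parse a keyed-in dose strictly, refusing anything ambiguous.

    Accepts digits and at most one decimal point; anything else
    (two decimal points, a bare or trailing point, an empty entry)
    is rejected with an explicit error rather than a silent guess.
    """
    if not keys:
        raise ValueError("empty entry: no dose keyed in")
    if keys.count(".") > 1:
        raise ValueError(f"ambiguous entry {keys!r}: more than one decimal point")
    if keys.startswith(".") or keys.endswith("."):
        raise ValueError(f"ambiguous entry {keys!r}: decimal point needs digits on both sides")
    if not keys.replace(".", "", 1).isdigit():
        raise ValueError(f"invalid entry {keys!r}: digits and one decimal point only")
    return float(keys)

# Replay a few key sequences: the well-formed one parses; the rest
# are rejected outright instead of being silently 'corrected'.
for entry in ["5.5", "5..5", ".5", "5."]:
    try:
        print(f"{entry!r} -> {parse_dose(entry)}")
    except ValueError as error:
        print(f"{entry!r} -> rejected ({error})")
```

A rejected entry forces the user to look at the display and re-key the dose, instead of letting a malformed sequence slip through as a plausible-looking number.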
Calculators
Another example of a device where errors can become significant is the common hand-held calculator. Now, I was of the opinion that modern calculators were pretty idiot-proof, but it seems that I might well be the idiot for assuming this. Harold gave us an example in which we had to calculate simple percentages of the world population. The hand-held calculator simply threw away zeros without telling us, without giving us any feedback. If we're not thinking carefully, and since we implicitly trust calculators to carry out calculations correctly, we can easily assume that the answer is correct too. The point is clear: 'calculators should not be used in hospitals, they allow you to make mistakes, and they don't care'.
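To see how this can play out, here's a toy simulation (my own sketch, assuming a display that simply truncates once it is full; not necessarily the exact behaviour of the calculator Harold demonstrated). Keying a ten-digit world population into an eight-digit display quietly discards the trailing zeros, and every subsequent calculation inherits the error:

```python
DISPLAY_WIDTH = 8  # how many digits the display can hold

def key_in(digits: str) -> int:
    """Simulate keying digits into a fixed-width calculator display.

    Once the display is full, further keypresses are silently
    ignored: no beep, no error message, no feedback of any kind.
    """
    return int(digits[:DISPLAY_WIDTH])  # the overflow just vanishes

population = key_in("7000000000")  # we key in roughly 7 billion...
print(population)                  # ...but only 70000000 (70 million) registers
print(population * 0.01)           # so '1% of the world' comes out 100 times too small
```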
Harold made another interesting point: when we use a calculator we often look at the keypad rather than the screen. We might hold a mental model of how a calculator works that is different from how it actually responds. Calculators that have additional functions (such as a backspace or 'delete last keypress' button) might well break our understanding and expectations of how these devices operate. Consistency is therefore very important (along with the visibility of results and feedback on errors).
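As a small illustration of why this matters (again, my own sketch rather than an example from the lecture), imagine two calculators that both have a delete key, where one model erases only the last digit while the other clears the whole entry. The same keystrokes produce very different numbers:

```python
def enter(keys: str, delete_clears_all: bool) -> str:
    """Replay a key sequence on a toy calculator; '<' is the delete key."""
    display = ""
    for key in keys:
        if key == "<":
            # Same key label, two different behaviours: one model
            # erases a single digit, the other wipes the whole entry.
            display = "" if delete_clears_all else display[:-1]
        else:
            display += key
    return display or "0"

keys = "105<7"  # a typo: we meant 107, so we delete the 5 and retype
print(enter(keys, delete_clears_all=False))  # '107' - what most users expect
print(enter(keys, delete_clears_all=True))   # '7'   - same keys, a very different number
```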
There was an interesting link between this Gresham lecture and the lecture by Tony Mann (blog summary), which took place in January 2014. Tony made exactly the same point as Harold: when we make mistakes, we can very easily blame ourselves rather than the devices we're using. Since we hold this bias, we're also reluctant to raise concerns about the usability of the devices and equipment we use.
Speeds of Thinking
Another interesting link was that Harold drew upon research by Daniel Kahneman (Wikipedia), explicitly connecting interface design with cognitive psychology. Harold mentioned one of Kahneman's recent books, 'Thinking, Fast and Slow', which posits that there are two cognitive systems in the brain: a fast, perceptual system which makes quick decisions, and a slower system which makes more reasoned decisions (I'm relying on my notes again; I've got Kahneman's book on my bookshelves, amidst loads of others I've been meaning to read!).
Good design should take account of both the fast and the slow system. One really nice example was the use of a cashpoint to withdraw money from your bank account. Towards the end of the transaction, the cashpoint begins to beep continually (offering perceptual feedback). That feedback prompts the slower system to focus attention on the task that still has to be completed: collecting the bank card. Harold's point is simple: 'if you design technology properly we can make the world better'.
Visibility of information
How do you choose one device or product over another? One approach is to make usually hidden information more visible to those who are tasked with making decisions. A really good example of this is the energy efficiency ratings on household items, such as refrigerators and washing machines. A similar rating scheme is available on car tyres too, exposing attributes such as noise, stopping distance and fuel consumption. Harold’s point was: why not create a rating system for the usability of devices?
Summary
The Open University module M364 Fundamentals of Interaction Design highlights two benefits of good interaction design: an economic argument (good usability can save time and money) and a safety argument.
This talk clearly emphasised the importance of the safety argument and illustrated good design principles (such as those set out by Donald Norman): visibility of information, feedback on action, consistency between and within devices, and appropriate mapping (meaning that a button, when pressed, should do the operation it is expected to do).
Harold's lecture concluded with a number of points that relate to the design of medical devices (there were four, but I only made a note of three!). The first is that it's important to rigorously assess technology, since this is how we can 'smoke out' design errors and problems (evaluation is, incidentally, a big part of M364). The second is that it is important to automate resilience, or at least to offer clear feedback to users. The third is to make safety visible through clear labelling.
It was all pretty thought-provoking stuff, and very clearly presented. One thing that struck me (mostly after the talk) is that interactive devices don't exist in isolation: they're always used within an environment. Understanding that environment, and the way people who work within it communicate with one another, is important too (and there are various techniques that can be used to learn more about this).
Towards the end of the talk, someone else asked the very question I had in mind: 'is it possible to draw inspiration from the aviation industry and apply it to medicine?' It was a very good question. I've read (in another OU module) that an aircraft cockpit can be used as a way to communicate system state to both pilots. Clearly, this is the subject of ongoing research, and Harold directed us to a site called CHI+MED (computer-human interaction for medical devices).
Much food for thought! I came away from the lecture feeling mildly terrified, but one consolation was that I had at least learnt what an infusion pump was. As promised, here’s a link to the transcript of the talk, entitled Designing IT to make healthcare safer (Gresham College).