Christopher Douce

Software Engineering Radio: Design and Designing

Edited by Christopher Douce, Sunday 5 October 2025 at 10:09

A quick search of Software Engineering Radio yields a number of themes that relate to design: experience design, design for security and privacy, design of APIs, and architectural design.

This post shares some useful points from two episodes. The first asks questions about, and reflects on, the software design process. The second, which is complementary, relates to how engineers share their designs with others through the use of diagrams. An earlier blog post, TM354 Working with diagrams, addresses the same subject in a slightly different way, emphasising diagramming tools.

Before I get into what struck me as interesting in both of those podcasts, it is worth having a quick look at Chapter 3 of SWEBOK, which contains a useful summary of some software design principles (3-4); a short code sketch follows the list to show how a few of them might appear in code:

  • Abstraction: Identify the essential properties of a problem.
  • Separation of concerns: Identify what concerns are important to stakeholders.
  • Modularization: Break larger components into smaller elements to make the problem easier to understand.
  • Encapsulation: Hides unnecessary detail.
  • Separation of interface and implementation: Every component should have a clear interface.
  • Uniformity: Ensure consistency in terms of how the software is described.
  • Completeness: It does what it is supposed to do “and leaves nothing out”.
  • Verifiability: The characteristic that allows what has been built to be checked against its requirements.
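
To illustrate why these principles read as ‘code centric’, here is a minimal Python sketch of my own (it is not taken from the SWEBOK): a hypothetical TemperatureSensor interface with one implementation, showing encapsulation, separation of interface and implementation, and separation of concerns.

from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """The interface: callers depend only on this abstraction."""

    @abstractmethod
    def reading_celsius(self) -> float:
        ...

class FileBackedSensor(TemperatureSensor):
    """One implementation; how a reading is obtained is encapsulated detail."""

    def __init__(self, path: str) -> None:
        self._path = path  # leading underscore: hidden, not part of the interface

    def reading_celsius(self) -> float:
        with open(self._path) as f:
            return float(f.read().strip())

def report(sensor: TemperatureSensor) -> str:
    """Separation of concerns: reporting knows nothing about where readings come from."""
    return f"{sensor.reading_celsius():.1f} °C"

Swapping in a different sensor implementation requires no change to report, which is the point of separating the interface from the implementation.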

These principles seem very ‘code centric’. When we look at the titles of other Software Engineering Radio podcasts we can clearly see that we need to consider wider perspectives: we need to think about architecture, organisational structure, and what we need to do to realise important non-functional requirements such as accessibility and usability.

Software design

In SE Radio 333: Marian Petre and André van der Hoek on Software Design, the guests discuss not only software design but also the practice of expert software engineers. The episode opens with the point that it is important to distinguish between the outcome of design and the process of designing. There was also a reflection that design (and designing) ‘is something that is done throughout [the software development lifecycle]’, beginning with eliciting (designing) requirements.

We are also introduced to an important term: design thinking. This is defined as the process of ‘understanding the relationship between how we understand the problem and how we understand the solution, and thinking reflectively about the relationship between the two’. Design thinking is mentioned in the SWEBOK, where it is defined in a similar (but slightly different) way: “design thinking comprises two essentials: (1) understanding the need or problem and (2) devising a solution” (3-2). The SWEBOK definition is, arguably, quite narrow, since the term can refer to both an approach and a mindset, and it is a mindset that can reside “inside all the individuals in the organisation”.

A related question is: how do experts go about design? Experts go deep (into a problem) as well as going broad (when looking for solutions). When experts go deep, they can dive into a problem quickly. Van der Hoek shared an interesting turn of phrase, suggesting that developers “talked on the whiteboards”. It is important and necessary to externalise ideas, since they can then be exposed to others, discussed and evaluated. An expert designer needs to be able to listen, to disagree, and to postpone making decisions until they have gathered more information. Experts, it is said, borrow, collaborate, sketch, and take breaks.

Expert software designers also have an ability to identify which parts of a problem are most difficult, and then begin to solve those bits. They are able to see the essence of a problem. In turn, they know where to apply attention, investing effort ‘up front’. Rather than considering which database to choose, they might tackle the more fundamental question of what a database needs to do. Expert designers also have a mindset focussed on understanding, and strive for elegance, since “an elegant solution also communicates how it works” (41:00).

Experts can also deliberately ‘break the rules’ to understand problem constraints or boundaries (43:50). Expert designers may also use design thinking practices to generate ideas and to reveal assumptions by applying seemingly odd or surprising activities. Doing something out of the ordinary and ‘using techniques to see differently’ may uncover new insights about problems or solutions. Designers are also able to step back and observe and reflect on the design process.

Organisational and cultural constraints can also play a role. The environment in which a software designer (a software engineer or architect) is employed can constrain them from working towards results and applying creative approaches. This cultural context can ‘suppress’ design and designers, especially if organisational imperatives are not aligned with development and design practices.

Marian Petre, an emeritus professor of computing at the OU, referred to a paper by David Walker which describes a ‘soup, bowl, and table’ metaphor. A concise description is shared in the abstract of the article: “The soup is the mix of factors that stimulates creativity with respect to a particular project. The bowl is the management structure that nurtures creative outcomes in general. And the table is the context of leadership and vision that focuses creative energies towards innovative but realizable objectives.” You could also argue that soup gives designers energy.

The podcast also asked what designers need to do to become better designers. The answer was simple: experts find the time to look at the code and designs of other systems. Engineers ‘want to understand what they do and make it better’. An essential and important point is that ‘experts and high performing teams are reflective’; they think about what they do, and what they have done.

Diagrams in Software Engineering

An interesting phrase shared in Petre and van der Hoek’s podcast was that developers ‘talked using whiteboards’. The sketching and sharing of diagrams is an essential practice within software engineering. In SE Radio 566: Ashley Peacock on Diagramming in Software Engineering, different aspects of the use and creation of diagrams are explored. Diagrams are useful because of “the ease in which information is digestible” (1:00). Diagrams and sketches can be short-lived or long-lived. They can be used to document completed software systems, to summarise software that is undergoing change, and to share ideas before they are translated into architecture and code.

TM354 Software Engineering makes extensive use of a graphical language called the Unified Modelling Language (UML). UML 2 defines 14 types of diagram, of which two or three are most frequently used. Class diagrams are used to share key abstractions: ideas within the problem domain and the design, and their dependencies (how the bits relate to each other). Sequence diagrams can be used to show the interactions between different layers of software. Activity diagrams can be used to describe the connections between software and the wider world. UML is important since it provides a shared diagramming language that can be used and understood by software engineers.
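
To make the link between class diagrams and code concrete, here is a small Python sketch of my own for a hypothetical library domain (it is not an example from TM354 or the podcast). The comments note how each construct might appear on a class or sequence diagram.

from dataclasses import dataclass, field

@dataclass
class Member:
    """A class box on the diagram, holding problem-domain data."""
    name: str

@dataclass
class Loan:
    """Links a Member to a borrowed title."""
    member: Member          # drawn as an association line: Loan --> Member
    book_title: str
    overdue: bool = False

@dataclass
class Library:
    """Aggregation: a Library holds many Loans (multiplicity 0..*)."""
    loans: list[Loan] = field(default_factory=list)

    def issue(self, member: Member, book_title: str) -> Loan:
        # On a sequence diagram this call would appear as a message
        # from Library to a newly created Loan.
        loan = Loan(member, book_title)
        self.loans.append(loan)
        return loan

A class diagram would show the same three boxes and their relationships without any of the method bodies.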

Reflections

One of the aspects that I really appreciated from the first podcast was that it emphasises the importance and significance of the design process. One of my first duties after becoming a permanent staff tutor at the OU was to help to support the delivery of some of the design modules. I remember there were three of them: U101 Design thinking: creativity for the 21st century, T217 (an earlier version of T240 Design for Impact), and T317 Innovation: Designing for Change. Even though I was familiar with a sister module from the School of Computing and Communications, TM356 Interaction design and the user experience, being exposed to the design modules opened my eyes to a breadth of approaches that I had never heard of before and that could have applicability within computing.

U101 introduced me to the importance of play. T217 (and the module that came before it, T211) introduced me to the contrasting ideas of divergent and convergent thinking. Divergent thinking relates to the idea, mentioned in the first podcast, of thinking beyond the constraints. I was also introduced to the double-diamond design process (Design Council, PDF). Design processes are different in character to software development processes, since they concern exploring different ways to solve problems, as opposed to distilling solutions into architectures and code.

A really important point from the first podcast is that design can (and should) happen across the entire software development lifecycle. Defining (and designing) requirements at the start of a project is as much a creative process as the creation and specification of tests and test cases.

It is also necessary to highlight the importance of reflection. Thinking about what we have, how well our software has been created, and what we need all helps to refine not just our engineered artefacts, but also our engineering processes. Another point that resonates is the role that organisational structures may play in helping to foster design. To create good designs, we rely on the support of others, but our creativity may be attenuated if ‘play’ is viewed as frivolous or without value.

Effective designers will be aware of different sets of principles, why they are important, and how they might be applied. This post opened by sharing a set of software design principles featured in the SWEBOK. As suggested, these principles can be viewed as very code centric. There are, of course, other design principles that can be applied to user interfaces (and everyday things), such as those by Donald Norman. Reflecting on these two sets of principles, I can’t help but feel that there is quite a gap in the middle, and a need for software architecture design principles. Bass et al. (2021) is a useful reference, but there are other resources, including those by service providers, such as Amazon’s Well-Architected guidance. Engineers should always work towards understanding. Reflecting on what we don’t yet fully understand is as important as reflecting on what we do understand.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice, 4th edn, Upper Saddle River, NJ: Addison-Wesley.

SWEBOK v.4 (2024) Software Engineering Body of Knowledge SWEBOK. Available at: https://www.computer.org/education/bodies-of-knowledge/software-engineering

Walker, D. (1993), The Soup, the Bowl, and the Place at the Table. Design Management Journal (Former Series), 4: 10-22. https://doi.org/10.1111/j.1948-7169.1993.tb00368.x

Christopher Douce

Gresham College: Designing IT to make healthcare safer

On 11 February, I was back at the Museum of London.  This time, I wasn’t there to see juggling mathematicians (Gresham College) talking about theoretical anti-balls.  Instead, I was there for a lecture about the usability and design of medical devices by Harold Thimbleby, who I understand was from Swansea University. 

Before the lecture started, we were subjected to a looped video of a car crash test; a modern car from 2009 was crashed into a car built in the 1960s.  The result (and later point) was obvious: modern cars are safer than older cars.  Continual testing and development makes a difference.  We now have substantially safer cars.  Even though there have been substantial improvements, Harold made a really interesting point.  He said, ‘if bad design was a disease, it would be our 3rd biggest killer’.

Computers are everywhere in healthcare.  Perhaps introducing computers (or mobile devices) might be able to help?  This might well be the case, but there is also the risk that hospital staff might end up spending more time trying to get technology to do the right things rather than dealing with more important patient issues.  There is an underlying question of whether a technology is appropriate or not.

This blog post has been pulled directly from the notes that I made during the lecture.  If you’re interested, I’ve provided a link to the transcript of the talk, which can be found at the end.

Infusion pumps

Harold showed us pictures of a series of infusion pumps.  I didn’t know what an infusion pump was.  Apparently it’s a device that is a bit like an intravenous drip, but you program it to dispense a fluid (or drug) into the blood stream at a certain rate.  I was very surprised by the pictures: the infusion pumps all looked very different from one another, and the differences were quite shocking.  They each had different screens and different displays.  They were different sizes and had different keypad layouts.  It was clear that there was little in the way of internal and external consistency.  Harold made an important point, that they were ‘not designed to be readable, they were designed to be cheap’ (please forgive my paraphrasing here).

We were regaled with further examples of interaction design terror.  A decimal point button was placed on an arrow key.  It was clear that there was no appropriate mapping between a button and its intended task.  Pushing a help button gave little in the way of help to the user.

We were told of a human factors analysis study where six nurses were required to use an infusion pump over a period of two hours (I think I’ve noted this down correctly).  The conclusion was that all of the nurses were confused.  Sixty percent of the nurses needed hints on how to use the device, and a further sixty percent were confused by how the decimal point worked (in this particular example).  Strikingly, sixty percent of those nurses entered the wrong settings.  

We’re not talking about trivial mistakes here; we’re talking about mistakes where users may be fundamentally confused by the appearance and location of a decimal point.   Since we’re also talking about devices that dispense drugs, small errors can become life threateningly catastrophic.

Calculators

Another example of a device where errors can become significant is the common hand-held calculator.  Now, I was of the opinion that modern calculators were pretty idiot proof, but it seems that I might well be the idiot for assuming this.  Harold gave us an example in which we tried to calculate simple percentages of the world population.  Our hand-held calculator simply threw away zeros without telling us, without giving us any feedback.  If we’re not thinking, and since we implicitly trust that calculators carry out calculations correctly, we can easily assume that the answer is correct too.  The point is clear: ‘calculators should not be used in hospitals, they allow you to make mistakes, and they don’t care’.
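
To make the failure mode concrete, here is a small hypothetical Python sketch (my own illustration, not the device Harold demonstrated) of a calculator whose eight-digit display silently drops keystrokes that do not fit:

def enter_number(keystrokes: str, display_width: int = 8) -> int:
    """Simulate typing digits into a calculator with a fixed-width display."""
    shown = ""
    for digit in keystrokes:
        if len(shown) < display_width:
            shown += digit
        # else: the keystroke is silently dropped -- no error, no feedback
    return int(shown)

population = enter_number("7000000000")  # the user types ten digits
print(population)                        # 70000000 -- two zeros have been lost
print(population * 0.1)                  # 10% of the 'wrong' number: 7000000.0

A safer design would refuse the input or signal an overflow, giving the user the feedback that the real devices lacked.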

Harold made another interesting point: when we use a calculator we often look at the keypad rather than the screen.  We might have a mental model of how a calculator works that is different to how it actually responds.   Calculators that have additional functions (such as a backspace, or delete last keypress buttons) might well break our understanding and expectations of how these devices operate.  Consistency is therefore very important (along with the visibility of results and feedback from errors).

There was an interesting link between this Gresham lecture and the lecture by Tony Mann (blog summary), which took place in January 2014.  Tony made the exact same point that Harold did.  When we make mistakes, we can very easily blame ourselves rather than the devices that we’re using.  Since we hold this bias, we’re also reluctant to raise concerns about the usability of the devices and equipment that we’re using.

Speeds of Thinking

Another interesting link was that Harold drew upon research by Daniel Kahneman (Wikipedia), explicitly connecting the subject of interface design with the subject of cognitive psychology.  Harold mentioned one of Kahneman’s recent books, ‘Thinking, Fast and Slow’, which posits that there are two cognitive systems in the brain: a perceptual system which makes quick decisions, and a slower system which makes more reasoned decisions (I’m relying on my notes again; I’ve got Daniel’s book on my bookshelves, amidst loads of others I have chalked down to read!)

Good design should take account of both the fast and the slow system.  One really nice example was with the use of a cashpoint to withdraw money from your bank account.  Towards the end of the transaction, the cashpoint begins to beep continually (offering perceptual feedback).  The presence of the feedback causes the slower system to focus attention on the task that has got to be completed (which is to collect the bank card).   Harold’s point is simple: ‘if you design technology properly we can make the world better’.

Visibility of information

How do you choose one device or product over another?  One approach is to make usually hidden information more visible to those who are tasked with making decisions.  A really good example of this is the energy efficiency ratings on household items, such as refrigerators and washing machines.  A similar rating scheme is available on car tyres too, exposing attributes such as noise, stopping distance and fuel consumption.  Harold’s point was: why not create a rating system for the usability of devices?

Summary

The Open University M364 Fundamentals of Interaction Design module highlights two benefits of good interaction design: an economic argument (that good usability can save time and money), and a safety argument.

This talk clearly emphasised the importance of the safety argument, and it also emphasised good design principles (such as those proposed by Donald Norman): visibility of information, feedback of action, consistency between and within devices, and appropriate mapping (which means that buttons that are pressed should do the operation that they are expected to do).

Harold’s lecture concluded with a number of points that relate to the design of medical devices.  (Of which there were four, but I’ve only made a note of three!)  The first is that it’s important to rigorously assess technology, since this way we can ‘smoke out’ any design errors and problems (evaluation is incidentally a big part of M364).  The second is that it is important to automate resilience, or to offer clear feedback to the users.  The third is to make safety visible through clear labelling.

It was all pretty thought provoking stuff which was very clearly presented.  One thing that struck me (mostly after the talk) is that interactive devices don’t exist in isolation – they’re always used within an environment.  Understanding the environment, and the way in which communications occur between the different people who work within it, is also important (and there are different techniques that can be used to learn more about this).

Towards the end of the talk, I had a question in mind that someone else went on to ask: ‘is it possible to draw inspiration from the aviation industry and apply it to medicine?’  It was a very good question.  I’ve read (in another OU module) that an aircraft cockpit can be used as a way to communicate system state to both pilots.  Clearly, this is a subject of on-going research, and Harold directed us to a site called CHI Med (computer-human interaction).

Much food for thought!  I came away from the lecture feeling mildly terrified, but one consolation was that I had at least learnt what an infusion pump was.  As promised, here’s a link to the transcript of the talk, entitled Designing IT to make healthcare safer (Gresham College). 
