Christopher Douce

Gresham College: Designing IT to make healthcare safer


On 11 February, I was back at the Museum of London.  This time, I wasn’t there to see a juggling mathematician (Gresham College) talking about theoretical anti-balls.  Instead, I was there for a lecture about the usability and design of medical devices by Harold Thimbleby, who, I understand, is from Swansea University.

Before the lecture started, we were subjected to a looped video of a car crash test; a modern car from 2009 was crashed into a car built in the 1960s.  The result (and later point) was obvious: modern cars are safer than older cars, because continual testing and development makes a difference.  Even though cars have improved substantially, Harold made a really interesting point.  He said, ‘if bad design was a disease, it would be our 3rd biggest killer’.

Computers are everywhere in healthcare.  Perhaps introducing computers (or mobile devices) might help?  This might well be the case, but there is also the risk that hospital staff end up spending more time trying to get the technology to do the right thing than dealing with more important patient issues.  There is an underlying question of whether a technology is appropriate or not.

This blog post has been pulled directly from the notes that I made during the lecture.  If you’re interested, I’ve provided a link to the transcript of the talk, which can be found at the end.

Infusion pumps

Harold showed us pictures of a series of infusion pumps.  I didn’t know what an infusion pump was.  Apparently it’s a device that is a bit like an intravenous drip, but you program it to dispense a fluid (or drug) into the blood stream at a certain rate.  I was very surprised by the pictures: each infusion pump looked very different from the others, and these differences were quite shocking.  They each had different screens and different displays.  They were different sizes and had different keypad layouts.  It was clear that there was little in the way of internal or external consistency.  Harold made an important point: that they were ‘not designed to be readable, they were designed to be cheap’ (please forgive my paraphrasing here).

We were regaled with further examples of interaction design terror.  A decimal point button was placed on an arrow key.  It was clear that there was no appropriate mapping between a button and its intended task.  Pushing a help button gave little in the way of help to the user.

We were told of a human factors analysis study where six nurses were required to use an infusion pump over a period of two hours (I think I’ve noted this down correctly).  The conclusion was that all of the nurses were confused.  Sixty percent of the nurses needed hints on how to use the device, and sixty percent were confused by how the decimal point worked (in this particular example).  Strikingly, sixty percent of those nurses entered the wrong settings.

We’re not talking about trivial mistakes here; we’re talking about mistakes where users may be fundamentally confused by the appearance and location of a decimal point.  Since we’re also talking about devices that dispense drugs, small errors can become catastrophic, even life-threatening.

Calculators

Another example of a device where errors can become significant is the common hand-held calculator.  Now, I was of the opinion that modern calculators were pretty idiot proof, but it seems that I might well be the idiot for assuming this.  Harold gave us an example where we had to calculate simple percentages of the world population.  Our hand-held calculator threw away zeros without telling us, without giving us any feedback.  If we’re not thinking, and since we implicitly trust that calculators carry out calculations correctly, we can easily assume that the answer is correct too.  The point is clear:  ‘calculators should not be used in hospitals, they allow you to make mistakes, and they don’t care’.
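To make that failure mode concrete, here is a minimal sketch of my own (not Harold’s example; the eight-digit width is an assumption for illustration) of a display that silently discards keypresses once it is full:

```python
class SilentDisplay:
    """A fixed-width calculator display that ignores digits once full."""
    WIDTH = 8  # assumed display width, purely for illustration

    def __init__(self):
        self.digits = ""

    def press(self, digit):
        if len(self.digits) < self.WIDTH:
            self.digits += digit
        # else: the keypress is silently discarded -- no beep, no error

display = SilentDisplay()
for digit in "7000000000":      # roughly the world population: ten digits
    display.press(digit)
print(display.digits)           # '70000000' -- wrong by a factor of 100
```

A safer design would refuse the extra digit loudly (an error state, a beep) rather than quietly produce a plausible-looking wrong number.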

Harold made another interesting point: when we use a calculator we often look at the keypad rather than the screen.  We might have a mental model of how a calculator works that is different to how it actually responds.   Calculators that have additional functions (such as a backspace, or delete last keypress buttons) might well break our understanding and expectations of how these devices operate.  Consistency is therefore very important (along with the visibility of results and feedback from errors).

There was an interesting link between this Gresham lecture and the lecture by Tony Mann (blog summary), which took place in January 2014.  Tony made exactly the same point that Harold did: when we make mistakes, we can very easily blame ourselves rather than the devices that we’re using.  Since we hold this bias, we’re also reluctant to raise concerns about the usability of the devices and equipment that we’re using.

Speeds of Thinking

Another interesting link was that Harold drew upon research by Daniel Kahneman (Wikipedia), explicitly connecting the subject of interface design with the subject of cognitive psychology.  Harold mentioned Kahneman’s recent book, ‘Thinking, Fast and Slow’, which posits that there are two cognitive systems in the brain: a perceptual system which makes quick decisions, and a slower system which makes more reasoned decisions.  (I’m relying on my notes again; I’ve got Kahneman’s book on my bookshelves, amidst loads of others I have chalked down to read!)

Good design should take account of both the fast and the slow system.  One really nice example was with the use of a cashpoint to withdraw money from your bank account.  Towards the end of the transaction, the cashpoint begins to beep continually (offering perceptual feedback).  The presence of the feedback causes the slower system to focus attention on the task that has got to be completed (which is to collect the bank card).   Harold’s point is simple: ‘if you design technology properly we can make the world better’.

Visibility of information

How do you choose one device or product over another?  One approach is to make usually hidden information more visible to those who are tasked with making decisions.  A really good example of this is the energy efficiency ratings on household items, such as refrigerators and washing machines.  A similar rating scheme is available on car tyres too, exposing attributes such as noise, stopping distance and fuel consumption.  Harold’s point was: why not create a rating system for the usability of devices?

Summary

The Open University module M364 Fundamentals of Interaction Design highlights two benefits of good interaction design.  These are: an economic argument (that good usability can save time and money), and a safety argument.

This talk clearly underlined the importance of the safety argument, and it also emphasised good design principles (such as those described by Donald Norman): visibility of information, feedback on actions, consistency between and within devices, and appropriate mapping (which means that buttons should do the operation that they are expected to do).

Harold’s lecture concluded with a number of points that relate to the design of medical devices.  (Of which there were four, but I’ve only made a note of three!)  The first is that it’s important to rigorously assess technology, since this way we can ‘smoke out’ any design errors and problems (evaluation is incidentally a big part of M364).  The second is that it is important to automate resilience, or to offer clear feedback to the users.  The third is to make safety visible through clear labelling.

It was all pretty thought provoking stuff which was very clearly presented.  One thing that struck me (mostly after the talk) is that interactive devices don’t exist in isolation – they’re always used within an environment.  Understanding that environment, and the way in which communication occurs between the different people who work within it, is also important (and there are different techniques that can be used to learn more about this).

Towards the end of the talk, someone else asked the very question that I had in mind: ‘is it possible to draw inspiration from the aviation industry and apply it to medicine?’  It was a very good question.  I’ve read (in another OU module) that an aircraft cockpit can be used as a way to communicate system state to both pilots.  Clearly, this is the subject of on-going research, and Harold directed us to a project called CHI+MED (computer-human interaction for medical devices).

Much food for thought!  I came away from the lecture feeling mildly terrified, but one consolation was that I had at least learnt what an infusion pump was.  As promised, here’s a link to the transcript of the talk, entitled Designing IT to make healthcare safer (Gresham College). 

Christopher Douce

Gresham College Lecture: Notations, Patterns and New Discoveries (Juggling!)


On a dark winter’s evening on 23 January 2014, I discovered a new part of London I had never been to before.  Dr Colin Wright gave a talk entitled ‘notations, patterns and new discoveries’ at the Museum of London.   The subject was intriguing in a number of different ways.  Firstly, it was all about the mathematics of juggling (which represented a combination of ideas that I had never come across before).  Secondly, it was about notations.

The reason why I was ‘hooked’ by the notation part of the title is that my home discipline is computer science.  Computers are programmed using notation systems (programming languages), and when I was doing some research into software maintenance and object-oriented programming I discovered a series of fascinating papers about something called the ‘cognitive dimensions of notations’.  Roughly put, these were all about how we can efficiently work with (and think about) different types of notation system.

In its broadest sense, a notation is an abstraction or a representation.  It allows us to write stuff down.  Juggling (like dance) is an activity that is dynamic, almost ethereal; it exists in time and space, and then it can disappear or stop in an instant.  Notation allows us to write down or describe the transitory.  Computer programming languages allow us to describe sets of invisible instructions and sequences of calculations that exist nowhere except within digital circuits.  When we’re able to write things down, it turns out that we can more easily reason about what we’ve described, and make new discoveries too.

It took between eight and ten minutes to figure out how to get into the Museum of London.  It sits in the middle of a roundabout that I’ve passed a number of times before.  Eventually, I was ushered into a huge cavernous lecture theatre, which clearly suggested that this was going to be quite ‘an event’.  I was not to be disappointed.

Within minutes of the start of the lecture, we heard the names of famous mathematicians: Gauss and Leibniz.  One view was that ‘truths (or proofs) should come from notions rather than notations’.  Colin, however, had a different view: that there is an interplay between notions (or ideas) and notations.

During the lecture, I made a note of the following: a notation is a ‘specialist terminology [that] allows rapid and accurate communication’.  Colin then moved on to ask the question, ‘how can we describe a juggling pattern?’  This led to the creation of an abstraction that could describe the movement of juggling balls.

Whilst I was listening, I thought, ‘this is exactly what computer programmers do; we create one form of notation (a computer program), using another form of notation (a computer language) – the computer program is our abstraction of a problem that we’re trying to solve’.  Colin introduced us to juggling terms (or high level abstractions), such as the ‘shower’, ‘cascade’ and ‘Mills Mess’.  This led towards the more intellectually demanding domain of ‘theoretical juggling’ (with an impossible number of balls).
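Although my notes don’t record its name, the notation Colin described is, I believe, siteswap, which he helped to develop: each number records how many beats later a thrown ball comes back down.  Here is a small sketch of my own showing the two checks that the notation makes mechanical:

```python
def is_jugglable(pattern):
    """A siteswap is valid iff no two throws land on the same beat:
    (i + throw) mod period must hit every residue exactly once."""
    n = len(pattern)
    landings = {(i + throw) % n for i, throw in enumerate(pattern)}
    return len(landings) == n

def balls(pattern):
    """The 'average theorem': a valid pattern's mean throw value
    is the number of balls being juggled."""
    return sum(pattern) // len(pattern)

# The three-ball cascade is written 3 and the shower 51; 441 is
# reportedly a pattern that was discovered on paper before anyone juggled it.
for p in [(3,), (5, 1), (4, 4, 1), (5, 4, 3)]:
    print(p, is_jugglable(p), balls(p) if is_jugglable(p) else "-")
```

Writing the patterns down turns questions about juggling into arithmetic, which is exactly the interplay between notions and notations that Colin described.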

My words can’t really do the lecture justice.  I should add that it is one of those lectures where you would learn more by listening to it more than once.  Thankfully, for those who are interested, it was recorded, and it is available on-line (Gresham College).

Whilst I was witnessing all these great tricks, one thought crossed my mind, which was, ‘how much time did you have to spend to figure out all this stuff and to learn all these juggling tricks?!  Surely there was something better you could have done with your time!’  (Admittedly, I write this partially in jest and with jealousy, since I can’t catch, and I fear that doing ‘a cascade’ with three balls is, for me, a theoretical impossibility.)

It was a question that was implicitly answered by considering the importance of pure mathematics.  Doing and exploring stuff only because it is intellectually interesting may potentially lead to a real world practical use – the thing is that you don’t know what it might be and what new discoveries might emerge.  (A good example of this is number theory leading to the practical application of cryptography, which is used whenever we buy stuff over the internet). 

All in all, great fun.  Recommended.

Christopher Douce

Gresham College Lecture: User error – why it’s not your fault


On 20 January 2014 I found the time to attend a public lecture in London that was all about usability and user error.  The lecture was presented by Tony Mann, from the University of Greenwich.  The event was in a group of buildings just down the street from Chancery Lane underground station.  Since I was keen on this topic, I arrived twenty minutes early, only to find that the Gresham College lecture theatre was already full to capacity.  User error (and interaction design), it seems, is a very popular subject!

One phrase that I’ve made a note of is that ‘we blame ourselves if we cannot work something’: we can quickly acquire feelings of embarrassment and incompetence when we do things wrong or make mistakes.  Tony gave us the example that we can become very confused by the simplest of devices, such as doors.

Doors that are well designed should tell us how they should be used: we rely on visual cues to tell us whether they should be pushed or pulled (which is called affordance), and if we see a handle, then we regularly assume that the door should be pulled (which is our application of the design rule of ‘consistency’).  During this part of Tony’s talk, I could see him drawing heavily on Donald Norman’s book ‘The Psychology of Everyday Things’ (Norman’s work is also featured within the Open University module M364 Fundamentals of Interaction Design).

I’ve made a note of Tony saying that when we interact with systems we take information from many different sources, not just the most obvious.  An interesting example that was given was the Kegworth air disaster (Wikipedia), which occurred because the pilots had turned off the wrong engine, drawing on experience gained from a different but similar aircraft.

Another really interesting example was the case where a pharmacy system was designed in such a way that drug names could be no more than 24 characters in length.  This created a situation where different drugs (which had very similar names, but different effects) could be prescribed by a doctor in combinations which could potentially cause fatal harm to patients.  Both of these examples connect perfectly to the safety argument for good interaction design.  Another argument (that is used in M364) is an economic one, i.e. poor interaction design costs users and businesses both time and money.

Tony touched upon further issues that are also covered in M364.  He said, ‘we interact best [with a system] when we have a helpful mental model of a system’; our mental models determine our behaviour, humans (generally) have good intuition when interacting with physical objects, and it is hard to discard the mental models that we form.

Tony argued that it is the job of an interaction designer to help us to create a useful mental model of how a system works; if there’s a conflict (between what a design tells us and how we think something works), we can get into trouble very quickly.  One way to help with this is to make use of metaphor.  Tony Mann: ‘a strategy is to show something that we understand’, such as a desktop metaphor or a file metaphor on a computer.  I’ve also paraphrased the following interesting idea: that a ‘designer needs to both think like a computer and think like a user’.

One point was clearly emphasised: we can easily choose not to report mistakes.  This means that designers might not always receive important feedback from their users.  Users may too easily think, ‘that’s just a stupid error that I’ve made…’  Good design, it was argued, prevents errors (which is another important point that is addressed in M364).  Tony also introduced the notion of resilience strategies: things that we do to help us avoid making mistakes, such as hanging a scarf in a visible place so we remember to take it home after we’ve been somewhere.

The three concluding points were: we’re always too ready to blame ourselves when we make a blunder, that we don’t help designers as often as we ought to, and that good interaction design is difficult (because we need to consider different perspectives).

Tony’s talk touched upon wider (and related) subjects, such as the characteristics of human error and the ways that systems could be designed to minimise the risk of mistakes arising.  If I were to be very mean and offer a criticism, it would be that there was perhaps more of an opportunity to talk about the ‘human’ side of error – but here we begin to step into the domain of cognitive psychology (as well as engineering and mathematics).  This said, his talk was a useful and concise introduction to the importance of good interaction design.

Christopher Douce

BCS Lecture: The Power of Abstraction


When I was a graduate student at the University of Manchester (or the bit of it that was once known as UMIST) I was once asked to show some potential computer science students around the campus.  At the end of the tour I ushered them to a lecture which was intended to give the students a feel for what things would be like if they came to the university.

The lecture, given by one of the faculty, was all about the notion of abstraction.  We were told that this was a fundamental concept in computing.  In some respects, it felt less like a lecture about computing and more like a lecture about philosophy.  I had never been to a lecture quite like it and it was one that really stuck in my mind.  When I left the lecture, I thought, 'why didn't I have this kind of lecture when I was an undergraduate?'  As an undergrad I had spent many an hour creating various kinds of computer programs without really being told that there was an essential and fundamental idea that underpinned what I was doing.

When I saw the British Computer Society (BCS) advertising a lecture that was about the 'power of abstraction', I knew that I had to try to make time to come along. The lecture, by Professor Barbara Liskov, was an annual BCS lecture (the Karen Spärck Jones lecture) that honours women in computing research.

All this sounds great, right?  But what, fundamentally, is abstraction?  An 'abstract' at the top of a formal research paper says, in essence, what it contains.  Abstraction, therefore, can be thought of as a process of creating a representation of something, and that something might well be a problem of some kind.  Admittedly, this sounds both confusing and vague...

Barbara began her lecture by stating that abstraction is the basis of how we implement computer software.  The real world is, fundamentally, a messy place.   Since computers are ultimately mathematical machines, we need a way to represent problems (using, ultimately, numbers) so that a computer can work with them.  As a part of her lecture, Barbara said that she was going to talk through some developments in the way that people (or computer programmers) could create and work with abstractions.  I was intrigued; this talk wasn't just about a history of programming languages, it was also a history of thought.

So, what history was covered?  We were immediately taken back to the 1970s.  This was a period in computing history where the term 'software crisis' gained currency. One of the reasons was that it was becoming increasingly apparent that creating complex software systems was a fundamentally difficult thing to do.  It was also apparent that projects were started, became excruciatingly late and then abandoned, costing astronomical amounts of money. (It might be argued that this still happens today, but that's a whole other debate which goes beyond this pretty short blog post).

One of the reasons why software is so fundamentally hard to create is that it is 'mind stuff'.  Software isn't like a physical artefact or product that we can see. The relationships between components can easily become incredibly complicated which can, in turn, make things unfeasibly difficult.  Humans, after all, have limited brain capacity to deal with complexity (so, it's important that we create tools and techniques that help us to manage this).

We were introduced to a number of important papers.  The first was by Dijkstra, who wrote a letter to the Communications of the ACM entitled 'Go to statement considered harmful'.  'Goto' is an instruction that can help to create very complicated (and unfathomable) software very quickly.  Barbara described the difficulty very clearly.  One of the reasons why software is so hard is that there is a fundamental disconnect between how the program text might be read by programmers and how it might be processed or executed by a machine.  If we can create a program representation that tries to bridge the difference between the static (what we describe should happen) and the dynamic (what actually happens when software does its stuff), then things would be a whole lot easier.

Another paper that was mentioned was Wirth's 'program development by stepwise refinement'. Wirth is famous for the design of two closely related languages: Pascal and Modula-2. It certainly is the case that it's possible to write software without the 'goto' instruction, but Barbara made the interesting point that it's also possible to write good, well-structured software in bad languages (providing that you're disciplined enough). The challenge is that we're always thinking about trade-offs (in terms of program performance and code economy), so we can easily be lured into doing clever things in incomprehensible ways.

Barbara spoke about the importance of modules whilst mentioning a paper by Parnas entitled 'information distribution aspects of design methodology'.  One of the great things about modules, other than that they can be used to group bits of code together, is that they enable the separation of the implementation and the interface.  This reminded me of some stuff from my undergrad days and time spent in industry: modules are connected to the term 'cohesion'.  Cohesion is, simply, the idea that something should do only one thing.  A function that has one name and does two or more things (that are not suggested in its name) is a recipe for confusion and disaster.  But I fear I'm beginning to digress from the lecture onto one of my 'coding hobbyhorses'.
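To make the interface/implementation split concrete, here is a tiny module sketch of my own (the names are invented for illustration): clients see only the two public functions, so the hidden representation can change without breaking them.

```python
# counter.py -- a minimal sketch in the spirit of Parnas's information hiding

_events = []   # hidden representation; by convention, clients never touch this

def record(event):
    """Public interface: remember that an event happened."""
    _events.append(event)

def count():
    """Public interface: report how many events have been recorded."""
    return len(_events)
```

If the list were later replaced by a simple counter or a database table, record and count would keep their contracts and no client code would need to change; that is the whole point of the separation.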

Through a short mention of a language called Simula-67 (Wikipedia) we were then introduced to a paper by Liskov and Zilles entitled, 'programming with abstract data types'.  We were told that this paper represented a sketch of a programming language which eventually led to the creation of a language called CLU (Wikipedia), CLU being short for Clusters.

There is one question Barbara clearly answered, which is: why go to all the trouble of writing a programming language?  It's to understand whether an idea works in practice and to understand some of the barriers to performance.  Also, whenever a language designer describes a language in natural language there are always going to be some assumptions that the compiler writer must make. Only by going through the process of creating a working language are language designers able to 'smoke out' any potential problems.

Just diverting into programming language speak for a moment: CLU implemented static type checking and used a heap, and it did not support concurrency, the goto statement or inheritance.  What it did implement was polymorphism (through generics), iterators and exception handling.
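Those features are easy to recognise in today’s languages; Python’s generators, for example, were directly inspired by CLU’s iterators.  As a rough modern-day analogue (my own sketch, not CLU code), here is the kind of abstract data type the Liskov and Zilles paper argues for: an integer set whose representation is hidden behind its operations.

```python
class IntSet:
    """An abstract data type: callers see only the operations,
    never the representation (here, a sorted Python list)."""

    def __init__(self):
        self._elems = []              # hidden representation

    def insert(self, x):
        if x not in self._elems:
            self._elems.append(x)
            self._elems.sort()

    def delete(self, x):
        try:
            self._elems.remove(x)
        except ValueError:            # exception handling, as in CLU
            raise KeyError(f"{x} is not a member")

    def elements(self):
        """A generator: Python's descendant of the CLU iterator."""
        yield from self._elems

s = IntSet()
for n in (3, 1, 2, 3):
    s.insert(n)
print(list(s.elements()))             # [1, 2, 3]
```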

Barbara also mentioned a very famous language called Smalltalk, developed by Alan Kay and colleagues at Xerox PARC.  Different developments at different times and in different places have all influenced the current generation of programming languages.  Our current object-oriented languages enable programmers to define abstractions, or representations of a problem, in a way that wasn't possible during the earlier days of software.

Research directions

Barbara mentioned two research topics that continue to be of interest.  The first was the question of what might be the most appropriate design of a programming language for novices.  Over the years, candidates have included BASIC (which introduced the dreaded goto statement), Pascal, and more recently Java.  The challenges of creating a language that helps learners develop computational thinking skills (Wikipedia) include taking account of programming language design trade-offs, such as ease of use vs. expressive power, and readability vs. writeability, and deciding how best to deal with modularity and encapsulation.

Another research subject is languages for massively parallel computers.  These days, PCs and tablets, more often than not, contain multiple processor cores (which means that they can, quite literally, be doing more than one calculation at once).  You might have up to four cores, but how might you best design a programming language that more efficiently allows you to define and solve problems when you might have hundreds of processors working at the same time?  This immediately took me back to my undergrad days when I had an opportunity to play with a language called Occam (Wikipedia).

There was one quote from Barbara's lecture that stood out (for me), and this was when she said, 'you don't get ideas by not working on things'. 

Reflections

I should say at this point that I haven't done Barbara's speech justice.  There were a whole lot of other issues and points that were mentioned that I haven't touched on.  I really enjoyed being taken on a journey that described how programming languages have changed.  I liked the way that the challenges of coding (and the challenge of using particular instructions) led to discussions about modules, abstract data types and then, finally, object-oriented programming languages.

It's also possible to take a broader perspective to the notion of abstraction, one that has been facilitated by language design.  During Barbara's lecture, I was mindful of two related subjects that can be strongly connected to the notion of abstraction.  The first of these is the idea of design patterns.

Design patterns (Wikipedia) take their inspiration from architecture. Rather than design a new building from scratch every time you need to make one, why not buy a pre-existing design that has already solved some of the problems that you might potentially come up against?  There is a strong parallel with software: developers often have to solve very similar problems time and time again.  If we have a template to work from, we might arguably get things done more quickly and cheaply.

Developers can use patterns to gain inspiration about how to go about solving common problems.  By using well understood and defined patterns, the communication between programmers and developers can be enhanced since abstract concepts can be readily named; they permit short-cuts to understanding.
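As a deliberately tiny illustration (my own, not one from the lecture), here is a sketch of one classic pattern, Strategy: the stable skeleton of a computation stays put while the varying part is plugged in, and naming the pattern lets developers communicate the whole idea in one word.

```python
from typing import Callable, Dict

def shipping_cost(order: Dict[str, float],
                  pricing: Callable[[Dict[str, float]], float]) -> float:
    """The stable skeleton; the pricing strategy is the varying part."""
    return round(pricing(order), 2)

# Two interchangeable strategies (invented for the example).
flat_rate = lambda order: 4.99
by_weight = lambda order: 1.20 * order["kg"]

order = {"kg": 3.5}
print(shipping_cost(order, flat_rate))   # 4.99
print(shipping_cost(order, by_weight))   # 4.2
```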

In some cases, patterns can be embedded into pre-existing code that can be used by developers to kick-start a development.  This can take the form of a framework, software code that solves well known problems that ultimately enables developers to get on and solve the problems that they really need to solve (as opposed to dealing with stuff such as reading and writing to databases).

Abstraction has come a long way in my own very short career as a developer. One of the biggest challenges that developers face is how to best break down a problem into structures that can be represented in a language that a machine can understand.  Another challenge lies with understanding the various tools that developers now have at their disposal to achieve this.

Note: The logo at the top of the blog is used to indicate that this post relates to a BCS event; the post is not connected with the BCS in any other way.  All mistakes and opinions are my own, rather than those of the OU or the BCS.

Christopher Douce

Breaking Enigma and the legacy of Alan Turing in Code Breaking, City University, London



As soon as I received an email advertising a public lecture at City University by Professor David Stupples on 17 April about the life and legacy of Alan Turing, a couple of weeks after I finished reading Andrew Hodges' biography, I knew I had to make the time to come along.  This blog is a summary of some aspects of the event, accompanied by a set of thoughts that the lecture inspired.  I should add that I'm neither a mathematician nor a cryptographer, but the story of code breaking and the history of Bletchley Park (and how it came to be) is one that has fascinated, and continues to fascinate, me.

David Stupples is professor of systems and cryptography at City University.  His lecture was one of a series of lectures given at City, but this one coincided with the centenary of Turing's birth.  The lecture also celebrated the creation of the City University Centre for Cybersecurity Sciences (City University website).

David's lecture began at the end, beginning briefly with Alan Turing's death in 1954, before moving onto a number of subjects which relate to cryptography, the breaking of the enigma code, stories about daring plots to capture code books and then concluding by speaking briefly about Alan Turing's legacy.

Before I attempt to summarise (to the best of my abilities) some of the points that David made during his lecture, he also mentioned an interesting connection between City University and the centre of wartime code breaking at Bletchley Park (website).  David mentioned a former faculty member, Arnold Lynch.  Apparently Arnold worked with the electrical engineer Tommy Flowers (Wikipedia), helping to design a fast input device for the Colossus machines, which were designed with help from the Post Office Research Station (Wikipedia) at Dollis Hill, London.  The work centred on reading paper tape loops using light as opposed to mechanics.  Colossus, Bletchley Park and Turing are intrinsically linked, but as far as I understand they are different stories.  They are linked through cryptography, which is the subject that David introduced next.

Cryptography

What is cryptography?  In essence, it is the study of the hiding and writing of secret messages.  David began by introducing us all to the Caesar cipher (Wikipedia), a simple 'monoalphabetic substitution cipher'.  Simply put, you take one letter and replace it with another.  Such ciphers are easy to crack because you can eventually figure out which letter is which by looking at the structure of messages and also at the frequency of individual letters.
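Because the Caesar cipher is so simple, a few lines of code are enough to show both the cipher and why it falls: there are only twenty-six possible shifts to try.  This sketch is mine, not David's:

```python
import string

def caesar(text, shift):
    """Monoalphabetic substitution: rotate every letter by a fixed shift."""
    alphabet = string.ascii_uppercase
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    return text.upper().translate(table)

ciphertext = caesar("ATTACK AT DAWN", 3)
print(ciphertext)                # DWWDFN DW GDZQ
print(caesar(ciphertext, -3))    # shifting back recovers the plaintext

# To break it without the key, try all 26 shifts and keep the output
# whose letter frequencies look most like ordinary language.
```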

A more sophisticated approach is to encode groups of letters (bigrams or trigrams) as a single code.  This method, we were told, dates back to Napoleonic times.  We were then introduced to the beginnings of the theory of Enigma codes through the Vigenère cipher (Wikipedia), which I had never heard of before.
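The Vigenère cipher is the conceptual stepping stone: the shift changes with every letter, cycling through a keyword, so simple frequency counting no longer works directly.  A minimal sketch (my own, letters only, using the textbook LEMON example):

```python
def vigenere(text, key, decrypt=False):
    """Polyalphabetic substitution: the shift cycles through the keyword."""
    out = []
    for i, c in enumerate(text.upper()):
        shift = ord(key[i % len(key)].upper()) - ord("A")
        if decrypt:
            shift = -shift
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
print(ct)                                   # LXFOPVEFRNHR
print(vigenere(ct, "LEMON", decrypt=True))  # ATTACKATDAWN
```

Enigma pushes the same idea to an extreme: the substitution changes with every keypress, driven by machinery rather than a keyword.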

David added an interesting aside, saying that this cipher was attacked by Lord Byron's daughter, Ada, Countess of Lovelace.  Ada is also known for her work with the Victorian computing pioneer, Charles Babbage, who proposed, designed and partially built different computing engines: the analytical engine and the difference engine.

Returning to the subject in hand, one approach to encrypting a message is to use a book of codes.  A character (or group of characters) is matched with an entry in a code book, which then has a precise meaning.  To use the technical phrases: there is ciphertext (the message that you can't read), and then there is plaintext (the message that you can).

One of the biggest challenges is getting these code books to the people who need to read the messages; key distribution is a problem that every cipher system has to overcome.  David hinted at the mysterious but practical notion of asymmetric keys (Wikipedia), mentioning their grounding in number theory.
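The asymmetric idea can be shown with a toy version of RSA, one well-known asymmetric scheme (the numbers here are absurdly small; real keys are hundreds of digits long):

```python
p, q = 61, 53              # two secret primes
n = p * q                  # 3233, the public modulus
e = 17                     # public exponent
d = 2753                   # private exponent: (e * d) % ((p-1)*(q-1)) == 1

message = 65
cipher = pow(message, e, n)      # anyone can encrypt with the public key
print(cipher)                    # 2790
print(pow(cipher, d, n))         # 65 -- only the private key decrypts
```

The security rests on number theory: recovering d from (n, e) requires factorising n, which is easy for 3233 and (as far as anyone knows) infeasible at the sizes used in practice.  Publishing a public key is safe in a way that distributing a code book never was.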

The Enigma and Codebreaking

One of the most interesting parts of David's talk was his description of the different types of Enigma machine that were deployed; different parts of the German military used different variants.  An Enigma machine comprises a plug board (which I understand to be a character substitution mechanism), a number of rotors, and a reflector which passes the signal back through each of the rotors.  These elements, in combination with each other, create numbers of cryptographic combinations that are quite literally astronomical.  Different machines would have slightly different configurations and different numbers of rotors; the greater the number of rotors, the more 'secure' the code.
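A heavily simplified sketch of my own (one rotor, no plug board, no ring settings) shows the essential mechanics: the rotor steps before every letter, so the substitution constantly changes, and the reflector makes the machine self-reciprocal, so the same settings both encrypt and decrypt.  The wirings below are the historical rotor I and reflector B.

```python
import string

A = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"       # historical Enigma rotor I
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"   # historical reflector B

def encipher(text, start=0):
    out, pos = [], start
    for ch in text.upper():
        pos = (pos + 1) % 26                   # the rotor steps before every letter
        i = (A.index(ch) + pos) % 26           # in through the rotating rotor
        i = (A.index(ROTOR[i]) - pos) % 26
        i = A.index(REFLECTOR[i])              # bounce off the fixed reflector
        i = (ROTOR.index(A[(i + pos) % 26]) - pos) % 26   # back out through the rotor
        out.append(A[i])
    return "".join(out)

ct = encipher("HELLOWORLD")
print(ct)
print(encipher(ct))   # HELLOWORLD again: the same settings decrypt
```

The reflector also guarantees that no letter ever encrypts to itself, a seemingly harmless property that the Bletchley Park codebreakers exploited when testing guessed plaintext against ciphertext.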

A further complexity was that Enigma operators also used code books.  Code books, in combination with plug boards, in combination with rotors, all used to encrypt messages in another language: this presents a problem that feels as if it should be impossible to solve.

So, how was it possible to break the Enigma, to recover plaintext from ciphertext?  I have to confess that when it came to following some of the detail, I became a little lost.  Understanding codes and ciphers, how they work and their weaknesses requires an energetic amount of mental gymnastics.  Knowing the background and context behind the discoveries is a useful prerequisite to understanding the detail.

The first aspect lies with the work carried out by Polish cryptographers, which was invaluable (Bletchley Park has a permanent exhibit which acknowledges their essential contribution).  There was also, apparently, a spy involved, who managed to gather some essential intelligence (which was another part of the story I had not heard of).

The second aspect is that the Polish cryptographers also worked on devices that helped to apply brute force to the decrypting of messages: they created something called the bomba (Wikipedia).  Their work inspired a new generation of British devices, the Bombes (a reconstruction of one can be seen at Bletchley Park).

The third aspect (and there probably are more than just three aspects, of course) is the occurrence of human error.  Enigma operators would make mistakes (as would operators of TUNNY, too), which would convey clues as to how the machines operated and were configured.

Context

Towards the end of the talk, David connected work that was carried out in the second world war to the time of the cold war.  This was the first time I had heard anyone speak about this subject and the connections.  The audience were shown photographs of KL47 and KL7 devices (Wikipedia) that could be considered to be the successors of Enigma.  We were then treated to some spy stories, which reminded us all that keeping (and uncovering) secrets is as much a human challenge as it is a technical one.

Cryptography isn't a subject that is only applicable to the military (although I clearly sense that the military and military intelligence has been the main driver).  It isn't only about keeping secrets safe from spies.  Whenever you buy something over the internet, when the padlock symbol lights up on your internet browser, you make use of asymmetric keys.  (Incidentally, this mechanism was independently discovered by two different groups, but this is totally different story).  

Also, whenever you make a call on a digital mobile phone, encryption comes into play.  David mentioned that cryptography is used from the point when you request money from a cash machine, through to when the resulting information about the transaction is transmitted onwards to other banking machinery.

A really interesting point that I took a note of is that there is a constant battle between cryptographers (those wishing to keep secrets) and cryptanalysts (those who are wishing to break into codes and extract secrets).  This is a battle that is going to run and run, with both mathematics and computing being central tools for both sides.

Reflections

The biographies of Turing, the history of Bletchley Park, and the development of some of the most fundamental ideas within computer science are all intrinsically connected.  With any lecture on the subject, there is a difficult decision to make about what to focus on, what to touch upon and what to leave out.  It was great to hear references to Turing's theory of computability and his connection with the ACE computer at the National Physical Laboratory, as well as his link to the development of the world's first stored-program computer at the University of Manchester.

The history of the code breaking and learning about the social, political and technological environment in which it took place is fascinating.  One thought that I did have was that perhaps Turing, as a man, might have featured more.  But, as mentioned, it's tough to separate out the different elements of a broader complex story.  Code breaking, Turing and computing are all connected.

All in all, a lively and informative talk that presented, for me, a new angle on some very interesting aspects of the code breaking story.  

