Christopher Douce

Gresham College: Designing IT to make healthcare safer

On 11 February, I was back at the Museum of London.  This time, I wasn’t there to see juggling mathematicians (Gresham College) talking about theoretical anti-balls.  Instead, I was there for a lecture about the usability and design of medical devices by Harold Thimbleby, who, I understand, is from Swansea University.

Before the lecture started, we were subjected to a looped video of a car crash test: a modern car from 2009 was crashed into a car built in the 1960s.  The result (and later point) was obvious: modern cars are safer than older cars, because continual testing and development makes a difference.  Even though cars have improved substantially, Harold made a really interesting point.  He said, ‘if bad design was a disease, it would be our 3rd biggest killer’.

Computers are everywhere in healthcare.  Perhaps introducing computers (or mobile devices) might help?  This might well be the case, but there is also the risk that hospital staff end up spending more time trying to get technology to do the right things than dealing with more important patient issues.  There is an underlying question of whether a technology is appropriate or not.

This blog post has been pulled directly from the notes that I made during the lecture.  If you’re interested, a link to the transcript of the talk can be found at the end.

Infusion pumps

Harold showed us pictures of a series of infusion pumps.  I didn’t know what an infusion pump was.  Apparently it’s a device that is a bit like an intravenous drip, except that you program it to dispense a fluid (or drug) into the blood stream at a certain rate.  I was very surprised by the pictures: every infusion pump looked very different from the others, and these differences were quite shocking.  They each had different screens and different displays.  They were different sizes and had different keypad layouts.  It was clear that there was little in the way of internal or external consistency.  Harold made an important point: they were ‘not designed to be readable, they were designed to be cheap’ (please forgive my paraphrasing here).

We were regaled with further examples of interaction design terror.  A decimal point button was placed on an arrow key, so there was no appropriate mapping between the button and its intended task.  Pushing a help button gave little in the way of help to the user.

We were told of a human factors analysis study in which six nurses were required to use an infusion pump over a period of two hours (I think I’ve noted this down correctly).  The conclusion was that all of the nurses were confused.  Sixty percent of the nurses needed hints on how to use the device, and sixty percent were confused by how the decimal point worked (in this particular example).  Strikingly, sixty percent of those nurses entered the wrong settings.

We’re not talking about trivial mistakes here; we’re talking about mistakes where users may be fundamentally confused by the appearance and location of a decimal point.  Since we’re also talking about devices that dispense drugs, small errors can become catastrophic, even life-threatening.

Calculators

Another example of a device where errors can become significant is the common hand-held calculator.  Now, I was of the opinion that modern calculators were pretty idiot proof, but it seems that I might well be the idiot for assuming this.  Harold gave us an example in which we had to calculate a simple percentage of the world population.  Our hand-held calculator simply threw away zeros without telling us and without giving us any feedback.  If we’re not thinking, and since we implicitly trust calculators to carry out calculations correctly, we can easily assume that the answer is correct too.  The point is clear: ‘calculators should not be used in hospitals, they allow you to make mistakes, and they don’t care’.
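To make that concrete, here is a toy sketch in Python (entirely my own illustration, not a model of the calculator Harold demonstrated) of a calculator with an eight-digit display that silently drops any digits that don’t fit:

    # A toy calculator register with an eight-digit display.  Purely
    # illustrative: real calculators vary in how they handle overflow
    # (some show an error, some silently truncate, as described above).
    DISPLAY_DIGITS = 8

    def enter_number(key_presses: str) -> int:
        """Accept key presses, ignoring digits beyond the display width."""
        accepted = key_presses[:DISPLAY_DIGITS]  # extra digits vanish, no warning
        return int(accepted)

    typed = "7000000000"          # world population, roughly -- ten digits
    stored = enter_number(typed)
    print(typed, "->", stored)    # 7000000000 -> 70000000: two zeros lost, silently

Any percentage worked out from the stored value then looks perfectly plausible, which is exactly why the lack of feedback is so dangerous.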

Harold made another interesting point: when we use a calculator we often look at the keypad rather than the screen.  We might have a mental model of how a calculator works that is different from how it actually responds.  Calculators that have additional functions (such as a backspace or ‘delete last keypress’ button) might well break our understanding and expectations of how these devices operate.  Consistency is therefore very important (along with the visibility of results and feedback about errors).

There was an interesting link between this Gresham lecture and the lecture by Tony Mann (blog summary), which took place in January 2014.  Tony made exactly the same point that Harold did: when we make mistakes, we can very easily blame ourselves rather than the devices that we’re using.  Since we hold this bias, we’re also reluctant to raise concerns about the usability of the devices and equipment that we’re using.

Speeds of Thinking

Another interesting link was that Harold drew upon research by Daniel Kahneman (Wikipedia), explicitly connecting the subject of interface design with the subject of cognitive psychology.  Harold mentioned one of Kahneman’s recent books, ‘Thinking, Fast and Slow’, which posits that there are two cognitive systems in the brain: a perceptual system which makes quick decisions, and a slower system which makes more reasoned decisions.  (I’m relying on my notes again; I’ve got Daniel’s book on my bookshelves, amidst loads of others I have chalked down to read!)

Good design should take account of both the fast and the slow system.  One really nice example was the use of a cashpoint to withdraw money from your bank account.  Towards the end of the transaction, the cashpoint begins to beep continually (offering perceptual feedback).  The presence of the feedback causes the slower system to focus attention on the task that still has to be completed (which is to collect the bank card).  Harold’s point is simple: ‘if you design technology properly we can make the world better’.

Visibility of information

How do you choose one device or product over another?  One approach is to make usually hidden information more visible to those who are tasked with making decisions.  A really good example of this is the energy efficiency ratings on household items, such as refrigerators and washing machines.  A similar rating scheme is available on car tyres too, exposing attributes such as noise, stopping distance and fuel consumption.  Harold’s point was: why not create a rating system for the usability of devices?

Summary

The Open University module M364 Fundamentals of Interaction Design highlights two benefits of good interaction design: an economic argument (that good usability can save time and money), and a safety argument.

This talk clearly emphasised the importance of the safety argument, and it also highlighted good design principles (such as those described by Donald Norman): visibility of information, feedback on actions, consistency between and within devices, and appropriate mapping (meaning that a button should do the operation that users expect it to do).

Harold’s lecture concluded with a number of points that relate to the design of medical devices.  (Of which there were four, but I’ve only made a note of three!)  The first is that it’s important to rigorously assess technology, since this way we can ‘smoke out’ any design errors and problems (evaluation is incidentally a big part of M364).  The second is that it is important to automate resilience, or to offer clear feedback to the users.  The third is to make safety visible through clear labelling.

It was all pretty thought provoking stuff which was very clearly presented.  One thing that struck me (mostly after the talk) is that interactive devices don’t exist in isolation – they’re always used within an environment.  Understanding that environment, and the way in which communication occurs between the different people who work within it, is also important (and there are different techniques that can be used to learn more about this).

Towards the end of the talk, someone else asked the very question I had in mind: ‘is it possible to draw inspiration from the aviation industry and apply it to medicine?’  It was a very good question.  I’ve read (in another OU module) that an aircraft cockpit can be used as a way to communicate system state to both pilots.  Clearly, this is a subject of on-going research, and Harold directed us to a site called CHI Med (computer-human interaction).

Much food for thought!  I came away from the lecture feeling mildly terrified, but one consolation was that I had at least learnt what an infusion pump was.  As promised, here’s a link to the transcript of the talk, entitled Designing IT to make healthcare safer (Gresham College). 

Gresham College Lecture: Notations, Patterns and New Discoveries (Juggling!)

On a dark winter’s evening, on 23 January 2014, I discovered a part of London that I had never been to before.  Dr Colin Wright gave a talk entitled ‘notations, patterns and new discoveries’ at the Museum of London.  The subject was intriguing in a number of different ways.  Firstly, it was all about the mathematics of juggling (which represented a combination of ideas that I had never come across before).  Secondly, it was about notations.

The reason why I was ‘hooked’ by the notation part of the title is that my home discipline is computer science.  Computers are programmed using notation systems (programming languages), and when I was doing some research into software maintenance and object-oriented programming I discovered a series of fascinating papers about something called the ‘cognitive dimensions of notations’.  Roughly put, these were all about how we can efficiently work with (and think about) different types of notation system.

In its broadest sense, a notation is an abstraction or a representation.  It allows us to write stuff down.  Juggling (like dance) is an activity that is dynamic, almost ethereal; it exists in time and space, and then it can disappear or stop in an instant.  Notation allows us to write down or describe the transitory.  Computer programming languages allow us to describe sets of invisible instructions and sequences of calculations that exist nowhere except within digital circuits.  When we’re able to write things down, it turns out that we can more easily reason about what we’ve described, and make new discoveries too.

It took between eight and ten minutes to figure out how to get into the Museum of London.  It sits in the middle of a roundabout that I’ve passed a number of times before.  Eventually, I was ushered into a huge cavernous lecture theatre, which clearly suggested that this was going to be quite ‘an event’.  I was not to be disappointed.

Within minutes of the start of the lecture, we heard the names of famous mathematicians: Gauss and Leibniz.  One view was that ‘truths (or proofs) should come from notions rather than notations’.  Colin, however, had a different view: that there is an interplay between notions (or ideas) and notations.

During the lecture, I made a note of the following: a notation is a ‘specialist terminology [that] allows rapid and accurate communication’.  Colin then moved on to ask the question, ‘how can we describe a juggling pattern?’  This led to the creation of an abstraction that could describe the movement of juggling balls.

Whilst I was listening, I thought, ‘this is exactly what computer programmers do: we create one form of notation (a computer program) using another form of notation (a computer language) – the computer program is our abstraction of a problem that we’re trying to solve’.  Colin introduced us to juggling terms (or high level abstractions), such as the ‘shower’, the ‘cascade’ and ‘Mills Mess’.  This led towards the more intellectually demanding domain of ‘theoretical juggling’ (with an impossible number of balls).
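My notes don’t record the name of the notation Colin used, but juggling patterns are often written down as ‘siteswap’: a repeating sequence of numbers, one per throw, where each number says how many beats later that ball is thrown again.  Once you have the notation you can compute with it.  As a rough sketch (the function names and examples below are mine), a sequence is workable if no two throws land on the same beat, and the average of the numbers gives the number of balls:

    # A minimal sketch of siteswap juggling notation -- my own illustration.
    def is_jugglable(pattern: list[int]) -> bool:
        """Valid if no two throws land on the same beat."""
        n = len(pattern)
        landing_beats = {(i + throw) % n for i, throw in enumerate(pattern)}
        return len(landing_beats) == n

    def number_of_balls(pattern: list[int]) -> int:
        """For a valid pattern, the average throw height is the number of balls."""
        return sum(pattern) // len(pattern)

    for pattern in ([3, 3, 3], [5, 1], [4, 3, 2]):   # cascade, shower, and a dud
        if is_jugglable(pattern):
            print(pattern, "needs", number_of_balls(pattern), "balls")
        else:
            print(pattern, "cannot be juggled")

Whether or not this is exactly the system Colin presented, it makes his point: once something as fleeting as juggling is written down, you can reason about it – and even search for patterns that nobody has thrown yet.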

My words can’t really do the lecture justice.  I should add that it is one of those lectures from which you would learn more by listening to it more than once.  Thankfully, for those who are interested, it was recorded, and it is available on-line (Gresham College).

Whilst I was witnessing all these great tricks, one thought crossed my mind: ‘how much time did you have to spend to figure out all this stuff and to learn all these juggling tricks?!  Surely there was something better you could have done with your time!’  (Admittedly, I write this partially in jest and with jealousy, since I can’t catch, and I fear that doing ‘a cascade’ with three balls is, for me, a theoretical impossibility.)

It was a question that was implicitly answered by considering the importance of pure mathematics.  Doing and exploring stuff only because it is intellectually interesting may lead to a real world practical use – the thing is that you don’t know what it might be or what new discoveries might emerge.  (A good example of this is number theory leading to the practical application of cryptography, which is used whenever we buy stuff over the internet.)

All in all, great fun.  Recommended.

Gresham College Lecture: User error – why it’s not your fault

On 20 January 2014 I found the time to attend a public lecture in London that was all about usability and user error.  The lecture was presented by Tony Mann, from the University of Greenwich.  The event was in a group of buildings just down the street from Chancery Lane underground station.  Since I was keen on this topic, I arrived twenty minutes early, only to find that the Gresham College lecture theatre was already full to capacity.  User error (and interaction design), it seems, is a very popular subject!

One phrase that I made a note of is that ‘we blame ourselves if we cannot work something’; we can quickly acquire feelings of embarrassment and incompetence if we do things wrong or make mistakes.  Tony gave us the example that we can become very confused by the simplest of devices, such as doors.

Doors that are well designed should tell us how they should be used: we rely on visual cues to tell us whether they should be pushed or pulled (which is called affordance), and if we see a handle, then we regularly assume that the door should be pulled (which is our application of the design rule of ‘consistency’).  During this part of Tony’s talk, I could see him drawing heavily on Donald Norman’s book ‘The Psychology of Everyday Things’ (Norman’s work is also featured within the Open University module, M364 Fundamentals of Interaction Design).

I’ve made a note of Tony saying that when we interact with systems we take information from many different sources, not just the most obvious.  An interesting example that was given was the Kegworth air disaster (Wikipedia), which occurred because the flight crew had shut down the wrong engine, drawing on experience gained from a different but similar aircraft.

Another really interesting example was the case where a pharmacy system was designed in such a way that drug names could be no more than 24 characters in length.  This created a situation where different drugs (which had very similar names, but different effects) could be prescribed by a doctor in combinations which could potentially cause fatal harm to patients.  Both of these examples connect perfectly to the safety argument for good interaction design.  Another argument (that is used in M364) is an economic one, i.e. poor interaction design costs users and businesses both time and money.

Tony touched upon further issues that are also covered in M364.  He said, ‘we interact best [with a system] when we have a helpful mental model of a system’.  Our mental models determine our behaviour; humans (generally) have good intuition when interacting with physical objects, and it is hard to discard the mental models that we form.

Tony argued that it is the job of an interaction designer to help us to create a useful mental model of how a system works; if there’s a conflict (between what a design tells us and how we think something may work), we can very easily get into trouble very quickly.  One way to help with this is to make use of metaphor.  Tony Mann: ‘a strategy is to show something that we understand’, such as a desktop metaphor or a file metaphor on a computer.  I’ve also paraphrased the following interesting idea, that a ‘designer needs to both think like a computer and think like a user’.

One point was clearly emphasised: we can easily choose not to report mistakes.  This means that designers might not always receive important feedback from their users.  Users may too easily think, ‘that’s just a stupid error that I’ve made…’  Good design, it was argued, prevents errors (which is another important point that is addressed in M364).  Tony also introduced the notion of resilience strategies: things that we do to help us avoid making mistakes, such as hanging a scarf in a visible place so we remember to take it home after we’ve been somewhere.

The three concluding points were: that we’re always too ready to blame ourselves when we make a blunder, that we don’t help designers as often as we ought to, and that good interaction design is difficult (because we need to consider different perspectives).

Tony’s talk touched upon wider (and related) subjects, such as the characteristics of human error and the ways that systems could be designed to minimise the risk of mistakes arising.  If I were to be very mean and offer a criticism, it would be that there was perhaps more of an opportunity to talk about the ‘human’ side of error – but here we begin to step into the domain of cognitive psychology (as well as engineering and mathematics).  This said, his talk was a useful and concise introduction to the importance of good interaction design.

Gresham College: A history of computing in three parts

After a week and a half of continual exam and assignment marking, I was relieved to finally be able to turn my attention to other matters (and get out of my house).  I had an idle question: I wondered whether there were any professors or lecturers in London who shared an interest in the history of computing or technology.  Rather than trawling through university web pages (which was the first idea that crossed my mind), I decided to ask the internet, searching for the words, ‘history computing lecturer London’.

One name was clearly at the top of the list, but it was something else a bit lower down the search results that immediately attracted my attention.  It was a series of lectures entitled, ‘a history of computing in three parts’.  My first reactions were, ‘it’s probably too late’ and, ‘you’ve probably got to pay a lot of money to go along to this gig’.  All this computer history stuff that I’m interested in has to be folded into my day job, which means that it’s easier to justify time but a whole lot harder to justify expenses.

After reading the paragraph that described the event, I cast my eye back to the heading.  I realised that the date of the lecture was TODAY!  The very same day I had done my Google search, Thursday 31 October!  After a few more clicks I discovered that the event was also FREE!  Behold, it was a miracle!  I looked at my calendar; the lecture started at four in the afternoon and provided that I managed to sort out some admin stuff and have a meeting with a colleague, I would probably have enough time.

The only fly in the ointment was that it was all booked up; there were no tickets remaining.  Who knew that the history of computers was such a popular subject?  No matter.  I was looking reasonably smart – I would try to talk my way in.

Lecture 1: Pictures of computers

After a few false starts I managed to find my way to a place called Gresham College (website); navigating my way out of Chancery Lane tube proved to be quite tricky.  It is only in retrospect that I realised that this was one of those places in London that I really ought to have known about.  I just know that people who I speak to about this event will chuckle, slap their thigh and say, ‘oh yes, Gresham College...’, and then look at me as if I’m some kind of idiot when I say that I visited ‘by accident’.

I strode purposefully down a long alleyway and was confronted by a smartly dressed gentleman who obviously had an important role to play.  I began my attack: ‘I’m, erm, here for the lecture…’, and was swiftly gestured towards a flight of stairs without a word.  I felt deflated!  I was expecting to fight my way into the lecture!  I soon found myself in an antechamber filled with men (and women) in anoraks looking at a projector screen, and noisily settled down for what was the first lecture, by Martin Campbell-Kelly.

I joined the lecture at the point where people were being shown coloured photos of office equipment and pictures of steel filing cabinets.  The context was that computers are machines that allow us to process ever increasing amounts of data (and there’s a whole history of manual record keeping that we can easily overlook).  We were then told something about the history of the Rand Corporation followed by parts of the history of the computer company IBM.

On the subject of IBM, he mentioned someone called Eliot Noyes (Wikipedia).  Noyes was to IBM what Jonathan Ive (Wikipedia) is to Apple (if you’re into industrial design).  Martin mentioned that mainframe computers had a particular look; for a time there was a particular ‘design zeitgeist’.  I’ve made notes that Noyes used to look over catalogues from the Italian company Olivetti, and that he designed not only computers but entire rooms.  We were shown photographs of various mock-ups.

The creation of physical prototypes reminded me of some themes that are mentioned in a couple of design modules, either Design Essentials or Design for Engineers.  Martin also made reference to the designer Norman Bel Geddes (Wikipedia).  He also showed us a whole host of other pictures of big machines, notably the ICL 2900 (Wikipedia) used in the Bankers’ Automated Clearing System (BACS).  (I have to confess to being dragged into the depths of the Wikipedia page about that particular ICL computer.  Should I confess to such a level of geekiness?  Probably not!)

Martin’s talk wasn’t really what I had expected, but I found it pretty interesting (and it was a shame I missed the first quarter of it).  I was surprised by the detail that he provided about manual filing systems, but I was also encouraged by the inclusion of information about designers.  The visual and industrial design aspect is an important part of computing history too.  Thinking back, one of my first computers had a very different aesthetic to the machines that I use today.  Function and fashion, combined with the wider perception of devices and machines, are perspectives that are inextricably linked.

After the lecture, it dawned on me that I’ve actually read one of Martin’s books, ‘Computer: a history of the information machine’, which he co-authored with William Aspray.  It’s a pretty good read.  It covers a range of different strands: the pre-history, early electronic machines (such as the UNIVAC, which he touched on in his talk), and then the emergence of the internet and software.  It’s tough to do everything, but he has a good old go at it.

Lecture 2: Turing and his work

The second lecture of the day was by Professor Jonathan Bowen (website).  Jonathan talked about the life and work of Alan Turing (Wikipedia) and mentioned Andrew Hodges’s scholarly biography, ‘the enigma of intelligence’.

Jonathan spoke about three key areas of Turing’s work: his work that relates to the fundamentals of computer science, his philosophical work relating to artificial intelligence, and his later work on morphogenesis (which now has strong connections to the field of bioinformatics).  He mentioned Turing’s birthplace, spoke about his PhD research, which took place at Princeton University (with Alonzo Church as his doctoral supervisor), and also spoke about his work at Bletchley Park.  Other aspects of his life were touched on too, such as his work at the National Physical Laboratory (NPL) in Teddington and his move to the University of Manchester.  During his time at the NPL, he worked on the design of a computer which then became the Pilot ACE (Wikipedia).  When he was at Manchester, he was familiar with the Manchester Mark I computer (the world’s first stored program computer, and don’t let any American tell you otherwise).

What I liked about Jonathan’s talk was its breadth.  He covered many different aspects of Turing’s life in a very short space of time.  He also spoke of the ambiguity regarding Turing’s death, echoing what Hodges had written in his biography.

At the end of his talk, we were directed to a set of web links that might be of interest.  Last year was the centenary of Turing’s birth, and there is a commemorative website that contains a whole host of different resources to celebrate this.  There is also a site that is maintained by his biographer, Andrew Hodges (turing.org.uk).  Interestingly, we were also directed to an on-line archive of documents which can be accessed by computer scientists, historians, or anyone else who might be interested.

Lecture 3: The grand narrative of the history of computing

The headline act of the night was Doron Swade.  I know of Doron’s work from the Science Museum, where he headed up a project to construct a working version of Charles Babbage’s design for his Difference Engine number 2.  Babbage (for those who don’t know of him) was a Victorian inventor and raconteur whose lifelong quest was to design and build mechanical calculating machines.  During his life, he battled with his engineer, faced the challenge of securing money for his ideas, travelled around Italy and hosted some famous parties (and did a whole lot more).

The title of Doron’s lecture was an intriguing and demanding one.  Could there really be a grand narrative about the history of computing?  If so, what elements or ingredients might it contain?  Doron told us that the history of computing is an emerging field and then posed a similar question: ‘what strings [the different] pieces together?’  He also reassured us that a clear narrative appears to be emerging.

The narrative begins with methods for accounting and number systems, i.e. mechanisms to keep track of number.  We could consider the pre-history to comprise artefacts such as tally sticks, or physical devices that can be used to ‘relieve or replace mental calculation’.  This led to the emergence of mechanisms that used moving parts, such as the abacus and the slide rule.  The next ‘chapter’ would comprise devices that embodied algorithms; their mechanisms carried out sequences or steps of calculations.  Here we have the work of Babbage and links to Hollerith (who was mentioned by Campbell-Kelly).

Doron then presented us with a challenge.  If we represent history in this way, there is an implicit suggestion that there is a clear, deterministic path from the past through to the present.  If I understand the point correctly, any narrative (or description of the past) is always going to be flawed, since there is so much more going on.  There could be situations in which nothing much happens.  A really interesting thought that Doron introduced was that the idea of a ‘stored program’ was initially met with puzzlement and confusion, yet this is the idea that distinctly defines what a computer is today.  (I haven’t made a word for word note of what Doron said, but this is something that has certainly stuck in my mind.)

Another interesting point is that a serial narrative naturally excludes the parallel.  There is also an issue of reflexivity (to nick a posh word that I learnt from the social sciences): there is a relationship between history making machines and machines making history.  Linearity, it is argued, does a disservice.  One way to get over the challenge of linearity is to draw upon the stories of people.  These thoughts reminded me of a talk by Tilly Blyth, current keeper of technologies at the Science Museum, about the forthcoming ‘information age’ gallery.  Tilly also emphasised the importance of personal narratives and cautioned against viewing history as a deterministic process.

One of the highlights of Doron’s talk was his ‘river diagram’ of the ‘history of computing’ (my ‘quotes’ at this point, since I don’t think I made a note of a ‘heading’).  Obviously, a picture is much better, but I’ll have a go at describing it succinctly.

In essence, the grand narrative comprises a bunch of different threads.  One thread that runs through it all is the history of calculation.  There is another thread about the history of communication.  In the middle, these threads are linked by ‘tributaries’ which relate to the subjects of automatic computation and information management.  These lead to another (current) thread of study which is entitled the ‘electronic information age’.  I also made a note of a fabulous turn of phrase: the current electronic information age emerged from the ‘fusion chamber of solid state physics’.  Another part of the diagram relates to the different ways in which calculation or computation could be realised: mechanical, electromechanical or electronic.

I also made a quick note of what were considered to be the core ideas in computing: mechanical processes, digital logic, algorithms, systems architecture, software and universality (I’m not sure what this means, though) and the internal stored program.  A narrative, it was argued, comes from a splicing together of different threads.

Returning to Babbage, Doron said that ‘[he] burst out of nowhere and confounds us with schemes that are unprecedented’, proposing mechanical calculating machines the size of rooms.  Doron also spoke about Ada Lovelace’s description of Babbage’s design for his Analytical Engine, a machine that embodies many of the core ideas that are used in computing today: ‘a fetch execute cycle, transfer of memory from the processor, programmable, automatic execution, separation of program and memory’.
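As a small aside, here is a deliberately tiny sketch in Python (my own illustration, nothing to do with Babbage’s actual mechanism) of what a fetch-execute cycle with a separate program and memory amounts to:

    # A toy fetch-execute cycle: the program lives in one store, data in another.
    program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
    memory = [40, 2, 0]   # data store, kept separate from the program

    accumulator, pc = 0, 0
    while True:
        op, operand = program[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":                   # execute it
            accumulator = memory[operand]
        elif op == "ADD":
            accumulator += memory[operand]
        elif op == "STORE":
            memory[operand] = accumulator
        elif op == "HALT":
            break

    print(memory)   # [40, 2, 42]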

Doron ended with a question: ‘to what extent did this [Babbage’s work] influence modern computing?’  The answer is, ‘probably, not very much…’ (my quotes this time, rather than Doron’s), since many of Babbage’s discoveries and inventions were rediscovered and re-implemented as computing devices were realised in different forms, moving from the mechanical to the electrical.  Doron argued that, perhaps because there is so much congruence between the different approaches, the ideas that have been rediscovered and re-implemented may well be really important and fundamental to the subject of computation.  To paraphrase from Doron’s book, Babbage isn’t so much a ‘great grandfather’ of computing, more of a ‘great uncle’.

Reflections

For me, Doron’s talk tied together aspects of the earlier talks.  Martin spoke about the history of information management and touched upon the electromechanical world of computing.  By describing the work of Turing, Jonathan spoke about and connected to the history of automatic computation.  One of the challenges that I’ve been grappling with is that there is so much history that is fundamentally interesting.  I’m interested in learning more, but it remains difficult to know which parts of a bigger picture to focus on. 

What I personally got from the day was confirmation that my interests in related subjects, such as communication technologies and the use, development and deployment of software (and algorithms), do indeed form an important part of a ‘grand narrative’ in the history of computing and information technology.  Whilst I instinctively knew this to be true, Doron’s river diagram drew together the different influences and connections in a very clear and obvious way.

Before heading home, I grabbed a brochure that had the title, ‘free public lectures’, vowing that I would have a good look through it to see what else was going on.  After saying a few goodbyes to people I left the basement room and walked up a flight of stairs.  In the intervening hours it had become dark; time had passed and I hadn’t really noticed.  When I reached the street I reached into my inside pocket for my smartphone to see if I had any messages.  A light was flashing.  I didn’t have any messages, but I had a few alerts.  A theoretical Turing machine rendered into a physical device was alerting me to a comedy night that was to take place later that week.  This was also a gentle reminder of how subtly technology had become entwined with my life.  Was I reliant on this little device?  That was a whole other question.

When I was heading home I asked myself, ‘how come I never knew this Gresham College place existed?’  Perhaps it is one of those places that you only hear about if you’re ‘in the know’.  London, for me, is gradually revealing some of its secrets.
