Christopher Douce

SEAD/LERO Research Conference ‘23

Edited by Christopher Douce, Thursday, 20 July 2023, 09:56

I attended my first joint OU SEAD/LERO research conference, which took place between 4 July and 6 July 2023. SEAD is an abbreviation for the Software Engineering and Design research group, a research group hosted within the OU's School of Computing and Communications. The conference was joined by members of LERO, the Science Foundation Ireland Research Centre for Software, which is based in Limerick.

What follows is a summary of the two days I attended. There was a third day that I didn't attend, which was all about further developing some of the research ideas that were identified during the conference, and about researcher professional development.

The summary is intended for the delegates of the conference, and for anyone else who might be interested in what happens within the SEAD research group. All the impressions (and any accompanying mistakes in my note taking) are completely my own; what is summarised here isn't an official summary. Think of it as a rough set of notes intended to capture some of the themes that were highlighted, and to share some potential research directions and areas that are intended to be further developed and explored.

Day 1: Introductions and research discussions

Bashar Nuseibeh kicked off the day by highlighting the broad focus of the conference: to consider the role of software in society. Although I missed the first few minutes of his opening address due to traffic, there was a clear emphasis on considering important related themes, such as social justice.

The first session was an ice breaker session. This was welcome, since I was a newcomer to the group, and there were many delegates who I had not met before. We were asked to prepare answers to three questions: (1) Who are you, including where you are based and what is your role? (2) What is your main research area or interest? (3) What is something you love about your research, and something you dislike? (Not bureaucracy!)

Having a go at answering these myself: I work as a staff tutor. My research interests have moved and changed, depending on what role I've been doing. Most recently, they have been about the pedagogy of online teaching and learning. When I was a researcher on an EU funded project, I was looking at the accessibility of online learning environments and supporting students who have additional requirements. Historically, my research has been situated firmly in the area of software engineering; specifically, the psychology of computer programming, the maintenance of object-oriented software, and software metrics (informed by research about human memory). I have, however, returned to the domain of software engineering, moving from the individual to communities of developers by starting to consider the role of storytelling in software engineering, working with colleagues Tamara Lopez and Georgia Losasso.

What I love about my research is discovering how different disciplines can be applied to create new insights. What can be difficult is that different disciplines can sometimes use different languages.

Invited talk: navigating the divided city

Next up was an invited talk by Prof. John Dixon from the OU's Social Psychology research group. John's presentation was about "intergroup contact, conflict, desegregation, and re-segregation in historically divided societies". John described how technology was used to explore human mobility preferences, drawing on research carried out as a part of the Belfast Mobility Project. The project studies, broadly speaking, where people go when they navigate their way through spaces, and can be said to sit at an intersection between social science and geography. Technology was used by researchers to study activity space segregation and patterns of informal segregation, which can shed light on social processes.

John also highlighted tensions that a researcher must navigate, such as the tension between open science (where data can be made available to other researchers) and the extent to which it is ethical to share detailed information about the movement of people across a city.

There was a clear link between the talk and the theme: the connection between software and society. This talk also resonated with me personally: as a regular user of an activity tracker called Strava, I was already familiar with some of the ethical concerns that were shared. After becoming a user of Strava, I changed a couple of settings to ensure that my identity is disguised. Also, a year ago, I noticed that the activity tracker had started to hide the start point and the end point of any activity that I was publicly sharing. A final point from this part of the day is that both technology and software can lead to the development of new methods and approaches.

Fishbowl: Discussing society and software

Talking of new methods and approaches, John's talk (and a lunch break) was followed by an event known as the 'fishbowl session', which introduced a 'conference method' that I had never heard of before.

In some respects, the 'fishbowl' session was a discussion with rules. Delegates sat on one of ten chairs in the middle of the room and had a conversation with each other, whilst trying to connect with either the main theme of the discussion (software and society) or some of the topics that emerged from the discussions. We were encouraged to discuss "anything where software has a role to play".

The fishbowl discussed the consequences of technology, collective education, critical thinking (of users), the power of automation, the concentration of power (in corporations), the use of AI (such as large language models), trade-offs, and complex systems. On the subject of AI, one view I noted down was that perhaps the use of AI ought to be limited to low risk domains, leaving the critical thinking to people (though this presupposes that we understand all the risks). There was also a call to ensure that AI tools explain their "reasoning", but this implicitly links back to the points about the skills and knowledge of users. This is linked to the question: how do we empower people to make decisions about the systems that they use?

Choices were also discussed. Choices by consumers, and by developers, especially in terms of what is developed, and what is good to develop. Also, when uncovering and specifying requirements, it is important to consider what the negatives might be (an observation which reminds me of the concept of ‘negative use cases’ which is highlighted in the OU’s interaction design module).

I noted down some questions that were highlighted: how do we present our discipline? Do we research how to “do software” and leave it up to industry? Should we focus on the evaluation of the impact of software on communities and society? An interesting quote was shared by Bashar, which was: “working in software research is working for society”.

A final reflection I noted was that societal problems (such as climate change) can be thought of as wicked problems, where there is no single right answer. Instead, there might only be solutions that are neither wholly right nor wrong, or solutions that are better or worse than others.

It was difficult to distil everything down to a group of neat topics, but here are some headings that captured some of the points that were discussed during the fishbowl session: resilience, care, sustainability, education, safety and security, and responsibility.

At the end of the session, all delegates were encouraged to join a group that reflected their research interests. I joined the sustainability group.

Group Work 1 - Expansion of themes from the fishbowl

After a coffee break it was time to do some work. The guidance from the agenda was "to develop some proposals for future research (problem; research objectives; research questions; methods; impact)".

The sustainability group comprised four members: three from SEAD and one from LERO.

After broadly discussing the link between sustainability and software engineering, we produced a sketch of a poster that shared the following points:

  • How can we make connections and causal links between different (sub)systems explicit?
  • How can we engineer software to be holistically ‘resource aware’?
  • What is the meta-language for sustainable software systems?
  • What are the heuristics for sustainable software systems?

On the surface of it, all these points are pretty difficult to understand. 

The first point relates to the link between software, economics, and society. Put another way: what needs to be done to make sure that software systems make a positive contribution to the various dimensions of our lives? By way of further context, the notion of Doughnut Economics was shared and discussed.

The second point relates to the practice of developing software. Engineers don’t only need to consider how to develop software systems that use resources in an efficient way, they also need to consider how software teams use and consume resources.

The third point sounds confusing, but it isn't. Put another way: how do we talk about, describe, or even rate the efficiency or sustainability of software systems? Going even further, could it be possible to define an ISO standard that describes what elements a sustainable software system could or should contain?

The final point also sounds arcane but, when unpacked, begins to make a bit of sense. In other words: are there rules that software engineers could or should apply when evaluating the energy use, or overall sustainability, of software systems? There are, of course, links from this topic to the topic of algorithms and data structures (which is explored in modules such as M269 Algorithms, data structures and computability), which considers efficiency in terms of time and memory. A simple practical rule might be, for example: "rather than continually polling to check the status of something, use signals between software elements". There is also a link to the notion of software patterns and architecture (with patterns being taught on TM354 Software Engineering).
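
To make the polling rule a little more concrete, here is a minimal Python sketch. This is my own illustration rather than anything presented at the conference, and the two-second worker is a hypothetical stand-in for real work:

    import threading
    import time

    done = threading.Event()

    def worker():
        time.sleep(2)   # a stand-in for some real computation
        done.set()      # signal completion to anyone who is waiting

    threading.Thread(target=worker).start()

    # The polling approach wakes the processor over and over, just to ask
    # 'are we there yet?':
    #     while not done.is_set():
    #         time.sleep(0.01)
    #
    # The signalling approach lets the waiting thread sleep until the event
    # fires, consuming no cycles in the meantime:
    done.wait()
    print('finished')

The saving from a single loop is tiny, but multiplied across millions of devices and services, design decisions of this kind arguably become a sustainability concern.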

Day 2: Ideate and prototype

The second day kicked off with summaries from the various groups. The responsibility team spoke about the role of individuals, values, and organisations. The care group highlighted motivation, engagement, older users, and how to help people to develop their technical skills. The education group had been discussing computing at schools, education for informed choices, critical thinking, and making sure that the right problem is addressed. The resilience group discussed support through communities, and the safety and security group asked whether safety relates to people or to process.

A paraphrased point from Bashar: "look to the literature to make sure that the questions that are being considered haven't been answered before". Also, reflecting on the earlier keynote: "consider radical methods or approaches, and consider the context when trying to understand socio-economic systems".

Group Work 2 - Ideate and prototype

Back in our groups, our task was to try to operationalise (or to translate) some of our earlier points into clearer research questions with a view to coming up with a research agenda.

Discussing each of the points, we returned to the meaning of the term sustainability, along with what is meant by resource utilisation by code, also drawing upon the UN Sustainable Development Goals (https://sdgs.un.org/goals).

We eventually arrived at a rough agenda, which I have taken the liberty of describing in a bit more detail. The first point begins at a high level. Each subsequent point moves down into deeper levels of analysis, and the agenda concludes with a point about how to proactively influence change:

  1. What types of software systems or products consume the most energy?
  2. After identifying a high energy consuming product or system, use a case study approach to holistically understand how energy is used, also taking into account software development practices and processes.
  3. What are the current software engineering practices of developers who design, implement and build low energy computing devices, and to what extent can sharing knowledge about practice inform sustainable computing?
  4. What are the current attitudes, perceptions and motivations of the current generation of software engineers and developers, and how might these be systematically assessed?
  5. After uncovering practices and assessing attitudes, how might the university sector go about influencing organisations to enact change?

Relating to the earlier call to “draw on the literature”, a member of our team knew of some references that could be added to the reference section of our emerging research poster:

Lago, P. et al. (2015) Framing sustainability as a property of software quality. Communications of the ACM, Volume 58, Issue 10, pp.70–78. https://doi.org/10.1145/2714560

Lago, P. (2019) Architecture Design Decision Maps for Software Sustainability. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS), 25-31 May 2019, IEEE. https://doi.org/10.1109/ICSE-SEIS.2019.00015

Lago, P. et al. (2021). Designing for Sustainability: Lessons Learned from Four Industrial Projects. In: Kamilaris, A., Wohlgemuth, V., Karatzas, K., Athanasiadis, I.N. (eds) Advances and New Trends in Environmental Informatics. Progress in IS. Springer, Cham. https://doi.org/10.1007/978-3-030-61969-5_1 

Manotas, I. et al. (2016) An Empirical Study of Practitioners' Perspectives on Green Software Engineering. 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), 14-22 May 2016. https://doi.org/10.1145/2884781.2884810

Wolfram, N. et al. (2017) Sustainability in software engineering. 2017 Sustainable Internet and ICT for Sustainability (SustainIT), 6-7 December 2017. https://doi.org/10.23919/SustainIT.2017.8379798

(A confession: I added the Manotas reference when I was writing up this blog, since it looked like a pretty interesting recommendation, especially having previously been interested in empirical studies of programmers.)

Conference visit: Bletchley Park

The second day concluded with a visit to Bletchley Park, which isn’t too far from the campus. It seemed appropriate to visit a place where socio-technical systems played such an important role. I had visited Bletchley Park a few times before (I also recommend the computing museum, which is situated on the same site), so I sloped off early to try to avoid the rush hour to London.

Day 3: Consolidate and plan next steps

This final day contained a workshop that had the title “consolidate and plan next steps” and also had a session about professional development. Unfortunately, due to my schedule, I wasn't able to attend these sessions.

Reflections

I really liked the overarching theme of the event: the connection between software and society. Whilst listening to the opening comments it struck me that there were some clear points of crossover between research carried out within the SEAD group, and the research aims of the OU Critical Information Studies research group.

It was great working with others in the sustainability group to try to develop a very rough and ready research agenda. It was also interesting to begin to discover how fellow researchers in other institutions had been thinking along similar lines and have already taken some of our ideas further. 

One of my next steps is to continue with reading and exploring with an aim of developing a more thorough understanding of the research domain.

It was interesting that I was the only staff tutor at the event. It is hard for us to do research, since our time is split three ways: academic leadership and management (of part time associate lecturers), teaching, and whatever time remains, which can be dedicated to research. For the next few years, my teaching 'bit' of time will be put towards doing my best to support TM354 Software Engineering.

Looking forward, what I'm going to try to do is to integrate the different aspects of my work together: the teaching bit with the research bit and the tutor management bit. I'm also hoping (if everything goes to plan) to tutor software engineering for the first time.

As well as integrating everything together, another action is to begin to work with SEAD colleagues to attempt to put together a PhD project that relates to sustainable computing.

Update 20 July 23: After doing a couple of internet searches to find more about DevOps, I discovered a new book entitled Building Green Software (O'Reilly), which is due to be published in July 24. I also found an interview with the lead author (YouTube), and learnt about something called the Green Software Foundation. I feel really encouraged by these discoveries.

Christopher Douce

Bletchley Park archive course


At the end of January, I took a day off from my usual duties and went to an event called the 'Bletchley Park archive course'. I heard about the course through the Bletchley Park mailing list. As soon as I received the message telling me about it I contacted the organisers straight away, but unfortunately I was already too late: there were no longer any spaces on the first event. Thanks to a kind hearted volunteer, I was told about this follow up event.

This blog post is likely to be the first of a number of posts about Bletchley Park, a place that is significant not only in terms of Second World War intelligence gathering and analysis, but also in the history of computing. It's a place I've been to a couple of times, but this visit had a definite purpose: to learn more about the archives and what they might be able to tell a very casual historian of technology, like myself.

I awoke at about half six in the morning, which is the usual time when I have to travel to Milton Keynes, and found my way to my local train station. The weather was shocking, as it had been for the whole of January. I was wearing sturdy boots and had donned a raincoat, as instructed by the course organisers. Two trains later, I was at Euston Station, ready to take the relatively short journey north to Milton Keynes, and then on to the small town of Bletchley, just one stop away.

Three quarters of an hour later, after walking through driving rain and passing what appeared to be a busy building site, I had found the room where the ‘adult education’ course was to take place.

Introduction and History

The day was hosted by Bletchley Park volunteer Susan Slater. Susan began by talking about the history of the site that was to ultimately become a pivotal centre for wartime intelligence. Originally belonging to a financier, the Bletchley Park manor house and adjoining lands were put up for auction in 1937.

Bletchley was a good location; it was pretty inconspicuous. It was also served by two railway lines: one that went to London, and another that ran from east to west, connecting the universities of Oxford and Cambridge. Not only was it well served in terms of transport, the railway also offered other kinds of links: it was possible to connect to telecommunication links that I understand ran next to the track. Importantly, it was situated outside of London (and away from the never ending trials of the Blitz).

Susan presented an old map and asked us what we thought it was. It turned out to be a map of the telegraph system during the time of the British Empire; red wires criss-crossed the globe. The telegraph system can be roughly considered to be a 'store and forward' system. Since it was impossible (due to the distances involved) to send a message directly from England to, say, Australia, messages (sent in Morse code) were relayed via a number of intermediate stations (or hubs).

Susan made the point that whoever ran the telecommunication hubs was also able to read all the messages that were transferred through them. If you want your communications to be kept secret, the thing to do is to encode them in some way. Interestingly, Susan also referred to Edward II, where there was a decree in around 1324 (if I understand this correctly!) that stated 'all letters coming from or going to parts overseas [could] be seized'. Clearly, the contemporary debates about the interception of communications have very deep historical roots.

We were introduced to some key terms. A code is a representation of letters and words by other letters and words. A cypher is how letters are replaced with other letters. I also noted that if something is formulaic (or predictable), then it can become breakable (which is why you want to hide the artefacts of language: certain characters in a language are statistically more frequent than others, for example). The most secure way to encode a message is to use what is known as a one-time pad (Wikipedia). This is an encoding mechanism that is used only once and then thrown away.
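
Since a one-time pad is simple enough to express in a few lines of code, here is a small Python sketch of the idea (my own illustration, not something from the course). The pad must be truly random, the same length as the message, and used exactly once:

    import secrets

    def one_time_pad(text: bytes, pad: bytes) -> bytes:
        """XOR every byte of the message with the corresponding pad byte."""
        assert len(pad) == len(text)
        return bytes(t ^ p for t, p in zip(text, pad))

    message = b'ATTACK AT DAWN'
    pad = secrets.token_bytes(len(message))    # random, single-use key material

    ciphertext = one_time_pad(message, pad)
    recovered = one_time_pad(ciphertext, pad)  # XORing twice restores the text
    assert recovered == message

Applying the same XOR twice cancels the pad out, which is why the very same operation both encrypts and decrypts; reuse the pad, though, and the scheme falls apart.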

An Enigma machine (Wikipedia), which sat at the front of the classroom, was an electro-mechanical implementation of an encoding mechanism. Susan outlined its design to us: it had a keyboard like a typewriter, a plug board (to replace one letter with another), three or four rotors that had the same number of positions as there were characters (and which moved every time you pressed a key), and wiring within the rotors that changed the 'letters' even further.

Second session: how it all worked

After a swift break, we dived straight into the second session, where we were split into two teams. One team had to encrypt a message (using the Enigma machine), and the second team had to use the same machine to decrypt that message (things were made easier since the 'decrypting side' knew what all the machine settings were). I think my contribution was to press either the letter 'F' or the letter 'Q' – I forget! Rotors turned and lights lit up. The seventy-something year old machine still did its stuff.

What follows are some rough notes from my notebook (made quickly during the class). We were told that different parts of the German military used different code books (and the Naval Enigma machine was different to other Enigma machines). Each code book lasted for around six weeks. A code book contained information such as the day, the rotor positions, the starting points of the rotors, and the plug board settings; everything you needed to make understandable messages totally incomprehensible.

The challenge was, of course, to uncover what the settings of an Enigma machine were (so that messages could be decrypted). A machine called the Bombe (Wikipedia) was invented to help with the process of figuring out what the settings might be. When the settings were (potentially) uncovered, they were tested by entering them into a machine called the Typex (which was, in essence, a version of an Enigma machine) along with the original message, to see if plain text (an unencrypted message) appeared.

The Enigma wasn’t the only machine that was used to encrypt (and decrypt) messages. Enigma (as far as I understand) was used for tactical communications. Higher level strategic communications within the German high command were transmitted using the Lorenz cypher. This more complicated machine contained a paper tape reader which allowed the automatic transmission of messages, dispensing with the need for a Morse code operator.

In terms of the scale of the operation at Bletchley Park, we were told that three thousand Enigma messages were being decoded every day, along with forty Lorenz messages. To help with this, there were 210 Bombe machines to help with the Enigma codes, and a machine that is sometimes described as 'the world's first electronic computer': the Colossus machine. At its peak, there were apparently ten thousand workers (around three quarters of whom were women), running three shifts.

Bombe Demo

After a short break, we were gently ushered downstairs to one of the museum exhibits: a reconstruction of a Bombe machine. This was an electro-mechanical device that 'sped up' the process of discovering Enigma machine settings. Two operators described how it worked and then turned it on. It emitted a low whirring and clicking noise as it mechanically went through hundreds of combinations.

As the Bombe was running, I had a thought: I wondered how you might go about writing a computer program, or a simulation, to do pretty much the same thing. The machine operators talked about the use of something called a 'code map', which helped them to find the route towards the settings. I imagined an interactive smartphone or tablet app that allowed you to play with your own version of a Bombe, to get a feel for how it would work... There could even be a virtual Enigma machine that you could play with; you could create a digital playground for budding cryptographers.

Of course, there’s no such thing as an original thought: a Bombe simulator has already been written by the late Tony Sale (who reconstructed the Colossus machine), and a quick internet search revealed a bunch of Enigma machine simulators. One burning question is: how might we make the best use of these tools and resources?
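
To give a flavour of what such a playground might contain, here is a deliberately tiny rotor machine written in Python, together with a brute-force search over its starting positions. This is my own toy (a single rotor, no plug board or reflector), not a faithful Enigma or Bombe:

    ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

    def rotor_machine(text, start, encrypt=True):
        """Shift each letter by the rotor position; the rotor steps on every keypress."""
        out, position = [], start
        for character in text:
            shift = position if encrypt else -position
            out.append(ALPHABET[(ALPHABET.index(character) + shift) % 26])
            position = (position + 1) % 26
        return ''.join(out)

    ciphertext = rotor_machine('WEATHERREPORT', start=17)

    # A Bombe-like search: try every starting position and test each guess
    # against a 'crib' (a fragment of expected plain text).
    for guess in range(26):
        if rotor_machine(ciphertext, guess, encrypt=False).startswith('WEATHER'):
            print('rotor start position:', guess)   # prints 17

The real machines had vastly larger setting spaces, which is exactly why electro-mechanical searching was needed, but the shape of the problem (guess the settings, then test against a crib) is the same.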

Archive Talk

The next part of the day was all about the archive: the real reason I signed up for this event. I have to confess that I didn't really know what to expect, and this sense of uncertainty was compounded by my having a general interest rather than a very specific research question in mind.

The archive is run by the Bletchley Park Trust. GCHQ, the Government Communications Headquarters, is the custodian of the records that have come from Bletchley Park. I understand that GCHQ is going to use Bletchley Park as its 'reading room', having lent around one hundred and twenty thousand documents for a period of fifty years.

By way of a very general introduction, a number of samples from the archive were dotted around our training room. These ranged from Japanese language training aids (and a hand-written Japanese-English dictionary) and forms used to help with the decryption of transmissions, through to samples of transmissions that were captured during the D-Day landings.

Apparently, there’s a big multi-stage project under way to digitise the archive. The first stage is to have the artefacts professionally photographed. This is followed by (I believe) storing the documents in some kind of on-line repository. Volunteers may then be needed to help create metadata (or descriptions) for each repository item, to enable them to be found by researchers.

Tour

The final part of the day was a tour. As I mentioned earlier, I've been on a couple of Bletchley Park tours, but this was unlike any of the tours I had been on before. We were all given hard hats and told to don high visibility jackets. We were then ushered out into the driving rain.

After a couple of minutes of trudging, we arrived at a building that I had first seen when I entered the site. The building (which I understand was known as 'hut 3') was to become a new visitors' centre. From what I remember, the building used to be one of the largest punched card archives in Europe, known as Deb's delight (for a reason that completely escapes me). It was apparently used to cross-reference stuff (and I'm writing in terrible generalisations here, since I really don't know very much!).

Inside, there was no real lighting, and dust from work on the floors hung in the air. There was a strong odour of glue or paint. Stuff was clearly happening. Internal walls had been stripped away to reveal a large open plan area which would become an ideal exhibition space. Rather than being a wooden prefabricated 'hut', we were walking through a substantial brick building.

Minutes later, we were directed towards two other huts that were undergoing restoration.  These were the wooden ones.  It was obvious that these buildings had lacked any kind of care and attention for many years, and workmen were busy securing the internal structure.  Avoiding lights and squeezing past tools, we snaked through a series of claustrophobic corridors, passing through what used to be the Army Intelligence block and then onto the Navy Intelligence block.  These were the rooms in which real secrets became clear.   Damp hung in the air, and mould could be seen creeping up some of the old walls.  There was clearly a lot of work that needed to be done.

Final thoughts

Every time I visit Bletchley Park, I learn something new. This time, I became more aware of what happened in the different buildings, and I certainly learnt more about the future plans for the archive. Through the talks that took place at the start of the day, I also learnt of a place called the Telegraph Museum (museum website), which can be found at Porthcurno, Cornwall. When walking through the various corridors to the education room, I remember a large poster that suggested that all communication links come to Bletchley Park, and that Bletchley is the centre of everything.

When it comes to the history of computing, it's impossible to separate the history of the computer from the history of telecommunications. In Bletchley Park, communications and computing are fundamentally intertwined. There's another aspect too, which is that computing (and computing power) has led to the development of new forms of communication. Before I go any further forward in time (from, say, 1940 onwards), there's a journey that I have to make back in time: a diversion to discover more about telecommunications, and a good place to start is by learning more about the history of the telegraph system.

I’ll be back another day (ideally when it's not raining) to pay another call to Bletchley Park, and will also drop in to The National Museum of Computing, which occupies the same site.

Christopher Douce

Mathematics, Breaking Tunny and the First Computers

Edited by Christopher Douce, Monday, 15 May 2017, 12:11

Picture of the Colossus computer

One of my interests is the history of computing. This blog post aims to summarise a seminar that was given by Malcolm MacCallum, University of London, at the Open University on 30 October 2012. Malcolm used to be the director of the Heilbronn Institute for Mathematical Research, Bristol. Malcolm began by saying something about the institute, its history, and its research.

This blog post complements an earlier post that I wrote to summarise a lecture given at City University. That earlier lecture was entitled Breaking Enigma and the legacy of Alan Turing in Code Breaking; it took place back in April 2012 and was one of a series of events to celebrate the centenary of Alan Turing's birth. Malcolm's talk was similar in some respects but had a different focus: there was more of an emphasis on the story that led to the development of what is arguably one of the world's first computers.

I'm not going to say much about the historical background that is obviously connected with this post, since a lot of this can be uncovered by visiting the various links that I've given (if you're interested).  Instead, I'm going to rush ahead and introduce a swathe of names, terms and concepts all of which connect with the aim of Malcolm's seminar.

Codes, Cyphers and People

In some respects, the story of the Enigma code, which took place at the Government Code and Cypher School, Bletchley Park, is one that gains a lot of the historical limelight. It is easy to conflate the breaking of the Enigma code (Wikipedia), the Tunny code (Wikipedia), and the work of Alan Turing (Wikipedia). When it comes to the creation of 'the first computer' (quotes intentional), the story of the breaking of the Tunny code is arguably more important.

The Tunny code was generated by a device called the Lorenz cypher machine, which combined transmission, encryption and decryption. The Enigma code was very different: messages encrypted using Enigma were transmitted by hand in Morse code.

I'm not going to describe the machines in much detail, since I've never seen a real one and cryptography isn't my specialism. Malcolm informed us that each machine had 12 wheels (or rotors). Each wheel had a set of cams that were set to either 1 or 0. These wheel settings were changed every week or month (just to make things difficult). As each character is transmitted, the wheels rotate (as far as I know) and an electrical circuit is created through each rotor to create an encrypted character. The opposite happens when you decrypt: you put an encrypted character in one side and a plain text (decrypted) character magically comes out the other side.

For everything to work, the rotors of both the encrypting and decrypting machines have to have the same starting point (otherwise everything will be gibberish). These starting points were transmitted in unencrypted plain text at the start of a transmission.

Through wireless intercept stations it was possible to capture the signals that the Lorenz cypher machines were transmitting. The codebreakers at Bletchley Park were then faced with the challenge of figuring out the structure and design of a machine that they had never seen. It sounds like an impossible challenge: to figure out how many rotors and wheels it used, how many states the rotors had, and what these states were.

I'll be the first to admit that the fine detail of how this was done pretty much escapes me (and, besides, I understand that some of the activities performed at Bletchley Park remain classified). What I'm really interested in is the people who played an important role in designing the physical hardware that helped with the decryption of the Tunny codes.

Depths and machines

Malcolm hinted at how the codebreakers managed to begin to gain an insight into how the Lorenz machine (and code) worked. He mentioned (and I noted) the use of depths (Wikipedia), which is where two or more messages were sent using the same key (or machine setting). Another note that I made was about something called a Saltman break, which is mentioned in a book I'll reference below (one of those books that is certainly on my 'to read' list).
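
The value of a depth is easy to demonstrate. For a machine like Lorenz, which (as I understand it) combined a key stream with the message using an XOR-like operation, sending two messages on the same settings lets the key cancel out entirely. Here is a small Python sketch of my own, using made-up key bytes:

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key = bytes([0x13, 0x07, 0x2a, 0x5c, 0x61, 0x0e, 0x49,
                 0x33, 0x7b, 0x02, 0x55, 0x1c, 0x28, 0x44])
    message_1 = b'ATTACK AT DAWN'
    message_2 = b'RETREAT AT TEN'

    cipher_1 = xor(message_1, key)
    cipher_2 = xor(message_2, key)

    # Two messages sent 'in depth' share the same key, so XORing the two
    # ciphertexts cancels the key completely, leaving only the two plain
    # texts combined -- a foothold for the cryptanalyst.
    assert xor(cipher_1, cipher_2) == xor(message_1, message_2)

From that combined text, the statistical regularities of language make it possible to tease the two plain texts apart, and from there to begin reconstructing the key stream itself.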

Malcolm mentioned two different sections of Bletchley Park: the Testery (named after Ralph Tester) and the Newmanry (named after Max Newman). Another character who was mentioned was Bill Tutte, who applied statistical methods (again, the detail of which is totally beyond me and this presentation) to the problem of wheel setting discovery.

It was realised that key aspects of code breaking could be mechanised. Whilst Turing helped to devise the Bombe (Wikipedia), the equipment that was used for the decryption of the Enigma code, another machine, called the Heath Robinson (Wikipedia), was built to help attack Tunny.

One of the difficulties with the Heath Robinson was its speed: it made use of electromechanical relays, which were slow, restricting the code breaking effort. A new approach was considered: the creation of a calculating machine that made use of thermionic valves (a precursor to the transistor). Valves were perceived to be unreliable, but it was realised that if they were continually powered up they were not stressed.

Colossus

Tommy Flowers (Wikipedia) designed and engineered a computer called Colossus (Wikipedia), drawing on experience gained working at the Dollis Hill Post Office research station in North London.

Although Colossus had elements of a modern computer, it could perhaps best be described as a 'special purpose cryptographic device'. It was not programmable in the way that a modern computer is (that is a development that came later), but its programs could be altered, perhaps by changing its circuitry (I don't yet know how this would work). It did, however, make use of familiar concepts such as interrupts; it synchronised its operation by a clock-pulse, stored data in memory, used shift registers, and did some parallel processing. Flowers also apparently introduced the term 'arithmetic and logic unit'.

Colossus was first used to break a message on 5 February 1944.  A rather different valve based calculator, the ENIAC (Wikipedia), built by the Moore School of Electrical Engineering, University of Pennsylvania, was used two years later.

Final points

Malcolm told us that ten Colossi were built (I might have spelt that wrong, but what I do know is that 'Colossuses' certainly isn't the right spelling!), with the last one being dismantled in 1960. A total of twenty seven thousand messages were collected, of which thirteen thousand were decrypted. Malcolm also said that Flowers was 'grossly under rewarded' for his imaginative and innovative work on Colossus. I totally agree.

Research into the Colossus was carried out by Brian Randell of the University of Newcastle in the 1970s. A general report on the Tunny code was only released relatively recently, in 2000. Other sources of information that Malcolm mentioned were a book about Colossus by Jack Copeland (Wikipedia) (which is certainly on my 'to read' list), and a biography of Alan Turing by Andrew Hodges (Wikipedia).

Malcolm's talk reminded me of how much computing history is, quite literally, on our doorstep. I regularly pass Bletchley on the way to the Open University campus at Milton Keynes. There are, of course, so many other places close by that have played an important role in the history of computing. Although I've already been to Bletchley Park twice, I'm definitely going to go again and take a longer look at the various exhibits.

(Picture: Wikipedia)
