Christopher Douce

New Technology Day - June 2014

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 8 Oct 2019, 17:42

This post is a quick summary of a New Technology Day event that took place at The Open University London regional centre on Saturday 14 June 2014.  I’ve written it for a number of reasons: for my esteemed colleagues who came along to the day, to help me remember what happened, to share with my bosses what I’m getting up to on a day-to-day basis, and for anyone else who might be remotely interested.

One of the challenges that accompanies working in the area of technology, particularly information technology and computing, is that the pace of change is pretty relentless.  There are always new innovations, developments and applications.  There are always new businesses doing new things, and using new techniques.  Keeping up with ‘the new stuff’ is a tough business.  If we spent all our time looking at what was ‘new’ out there, we simply wouldn’t get any work done – but we need to understand ‘the new’, so we can teach and relate to others who are using ‘all this new stuff’.

The idea for this day was really simple: to ask colleagues, ‘have you heard of any new technology stuff recently?  If so, can you tell me about it?’  Rather than having a hard and fast ‘training’ agenda, the idea was to create a space (perhaps a bit like an informal seminar) that gave us an opportunity to share views, chat, and learn from each other.

Cloud computing

After a brief introduction, I kicked off with the first presentation, which was all about cloud computing.  A couple of weeks back, I went to a conference all about an open source ‘cloud operating system’ called OpenStack, as part of some work I was doing for a module team.  The key points from the presentation are described in a series of two blog posts (OU Blog).

Towards the end of the presentation, I mentioned a new term called Fog Computing.  This is where ‘the cloud’ is moved to the location where the data is consumed.  This is particularly useful in instances where fast access times are required.  It was interesting to hear that some companies might also be doing something similar.  One example might be companies that deliver pay-on-demand streaming video.  It obviously doesn’t make a lot of sense if the movies that you want to see are located on another continent; your viewing experience may well be compromised by unforeseen network problems and changes in traffic.
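The intuition behind moving ‘the cloud’ closer to where data is consumed can be sketched in a few lines of Python.  This is a toy illustration of my own (the region names and latency figures are made up, not anything from the presentation):

```python
def nearest_replica(client_region, replicas):
    """Pick the replica with the lowest latency to the client.
    This is the essence of the fog/CDN idea: serve each request
    from the copy closest to where the data is consumed."""
    return min(replicas, key=lambda r: r["latency_ms"][client_region])

# Made-up illustrative figures (milliseconds).
replicas = [
    {"name": "eu-west", "latency_ms": {"london": 12, "sydney": 290}},
    {"name": "ap-south", "latency_ms": {"london": 270, "sydney": 35}},
]
print(nearest_replica("london", replicas)["name"])  # eu-west
print(nearest_replica("sydney", replicas)["name"])  # ap-south
```

The point the toy makes is the one from the talk: if your movies live on another continent, every request pays the long-haul latency; replicating them nearby avoids it.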

It was useful to present this since it helped to clarify some of my understandings, and I also hoped that others found it interesting too.  Whilst the concept of a ‘cloud’ isn’t new (I remember people talking about the magic of an X.25 cloud), the technologies that realise it are pretty new.  I also shared a new term that I had forgotten I had written on one of my slides: the concept of a devop – someone who is both a developer and an operator.

JuxtaLearn project

The second presentation was about the JuxtaLearn project, by Liz Hartnett, who was unable to attend.  Liz was still able to make an impact on the event, since she had gone the extra mile to make an MP3 recording of her entire presentation.  Her talk adopted the PechaKucha format, where a presenter uses 20 slides which change every 20 seconds.  Since her slide deck was set up to change automatically, it worked really well.

We learnt about the idea of a threshold concept (which can be connected to the teaching of computer programming) and saw how videos could be made with small project groups.  I (personally) connected this with activities that are performed on two different modules: TU100 My Digital Life, and T215 Communication and Information Technologies, which both ask students to make a presentation (or animation).

OU Live and pedagogy

The next talk of the day was by Mandy Honeyman, who also adopted the PechaKucha format.  Mandy talked about a perennial topic, which is the connection between OU Live and pedagogy.  I find this topic really interesting (for the main reason that I haven’t yet ‘nailed’ my OU Live practice within this format, but it’s something that I’m continuing to work on).  I can’t speak for other people, but it has taken me quite a bit of time to feel comfortable ‘teaching’ using OU Live, and I’m always interested in learning further tips.

Mandy has taken the time and trouble to make a version of her presentation available on YouTube.  So, if you’ve got the time (and you were not at the event), do give this a look.  (She prepared it using PowerPoint, and recorded it using her mobile phone.)

The biggest tip that I’ve made a note of is the importance of ‘keeping yourself out of it’, or ‘taking yourself out of it [the OU Live session]’.  When confronted by silence it’s easy to feel compelled to fill it with our own chatter, especially in situations where students are choosing not to use the audio channel.

One really interesting point that came out during the discussion was how important it is to show students how to use OU Live right at the start of their journey with the OU.  I don’t think this is done as well as it could be at the moment.  I feel that level 1 tutors are implicitly given the challenging task of getting students up to speed with OU Live, but they already have a lot on their hands in terms of the academic side of things.  I can’t help thinking that we could be doing a bit better when it comes to helping students become familiar with what is increasingly becoming a really important part of OU teaching and learning.

It was also mentioned that application sharing can run quite slowly (especially if you do lots of scrolling) – and one related thought is that this might well impact on the teaching and learning of programming.

A final point that I’ll add is that OU Live can be used in a variety of different ways.  One way is to use it to record a mini-lecture, which students can watch in their own time.  After they’ve seen it, they can then attend a non-recorded discussion seminar.  I’ve also heard of it being used to facilitate ‘drop in sessions’ over a period of a couple of hours (which I’ve heard is an approach that can work really well).

Two personal reflections that connect to this session: first, we always need good clear guidance from the module team about how they expect tutors to use OU Live; and second, we should always remember to give tutors permission to use the tool in the ways that make the best use of their skills and abilities, i.e. to say, ‘it’s okay to go ahead and try stuff; this is the only way you can develop your skills’.

The March of the MOOCs

Rodney Buckland, a self-confessed MOOCaholic, gave the final presentation of the morning.  The term MOOC is an abbreviation for Massive Open Online Course.  From the sound of it, Rodney has taken loads.  (Did he really say ‘forty’?  I think he probably did!)

He mentioned some of the most popular platforms.  These include: Coursera, Udacity and FutureLearn (which is a collaboration between the OU and other universities).  Rodney also mentioned a swathe of less well known MOOC platforms, such as NovoEd.   A really interesting link that Rodney mentioned was a site called MOOCList which is described as ‘an aggregator (directory) of Massive Open Online Courses (MOOCs) from different providers’. 

Rodney spoke about his experience of taking a module entitled, ‘Science of the solar system’.  He said that the lecturer had really pushed his students. ‘This was a real surprise to me; this was a real third level physics module’.

A really important point was that MOOCs represented an area that was moving phenomenally quickly.  After his talk had finished there was quite a lot of discussion about a wide range of issues, ranging from the completion rates (which were very low), to the people who studied MOOCs (a good number of them already had degrees), and to the extent to which they can complement university study.  It was certainly thought provoking stuff.

Assistive technology for the visually impaired: past, present and future

The first presentation after lunch was by my colleague Richard Walker.  Richard is a visually impaired tutor who has worked with visually impaired students.  He made the really important point that if an associate lecturer works for an average of about ten years, there is a very significant chance that they will encounter a student who has a visual impairment.  Drawing on an earlier presentation of his, he emphasised that it is fundamentally important to be aware of some of the challenges that visually impaired students can face.

Richard recently interviewed a student who has a visual impairment by email.  Being a persuasive chap, Richard asked me to help out: I read out the role of his student from an interview transcript.  The point (to me) was very clear: students can be faced with a whole range of different issues that we may not be aware of, and everything can take quite a lot longer.

Another part of Richard’s presentation (which connects the present and the future) was all about mobile apps.  We were introduced to the colour recogniser app, and another app called EyeMusic (iTunes) which converts a scene to sound.   Another really interesting idea is the concept of the FingerReader from the Fluid Interfaces group at MIT.

A really enjoyable part of Richard’s session was when he encouraged everyone to explore the accessibility settings of their smartphones.  Whilst it was easy to turn the accessibility settings on (so your iPhone spoke to you), it proved to be a lot more difficult to turn the settings off.  For a few minutes, our meeting room was filled with a cacophony of robotic voices that proved to be difficult to silence.

Towards utopia or back to 1984

The penultimate session of the day was facilitated by Jonathan Jewell. Jonathan’s session had a more philosophical tone to it.  I’ve made a note of an opening question which was ‘how right or wrong were we when predicting the future?’

Jonathan referenced the works of Orwell, Thomas More (Wikipedia) and a vision of a dystopian future depicted in THX 1138, George Lucas’s first film.  Other subjects included economic geography (a term that I hadn’t heard before), and the question of whether Moore’s Law (that the number of transistors in a microprocessor doubles every two years) would continue.  On this subject, I have sometimes wondered about what the effect on software design may be if and when Moore’s law fails to hold.
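As an aside, the arithmetic behind Moore’s law is simple to sketch.  This toy projection ignores all the physical limits that Jonathan was questioning; the Intel 4004 figure is the commonly cited transistor count for that chip:

```python
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count under Moore's law:
    the count doubles once every doubling_period years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (the Intel 4004, 1971),
# twenty years means ten doublings: roughly 2.36 million.
print(round(transistors(2300, 20)))
```

The striking part is the exponent: each extra doubling period multiplies everything that came before, which is exactly why people keep asking how long it can continue.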

Other interesting points included the concept of the technological singularity and a connection to a recent news item (BBC) where a computer was claimed to have passed the Turing test.

A great phrase was infobesity – the idea that we’re all overloaded with too much information.  This connects to a related phrase that I have heard before: the ‘attention economy’.  Jonathan made a similar point that information is not so much a scarce resource; instead, we’re limited in terms of what information we can attend to.

We were also given some interesting thoughts which point towards the future.  Everything seems to have become an app: computing is now undeniably mobile.  A final thought I’ve noted down is Jonathan’s quote from security expert Bruce Schneier: ‘surveillance is the business model of the internet’.  This links to the theme of Big Data (Wikipedia).  Thought provoking stuff!

Limits of Computing

The final talk of the day was by Paul Piwek.  Paul works as a Senior Lecturer in the Department of Computing and Communications at The Open University.  Paul works on a number of module teams, and has played an important role in the development of a new module: M269 Algorithms, Data Structures and Computability.  It is a course that allows students to learn about some of the important fundamentals of computer science.

Paul’s brief was to talk about new technologies – and he chose to explore this by considering the important question of ‘what are the limits of computability?’  This question is really important in computer science, since it connects to the related questions: ‘what exactly can we do with computers?’ and ‘what can they actually be used to calculate?’

Paul linked the title of his talk to the work of Alan Turing, specifically an important paper entitled, ‘On computable numbers’.  Paul then went on to talk about the differences between problems and algorithms, introduced the concept of the Turing Machine and spoke about a technique called proof by contradiction.
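To make the Turing machine idea concrete, here is a minimal sketch in Python.  This is my own toy example, not anything from Paul’s talk: a machine whose transition rules invert every bit on the tape and then halt when a blank cell is reached.

```python
def run_turing_machine(input_tape, rules, state="start", blank="_", max_steps=10_000):
    """Run a simple one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is 'L' or 'R'.  Returns the final tape contents."""
    tape = dict(enumerate(input_tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that inverts a binary string, then halts on a blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("1011", flip))  # 0100
```

Despite being this simple (a state, a head, a tape and a rule table), the model is enough to capture everything we mean by ‘computable’ – which is exactly why Turing’s paper matters.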

Some problems can take a long time to be solved.  When it comes to computing, speed is (obviously) really important.  An interesting question is: how might we go faster?  One thought is to look towards the subject of quantum computing (an area that I know nothing about; the page that I’ve linked to causes a bit of intellectual panic!)

Finally, Paul directed us to a Canadian company called D-Wave that is carrying out research in this area.

Reflections

After all the presentations had come to an end we all had a brief opportunity to chat.  Topics included location awareness and security, digital forensics, social media, and the question of equality and access to the internet.  We could have chatted for a whole lot longer than we did.

It was a fun day, and I really would like to run another ‘new technology day’ at some point (I’ve just got to put my thinking hat on regarding the best dates and times).  I felt that there was a great mix of presentations and I personally really liked the mix of talks about technology and education.  It was a great opportunity to learn about new stuff.

By way of additional information, there is also going to be a London regional ‘research day’ for associate lecturers.  This event is going to take place during the daytime on Tuesday 9 September 2014.  It will be a cross-faculty, cross-disciplinary event, so it’s likely that there will be a wide range of different sessions.  If you would like more information about all this, don’t hesitate to get in touch, and I’ll point you towards my colleague Katy, who is planning the event.


OpenStack conference, June 2014 (part 2 of 2)

Edited by Christopher Douce, Saturday, 7 Jun 2014, 13:21

This blog post is the second of two that summarises an OpenStack conference that I attended on 4 June in London. 

This second half of the conference had two parallel sessions.  Delegates could either go to the stream that was intended for novices (which is what I did), or go to a more technical session. 

I was quite tempted by the technical session, especially by a presentation that was all about what it means to be an OpenStack developer.  One of the key points that I did pick up on was that you need to know the Python language to be an OpenStack developer, which is a language that is used within the OU’s data structures and algorithms module, M269 Algorithms, Data Structures and Computability.

Introduction to OpenStack

The first session of the afternoon was by Kevin Jackson who works at Rackspace.

Kevin said that OpenStack and Linux are sometimes spoken about in similar terms.  Both can be created from distributions, and both are supported by companies that can offer consultancy support and help to move products forward. ‘OpenStack is like a pile of nuts’, said Kevin, and the nuts represent different components.

So, what are the nuts?  Nova is the compute engine; it hosts virtual machines running in a hypervisor.  I now understand that a hypervisor can host one or more virtual machines.  You might have a web server and your application code running within this bit of OpenStack.

Neutron is all about networking.  In some respects, Neutron is a virtual network written entirely in code.  There was more about this in later presentations.  If you have different parts of an OpenStack implementation, Neutron allows the different bits to talk to each other; it pretends to be a physical network.

Swift is an object store, which is something that was spoken about during an earlier presentation.  Despite my earlier description, Swift isn’t really like a traditional file system.  Apparently, it can be ‘rack or cabinet aware’, to take account of the design of your physical data centre.

Cinder is another kind of storage: block storage.  As mentioned earlier, to all intents and purposes, Cinder looks like a ‘drive’ to a virtual machine.  I can imagine a situation where you might have multiple virtual machines accessing the same block storage device.

Ceilometer is a component that was described as telemetry.  This is a block which can apparently say how much bandwidth is being used.  (I don’t know how to describe what ‘bandwidth’ is in this instance – does it relate to the network, the available capacity within a VM, or the whole installation?  This is a distinct gap in my understanding).

Heat is all about orchestration.  Heat monitors ‘the cloud’, or its environment.  Kevin said, ‘if it knows all about your environment and suddenly you have two VMs and not three, it creates a third one’. This orchestration piece was described as a recipe for how your system operates.
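Kevin’s description of Heat (‘suddenly you have two VMs and not three, it creates a third one’) is essentially a reconciliation loop: compare desired state with actual state and repair the gap.  A toy sketch of the idea in Python – not real Heat code, and the VM names are invented:

```python
import itertools

def reconcile(desired_count, running_vms, create_vm):
    """Compare desired state against actual state and repair the gap:
    if fewer VMs are running than the template asks for, create more."""
    while len(running_vms) < desired_count:
        running_vms.append(create_vm())
    return running_vms

# One VM has died; the 'orchestrator' notices and replaces it.
ids = itertools.count(3)
vms = reconcile(3, ["vm-1", "vm-2"], create_vm=lambda: f"vm-{next(ids)}")
print(vms)  # ['vm-1', 'vm-2', 'vm-3']
```

The ‘recipe’ metaphor from the talk fits: the desired count comes from a declarative template, and the loop’s only job is to make reality match it.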

All these bits and pieces are controlled by a web interface called Horizon, which I assume makes calls to the APIs of each of these components.  You can use Horizon to look at the components of the network, for example.  I have to confess to being a bit confused about the distinction between JuJu and this standard piece of OpenStack – this is another question that I need to ask myself.

At the end of Kevin’s presentation, I’ve made a note of a question from the floor which was: ‘why should I go open source and not go for a proprietary solution?’  The answer was interesting: you can get locked into a vendor if you choose a proprietary solution.  If you use an open source solution, such as OpenStack, you can move your ‘cloud’ between different providers, say, from Rackspace to HP.  With Amazon web services, you’re stuck with using Amazon web services.  In some respects, these arguments echo arguments that are given in favour of Linux and other open source products.  The most compelling arguments are, of course, freedom and choice.

A further question was, ‘how mainstream is this going to go?’  The answer was, ‘there’s many companies around the globe who are using OpenStack as a solution’, but I think it was also said that OpenStack is just one of many different solutions that exist.

OpenStack and Storage made easy at Lush Cosmetics

The second presentation of the day was made by Jim Liddle who works for a cloud consultancy called Storage Made Easy.

Jim presented a case study about his work with Lush Cosmetics.  I’ve made a note of a number of important requirements: the data that is stored to the cloud should be encrypted, and there should be ways to help facilitate auditing and governance (of the cloud).

It’s interesting that the subject of governance was explicitly addressed in this case study.  The importance of ‘institutional control’ and the need to carry out checks and balances is one of the reasons why organisations might choose private clouds over public clouds. In the case of Lush, two key drivers included the cost per user, and the need to keep their data within the UK.

A new TLA that I heard was OVF (Wikipedia), an abbreviation for Open Virtualization Format, which is a way to package virtual machines so that they are not tied to any particular hypervisor (VM container) or architecture.  Other technologies and terms that were referred to included: MySQL, which is touched on in TT284 Web Technologies (OU), Apache, MemCached (Wikipedia) and CentOS.

Deploying Windows Workloads into OpenStack using JuJu

A lot of the presentations had a strong Linux flavour to them.  Linux, of course, isn’t the only platform that can be used to power clouds. Alessandro Pilotti from Cloudbase solutions spoke on the connections between Windows and OpenStack.

Terms that cropped up during his presentation included Hyper-V (a hypervisor from Microsoft), KVM (Kernel-based Virtual Machine, a Linux hypervisor), MaaS (metal as a service, an Ubuntu term), GRE tunnels (GRE being an abbreviation for Generic Routing Encapsulation), NVGRE (Network Virtualization using Generic Routing Encapsulation), and RDP (Remote Desktop Protocol).

It was all pretty complicated, and even though I’m reasonably technical, this was at a whole other level of detail.  Clicking through some of the above links soon takes me into a world of networking and products that are pretty new to me.  This clearly suggests that there is a whole lot of ‘new technology’ out there that I need to try to make a bit of sense of, and this, of course, takes time.

Alessandro also treated us to a live video link that showed a set of four small computers that were all hooked up together (I have no idea what these small desktop computers without screens were called; they used to have a special name).  The idea was to show LEDs flashing to demonstrate some remote rebooting going on.

This demo didn’t quite work out as planned, but it did get me thinking: to really learn how to do cloud stuff, a good idea would be to spend time actually playing with bits of physical hardware. This way you can understand the relationships between logical and physical architectures.  The challenge, of course, is finding the time to get the kit together, and to do the learning.

Using Swift in Entertaining Ways

This presentation was made by a number of people from Sohonet, a company that offers cloud services to the film and TV industry.  An interesting application of cloud computing is film and video post-production, the part of production where recordings are digitally edited and manipulated. An interesting challenge is that when it comes to video post-production we’re talking about huge quantities of data, and data that needs to be kept safe and secure.

Sohonet operates two clusters that are geographically separate.  Data needs to be held over different timescales, i.e. short, medium and long-term, depending upon the needs of a particular project.

A number of interesting products and companies were mentioned during this talk.  These include Expandrive where an OpenStack Swift component can become a network drive.  Panzura was mentioned in terms of Swift as a network appliance.  Zmanda and Cloudberrylab were all about backup and recovery.  Interesting stuff; again, a lot to take in.

Bridges and Tunnels – a drive through OpenStack networking

Mark McClain from the OpenStack Foundation talked about the networking side of things, specifically, the OpenStack networking component that is called Neutron.  Even though I didn’t understand all of it, I really enjoyed this presentation.  On a technical level, it was very dense; it contained a lot of detail.

Mark spoke about some of the challenges of using the cloud.  These included a high density of servers, the difficulties of scaling and the need for on-demand services.  A way to tackle some of these challenges is to use network virtualisation and something called overlay tunnelling (but I’m not quite sure what that means!)

Not only can virtual machines talk to virtual drives (such as the block storage service, Cinder), but they can also talk to a virtual network.  The design goals of the network component were to have a small core, and to have a pluggable open architecture which is configurable and extensible.  You can have DHCP configuration agents and can specify network traffic rules.  Neutron is also (apparently) backed by a database and a message queue.  (I also heard that there is a REST interface, if I’ve understood it correctly and my notes haven’t been mangled in the rush to write everything down).
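Since Neutron exposes a REST interface, listing the networks it knows about is (as I understand it) a single authenticated GET.  Here is a sketch in Python; the `/v2.0/networks` path and `X-Auth-Token` header follow the published Neutron API, but the controller URL and the injectable `session` parameter are my own assumptions, added so the function can be exercised without a live cloud:

```python
import requests

def list_networks(neutron_url, token, session=None):
    """List networks via Neutron's v2.0 REST API.
    neutron_url is the service endpoint (an assumption about your
    deployment); token is a pre-obtained Keystone auth token.
    A session object can be injected for testing."""
    session = session or requests.Session()
    resp = session.get(
        f"{neutron_url}/v2.0/networks",
        headers={"X-Auth-Token": token},
    )
    resp.raise_for_status()
    return resp.json()["networks"]
```

Against a real deployment you would call something like `list_networks("http://controller:9696", token)`, with the token obtained from Keystone first.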

A lot of network hardware can now be encoded within software (which links back nicely to the point about abstraction that I mentioned in the first post).  One example is something called Open vSwitch (project website).  I’ve also noted down that you can have a load balancer as a service, a VPN as a service and a firewall as a service (as highlighted by the earlier vArmour talk).

Hybrid cloud workloads

The final talk of the day was by Monty Taylor who is also from the OpenStack foundation.  A hybrid cloud is a cloud that is a combination of public and private clouds (which could, arguably be termed an ‘ecosystem of clouds’).  Since it was the end of the day, my brain was full, and I was unable to take a lot more on board.

Reflections

All this was pretty interesting and overwhelming stuff.  I remember one delegate saying, ‘this is all very good, but it’s all those stupid names that confuse me’.  I certainly understand where he was coming from, but when it comes to talking about technical stuff, the names are pretty important: they allow developers to share understandings.  I’m thankful for those names, although each name does take quite a bit of remembering.

One of the first things I did after the conference was to go look on YouTube.  I thought, ‘there’s got to be some videos that help me to get a bit more of an understanding of everything’, and I wasn’t to be disappointed – there are loads.  Moving forward, I need to find some time to look through some of these.

One of the things that I’ll be looking for (and something that I would have liked to see in the conference) was a little bit more detail about case studies that explicitly show how parts of the architecture work.  We were told that virtual machines can spin up in situations where we need to attend to more demand, but perhaps the detail of the case studies or explanations passed me by.

This is a really important point.  Some aspects of software development are changing.  I’ve always held the view that good software developers need to have an appreciation of system administration (or the ‘operations’ side of things).  When I had a job in industry there was always a separation between the systems administrators and the developers: when the developers were done, they would throw the software over the wall to the admins, who would deploy it.

This conference introduced me to a new term: a devop – part developer, part operator.  Devops need to know systems stuff and programming stuff.  This is a reflection of software being used at new levels of abstraction: we now have concepts such as network as a service, and software defined security.  Cloud developers (and those who are responsible for keeping clouds running) are system software developers, but they also have to understand application development too.

A devop needs a very wide skill set: they need to know about networks, hardware, operating systems, and different types of data store.  They might also need to know about a range of different scripting languages, and other languages such as Python.  All these skills take time (and effort) to acquire.  A devop is a tough and challenging job, not only due to the inherent complexity of different components, but also due to the speed that technologies change and evolve.

When I arrived at the conference, I knew next to nothing about what OpenStack was all about, and who was using it.  By the end of the conference I ended up knowing the names of some of its really important components; mists of confusion had started to lift.  There is, however, a huge amount of detail to get my head around, and one of the things that I’m also going to do is to look at some user stories (OpenStack foundation).  This, I think, will help to consolidate some of my learning.


OpenStack conference, June 2014 (part 1 of 2)

Edited by Christopher Douce, Friday, 6 Jun 2014, 17:43

On 4 June, I went to an event that was all about something called OpenStack.  OpenStack is an open source software framework that is used to create cloud computing systems.  The main purpose of this blog is to share my notes with some of my colleagues, and also with some of the people I met during the conference.  Plus, it might well be of interest to others too.

Cloud computing is, as far as I understand it, a broad term that relates to the consumption and use of computing resources over a network.  There are a couple of different types of cloud: there are public clouds (which are run by large companies such as Amazon and Google), private clouds (which are run by a single organisation), and hybrid clouds (which are a combination of public and private clouds).  There’s also the concept of a community cloud - this is where different organisations come together and share a cloud, or resources that are delivered through a cloud.

This is all very well, but what kind of computing resources are we talking about?  As far as I know, there are a few.  There’s software as a service (or SaaS).  There’s PaaS, meaning Platform as a Service, and there’s IaaS, which is Infrastructure as a Service.  Software as a Service is where you offer software through a web page, and you don’t ever touch the application code underneath.  Infrastructure as a Service is where you might be able to manage a series of ‘computers’, or servers, remotely through the cloud.  More often than not, these computers are running in something called virtual machines.

These concepts were pretty much prerequisites for understanding what on earth everyone was talking about during the day.  I also picked up on a whole bunch of terms that were new to me, and I’ll mention these as I go.

Opening Keynote : The OpenStack Foundation

Mark Collier opened the conference.  Mark works for the OpenStack Foundation (OpenStack website). During his keynote he introduced us to some of the parts that make up OpenStack (a storage part, a compute part and a networking part), and said that there is a new software release every six months.  To date, there are around 1,200 developers.  The community was said to comprise approximately 350 companies (such as RedHat, IBM, HP and RackSpace) and 16,000 individual members.

Mark asked the question: ‘what are we trying to solve?’  He then went on to quote Marc Andreessen, who said, ‘software is eating the world’.  Software, Mark said, is transforming the economy and disrupting industries.

One of the most important tools in computer science is abstraction.  OpenStack represents a way to create a software defined data centre (a whole new level of abstraction), which allows you to engineer flexibility to enable organisations to move faster and software systems to scale more quickly.

Mark mentioned a range of different companies who are using OpenStack.  These could be considered to be superusers (and there’s a corresponding superuser page on the OpenStack website which presents a range of different case studies).  Superusers include organisations such as Sony, Disney and Bloomberg, for example.

I remember that Mark said that OpenStack is a combination of open source software and cloud computing.  Another link that I noted down was to something called the OpenStack marketplace (OpenStack website).  Looking on this website shows a whole range of different Cloud distributions (many of which come from companies that offer Linux distributions).

Keynote: Canonical, Ubuntu and OpenStack

Mark Shuttleworth from Canonical (Canonical website) offered an industry perspective.  Canonical develops and supports Ubuntu, which is a widely used Linux distribution.  (It is used, as far as I can remember, in the TM129 Technologies in Practice module.)  As well as running on the desktop, Ubuntu is widely used on the server side, running within data centres.  A statistic I’ve noted down is that Ubuntu accounts for ‘70% of guest workloads’.  What this means is that we’re talking about instances of the Linux operating system that have been configured and packaged by Ubuntu (that are running on a server within a datacentre, somewhere).

A competitor to Ubuntu is another Linux distribution called CentOS.  There is, of course, also Microsoft Windows Server.  When you use public cloud networks, such as those provided by Amazon, I understand that you’re offered a choice of the operating system that you want to ‘host’ or run.

An interesting quote is, ‘building your cloud is a bit like building your own mainframe – users will always want it to be working’.  We also heard of something called OpenStack Interoperability Laboratory.  Clouds can be built hundreds of times a day, we were told – with different combinations of technology from different vendors.  ‘Iteration is the only way to understand the optimal architecture for your use case’.

A really important aspect of cloud computing is the way that a configuration can dynamically adapt to changing circumstances (and user demands).  The term for how this is achieved (in the cloud computing world) seems to be ‘orchestration’.  In OpenStack, there is a tool called JuJu (Wikipedia).  JuJu enables different combinations of components to be defined through a dashboard interface.  There is a concept of a ‘charm’, which was described as a set of scripts containing operational code.  If you would like to look at what it is all about, there’s a website called JuJu Charms that I’ve yet to spend time exploring.
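To make the ‘charm’ idea a little more concrete, here is a rough sketch of the kind of metadata file a charm carries.  The charm name and interfaces here are invented for illustration (real charms contain more than this):

```yaml
# metadata.yaml for a hypothetical charm (names are illustrative)
name: simple-webapp
summary: Example web application charm
description: |
  Deploys a small web application and connects it to a database.
provides:
  website:
    interface: http   # other charms can relate to this charm over http
requires:
  database:
    interface: mysql  # this charm needs a mysql-speaking database
```

Deploying and wiring things up then becomes something like `juju deploy simple-webapp` followed by `juju add-relation simple-webapp mysql` – the orchestration tool works out the plumbing.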

I’ve also noted down something called a Service Orchestration Framework, which lets you place services where you want, and on whatever servers you want.  There are some reference installations for certain types of cloud installations (which reminds me of the idea of ‘design patterns’ in software).

Mark referred to a range of different technologies during his talk, some of which I had only very briefly heard of.  One concept that was referred to time and time again was the hypervisor (Wikipedia).  I understand this to be a software (or firmware) layer that runs one or more virtual machines.  Other terms that he mentioned or introduced include KVM (Kernel-based Virtual Machine), Ceph (a way to offer shared storage), and MaaS, or Metal as a Service (Ubuntu), which ‘brings the language of the cloud to physical servers’.

A further bunch of mind-boggling technical terms were mentioned, including ‘lightweight hypervisors’ such as LXC (LinuX Containers), Hadoop, which is a data storage and processing framework, and TOSCA (Wikipedia), which is an abbreviation for Topology and Orchestration Specification for Cloud Applications.  In terms of databases, some newer (NoSQL) technologies that were mentioned included MongoDB and Cassandra.

At this point, it struck me how much technologies have changed in such an incredibly short time, reminding me that we live in interesting times.

Keynote: Agile infrastructure built in OpenStack

The second keynote of the day was by John Griffith, Project Technical Lead, SolidFire.  John’s presentation had the compelling subtitle: ‘building the next generation data centre with OpenStack’.

A lot of people started using Amazon, who I understand to be the most successful public cloud provider, to use IT resources more efficiently.  There are, of course, other providers such as Google compute engine (Google), Windows Azure (Microsoft), and SoftLayer (which appears to be an IBM company).

A number of years ago, at an OU postgrad event, I overheard a discussion between two IT professionals that began with the question, ‘so, what are the latest developments in servers?’  The reply was something about server consolidation: putting multiple services on a single machine, so you can use that one machine (a physical computer or server) more efficiently.  This could be achieved by using virtual machines, but you can only do so much with virtual machines.  What happens if you run out of processing power?  You need to either get a faster machine, or move one of your virtual machines to another machine that might be under-utilised.
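The consolidation problem described above can be pictured as a packing puzzle.  Here is a toy Python sketch (nothing like a real scheduler, which must juggle memory, network and live migration too) that places VM CPU demands onto as few hosts as possible using a simple first-fit rule:

```python
# Toy sketch of server consolidation: place VM workloads (by CPU demand)
# onto as few hosts as possible using a first-fit heuristic.

def place_vms(vm_demands, host_capacity):
    """Return a list of hosts, each a list of VM demands placed on it."""
    hosts = []
    for demand in vm_demands:
        for host in hosts:
            # Put the VM on the first host with enough spare capacity.
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no room anywhere: bring up a new host
    return hosts

# Six VMs, hosts with 8 CPU cores each:
print(place_vms([4, 3, 2, 5, 1, 6], 8))  # [[4, 3, 1], [2, 5], [6]]
```

Even this crude heuristic squeezes six workloads onto three hosts; the ‘move a VM to an under-utilised machine’ step the IT professionals were discussing is essentially re-running this kind of placement as demands change.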

The next generation data centre will be multi-tenant (which means multiple customers or organisations using the same hardware), have mixed workloads (I don't really know what this means), and have shared infrastructure.  A key aspect is that an infrastructure can become software defined, as opposed to hardware defined, and the capacity of a cloud configuration or setup can change depending upon local demand.

Cloud systems were said to have a number of attributes.  I think these were: agility, predictability, scalability and automation.

In the cloud world applications can span many virtual machines, and data can be stored in scalable databases that are structured in many tiers.  The components (that make up a cloud installation) can be configured and managed through sets of predefined interfaces (or APIs).  I also made a note of a mobile app that can be used to manage certain OpenStack clouds.  One example of this is the Cloud mobile app from Rackspace.
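As a small illustration of what managing components ‘through predefined interfaces’ looks like, here is a sketch of the kind of JSON body that an OpenStack-style compute API (Nova’s ‘create server’ call) accepts.  The image and flavour identifiers below are made up; a real request would also carry authentication tokens and more fields:

```python
import json

# Build the request body an OpenStack-style compute API expects when
# asking for a new virtual machine (a 'server' in Nova's terms).

def make_boot_request(name, image_ref, flavor_ref):
    return {"server": {"name": name,
                       "imageRef": image_ref,     # which OS image to boot
                       "flavorRef": flavor_ref}}  # which size of VM

payload = make_boot_request("demo-vm", "ubuntu-14.04-image-id", "m1.small")
print(json.dumps(payload))
```

The point is that everything – creating machines, attaching storage, wiring networks – is driven by structured requests like this, which is what makes automation (and mobile management apps) possible.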

Another interesting quote was, ‘[the] datacentre is one big computer and OpenStack is the operating system’.  Combining servers together has potential benefits in terms of power consumption, cooling and the server footprint.

One thing that developers need to bear in mind is how to design applications for this kind of environment.  Another point was: consider scalability and plan for failure.  A big challenge lies with uncovering and deciphering what all the options are.  Should you use, for example, block storage services, or object storage?  What are the relative advantages and disadvantages of each?

Parts of this presentation started to demystify some of the terms that had baffled me from the start.  Cinder, for example, is OpenStack’s block storage service.  Looking outwards from the operating system, a block storage device could be a hard disk, or a USB drive.  Cinder, in effect, mimics what a hard drive looks like, so you can store stuff to a Cinder service as if it were a disk drive.  Swift is an object store where you can store objects.  You might think of it in terms of sets of directories, the contents of which are replicated over different hard drives to ensure resilience and redundancy.
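Since the block-versus-object distinction confused me for a while, here is a toy Python sketch (nothing to do with how Cinder or Swift are actually implemented) contrasting the two abstractions: a block device addresses fixed-size numbered blocks, while an object store holds whole named objects of any size:

```python
# Toy models of the two storage abstractions.

class BlockDevice:
    """Cinder-like view: fixed-size blocks addressed by number."""
    def __init__(self, num_blocks, block_size=4):
        self.block_size = block_size
        self.blocks = [b"\x00" * block_size for _ in range(num_blocks)]

    def write_block(self, index, data):
        assert len(data) == self.block_size  # blocks are always full size
        self.blocks[index] = data

    def read_block(self, index):
        return self.blocks[index]


class ObjectStore:
    """Swift-like view: whole named objects, replaced as a unit."""
    def __init__(self):
        self.objects = {}           # name -> bytes, any length

    def put(self, name, data):
        self.objects[name] = data   # no in-place updates, just whole objects

    def get(self, name):
        return self.objects[name]


disk = BlockDevice(num_blocks=8)
disk.write_block(0, b"boot")
store = ObjectStore()
store.put("photos/cat.jpg", b"...lots of bytes...")
print(disk.read_block(0), store.get("photos/cat.jpg"))
```

A filesystem sits naturally on top of the first model; web-scale media and backups suit the second, which is roughly why both services exist side by side.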

There is a difference between a service that is an abstraction to store and work with data, and how physical data is actually stored.  To make these components work with actual devices, there are a range of different plug-ins.

Presentation: vArmour

I have to admit that I found this presentation thoroughly baffling.  I had no idea what was being presented until I finally picked up on the word ‘firewall’, and the penny dropped: if a system architecture is defined in software, the notion of a firewall as a physical device suddenly becomes very old fashioned, if not a little bit quaint.

In the cloud world, it’s possible to have something called a ‘software firewall’.  A term that I noted down was ‘software defined security’.  Through SDS, you can define what traffic is permissible between nodes and what isn’t, but in the ‘real world’ of physical servers, I’m assuming that physical ‘top layer’ firewalls are important too.

I also came across two new terms (or metaphors) that seem to make a bit of sense in the ‘cloud world’.  Data could, for example, move in a north-south direction, meaning it goes up and down through various layers.  If you’ve got east-west movement of data, it means you’re dealing with a situation where you might have a number of different virtual machines (that might have been created to respond to end user demand), which may share data between each other.  The question is: how do you maintain security when the nature of a configuration might dynamically change? 
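One way to picture an answer is to express policy in terms of roles rather than machine addresses, so the rules keep applying as virtual machines come and go.  A toy sketch (the roles and the permitted flows here are invented for illustration):

```python
# Sketch of the idea behind 'software defined security': rules name
# roles (labels), not fixed addresses, so they survive dynamic changes
# to the set of virtual machines.

ALLOWED = {("web", "app"), ("app", "db")}   # permitted east-west flows

def is_allowed(src_role, dst_role):
    """May traffic flow from a VM in src_role to a VM in dst_role?"""
    return (src_role, dst_role) in ALLOWED

# Newly spawned web VMs can talk to app VMs, wherever they land...
print(is_allowed("web", "app"))   # True
# ...but no web VM may ever talk to the database directly.
print(is_allowed("web", "db"))    # False
```

When the orchestration layer spins up a tenth web server, it simply tags it ‘web’ and the same east-west policy applies; no firewall rules need rewriting.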

Another dimension to security which crossed my mind was the need for auditability and disaster recovery, and both were subjects that were touched upon by other presenters.

In essence, I understood vArmour to be a commercial software defined security product that works akin to a firewall that can be used within a cloud system.

Presentation: The search for the cloud’s ‘God Particle’

Chris Jackson, who works for Rackspace (a company which has the tagline ‘the open cloud company’), gave the final presentation before we all broke for lunch.  Chris confessed to being a physicist (as well as a geek) and referred to research at CERN to find ‘the God particle’.  I also seem to remember him mentioning that OpenStack was used by CERN; there’s an interesting superuser case study (OpenStack website), for those who might be interested.

Here’s the question: if there is a theory that can describe the nature of matter, is there a theory that might explain why a cloud solution might not be adopted?  (He admitted that this was a bit of fun!)  He presented three different theories and asked us to vote on which were, perhaps, the most significant.

The first was: application.  Some applications can be rather fragile, and might need a lot of cosseting, whereas other forms of application might be very robust; they’re all different.  Cloud applications, it is argued, embrace chaos and build failure into applications.  Perhaps the precise character of certain applications might not lend itself to being a cloud application?

Theory two: integration.  There could be the challenge of integration and connection with existing systems, which might themselves have different characteristics. 

The third theory is all about operations.  This is more about the culture of an organisation.

So, which theory is the reason why organisations don’t adopt a cloud solution?  The answer is: quite possibly all of them.

Christopher Douce

Disabled student services conference 2014 – day 2

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 22 Jan 2019, 09:39

Keynote: positive thinking

The first keynote of the day was by motivational speaker David Hodgson.  The title of his session was ‘the four key ways to happiness and success’ (which was a really very ambitious title, if you ask me!)  I’ve seen David talk before at a staff development day in London, where he encouraged us to reflect upon our Myers-Briggs personality profile.  Apparently, this was the focus of a workshop that he ran later in the morning.

So, what are the four key ways?  Thankfully, I was sufficiently caffeinated to be able to take a note of them.  They are: (1) know yourself (and the great things that you’re capable of), (2) have self-belief, (3) have a plan (of some kind), and (4) have a growth attitude.  Of course, I’m paraphrasing, but, all in all, these are pretty good points to think about.

David also presented us with a quote from Abraham Maslow, who proposed his eponymous Hierarchy of Needs (Wikipedia).  The quote goes:  ‘If you plan on being anything less than you are capable of being, you will probably be unhappy all the days of your life.’  Maslow might have accompanied that quote with a wagging finger and the words, ‘you really need to sort yourself out’.  I had these words rattling around my head for the next two days.

Workshop: Learning design for accessibility

The first workshop of the day was facilitated by Lisette Toetenel and Annie Bryan from the OU’s Institute of Educational Technology.  The focus of the event was a learning design tool that IET had created to help module teams consider different pedagogic approaches.  It has been embedded into the module design process, which means that module chairs have to signify that they’ve engaged with IET’s learning design framework.  Through my involvement with a new module, I had heard a little about it, but I didn’t know the detail.

Learning design was defined as, ‘the practice of planning, sequencing and managing learning activities’, usually using ICT tools to support both design and delivery.  An important point was that both accessibility and important areas such as employability skills need to be considered from the outset (or be ‘woven into’ a design) and certainly not ‘bolted on’ as an after-thought.

The learning design framework is embedded into a tool, which takes the form of a template that either module team members or the module chair has to complete.  Its objective is to improve quality, share good practice, speed up the decision-making process, and manage (and plan) student workload.  The tool has an accompanying Learning Design website (but you might have to be a member of the university to view this).

During the workshop we were divided up into different tables and asked to read through a scenario.  Our table was given an ‘environmental sciences’ scenario.  We were asked three questions: what exactly do students do in the scenario, how do (or might) they spend their time, and what accessibility problems might they be confronted with?

The point was clear: it’s important to consider potential barriers to learning as early as you can.

Keynote: SpLDs – The Elephant in the Counselling Room: recognising dyspraxia in adults

The final keynote of the conference was given by Maxine Roper (personal website).  Maxine describes herself as a freelance journalist and writer, and a member of the Dyspraxia Foundation.

One of the main themes of her keynote was the relationship between dyspraxia and mental health.  Now, I’ll be the first to say that I don’t know very much about dyspraxia.  Here’s what I’ve found on the Dyspraxia Foundation website: ‘Dyspraxia, a form of developmental coordination disorder (DCD) is a common disorder affecting fine and/or gross motor coordination, in children and adults. … dyspraxia [can also refer] to those people who have additional problems planning, organising and carrying out movements in the right order in everyday situations.’

I was struck by how honest and personal Maxine’s talk was.  Dyspraxia is, of course, a hidden disability.  Maxine said that dyspraxics are good at hiding their difficulties and their differences, and spoke at length about the psychological impact.  An interesting statistic is that ‘a dyspraxic child is 4 times more likely to develop significant psychological problems by the age of 16’ (from the Dyspraxia Foundation).

Some of the effects can include seeing other people as more capable, or being ‘over-givers’ with a view to maintaining friendships; other people might go the other way and become unnecessarily aggressive (as a strategy for covering up ‘difference’).  Sometimes people may develop reactive depression in response to the continual challenge of coping.

I found Maxine’s description of the psychological impact of having a hidden disability fascinating – it is a subject that I could very easily relate to because I also have a hidden disability (and one that I have also tried a long time to hide).  This made me ask myself an obvious question that might well have an obvious answer, which was ‘are these thoughts, and the psychological impact common across other types of hidden disabilities?’ 

So, what might the solutions be?  Maxine offered a number of answers: one solution could be to raise awareness.  This would mean awareness amongst students, and amongst student counsellors and those who offer support and guidance.

I noted down another sentence that was really interesting and important, and this was the point about coping strategies.  People develop coping strategies to get by, but these coping strategies might not necessarily be the most appropriate or best approach to adopt.  In some cases it might be necessary to unpick layers of accumulated strategies to move forward, and doing this has the potential to be really tough.

Maxine’s presentation contained a lot of points, and one of the key ones for me (the elephant in the room) was that it’s important to always deal with the person as a whole, and that perhaps there might (sometimes) be other reasons why students might be struggling.

Workshop : Through new eyes – understanding the experience of blind and partially sighted learners

The final workshop of the conference was given by my colleague Richard Walker, who works as an associate lecturer for the Maths Computing and Technology Faculty.  Like Maxine, Richard spoke from his own experience, and I found his story and descriptions compelling and insightful.

Richard told us that he had worked with a number of blind and partially sighted students over the years.  He challenged us with an interesting statistic: if we consider the number of people in the general population who have visual impairments, and if an associate lecturer tutors a subject for around ten or so years, this means there is a 90% chance that a tutor will encounter a student who has a visual impairment.  The message is clear: we need to be thinking about how to support our students, which also means how we need to support our associate lecturers too.

Richard has had a stroke which has affected his vision.  Overnight, he became a partially sighted tutor.  ‘This changed how I saw the world’, he said. 

One of his comments has clearly stuck in my mind.  Richard said that when he was in hospital he immediately wanted to get back to work.  Richard later started a blog to document and share his experiences, and I’ve also made a note of him saying that he ‘couldn’t wait to start my new career’, and ‘when I got home from hospital I wanted to download some software so I can continue to be an Open University tutor’.

Richard spoke about the human visual system, which was fascinating stuff; he talked about the working of the eye and our peripheral vision.  He presented simulations of different visual impairments through a series of carefully drawn PowerPoint slides.  On the subject of PowerPoint, he also spoke briefly about how to make PowerPoint accessible.  His tips were: keep bullet points very short, choose background and foreground colours that have a good contrast, and ensure that you have figure descriptions.

I was struck by Richard’s can-do attitude (and I’m sure others were too).  He said, ‘the whole world looks a bit different, and I like learning new stuff, so I learnt it’.  An implication of becoming partially sighted was that it affected his ability to read.  It was a skill that had to be re-learnt or re-discovered, which sounds like a pretty significant feat.  ‘I just kept looking at the lines, and I’ve learnt to read again.  You just experiment [with how to move your eyes] and you see what works’.

When faced with the change in his vision, he contacted his staff tutor for advice, and some accommodations were put in place.  Another point that stood out for me was the importance of trust; his line manager clearly trusted Richard’s judgement about what he could and could not do.

Sharing experience

Richard tutors on a module called M250 Object-oriented programming (OU website).  When students study M250 they write small programs using a software development environment.  Richard made the observation that some software development environments can be ‘hostile to assistive technology’, such as screen readers.

Richard is currently tutoring a student who has a visual impairment.  To learn more about the student’s experience, he interviewed the student by email – this led to the creation of a ‘script’.  With help from a workshop delegate, Richard re-enacted his interview, where he asked about challenges, assistive technologies, study strategies and what could be done to improve things.  We learnt about the use of Daisy talking books (Wikipedia), the fact that everything takes longer, about strategies for interacting with computers, and the design of ‘dead tree’ books that could be read using a scanner.  After the performance, we were set an activity to share views about what we had learnt from the interview (if I remember correctly).

Towards the end of the workshop, Richard facilitated a short discussion about new forms of assistive technologies and ubiquitous computing, and how devices such as Google Glass might be useful; thought provoking stuff.

I enjoyed Richard’s session; it was delivered with an infectious enthusiasm and a personal perspective.  The final words that I’ve noted down in my notebook are: ‘it’s not because I’ve got a strength of character, it’s because I love my work … you just have got to get on with it’.

Reflections

Like all the others, this year’s disabled student services conference was both useful and enjoyable.  These events represent an invaluable opportunity to learn new things, to network with colleagues, and to take time out from the day job to reflect on the challenges that learners face (and what we might be able to do to make things easier).

For me, there were a couple of highlights.  The first was Keith’s understated but utterly engaging keynote.  The second was Richard Walker’s workshop: I had never seen Richard ‘in action’ before, and he did a great job of facilitation.  In terms of learning, I learnt a lot from Maxine’s talk, and it was really interesting to reflect upon the emotional and psychological impact that a hidden disability can have on someone.  I feel it’s an issue that is easily overlooked, and is something that I’ll continue to mull over.  In some respects, it has emphasised, to me, how demanding and important the learning support advisor’s role is to the university.

One question that I have asked myself is: ‘what else could be done within the conference?’  This, I think, is a pretty difficult question to answer, since everything was organised very well, and the whole event was very well attended.

One thought is about drama.  Richard’s session contained a hint of drama, where he used a fellow delegate to read a script of his email interview.  I’ve attended a number of excellent sessions in the East Grinstead region (which is now, sadly, going to be closed) that made use of ‘forum theatre’.  Perhaps this is an approach that could be used to allow us to expose issues and question our own understandings of the needs of our students.  Much food for thought.

Permalink 1 comment (latest comment by Jonathan Vernon, Wednesday, 28 May 2014, 14:13)
Christopher Douce

Disabled student services conference 2014 – day 1

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 22 Jan 2019, 09:40

I recently attended the university’s disabled student services conference held between 13 and 14 May 2014.  I think this was the third time I’ve been to this event, and every time I go I always learn something new.

This is a quick blog summary of the sessions I attended.  I guess this summary serves a number of purposes.  Firstly, it’s a summary of some of the continuing professional development I’ve been getting up to this year.  Secondly, it might be of interest to any of my students who might be studying H810 accessible e-learning (OU website).  Thirdly, it might be useful to some of my colleagues, or for anyone who accidentally stumbles across this series of two posts.

The complexities of co-occurrence

The first session of the day was presented by my colleague Jonathan Jewell, who works as an associate lecturer for at least three different faculties.  My first thought was, ‘what is meant by co-occurrence?’ - it wasn’t a term I had heard before.  I quickly figured out that it means that a person can have a number of different conditions at the same time.  A big part of his session was about what this might mean in terms of understanding a profile that contains quite a lot of information.

During Jonathan’s session I remember a debate about the terms ‘student-centred’ and ‘person-centred’.  The point was that although a student might be studying a particular module, they are on a programme, and this can, of course relate to a broader set of personal objectives that they might hold.

Every student who discloses a disability may have their own disability profile. The aim of the profile is to tell a tutor something about their students, to help them to understand what adjustments (in terms of their tuition) they could make.

During Jonathan’s session we looked at a sample profile and thought about it in terms of its strengths and weaknesses.  Our group concluded that the profile we were given contained a lot of information.  A particular weakness was that it contained a lot of quite technical jargon that was quite hard to understand.  A later task was to devise a ‘tutor plan of action’ based on the profile.  A clear point that was mentioned was the importance of establishing early contact with students to ensure that they feel comfortable and supported.

Towards the end of the session, I remember a debate that student profiles can change; some disabilities are temporary.  I also understand that there are now clearer university guidelines about how profiles should be written; a profile written today might be different to how it was written a couple of years ago.

Keynote: REAL services to assist students who identify with Asperger syndrome (AS)

The first keynote of the day was by Nichola Martin, who I understand works for the University of Cambridge.  The ‘REAL’ bit of her presentation title is an abbreviation for: reliable, empathic, anticipatory and logical – the idea is that we should embody these attributes when we work with people who identify with having Asperger syndrome (AS).  Very early on during her presentation she made the key point that ‘if you’ve met one person with AS, you’ve only met one person with AS’.

Nichola also exposed us to stereotypes from the media, which she asked us to question.  The use of language is fundamentally important too; for example, the term ‘condition’ is better than ‘disorder’, which suggests that something is fundamentally wrong.  Another interesting point is that the characteristics of people can change over time, a point that neatly connects back to the previous session about the changing nature of student profiles.

A big part of Nichola’s presentation was to share some findings from a research project that studied the views of students.  Its aim was to develop a model of best practice for students with AS, improve access to diagnosis, raise awareness and develop networks.

One really important point concerned clear language: always be clear in what you say or write.  I also noted that if we make accommodations for one group, this is likely to help all students.  Stating assumptions in a clear and respectful way is, of course, useful for everyone.

Another point is that institutions can be difficult to negotiate, particularly during the early stage of study.  If things are chaotic at the beginning of university study, it might be difficult to get back onto an even keel.  Some challenges that students might face can include finding their way through new social environments.  I’ve noted down a quote which goes, ‘my main barriers have been social and I find large groups of people I don’t know intimidating – as a result, I rarely attend lectures and often feel alone’.

There were some really interesting points about disability and identity which deserve further reflection.  Some students choose not to disclose and don’t go anywhere near the disability services part of the university.  Students may not want ‘special services’, since this hints at the notion of ‘othering’, or the emphasis of difference.  If people don’t want to talk about their personal circumstances, that is entirely their right.

We were told that Asperger’s and autism are terms that are used interchangeably, and this is reflected in the most recent publication of the DSM (Wikipedia, Diagnostic and Statistical Manual of Mental Disorders).

There were a number of things that were new to me, such as The Autism Act 2009 (National Autistic Society), and The Autism Strategy 2010 (National Autistic Society), which has been recently updated.  Another interesting and useful link is a video interview produced by the National Autistic Society (YouTube).   It was also great to hear that Nichola also mentioned OU module SK124 understanding the autism spectrum (OU website). 

All in all, a thought provoking talk.

Workshop: Student Support Teams and Disabled Students Support

The next event I went to was a workshop where different members of the newly formed student support teams (SSTs) were brought together to discuss the challenges of supporting students who have disabilities.  Again, the subject of student profiles was also discussed.

My own perspective (regarding student support teams) is one that has been really positive.  Whenever I’ve come across an issue where I needed to help a student (or a tutor) with a particular problem, I’ve always been able to speak with a learning support advisor, and they have always been unstintingly helpful.  I personally feel that there are now more people who I can speak to for advice and guidance.

Keynote: The life of a mouth artist

The final keynote of the day was a really enjoyable and insightful talk by artist Keith Jansz.  Keith began by telling us about his background.  After being involved in a car accident, in which he was significantly paralysed, he started to learn how to draw and paint after being given a book about mouth artists by his mother-in-law.

Keith spoke about how he learnt to paint, describing the process that he went through.  Being someone who has a low opinion of my own abilities when it comes to using a pencil, I found his story fascinating.  I enjoyed Keith’s descriptions of light, colour, and the creative process. What struck me were the links between creativity, learning and self-expression; all dimensions that are inextricably intertwined.

I thought his talk was a perfect keynote for this conference.  It was only afterwards that the implicit connections between Keith’s talk and university study became apparent. Learning, whatever form it may take, can be both life changing and life affirming.

During the conference, there was an accompanying exhibition of Keith’s work.  You can also view a number of his paintings on his website.

Christopher Douce

Ten Forum Tips

Visible to anyone in the world

I spend quite a lot of time using on-line discussion forums that are used as a part of a number of Open University modules I have a connection with.  I also wear a number of different ‘hats’; as well as being an Open University tutor, I also spend time visiting forums that are run by other Open University tutors in my role as a staff tutor.

A couple of years ago, I was sent a copy of a book called e-moderating (book website) by Gilly Salmon, who used to work at the Open University business school.  The e-moderating book is really useful in situations where discussion forums constitute a very central part of the teaching and learning experience.  Salmon provides a raft of useful tips and offers us a helpful five-stage model (which can be used to understand the different types of interaction and activities that can take place within a forum).

Different modules use discussion forums in different ways.  In some modules, such as H810 Accessible on-line learning they are absolutely central to the module experience.  In other modules, say, M364 Interaction Design, they tend to adopt more of a ‘supporting’ rather than a ‘knowledge creation’ role.

Just before breaking for Christmas and the New Year, I started (quite randomly) to write a list of what I thought would be my own ‘killer tips’ to help tutors with forums.  This is what I’ve come up with so far.

1. Be overly polite

One phrase that I really like is ‘emotional bandwidth’.  In a discussion forum, we’re usually dealing with raw text (although we can pepper our posts with emoticons and pictures). 

This means that we have quite a ‘narrow’ or ‘low’ emotional bandwidth; our words and phrases can be very easily misunderstood and misinterpreted by others, especially in situations when we’re asking questions with the objective of trying to learn some new concept or idea.  Since our words are always ambiguous, it’s important to be overly polite.   

Be more polite than you would be in real life!

2. Acknowledge every introduction

The start of a module is really important.  The first days or weeks represent our chance to ‘set the tone’.  If we set the right tone, it’s possible to create momentum, to allow our discussion forums to attract interaction and conversation.

A good idea at the start of a module is to begin an ‘introduction’ thread.  Start this thread by posting your own introduction: set an example.  When other introductions are posted, take the time to send a reply to (or acknowledge) each one.

3. Use pins

Some discussion forums have a feature that allows you to ‘pin’ a discussion thread to the top of a forum. 

The act of ‘pinning’ a thread highlights it as something that is important.  Pins can be really useful for highlighting discussions that are current or important (such as an activity that needs to be completed in preparation for an assignment, for example).  Consequently, it’s important not to pin everything.  If you do, students will be unclear about what is important and what is not, and this risks hiding important discussions.

Use ‘pinned threads’ judiciously and regularly change which threads are pinned (as a module progresses) – this suggests that a forum is alive and active.

4. Tell your students to subscribe

There are a couple of ways to check the OU discussion forums.  One way is to log in regularly and see whether anyone has made any new posts.  Another way is to receive email updates, either from individual threads or from whole discussion forums.  At the start of a module presentation, it’s a good idea to tell your student group to subscribe to all the forums that are used within a module.  One way to do this is by sending a group email.  When a student has subscribed, they are sent an email whenever anyone makes a post or sends a reply (the email also contains a copy of the message that was posted).

One of the really good things about using emails to keep track of forums is that it’s possible to set up ‘rules’ on an email client.  For example, whenever a forum related email is received, it might be possible to transfer it to a folder based on the module code that is contained within the message subject.  This way, you can keep on top of things without overloading your inbox.
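As an aside, for anyone curious about what such a rule boils down to, here is a tiny Python sketch of the matching step.  The subject lines and the module-code pattern are my own invented examples (the exact format of OU forum emails will vary); a real email client rule in Outlook, Thunderbird or a sieve filter would do the same matching for you.

```python
import re

# Hypothetical pattern for an OU-style module code (e.g. H810, M364, T320);
# real subject lines may differ, so treat this as an assumption.
MODULE_CODE = re.compile(r"\b([A-Z]{1,2}\d{3})\b")

def folder_for(subject, default="Inbox"):
    """Pick a folder name from the module code in an email subject, if any."""
    match = MODULE_CODE.search(subject)
    return match.group(1) if match else default

# A few invented subject lines to show the filing behaviour.
subjects = [
    "H810: Re: Introductions thread",
    "M364 TMA 2 marking update",
    "Staff newsletter",
]
for subject in subjects:
    print(subject, "->", folder_for(subject))
```

The same idea, expressed as a client rule rather than code, is simply: ‘if the subject contains the module code, move the message to that module’s folder’.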

5. Encourage and confirm

Busy forums are likely to be the best forums.  One approach to try to create a busy forum is to do your best to offer continual encouragement; acknowledge a good post and emphasise key points that have been raised.  (Salmon writes about weaving together and summarising a number of different discussions). 

Another really great thing to do is to seek further confirmation or clarifications.  You might respond to a message by writing something like, ‘does this answer your question?’  This keeps a discussion alive and offers participants an opportunity to present alternative or different perspectives.

6. Push information about TMAs

Tutor marked assignments (TMAs) are really important.  As soon as a TMA is submitted, students will generally expect it back within 10 working days (which is the university guideline).  Sometimes TMAs are returned earlier, and in some situations (with permission from the staff tutor), it can take a bit longer.  A forum can be used to provide ‘push’ updates to students about how marking is progressing.  Once a TMA cut-off date has passed, a tutor could start a forum thread entitled, ‘TMA x marking update’.

When you’re approximately half way through the marking, one idea is to make a post to this thread to say so.  If your students have subscribed, they’ll automatically receive the update.  This reduces pre-result anxiety (for the students), since everyone is kept in the loop about what is happening.  (The thread can also be used to post some general feedback, if this is something that is recommended by the module team).

7. Advertise tutorials

Open University tutorials can either be face to face (at a study centre, which might be at a local university or college) or take place on-line through a system called OU Live.  A post to a discussion forum can be used to remind students about tutorials.  It can also offer some guidance to help students prepare for the session.  You could also ask whether students in your group have any particular subjects or topics that they would like to see discussed or explored.

After the tutorial, a forum can be used to share handouts that were used during either an on-line session or a day school.  It also offers students an opportunity to have a discussion about any issues that (perhaps) were not fully understood.  Also, during a tutorial, a tutor might set up or suggest a long running research task.

There are a number of advantages of connecting tutorials to forum discussions.  Those people who could not attend can benefit from any resources that were used during an event.  It also allows a wider set of opinions and views to be elicited from a greater number of students.

8. Provide links

A subject or topic doesn’t begin and end with the module materials.  During the presentation of a module, you might happen to see a TV programme that addresses some of the themes connected to a particular topic of study.  A forum is a great way to contextualise a module by connecting it to current stories in the media; one way you can do this is by posting links to a news story (or series of stories), or perhaps by starting a discussion.

As well as sharing news stories, you can also use discussion forums to alert students to some of the study skills resources that have been developed by the Open University.  There are also some library resources that might be useful, and other resources might include OpenLearn materials, for example.  A forum is a great way to direct students to a wide array of useful and helpful materials.  You might also want to ask (using the tutors’ forum) whether other tutors have suggestions or ideas.

9. Visit other forums

Every tutor does things slightly differently; no two tutors are exactly the same, and this is a good thing.  If you tutor on a module, there’s a possibility that you might be able to view another tutor’s discussion forum.  If you have the time, do visit another tutor’s forum.  Some good questions to ask are: ‘at a glance, do the students look engaged?’ or ‘how busy is this forum?’  Other questions might be, ‘what exactly is the tutor doing?’ and ‘how are they asking questions?’  This allows you to get a view on how well a forum is being run.  When you see a busy and well run forum, ask the question: ‘is the tutor doing something special here?’  If so, what is it?  Sometimes, of course, certain cohorts can just be pretty quiet; some years are busier than others.

After visiting a forum, the best questions to ask are, ‘what have I learnt?’ and ‘is there anything that I could or should be doing with my forum?’

10. Form forum habits

The more that you’re active on a forum, the more useful a forum can become for students.  Find some time, every day, or every couple of days, to read through and respond to forum posts.  This will keep your forums fresh and alive; they may even acquire a ‘stickiness’ of their own and become pages that students are drawn to time and time again.

Summary

This quick blog post summarises a number of ‘forum tips’ that I’ve discovered over the last couple of years of working with different modules.  Some of these ideas have, of course, been shaped by the e-moderating book that I have mentioned earlier.  E-moderating is a book that is useful for some modules and not others since different modules use forums in slightly different ways.  Although a module team might use a forum in a particular way, it is always going to be up to you, a tutor, to take ownership of this important learning space.

Finally, if you would like to add to these tips (or even disagree with them), please don’t hesitate to let me know!

Christopher Douce

eSTEeM Conference – Milton Keynes, May 2014

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 20 May 2014, 09:50

On 5 May 2014, I was at Milton Keynes again.  I had something called a module team meeting in the morning.  In the afternoon I attended an OU funded conference that had the title (or abbreviation) eSTEeM (project website).

eSTEeM is an initiative to conduct research into STEM education.  STEM is an abbreviation for Science, Technology, Engineering and Mathematics.  Since I have connections with a number of computing modules, which cross into the subjects of Engineering and Mathematics, I decided to submit a proposal with the objective of learning more about the tutor’s experience of teaching computer programming.  The aim was simple: if we learn more about the tutors’ challenges, we can support them better, and they, in turn, will be able to support the students better too.

I have been lucky enough to receive a small amount of funding from the university.  This, of course, is great news, but it also means that I’ve got even more work to do!  (But I’m not complaining - I accept that it’s all self-inflicted, and it’s work that will allow us to get at some insights).  If you’re interested, here’s some further information about the project (eSTEeM website).

A 'pilot' project

The ‘understanding the tutors and the students when they do programming’ project is a qualitative study.  In this case, it means that I’ll be analysing a number of interviews with tutors.  I’ll be the first to admit that it’s been quite a while since I have done any qualitative research, so I felt that I needed to refamiliarise myself with what I needed to do by, perhaps, running a pilot study.

It wasn’t long before I had an idea that could become a substantive piece of research in its own right. I realised that there was an opportunity to run a ‘focus group’ to ask tutors about their experience of tutoring on another module: T320 Ebusiness Technologies (OU website).  The idea was that the outcome from this study could feed directly into discussions about a new module.

During my slot at the conference, rather than talking about my research about programming (which was still at the planning stage), I talked about the T320 research, which was just about finished.  I say finished, when what I actually mean is ‘transcribed’; there is a lot more analysis to do.  What has struck me was how generous tutors are with both their opinions and their time.  Their views will really help when it comes to designing and planning the future module that I have a connection with.

Final thoughts and links

In case you’re interested, here’s a link to the conference programme.

What struck me was how much ‘internal research’ there was going on; there are certainly a lot of projects to look through.  From my perspective, I’m certainly looking forward to making a contribution to the next conference and sharing results from the web technologies and programming research project with colleagues.  The other great thing about getting my head into research again is that when you have one idea about what to look at, you suddenly find that you get a whole bunch of other ideas.

Christopher Douce

Widening Participation through Curriculum Conference (day 2 of 2)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 24 Apr 2019, 17:29

The second day of the conference was to be slightly different to the first; there were fewer sessions, and there were a number of ‘talking circle’ workshop events to go to.  On the first day I arrived at the conference ridiculously early (I was used to the habit of travelling to Milton Keynes in time for meetings, and catching a scheduled bus to the campus).   On the second day, I was glad to discover that I wasn’t the first delegate to arrive.

Opening remarks

The second day was opened by Professor Musa Mihsein from the OU.  He presented an interesting story of how he came to work at the university as a PVC.  Musa talked about changes to funding, making the point that there has also been a change in the use of language.  There is more of a need to ‘maximise impact’.  The accompanying question is, of course, ‘how can we best evaluate projects and programmes?’

A couple of points I noted down were that we haven’t got a full understanding of curriculum and its role within the institution, and that collaborations are important.  There is also a continual need to communicate in different ways to policy makers.

Keynote 4: Liberating the curriculum

The first keynote of the day was by Kelly Coate, Senior Lecturer in Higher Education at Kings College, London.  Kelly’s talk was interesting since it spoke directly to the ‘curriculum’ part of the conference title.  She has been researching curriculum for the last 20 years and made the point that, ‘decisions about curriculum are decisions about what we can think’ (if I’ve taken that down correctly).

Here are some of my notes: we’re accustomed to a certain view of what ‘curriculum’ means.  The word derives from a Latin word that means to run or to proceed.  This makes a lot of sense: most participants make it to the finish line, there are often a couple of really high scorers, and a couple who are, perhaps, left behind.

If we dig around in history, the notion of curriculum used to be associated with the ‘liberal arts’.  These comprised the disciplines of grammar, logic, rhetoric, music theory, astronomy, arithmetic, and geometry, with the word liberal being derived from liber, meaning ‘free’.

Kelly’s talk took an interesting twist.  Since she studies what people are studying, she was asked to comment on a story that Miley Cyrus was to be the subject of a university course.  If you’re interested, here’s a related news story: Back to twerk … Miley Cyrus to be studied on new university course (The Guardian).  Thinking about it for a moment, the subject of Miley can readily be used to facilitate discussions about femininity, power, exploitation, celebrity, sexuality…

A bit of theorising is always useful.  We could think about curriculum in three different domains: knowing, acting and being.  There is also the importance of relating teaching to ‘the now’, which opens up the possibility of students suggesting their own curricula by performing research into how ‘the now’ relates to the broad subject area.

Another way of thinking about curriculum might be in terms of gravity and density.  Gravity is the extent to which a subject can be related to a particular context.  Density relates to how much theory there is (some subject can be incredibly theoretical).  I really like these metaphors: they’re a really good (and powerful) way to think about how a lecturer or teacher might be able to ‘ground’ a particular concept or idea.

We were briefly taken through a couple of ideas about learning and pedagogy.  The first one was the transmission model (which, I think, was described as being thoroughly discredited), where a lecturer or teacher stands at the front of the class and talks, and the students magically absorb everything.  The second idea (which I really need to take some time out to look at) is actor-network theory (wikipedia).  Apparently it’s about thinking about systems and networks and how things are linked through objects and connections.  (This is all transcribed directly from my notes - I need to understand it in a whole lot more depth than I do at the moment!)

I’ve also made a note about a researcher called Jan Nespor, who has applied actor-network theory to the study of physics and business studies classes.  The example was that lecturers can orchestrate totally different experiences, and these might be connected with the demands and needs of a particular discipline (if I’ve understood things correctly!)

I’ve made a note of some interesting points that were made by the delegates at the end of Kelly’s speech.  One point was that different subjects have different cultures of learning, i.e. some subjects might consider professional knowledge to be very important.  Musa mentioned the importance of problem-based learning, particularly in subjects such as engineering. 

Session 3: Innovation in design and pedagogy

There was only one presentation in the third session, which was all about pedagogy.  This was entitled ‘Creating inclusive university curriculum: implementing universal design for learning in an enabling programme’, by Stuart Dinmore and Jennifer Stokes.  The presentation was all about how to make use of universal design principles within a module.  We were introduced to what UD is: it emerged from developments in design and architecture, and it aims to create artefacts that are useful to everyone, regardless of disability.

Connecting their presentation to wider issues, there are two competing (yet complementary) accessibility approaches: individualised design and universal design.  There is also the way in which accessibility can be facilitated by the use of helpers, to enable learners to gain access to materials and learning experiences.

It was great that this presentation explicitly spoke to the accessibility and disability dimension of WP, also connecting to the importance of technology.  During Stuart and Jennifer’s presentation, I was continually trying to relate their experiences with my own experience of tutoring on the OU module, H810 Accessible online learning: supporting disabled students (OU web page)

Talking circle

I chose to attend innovation in design and pedagogy.  I do admit that I did get a bit ‘ranty’ (in a gentle way) during this session.  This was a good opportunity to chat about some of the issues that were raised and to properly meet some of the fellow delegates.  Some of the views that I expressed within this session are featured in the reflection section that follows.

Closing keynote:  class, culture and access to higher education

The closing keynote was by John Storan from the University of East London.  John’s keynote was a welcome difference; it had a richly personal tone.  He introduced us to members of his family (who were projected onto a screen using PowerPoint), and talked us through the early years of his life, and his journey into teacher training college, whilst constantly reflecting on notions of difference.

He also spoke about a really interesting OU connection.  John was a participant in a study that gave rise to a book entitled, Family and kinship in East London (Wikipedia), by Michael Young and Peter Willmott.  (This is one of those interesting looking books that I’m definitely going to be reading – again, further homework from this conference).  ‘We were the subject’, John told us.  He also went on to make the point about the connections between lived experience, research, policy and curriculum.

I’ve made a note in my notebook of the phrase, ‘not clever, able enough’.  I have also been subject to what I now know to be ‘imposter syndrome’.  From the question and answer session, I’ve made a note that codes of language can easily become barriers.

Reflections

One of the really unexpected things about this conference was the way that it accidentally encouraged me to think about my own journey to and through higher education.  Although for much of my early life I didn’t live in an area that would feature highly in any WP initiatives, higher education was an unfamiliar world to my immediate family.

Of course, my journey and my choices end up being quite nuanced when I start to pick apart the details of my biography, but I think there was one intervention that made a lasting impression.  This intervention was a single speech given by a member of staff at my former college about the opportunity that university study gave.  I remember coming away thinking, ‘I’m going to apply; I have nothing to lose, and everything to gain’.  A number of my peers thought the same.

The conference presented a number of different perspectives: the importance of assessing the effectiveness of interventions and the importance of theory, how to design WP curriculum, how to make curriculum accessible, and how to make materials engaging for different groups.  One aspect that I thought was lacking was that of the voices of the students.  It’s all very well discussing between ourselves what we think that we should be doing, but I felt it would be really valuable to hear the views of students. 

An area that would be particularly useful is to hear about instances of failure, or to hear about what went wrong when students tried university level study but couldn’t complete for some reason.  There are some really rich narratives that have the potential to tell researchers in WP and curriculum a lot about what institutions (and individuals) need to do.  The challenge, of course, is finding those people who would like to come forward and share their views.

In the sessions that I attended, there were clear discussions about class, socio-economic status and disability, but there seemed to be an opportunity to discuss more about ethnicity.  Quantitative research has shown that there is an attainment gap.   There was an opportunity for some qualitative discussions and more sharing of views regarding this subject.

Another thought relates to the number of keynote speeches.  Keynotes are really important in setting the tone and agenda, and it was great that they were varied, but more paper sessions (and perhaps a plenary discussion?) might expose different issues and allow more contacts to be made.

I appreciate that these final reflections sound a bit ‘whingey’; they’re not intended to be.  WP is an important issue, and the amount of follow-up homework I’ve got to do clearly tells me that the conference was a great success.

In some ways I guess the conference was slightly different to what I had expected (in terms of the debate and discussions).  I was expecting it to be slightly less ‘academic’ and slightly more practitioner focussed (or oriented to those who deal with WP issues on a day to day basis).   The unexpected difference, however, was very welcome; I’ve learnt some new stuff.

Christopher Douce

Widening Participation through Curriculum Conference (day 1 of 2)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 24 Apr 2019, 17:31

There are some days when I feel very lucky; lucky in the sense that my transition from school, to college and to university happened pretty painlessly.  Although my background has been far from privileged, I feel that I ended up making the right choices at exactly the right time, all by accident rather than by design.

Some of these thoughts were going through my head as I walked towards the hotel where the Widening Participation through Curriculum conference was held.  Other thoughts were connected with my day job, which is all about supporting the delivery of a range of undergraduate computing and ICT modules.  WP (as it seemed to be known within the conference) is something that I consider to be fundamentally important; it touches on my interactions with students, and the times when I work with members of a module team.  I also had a question, which was, ‘what more could I do [to help with WP]?’

This post is a summary of my own views of the Widening Participation through Curriculum conference that was held on two days from 30 April 2014 in Milton Keynes.  It’s intended as a rough bunch of notes for myself, and might be of distant interest to other delegates who were at the event (or anyone else who might find these ramblings remotely interesting).

Opening remarks

The opening address was by Martin Bean, Vice chancellor of the university.  He asked the question, ‘how do we ensure that widening participation is achieved?’  This is an easy question to ask, but a whole lot more difficult to answer.  Martin talked about moving from informal to formal learning, and the challenge of reaching out and connecting with adult learners in a sustainable way.  Other points included the importance of access curriculum (pre-university level study).  Access curriculum has the potential to encourage learners and to develop confidence.

Martin also touched upon the potential offered by MOOCs, or massive open online courses.  The OU has created a company called FutureLearn, which has collaborations with other UK and international universities.  A question is whether it might be possible to create level 0 (or access) courses in the form of MOOCs that could help to prepare learners for formal study (connecting back to the idea of transitions from informal to formal learning).  One thought that I did have is about the importance and use of technology.  Technology might not be the issue, but figuring out strategies to use it effectively might be.

Keynote 1: WP and disruption – global challenges

The first keynote of the conference was by Belinda Tynan, PVC for teaching and learning.  As she spoke, I made some rough notes, and I’ve scribbled down the following important points: models of partnerships, curriculum theory, impact of curriculum reform, and how students are being engaged.

Belinda touched upon a number of wide issues such as changing demographics, discrepancy between rich and poor, unemployment, and the relationship between technology and social inclusion; all really great points.

Another interesting point was about the digital spaces where the university does not have a formal presence.  We were told that there are in the order of 150 Facebook groups that students have set up to help themselves.  As an aside, I’ve often wondered about these spaces, and whether they can tell us something that the university could be doing better, in terms of either technology, interactive system design, or how to foster and develop collaboration.  Another thought relates to the research question: how much learning actually occurs within these spaces?   How much are we able to see?

A phrase that jumped out at me was, ‘designing curriculum that fits into people’s lives’.  Perhaps it is important that curriculum designers create small fragments of materials so that students can manage the complexity of their studies.  Other key phrases include the importance of motivation, the role of on-line discussions, and the challenge of finding time.

We were shown a short video about learning analytics.  Learning analytics is a pretty simple concept.  Whenever we interact with a system, we leave a trace (often in the form of a web request).  The idea is that perhaps the sum total of these traces will be able to tell us something about how students are getting along.  By using clever technology (such as machine learning algorithms), it might be possible to uncover patterns and initiate targeted interventions, perhaps in collaboration with student support teams.
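To make the idea a little more concrete, here is a deliberately simple Python sketch.  The data, the function name and the threshold rule are all invented for illustration (real learning analytics systems use much richer models); it just flags students whose recent activity has collapsed relative to their own earlier average.

```python
def flag_students(clicks_by_student, recent_weeks=2, threshold=0.25):
    """Return students whose mean clicks over the last `recent_weeks`
    fall below `threshold` times their mean over the earlier weeks."""
    flagged = []
    for student, weekly in clicks_by_student.items():
        earlier, recent = weekly[:-recent_weeks], weekly[-recent_weeks:]
        if not earlier or not recent:
            continue  # not enough history to compare
        earlier_mean = sum(earlier) / len(earlier)
        recent_mean = sum(recent) / len(recent)
        if earlier_mean > 0 and recent_mean < threshold * earlier_mean:
            flagged.append(student)
    return flagged

# Invented weekly VLE click counts for two hypothetical students.
clicks = {
    "student_a": [40, 35, 42, 38, 3, 1],    # sharp drop-off
    "student_b": [20, 25, 18, 22, 19, 24],  # steady engagement
}
print(flag_students(clicks))  # ['student_a']
```

The output of such a rule would, of course, only ever be a prompt for a human conversation, not a verdict on a student.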

One thought that I had during this presentation was, ‘where is the tutor in this picture?’  Technology was mentioned a lot, but there was little mention about the personal support that OU tutors (or lecturers) offer.   There are many factors in helping students along their journey, and my own view is that tutors are a really important part of this.

The concluding points in Belinda’s keynote (if I’ve noted this down properly) return to the notion of challenges: the importance of the broader societal context, and the importance of connecting learning theory to student journeys.

Session 1: Measuring and demonstrating impact

Delegates could go to a number of parallel sessions about different topics.  The first paper session I dropped into was entitled ‘measuring and demonstrating impact’.  This session comprised two presentations.

The first presentation was entitled, ‘Impact of a pre-access curriculum on attainment over 10 years’, and it was from representatives of an organisation called Asdan Education, a charity that grew out of research at the University of the West of England.  I hadn’t heard of this organisation before, so all this was news to me.  Asdan offer what is called the Certificate of personal effectiveness (Asdan website).  The presentation contained a lot of data suggesting that the curriculum (and the work of the charity) led to an improvement in some GCSE results.

The second presentation of the morning, given by Nichola Grayson and Johanna Delaney, was entitled, ‘can the key principles of open skills training enhance the experience of prospective students’.  Interestingly, Nichola and Johanna were from the library services at the University of Manchester.  Their talk was all about a revision of library resources called ‘my learning essentials’.

The university currently has something called a ‘Manchester access programme’, which includes visits from schools, and an ‘extended project qualification’ (which I think allows students to gather up some UCAS points, used for university entry).  The new open training programme (if I have understood it correctly) has an emphasis on skills, adopts a workshop format and makes use of online resources.

During this presentation, I was introduced to some new terms and WP debates.  I heard the concept of the ‘deficit model’ for the first time, and there were immediate comments about its appropriateness (but more of this problematic concept later).

Session 2: Theory revisited

I went to this session because I had no idea what ‘theory’ means in the context of Widening Participation; I was hoping to learn something!

The first presentation was by my colleague Jonathan Hughes, who gave a presentation entitled, ‘developing a theoretical framework to explore what widening participation has done for ‘non-traditional students’ and what it has done to them.’  Jonathan and his colleague Alice Peasgood have been interviewing WP experts, mostly professors who have published in the area.  The interviews were recorded, transcribed, and then analysed.

Jonathan made an interesting comment (or quip) that this is a technique that can be considered a short-cut to a literature review.  This is an idea that I’m going to take away with me; it has actually inspired some thinking about how to understand the teaching of programming.

His analysis is to use a technique called thematic analysis (Wikipedia) drawing on the work of Braun and Clarke.  This was also interesting: in terms of qualitative research, I’m more familiar with grounded theory (Wikipedia).  This alerted me to one of the dangers of going to conferences: that you can easily give yourself lots of homework to do.

Jonathan highlighted three main themes: the policy context (tuition fees in higher education), the wider context of marketised higher education, and how policies are interpreted and operationalised.  (He has written more about these in his paper).  I’ve made a note of a comment that there are different theoretical frameworks for understanding WP: one is to enable the gifted and talented to study, another is to meet the needs of employers as best we can, and a third is to transform the university rather than the students.

The second talk, by Jayne Clapton, was entitled, ‘seeing a “complex” conceptual understanding of WP and social inclusion in HE’.  Jayne presented a graphic of a metaphor of a complex mechanism with a number of interlocking parts (which, I believe, represent various drivers and influences).

The discussion section was really interesting, particularly since the deficit model was attacked pretty comprehensively.  To add a bit more detail, the ‘model’ is where potential students have some kind of deficit, perhaps in terms of socio-economic background, for instance.  To overcome this there is the idea of having some kind of intervention done to them to help prepare them for higher education.  An alternative perspective is to view students in terms of ‘assets’; development opportunities can represent investments in individuals.

A concluding discussion centred upon the importance of research.  Research always has the potential to inform and guide government policy.  The point was that ‘we need effective research to back up any arguments that we make, and we need to know about the effectiveness of projects or interventions’.

Keynote 2: The ‘academic challenge’ in HE: intersectional dimensions and unintended effects on pedagogic encounters

The second keynote was by Professor Gill Crozier from Roehampton University.  I’ve made a note that Gill was talking about transition; that the transition to higher education is more difficult for working class, and black and ethnic minority students.  Some students can be unsure what university is all about (I certainly place myself in that category).  Studying at university can expose students to unequal power relations between class, gender and race.

A really interesting point that I’ve noted down is one that relates to attitude.  In some cases, some lecturers are not happy giving additional support, since this requires them to ‘become nurturing’ in some sense, and some might consider it to be beyond the remit of their core ‘academic’ duties.  I personally found this view surprising: I view those moments of additional support as real opportunities to help learners find the heart of a discipline, or get to the root of a problem that might be troublesome.  These moments allow you to reflect on and understand core ideas within your own discipline.  In comparison to lecturing in front of a room, you need to be dynamic; you need to get to the heart of the problem, and try your best to be as engaging as possible.  I also made a note about the importance of creating a ‘learner identity’.

There was a lot in terms of content in this presentation.  Two interesting notes that I made in my notebook are, ‘social identities profoundly shape dispositions’ (I’m not quite sure in what context I’ve written this), and ‘little attention given to the experience of students at university’ (which is something that I’ll come back to in the final part of this blog).

Keynote 3: Widening success through curriculum: innovation in design and pedagogy

Stephanie Marshall, CEO of the Higher Education Academy (HEA website) gave the third keynote speech.  Stephanie began with an interesting anecdote, and one that I really appreciated.  She spoke about her early days of being a lecturer at (I think) the University of York.  She spoke to a colleague who apparently told her that ‘the OU had taught me to do all this’, meaning how to become a lecturer: by running training sessions that help associate lecturers understand how to run group sessions, and how to choose and design effective activities.

My ears pricked up when Stephanie mentioned the HEA’s Professional Standards Framework (HEA website).  The UKPSF relates to the HEA’s accreditation process where lecturers have to submit cases to demonstrate their teaching and learning skills in higher education.

Like so many HE institutions, the HEA has also been through a period of substantial change, which has recently included a substantial reduction in funding.  This said, the HEA continues to run projects that aim to influence the whole of the sector.  Work streams currently include curriculum design, innovative pedagogies, transitions, and staff transitions (helping staff to do the things that they need to do).

There are also projects that relate to widening participation.  One that I’ve explicitly taken a note of is the retention and success project (HEA website) (it appears that there’s a whole bunch of interesting looking resources, which I didn’t know existed).  Other projects I’ve noted connect to themes such as attainment and progression, learning analytics and employability.

On the subject of WP, Stephanie gave a really interesting example.  During the presentation of a module, students studying English at one university expressed concerns about the relevance of particular set texts to the students who were studying them.  This challenge led to the co-development of curriculum: a collaboration between students and lecturers to choose texts that were more representative (in terms of the ethnicity of the student body), thus allowing the module to be more engaging.  This strikes me as one of the fundamental advantages of face-to-face teaching; lecturers can learn, and challenging (and important) debates can emerge.

A final resource (or reference) that I wasn’t aware of was something called the Graduate attributes framework (University of Edinburgh).  Again, further homework!

Christopher Douce

OU e-learning Community – Considering Accessibility

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 4 May 2014, 17:36

On April 23, I visited the Open University campus to attend an event to share lessons about how the university can support students who have disabilities. The event, which took place within a group called the ‘e-learning community’ had two parts to it: one part was about sharing of research findings, and the other part was about the sharing of practice.

This blog aims to summarise (albeit briefly) the four presentations that were made during the day.  It’s intended for a couple of audiences: colleagues within the university, students who might be taking the H810 accessible online learning (OU website) module that I tutor, and anyone else who is remotely interested.

Like many of these blogs, these event summaries are written (pretty roughly) from notes that I made during the sessions.  (This is a disclaimer to say that there might be mistakes and I’m likely to have missed some bits).

Academic attainment among students with disabilities in distance education

Professor John Richardson, from the OU’s Institute of Educational Technology gave the first presentation of the day.  John does quantitative research (amongst a whole load of other things), and he began by saying that there is an increase in knowledge about our understanding of the attainment of students who have disabilities, but the knowledge is fragmented.  John made a really important point, which was that it is patent nonsense to consider all disabled students as a single group; everyone is different, and academic performance (or attainment) is influenced by a rich combination of variables.  These include age, gender, socio-economic status, prior qualifications (and a whole bunch of others too).

When we look at quantitative data, it’s important to define what we’re talking about.  One of the terms that John clearly defined was the phrase ‘a good degree’.  This, I understand, was considered to be a first or an upper second class honours degree.  John also mentioned something that is unique about the OU; that it awards degree classifications by applying an algorithm that uses scores from all the modules that contribute towards a particular degree (whereas in other institutions, the classification comes from decisions made by an examination board).

We were given some interesting stats.  In 2009 there were 196,405 registered students, of which 6.8% declared a disability.  The most commonly disclosed disability was pain and fatigue, followed by dyslexia.  Of all disabled students, 55% declared multiple disabilities.

In 2012 the situation was a little different. In 2012 there were 175,000 registered students, of which 12% (21,000) students declared (or disclosed) a disability.  John said that perhaps this increase might be an artifact of statistics, but it remains a fact.  He also made the point (raised by Martyn Cooper, a later speaker on the day) that this number of students represents the size of an average European university.  From these statements I personally concluded that supporting students with disabilities was an activity that the university needs to (quite obviously) take very seriously.

If I’ve got this right, John’s research drew upon a 2009 data set from the OU.  There were some interesting findings.  When controlling for other effects (such as socio-economic class, prior qualification and so on), students who had declared pain and fatigue, or autistic spectrum disorders, gained good degrees at a greater rate than non-disabled students.  Conversely, students who had disclosed dyslexia, specific learning disabilities or multiple disabilities gained a lower percentage of good degrees when compared with non-disabled students.

I’ve made a note of a couple of interesting conclusions.  To improve completion rates, it is a good idea to somehow think about how we can more readily support students who have disclosed mental health difficulties and mobility impairments.  To improve degree levels, we need to put our focus on students who have disclosed dyslexia and specific learning disabilities.  One take away thought relates to the university’s reliance on text (which is a subject that crops up in a later presentation).

Quantitative research can only tell us so much; it can tell us that an artifact exists, but we need other approaches to figure out the finer detail.  Qualitative research can provide that detail, but the challenge with qualitative approaches lies in the extent to which findings and observations can be generalised.  My understanding was that we need both to create a rich picture of how the university supports students with disabilities.

Specific learning differences, module development and success

The second presentation was a double act by Sarah Heiser (a colleague from the London region), and Jane Swindells, who works in the disability advisor service.  Jane introduced the session by saying that it was less about research and more about sharing a practitioner perspective.  I always like these kinds of sessions, since I find it easy to connect with the materials and I can often pick up useful tips to use within my own teaching.

An important point is that dyslexia has a number of aspects and is an umbrella term for a broader set of conditions.  It can impact on different cognitive processes, such as the use of working memory, speed of information processing, time management, co-ordination and automaticity of operations.  It can also affect how information is received and decoded. 

On-line or electronic materials offer dyslexic learners a wealth of advantages; materials can be accessed through assistive technologies, and users can personalise how content is received or consumed.  An important point that I would add is that the effectiveness of digital resources depends on users being aware of the possibilities they offer.  Developing a comprehensive awareness of strategies for their use (to help with teaching and learning) is something that takes time and effort.

Sarah spoke about a project where she has been drawing out practice experience from associate lecturers through what I understand to be a series of on-line sessions (I hope I’ve understood this correctly).  Important themes included challenges that accompany accuracy, text completion, following instructions, time, and the importance of offering reassurance.

I’ve made a note of the term ‘overlearning’.  When I had to take exams I would repeat and repeat the things I had to learn, until I was sick of them.  (This is a strategy that I continue to use to this day!)

One point that I found especially interesting relates to the use of OU live recordings.   If a tutor records a session, a student who may have dyslexia can go over them time and time again, choosing to pick up sections of learning at a time and a pace that suits them.  This depends on two points: the first is the availability of the resource (tutors making recordings), and students being aware that they exist and know how they can access them.

Towards the end of the session, Sarah mentioned a tool called Languages Open Resources Online, or LORO for short.  LORO allows tutors to share (and discover) different teaching resources.  I was impressed with LORO, in the sense that you can enter a module code and find resources that tutors might (potentially) be able to use within their tutorial sessions.

SeGA guidance: document accessibility/accessible methods and other symbolic languages

The third presentation of the day was from Martyn Cooper, from the Institute of Educational Technology.  Martyn works as a Senior Research Fellow, and he has been involved with a university project called SeGA (Securing Greater Accessibility).  A part of the project has been to write guidance documents that can help module teams and module accessibility specialists.  An important point is that each module should have a designated person who is responsible for helping to address accessibility issues within its production.  (But, it should also be argued, all members of a module team should be involved too).

The documents are intended to provide up to date guidance (or, distilled expertise) to promote consistency across learning resources. The challenge with writing such guidance is that when we look at some accessibility issues, the detail can get pretty complicated pretty quickly.

The guidance covers a number of important subjects, such as how to make Word documents, PDFs, and pages that are delivered through the virtual learning environment as accessible as possible.  Echoing the previous talk, Martyn made the point that electronic documents have inherent advantages for people who have disabilities – the digital content can be manipulated and rendered in different ways.

Important points to bear in mind include the effective use of ALT texts (texts that describe images), the use of scalable images (for people who have visual impairments), effective design of tables, use of web links, headings and fonts.  An important point was made that it’s important to do ‘semantic tagging’, i.e. design a document using tags that describe its structure (so it becomes navigable), and deal with its graphical presentation separately.
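As a quick sketch of what ‘semantic tagging’ means in practice, here are a few lines of (illustrative, hand-written) HTML; the file name and wording are my own invention:

```html
<!-- Structure is described by tags (headings, paragraphs), so assistive
     technologies can navigate the document; presentation belongs in a
     separate stylesheet. -->
<h2>Results</h2>
<p>Attainment by disability status is summarised in the chart below.</p>

<!-- The ALT text describes the image for screen reader users. -->
<img src="attainment-chart.png"
     alt="Bar chart comparing the proportion of good degrees gained by
          disabled and non-disabled students.">
```

Because the heading is marked as a heading (rather than merely styled in a big bold font), a screen reader user can jump straight to it.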

I noted down an interesting point about Microsoft Word.  Martyn said that it is (generally speaking) a very accessible format, partly due to its ubiquity and the way that it can be used with assistive technologies, such as screen readers.

Martyn also addressed the issue about how to deal with accessibility of mathematics and other symbolic notations.  A notation system or language can help ideas to be comprehended and manipulated.  An important point was that in some disciplines, mastery of a notation system can represent an important learning objective.  During Martyn’s talk, I remembered a lecture that I attended a few months back (blog) about a notation scheme to describe juggling.  I also remember that a good notation can facilitate the discovery of new ideas (and the efficient representation of existing ones).

One of the challenges is how to take a notation scheme, which might have inherently visual and spatial properties, and convert it into a linear format that conveys similar concepts to users of assistive technologies, such as screen readers.  Martyn mentioned a number of mark-up languages that can be used to represent familiar notations: MathML and ChemML (Wikipedia) are two good examples.  The current challenge is that these notations are not supported consistently across different internet browsers.  Music can be represented using something called music braille (but it is also a fact that only a relatively small percentage of visually impaired people use braille languages), or MIDI code.
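To give a flavour of what such mark-up looks like, here is the fraction x/2 expressed in Presentation MathML (a small hand-written sketch, not taken from any OU module):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <!-- The fraction's structure is explicit: numerator, then denominator.
       Assistive technology can therefore announce 'x over 2' rather
       than guessing from a flattened, purely visual layout. -->
  <mfrac>
    <mi>x</mi>
    <mn>2</mn>
  </mfrac>
</math>
```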

A personal reflection is that there is no silver bullet when it comes to accessibility.  Notation is a difficult issue to grapple with, and it relies on users making effective use of assistive technologies.  It’s also important to be mindful that AT, in itself, can be a barrier all of its own.  Before one can master a notation, one may well have to master a set of tools.

The question and answer session at the end of Martyn’s talk was also interesting.  An important point was raised that it’s important to embed accessibility into the module production process.  We shouldn’t ‘retrofit’ accessibility – we should be thinking about it from the outset.

Supporting visually impaired students in studying mathematics

The final presentation of the day was by my colleague Hilary Holmes, who is a maths staff tutor.  A comment that I’ve made (in my notebook) at the start of Hilary’s presentation is that the accessibility of maths is a challenging problem.  Students who are considering studying mathematics are told (or should be told) from the outset that maths is an inherently visual subject (which is advice that, I understand, is available in the accessibility guide for some modules).

Key issues include how to describe the notation (which can be inherently two dimensional), how to describe graphs and diagrams, how to present maths on web pages, and how to offer effective and useful guidance to staff and tutors.

First level modules make good use of printed books.  Printed books, of course, present fundamental accessibility challenges, so one solution to the notation (and book accessibility) issue is to use something called a DAISY book, which is a navigable audio book.  DAISY books can be created with either synthesised voices, or recorded human voices.  The university has the ability to record (in some cases) DAISY books through a special recording facility, which used to be a part of disabled student services.  One of the problems of ‘speaking’ mathematical notation is that ambiguities can quickly become apparent (spoken aloud, ‘one over x plus one’ could mean either 1/(x+1) or (1/x)+1), but human readers are more able to interpret expressions, add pauses and use different tones to help convey different meanings.

Another approach is to use some software called AMIS (AMIS project home), which is an abbreviation for Adaptive Multimedia Information System. AMIS appears to be DAISY reader software, but it also displays text.

Diagrams present their own unique challenges.  Solutions might be to describe a diagram, or to create tactile diagrams, but tactile diagrams are limited in terms of what they can express.  Hilary subjected us all to a phenomenally complicated audio description which was utterly baffling, and then showed us a complex 3D plot of a series of equations and challenged us with the question, ‘how do you go about describing this?’  I’ve made a note of the following question in my note book: ‘what do you have to do to get at the learning?’

Another approach to tackle the challenge of diagrams is to use something called sonic diagrams.  A tool called MathTrax (MathTrax website) allows users to enter in mathematical expressions and have them converted into a sound.  The pitch and character of a note change in accordance with values that are plotted on a graph.  Two important points are: firstly, in some instances, users might need to draw upon the skills of non-medical helpers, and secondly (as mentioned earlier), these tools can take time to master and use.
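The basic idea behind a sonic diagram can be illustrated with a minimal Python sketch.  To be clear, this is not MathTrax’s actual algorithm, and the frequency range is an assumption on my part; it simply shows the principle of mapping plotted values to pitch:

```python
# Illustrative sketch of a 'sonic diagram': map the y-values of a plotted
# curve to audio frequencies, so low values become low notes and high
# values become high notes.  The 220-880 Hz range is an assumption.

def sonify(values, f_min=220.0, f_max=880.0):
    """Map each value linearly onto a pitch between f_min and f_max."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat curves
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# 'Plot' y = x squared for x between -3 and 3 as a sequence of pitches.
xs = [x / 2 for x in range(-6, 7)]
pitches = sonify([x * x for x in xs])
```

Played in sequence, the notes would fall and then rise again, tracing the shape of the parabola by ear.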

A final point that I’ve noted down is the importance of offering tutors support.  In some situations, tutors might be unsure what is meant by the phrase ‘reasonable adjustment’, and what they might be able to do in terms of helping to prepare resources for students (perhaps with help from the wider university).  Different students, of course, will always have very different needs, and it is these differences that we need to be mindful of.

It was really interesting to hear that Hilary has been involved with something called a ‘programme accessibility guide’.  This is a guide about the accessibility of a series of modules, not just a single module.  This addresses the problem of students starting one module and then discovering that there are some fundamental accessibility challenges on later modules.  This is certainly something that would be useful in ICT and computing modules, but an immediate challenge lies with how best to keep such a guide up to date.

Reflections

It was a useful event, especially in terms of being exposed to a range of rather different perspectives and issues (not to mention research approaches).  The presentations went into sufficient detail that really started to highlight the fundamental difficulties that learners can come up against.  I think, for me, the overriding theme was about how best to accommodate differences.  A related thought is that if we offer different types of resources (for all students), there might well be a necessity to share and explain how different types of electronic resources and documents can be used in different ways (and in different situations).

The Languages Open Resources Online website was recently mentioned in a regional development conference I attended a month or two back.  Sarah’s session got me thinking: I wondered whether it would be possible to create something similar for the Maths Computing and Technology faculty, or perhaps, specifically, for computing and ICT modules (which is my discipline).  Sharing happens within modules, but it’s all pretty informal – there might be something to be said for raising the visibility of the work that individual tutors do.  One random thought is that it could be called TOMORO, with the first three letters being an abbreviation for Technology Or Mathematics.  There are certainly many discussions to be had.

 

Christopher Douce

Social media toolkit workshop: Milton Keynes

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 8 Apr 2014, 15:55

26 March was another busy day.  In the morning I had managed to get myself onto something called a ‘social media toolkit workshop’.  In the afternoon, I had to go to a M364 Interaction Design (Open University) module team meeting.  This is a quick summary (taken from my paper-based analogue notes) of the workshop.  I should mention that I had to bail out of it early due to the other meeting commitment, so I wasn’t able to benefit from some of the closing discussions.  Nevertheless, I hope what is here might be of use to someone (!)

Objective

The university has created something called a social media toolkit which could be used by any academic (or any other group within the university) who might have an interest in using social media to share stories about projects or outcomes from research.  It is designed to be useful for those who are new to social media, as well as those who have a bit more experience. 

If you’re reading this from within the university, you might be able to access an early version of the toolkit (OU Social Media Toolkit).  In essence, the toolkit contains resources about how to capture and use different types of digital media, such as audio recordings, geo locations (or geodata), photos, text or video.  The kit also aims (as far as I understand) to offer examples of how these different types of media could be used within an academic context.

The objective of the day was to introduce the toolkit to a group of interested participants and to gather up some views about how it might be enhanced, developed or improved.  Since I could only stick around for part of the day, I was only able to attend the first session, which comprised a forceful and evangelical presentation by Christian Payne, who runs a website (or social media hub) called Documentally.

The following sections have been edited together from the notes that I made on the day.

Social media and stories

Our presenter was very good at sharing pithy phrases.  One of the first that I’ve noted down is the phrase: ‘your story is your strategy about what you want to share’.  In retrospect, this phrase is tricky to unpack, but your strategy might well be connected to the tools that you use, and the tools might well connect to the types of media that you are able (or willing) to produce.

During the first session we were told about different tools.  Some were immediately familiar, such as Twitter and YouTube, but others were more niche and less familiar, such as Flickr, FourSquare, Audioboo and Bambuser.  (A point was made that YouTube can now be considered to be the web’s second biggest search engine).  Another interesting point (or strategy, or technique) was that all tools should be focused towards a hub, perhaps a website (or a blog).  This isn't a new idea: this blog connects up to my OU website, which also has a feed of recent publications.

Here are some more phrases I've noted.  It’s important to get stories seen, heard and interacted with, and ‘a social network is the interaction between a group of people who share a common interest’. 

A really interesting phrase is ‘engineering serendipity’; ‘serendipity lives in the possibility of others discovering your materials’.  The point is that it’s all about networks, and I can clearly sense that it takes time and effort to create and nurture those networks.

The power of audio

An area that was particularly emphasised was audio recording.  Audio, it is stated, connects with the ‘theatre of the mind’ (which reminded me of a saying that goes, ‘radio has much better pictures than television’).  Audio also has a number of other advantages: it is intimate, and you can be getting on with other things on your device at the same time whilst you listen to an audio stream.  Christian held the view that ‘photo slide sharing can create better engagement than videos’.

There was a short section of the morning about interview techniques: start easy and then probe deeply, be interested, take time to create rapport and take the listener on a journey.  Editing tools such as GarageBand and Audacity were touched upon, and a number of apps were mentioned, such as Hokusai and SoundCloud (that allows you to top and tail a recording).

Audio recordings can be rough and ready (providing that you do them reasonably well).  Another point was: ‘give me wobbly video, or professional video, but nothing in between’.  I made a note that perhaps there is something authentic about the analogue world being especially compelling (and real) if it is presented in a digital way.  In a similar vein, I’ve also noted (in my analogue notebook) ‘if you throw out a sketch, people are drawn to it’ (and I immediately start thinking about a TEDTalk that I once saw that comprised of just talking and sketching – but I can’t seem to find it again!)

Here are two other phrases: ‘good content always finds an audience, but without context it’s just more noise’, and, ‘you can control your content, but not how people react to it’.  Whilst this second quote is certainly true, it connects to an important point about using the technology carefully and responsibly.

A diversion into technology

During the middle of the presentation part of the workshop, we were taken on a number of diversions into technology.  We were told about battery backups, solar powered mobile chargers and the importance of having a set of SIM cards (if you’re going to be travelling in different countries).  Your choice of devices (to capture and manipulate your media) is important.  Whilst you can do most things on a mobile phone, a laptop gives you that little bit more power and flexibility to collate and edit content.

We were also told about networking tools, such as PirateBox, which is a bit like a self-contained public WiFi internet in a box, which can allow other people (and devices) to connect to one another and share files without having to rely on other communications networks.

The structure of stories

Putting the fascinating technology aside, we return to the objective of creating stories through social media.  So, what are stories?  Stories, it is argued, have a reveal; they grab your attention.  It’s also useful to say something about the background, to contextualise a setting.  A story is something that we can relate to.  It can be a tale that inspires or makes us feel emotional.

We were told that a story, in its simplest form, is an anecdote, or it’s a journey.  An important element is the asking of questions (who, what, where, when, how), followed by a pay-off or resolution.  But when we are using many different tools to create different types of media, how do we make sense of it all?  We’re again back to the idea of a hub website.  A blog can operate as a curation tool.  It can become an on-line repository for useful links, notes and resources.

Reflections

The workshop turned out to be pretty interesting, and our facilitator was clearly very enthusiastic about sharing a huge amount of his life online.  Therein, I feel, lies an issue that needs to be explored further: the distinction between using these tools to share stories about your research (or projects), and how much of yourself you feel comfortable sharing.  I feel that, on some occasions, the two can become intertwined (since I personally identify with the research that I do).

On the one hand, I can clearly see the purpose and the benefits of both producing and consuming social media.  On the other hand, I continue to hold a number of reservations.  During the presentation, I raised some questions about security, particularly regarding geo-location data.  (I have generally tried to avoid explicitly releasing my GPS co-ordinates to all and sundry, but I’m painfully aware that my phone might well be doing this for me automatically).  An interesting comment from our facilitator was, ‘I didn’t realise that there would be so much interest in security’.  This, to me, was surprising, since it was one of the concerns that I had at the forefront of my mind.

Although I mentioned that I left the workshop early, I did feel that there was still more of an opportunity to talk about instances of good practice, i.e. examples of projects that made good use of social media to get their message out.  Our presenter gave many personal examples about reporting from war-torn countries and how he interviewed famous people, but I felt that these anecdotes were rather removed from the challenge of communicating about academic projects.

I can see there is clear value in knowing how to use different social media tools: they can be a very useful way to get your message across, and when your main job is about education and generating new knowledge, there’s almost an institutional responsibility to share.  Doing so, it is argued, has the potential to allow others to discover your work (in the different forms it might take), and to ‘engineer serendipity’.

I came away with a couple of thoughts.  Firstly: would I be brave enough to ever create my own wobbly video or short audio podcasts about my research interests?  This would, in some way, mean exposing myself in a rough and ready and unedited way.  I’m comfortable within the world of text and blogs (since I can pretty much edit what I say), but I feel I need a new dimension of confidence to embrace a new dimension of multimedia. 

Two fundamental challenges to overcome include: getting used to seeing myself on video and getting used to my own voice on audio recordings.  I can figure out how to use the technology without too many problems (I have no problems with using any type of gadget; after all, I can just do some searches on YouTube).  The bigger challenge is addressing the dimension of performance and delivery.  I also remember the phrase, ‘just because everyone can [make videos or audio recordings], doesn’t necessarily mean that everyone should’.

I’m also painfully aware that research stories need to be interesting and engaging if they are to have impact.  I’m assuming that because I’m thinking of this from the outset, this is a good thing, right?

I’ll certainly be looking at the toolkit again, but in the meantime, I’ll continue to think about (and play with) some of the tools I’ve been introduced (and reintroduced) to.  Much food for thought.

Christopher Douce

Professional Development Conference: London, 22 March 2014

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 12 Oct 2022, 09:00

The Open University in London runs two professional development conferences per year, one at its regional offices in Camden Town, the other at the London School of Economics. Saturday 22 March was a busy day; it was the day I ran my first staff development session at this venue.  (I had previously run sessions in the Camden centre, but running a session in an external venue had, for some reason, a slightly different feel to it).

This blog post aims to summarise a number of key points from the session.  It is intended for anyone who might be remotely interested, but it’s mostly intended for fellow associate lecturers.  If you’re interested in the fine detail, or the contents of what was presented, do get in touch. Similarly, if you work within any other parts of the university and feel that this session might be useful for your ALs, do get in touch; I don’t mind travelling to other regions. 

Electronic assignments

The aim of the session was to share what I had discovered whilst figuring out how a tool called the ETMA file handler works.  Students with the university submit their assignments electronically through something called the Electronic Tutor Marked Assignment (ETMA) system.  This allows submissions to be held securely and the date and time of submission to be recorded.  It also allows tutors to collect (or download) batches of assignments that students have submitted.

When assignments are downloaded, tutors use a piece of software called the ETMA file handler.  This is a relatively simple piece of software that gives tutors an overview of which student has submitted which assignment.  It also allows tutors to open the work that students have submitted, so they can comment on (and mark) it.

There are three things that a tutor usually has to do.  Firstly, they have to assign a mark for a student’s submission.  They usually also have to add some comments to a script that has been submitted (which is usually in the form of a Microsoft Word document).  They also have to add some comments to help a student to move forward with their studies.  These comments are entered into a form that is colloquially known as a PT3.  Please don’t ask me why it’s called this; I have no idea – but it seems to be an abbreviation that is deeply embedded within the fabric of the university.  If you talk to a tutor about a PT3 form, they know what you’re talking about.

Under the hood

Given that tutor marked assignments constitute a pretty big part of the teaching and learning experience in the university, the ETMA file handler program is, therefore, a pretty important piece of software.  One of my own views (when it comes to software) is that if you understand how something works, you’ll be able to figure out how to use it better.

The intention behind my professional development session was to share something about how the ETMA file handler works, allowing tutors to carry out essential tasks such as making backups and moving sets of marking from one computer to another.  Whilst the university does a pretty good job of offering comprehensive training about how to use the file handler to enable tutors to get along with their job of marking, it isn’t so good at letting tutors know about how to do some of the system administration stuff that we all need to do from time to time, such as taking backups and moving files to another computer (hence my motivation to run this session).

One of my confessions is that I’m a computer scientist.  This means that I (sometimes) find it fun figuring out how stuff works.  This means that I sometimes mess around with a piece of software to see how to break it, and then try to get it working again.  (Sometimes I manage to do this, other times I don’t!)  During the session I focussed on a small number of things: how the file handler program knows about the assignments that have been downloaded (it uses directories), how directories are structured, what ‘special files’ these directories contain, and where (and how) additional information is held.

Here’s what I focussed on: the directories that files are downloaded to, the directories that marked files are returned from, and how the file handler reads the contents of those directories so that it is able to offer choices to a tutor.  Towards the end of the presentation, I also presented a number of what I considered to be useful tips.  These were: the file handler software is very stupid, the file handler software needs to know where your marking is, form habits, be consistent, save files in the same place, use zip files to move files around, and be paranoid!
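The ‘it uses directories’ idea can be sketched in a few lines of Python.  To be clear, this is purely illustrative: the folder names and functions here are my own invented examples, not the real eTMA file handler’s layout.  It simply shows how a program can ‘know’ which assignments have been downloaded by scanning a folder, and how zipping the whole tree gives you a backup that can be moved to another computer.

```python
import zipfile
from pathlib import Path


def list_downloaded_assignments(root: Path) -> list[str]:
    """Return the names of assignment folders found under a marking root.

    A hypothetical marking folder might contain one sub-folder per
    assignment (e.g. TMA01, TMA02); the program 'knows' what has been
    downloaded simply by reading the directory structure.
    """
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())


def back_up_marking(root: Path, archive: Path) -> None:
    """Zip the whole marking tree so it can be moved to another computer."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in root.rglob("*"):
            # Store paths relative to the parent so the tree unpacks
            # under the same top-level folder name on the other machine.
            zf.write(path, path.relative_to(root.parent))
```

The point of the sketch is the ‘be paranoid’ tip: because everything the software knows lives in one directory tree, zipping that tree is both a backup and a way of transferring marking between computers.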

Reflections

Whilst I was writing the session, I thought to myself, ‘is this going to be too simple?’ and ‘surely everyone will get terribly bored with all this detail and all the geeky stuff that I’m going to be talking about?’  Thankfully, these fears were unfounded.  The detail, it turned out, seemed to be quite interesting.  Even if I was sharing the obvious, sometimes a shared understanding can offer some reassurance.

There were parts that went right, and other parts that went wrong (or, not so well as I had expected); both represented opportunities for learning.  The part that I almost got right was about timing.  I had an hour and a half to fill, and although the session had to be wrapped up pretty quickly (so everyone could get their sandwiches), the timing seemed to be (roughly) about right.

The part that I got wrong wasn’t something that was catastrophically wrong, but instead could be understood in terms of an opportunity to improve the presentation the next time round.  We all use our computers in slightly different ways, and I have to confess that I became particularly fixated on using my own computer in quite a needlessly complicated way (in terms of how to create and use backup files).  As a result, I now have slightly more to talk about, which I think is a good thing (but I might have to re-jig the timing).

There is one implicit side effect of sharing how something is either designed, or how something works.  When we know how something works, we can sometimes find new ways of working, or new ways to use the tools that we have at our disposal.  Whilst probing a strange piece of software can be a little frightening, it’s sometimes possible to find unexpected rewards.  We may never know what these are, unless we spend time doing this.

And finally…

If you’re an associate lecturer, do try to find the time to come to one of the AL development events; you’re always likely to pick something up from the day (and this applies as much to the facilitator as it does to the tutors!)  As well as being useful, they can also be good fun!

After the session had been completed, and the projectors and laptops were turned off, I started to ask myself a question.  This was: ‘what can I do for the next conference?’  Answering this question is now going to be one of my next tasks.

Christopher Douce

Associate Lecturer Professional Development Conference: Kent College, Tonbridge

Visible to anyone in the world
Edited by Christopher Douce, Monday, 24 Mar 2014, 14:14

The Open University in the South East ran one of their associate lecturer professional development conferences on the 1 March 2014.   This year, the conference was held at Kent College, Tonbridge.  I don’t know whether I wrote about this before, but this was the same venue where I attended my first ever OU tutorial (as a rookie tutor).  Today, the site is very different. Then it was gloomy and dark.  Now, the buildings are bright and airy, and boasted a spectacular view of the Kent countryside.

This post is a very brief summary of the event.  The summary draws directly on the notes that I made during the day (and these, by definition, will probably contain a couple of mistakes!)  It also contains a bunch of rough reflections.  I should add that this blog is primarily intended for other associate lecturer colleagues but it might accidentally be of wider interest to others too.

During this conference, I signed up for two sessions.  The first was entitled, ‘supporting academic writing’.  The second session was all about, ‘aligning TMA feedback to students’ needs and expectations’. 

Supporting academic writing

This first session was facilitated by Anna Calvi, who projected a set of phrases about academic writing onto a digital whiteboard.  A couple of examples were, ‘what is a semi-colon?’ and ‘I think of ideas and information as I write’. ‘Do any of you recognise these?  Which are the most important for you?’ Anna asked, challenging us to respond.  She didn’t have to wait long for an answer.

A couple of responses that I noted down were: explaining why structure is important, the importance of paraphrasing and differences between written English and spoken English.  There’s also the necessity to help students to understand what is meant by ‘written academic English’.  Some suggestions were immediately forthcoming: the choice of vocabulary, style and appropriate referencing.

One of the participants asked a question that I have heard asked before.  This was, ‘can all faculties have a module that helps students to write descriptively?’  The truth of the matter is that different faculties do different things.  In the Mathematics Computing and Technology faculty, writing skills are embedded (and emphasised) within the introductory level 1 modules.  Other faculties have dedicated modules.  Two key modules are LB160 Professional Communication Skills for Business Studies, and L185 English for Academic Purposes, which I understand can contribute credit to some degree programmes.

During this session, all the tutors were directed towards other useful resources.  These include a useful student booklet entitled Reading and Taking Notes (PDF) which is connected to an accompanying Skills for Study website (OU website).  Another booklet is entitled Thinking Critically (PDF).  This one is particularly useful, since the terms ‘analyse critically’ and ‘critically evaluate’ can (confusingly) appear within module texts, assignments and exams.

One of the points shared during this first session was really important: it’s important to emphasise what academic writing is right at the start of a programme of study.

What needs to be done?

So, how can tutors help?  Anna introduced us to a tool known as the MASUS framework.  MASUS is an abbreviation for Measuring the Academic Skills of University Students, and it originally came from the University of Sydney.  We were directed to a video (OU website) which describes what the framework is and how it works.  A big part of the framework (from what I remember), is a checklist for academic writing (OU website).  In essence, this tool helps us (tutors) to understand (or think about) what kind of academic writing support students might need.  Key areas can include the use of source materials (choosing the right ones), organising a response in an appropriate way, using language that is appropriate to both the audience and the task, and so on.  In some respects, the checklist is an awareness raising tool.  The tutor’s challenge lies in how to talk to students about aspects of writing.

If you’re interested, a more comprehensive summary of the MASUS framework (PDF) is available directly from the University of Sydney.  Another useful resource is the OU’s own Developing academic English, which tutors can refer students to.  We were also directed to an interesting external resource, a Grammar tutorial, from the University of Bristol.

Offering feedback

After looking at the checklist and these resources we moved onto a wider discussion about how best tutors can help students to develop their academic writing.  I’ve made a note of two broad approaches; one is reactive, the other is proactive. A reactive strategy might include offering general backward-looking feedback and perhaps running a one to one session with a student.  A proactive approach, on the other hand, could include discussions through a tutor group forum, activities within tutorials, sharing of handouts that contain exercises, and practical feed-forward advice within assignments that have been returned.

TMA feedback can, for example, give examples (or samples) of what is considered to be effective writing.  An important point that emerged from the discussions was that it is very important to be selective, since commenting on everything can be very overwhelming.  One approach is to offer a summary and provide useful links (and pointers) to helpful resources.

On-line tutorials

Anna moved onto the question of what tutors might (potentially) do within either face to face or on-line tutorials to help students with their academic writing; this was the part of the session where tutors had an opportunity to share practice with each other.  Anna also had a number of sample activities that we could either use, modify, or draw teaching inspiration from.

The first example was an activity where students had to choose key paragraphs from a piece of writing.  Students could then complete a ‘diagram’ to identify (and categorise) different parts (or aspects of an argument).  Another activity might be to ask students to identify question words, key concepts and the relationships between them. 

Further ideas include an activity to spot (or identify) parts of an essay, such as introductory sentences, background information, central claims and perhaps a conclusion.  A follow on activity might be to ask questions about the purpose of each section, then connecting this with a discussion of the tasks that are required for an assignment.

There was also a suggestion of using some cards.  Students could be asked to match important terms written on cards to paragraphs. Terms could include: appropriate tone, formality, alternative views, vocabulary, linking words, and so on.  There would also be an opportunity to give examples, to allow tutors to emphasise the importance of writing principles.

A further tip was to search the OpenLearn website for phrases such as ‘paraphrasing’ (or module codes, such as L185) for instance.  The OpenLearn site contains some very useful fragments of larger courses which might be useful to direct students to.

Aligning TMA feedback to students’ needs and expectations

This second session was facilitated by Concha Furnborough.  Her session had the subheading, ‘how well does our feedback work?’, which is a very important question to ask.  It soon struck me that this session was about the sharing of research findings with the intention of informing (and developing) tutor practice.

I’ve made a note of another question: how do we bridge the gap between actual and desired performance?  Connecting back to the previous session, a really important principle is to offer ‘feed-forward’ comments, which aim to guide and alter future behaviour.

An early discussion point that I noted was that some students don’t take the time to download their feedback (after they have discovered what their assignment marks were).  We were all reminded that we (as tutors) really need to take the time to make sure students download the feedback that they are entitled to receive.

This session described some of the outcomes from a project called eFeP, which is an abbreviation for e-Feedback evaluation project, funded by Jisc (which supports the use of digital technologies in education and research).  If you’re interested, more information about the project is available from the eFeP project website (Jisc).

The aim of the project was to understand the preferences and perceptions that students have about the auditory and written feedback that are offered by language tutors.  The project used a combination of different techniques.  Firstly, it used a survey.  The survey was followed by a set of interviews.  Finally, ten students were asked to make a screen-cast recording; students were asked to talk through their responses to the feedback and guidance offered by their tutors.

One of the most interesting parts of the presentation (for me) was a description of a tool known as ‘feedback scaffolding’.  The ‘scaffolding’ corresponds to the different levels or layers of feedback that are offered to students.  The first level relates to a problem or issue that exists in an assignment.  Level two relates to an identification of the type of error.  If we’re thinking in terms of language teaching, this might be the wrong word case (or gender) being applied.  The third level is where an error is corrected.  The fourth is where an explanation is given, and the fifth is clear advice on how performance might be potentially improved.

Feeling slightly disruptive, I had to ask a couple of questions.  Firstly, I asked whether there was a category where tutors might work to contextualise a particular assignment or question, i.e. to explain how it relates to the subject as a whole, or to explain why a question is asked by a module team.  In some respects, this can fall under the final category, but perhaps not entirely.

My second question was about when in their learning cycle students were asked to comment on their feedback.  The answer was that they gave their feedback once they had taken the time to read through and assimilate the comments and guidance that the tutors had offered.   Another thought would be to capture how feedback is understood the instant that it is received by a learner.  (I understand that the researchers have plans to carry out further research).

If anyone is interested, there is a project blog (OU website), and it’s also possible to download a copy of a conference paper about the research from the OU’s research repository.

Reflections

Even though I attended only two sessions, there was a lot to take in.  One really interesting point was to hear different views about the challenges of academic writing from different people who work in different parts of the university.  I’ve heard it said that academic writing (of the type of writing needed to complete TMAs) is very tough if you’re doing it for the first time.  In terms of raising awareness of different resources that tutors could use to help students, the first session was especially useful.

These conferences are not often used to disseminate research findings, but the material that was covered in the second session was especially useful.  It exposed us to a new feedback framework (that I wasn’t aware of), and secondly, it directly encouraged us to consider how our feedback is perceived and used.

One of the biggest benefits of these conferences is that they represent an opportunity to share practices.  A phrase that I’ve often heard is, ‘you always pick up something new’.

Copies of the presentations used during the conference can be found by visiting the South East Region conference resources page (OU website, staff only).

Footnote

A week after drafting this summary, I heard that the university plans to close the South East regional centre in East Grinstead.  I started with the South East region back in 2006, and it was through this region that I began my career as an associate lecturer.

All associate lecturers are offered two days of professional development as part of their contract, and the events that the region has offered have helped to shape, inform and inspire my teaching practice.  Their professional development events have helped me to understand how to run engaging tutorials, my comfort zone has also been thoroughly stretched through inspiring ‘role play’ exercises, and I’ve also been offered exceptional guidance about how to provide effective correspondence tuition.

Without a doubt, the region has had a fundamental and transformative effect on how I teach and has clearly influenced the positive way that I view my role as an associate lecturer.  The professional development has always been supportive, respectful and motivating.

The implications of the closure of the South East region for the continuing professional development of both new and existing tutors are currently unclear.  My own view is probably an obvious one: if these rare opportunities for sharing and learning were to disappear, the support that the university offers its tutors would be impoverished.

Christopher Douce

e-Learning community event: mobile devices

Visible to anyone in the world
Edited by Christopher Douce, Thursday, 20 Feb 2014, 12:01

Mobile devices are everywhere.  On a typical tube ride to the regional office in London, I see loads of different devices.  You can easily recognise the Amazon Kindle; you see the old type with buttons, and the more modern version with its touch screen.  Other passengers read electronic books with Android and Apple tablets.  Other commuters study their smart phones with intensity, and I’m fascinated with what is becoming possible with the bigger screen phones, such as the Samsung Note (or phablets, as I understand they’re called).  Technology is giving us both convenience and an opportunity to snatch moments of reading in the dead time of travel.

I have a connection with a module which is all about accessible online learning (H810 module description).  In the context of the module, accessibility is all about making materials, products and tools usable for people who have disabilities.  Accessibility can also be considered in a wider sense, in terms of making materials available to learners irrespective of their situation or environment.  In the most recent presentation of H810, the module team has made available much of the learning materials in eBook or Kindle format.  The fact that materials can be made available in this format can be potentially transformative and open up opportunities to ‘snatch’ more moments of learning.

An event I attended on 11 February 2014, held in the university library, was all about sharing research and practice about the use of mobile devices.  I missed the first presentation, which was all about the use of OU Live (an on-line real time conferencing system) using tablet devices.  The other two presentations (which I’ve made notes about) explored two different perspectives: the perspective of the student, and the perspective of the associate lecturer (or tutor).

(It was also interesting to note that the event was packed to capacity; it was standing room only.  Mobile technology and its impact on learning seems to be a hot topic).

Do students study and learn differently using e-readers?

The first presentation I managed to pay attention to was by Anne Campbell, who had conducted a study about how students use e-readers.  Her research question (according to my notes) was whether users of these devices could perform deep reading (when you become absorbed and immersed in a text) and active learning, or alternatively, do learners get easily distracted by the technology?  Active learning can be thought of as carrying out activities such as highlighting, note taking and summarising – all the things that you used to be able to do with a paper based text book and materials.

Anne gave us a bit of context.  Apparently half of OU postgraduate students use a tablet or e-reader, and most use it for studying.  Also, half of UK households have some kind of e-reader.  Anne also told us that there was very little research on how students study and learn using e-readers.  To find out more, Anne conducted a small research project exploring how students consume and work with electronic resources and readers.

The study comprised seventeen students.  Six students were from the social sciences and eleven students were studying science.  All were from a broad range of ages.  The study was a longitudinal diary study.  Whenever students used their devices, they were required to make an entry.  This was complemented with a series of semi-structured interviews.  Subsequently, a huge amount of rich qualitative data was collected and then analysed using a technique known as grounded theory.   (The key themes and subjects that are contained within the data are gradually exposed by looking at the detail of what the participants have said and have written).

One of the differences between using e-readers and traditional text books is the lack of spatial cues.  We’re used to the physical size of a book, so it’s possible to (roughly) know where certain chapters are once we’re familiar with its contents.  It’s also harder to skim read with e-readers, but on the other hand this may force readers to read in more depth.  One comment I’ve noted is, ‘I think with the Kindle… it is sinking in more’.  This, however, isn’t true for all students.

I’ve also noted that there are clear benefits in terms of size.  Some text books are clearly very heavy and bulky; you need a reasonably sized bag to move them around from place to place, but with an e-reader, you can (of course) transfer all the books that you need for a module to the device.  Other advantages are that you can search for key phrases using an e-reader.  I’ve learnt that some e-readers contain a built in dictionary (which means that readers can look up words without having to reach for a paper dictionary).  Other advantages include a ‘clickable index’ (which can help with navigation).  Other more implicit advantages can include the ability to change the size of the text on the display, and the ability to use the ‘voice readout’ function of a mobile device (but I don’t think any participants used this feature).

I also noted that e-readers might not be as well suited for active learning for the reasons that I touched on above, but apparently it’s possible to perform highlights and to record notes within an ebook.

My final note of this session was, ‘new types of study advice needed?’   More of this thought later.

Perspectives from a remote and rural AL

Tamsin Smith, from the Faculty of Science, talked about how mobile technology helps her in her role as an associate lecturer.  I found the subject of this talk immediately interesting and was very keen to learn about Tamsin’s experiences.  One of the modules that Tamsin tutors on consists of seven health science books.  The size and convenience of e-readers can obviously benefit tutors as well as students.

On some modules, key documents such as assignment guides or tutor notes are available as PDFs.  If they’re not directly available, they can be converted into PDFs using freely available software tools.  When you have got the documents in this format, you can access them using your device of choice.  In Tamsin’s case, this was an iPad mini. 

On the subject of different devices, Tamsin also mentioned a new app called OU Anywhere, which is available for both iOS and Android devices.  After this talk, I gave OU Anywhere a try, downloading it to my smartphone.  I soon saw that I could access all the core blocks for the module that I tutor on, along with a whole bunch of other modules.  I could also access videos that were available through the DVD that was supplied with the module.  Clearly, this appeared to be (at a first glance) pretty useful, and was something that I needed to spend a bit more time looking at.

Other than the clear advantages of size and mobility, Tamsin also said that there were other advantages.  These included an ability to highlight sections, to add notes, to save bookmarks and to perform searches.  Searching was highlighted as particularly valuable.  Tutors could, for example, perform searches for relevant module materials during the middle of tutorials. 

Through an internet connection, our devices can allow access to the OU library, on line tutorials through OU Live (as covered during the first presentation that I missed), and tutor group discussion forums allowing tutors to keep track of discussions and support students whilst they’re on the move.  This said, internet access is not available everywhere, so the facility to download and store resources is a valuable necessity.  This, it was said, was the biggest change to practice; the ability to carry all materials easily and access them quickly. 

One point that I did learn from this presentation is that there is an ETMA file handler that is available for the iPad (though not one that is officially sanctioned or supported by the university).

Final thoughts

What I really liked about Anne’s study was its research approach.  I really liked the fact that it used something called a diary study (which is a technique that is touched on as a part of the M364 Interaction Design module).  This study aimed to learn how learning is done.  It struck me that some learners (including myself) might have to experiment with different combinations of study approaches and techniques to find out what works and what doesn’t.  Study technique (I thought) might be a judgement for the individual.

When I enrolled on my first postgraduate module with the Open University, I was sent a book entitled, The Good Study Guide by Andrew Northedge (companion website).  It was one of those books where I thought to myself, ‘how come it’s taken me such a long time to get around to reading this?’, and, ‘if only I had read this as an undergraduate, I might have perhaps managed to get a higher score in some of my exams’.  It was packed full of practical advice on topics such as time management, using a computer to study, reading, making notes, writing and preparing for exams.

It was interesting to hear from Anne’s presentation that studying using our new-fangled devices is that little bit different.  Whilst on one hand we lose some of our ability to put post it notes between pages and see where our thumbs have been, we gain mobility, convenience and extra facilities such as searching. 

It is very clear that more and more of the university’s materials can now be accessed using electronic readers.  Whilst this is likely to be a good thing (in terms of convenience), there are two main issues (connected to each other) that I think we need to bear in mind. 

The first is a very practical issue.  It is: how do we get the materials onto our devices?  Two related questions are: how can we move our materials between different devices? and, how do we effectively manage the materials once we have saved them to our devices?  We might end up downloading a whole set of different files, ranging from different module blocks, assignments and other guidance documents.  It’s important to figure out a way to best manage these files: we need to be literate in how we use our devices.   (As an aside, these questions loosely connect with the nebulous concept of the Personal Learning Environment).

The second issue relates to learning.  In the first presentation, Anne mentioned the term ‘active learning’.  The Good Study Guide contains a chapter about ‘making notes’.  Everyone is different, but I can’t help but think that there’s an opportunity for ‘practice sharing’.  What I mean is that there’s an opportunity to share stories of how learners can effectively make use of these mobile devices, perhaps in combination with more traditional approaches for study (such as note taking and paraphrasing).  Sharing tips and tricks about how mobile devices can fit into a personalised study plan has the potential to show how these new tools can be successfully applied.

A final thought relates to the broad subject of learning design.  Given that half of all households now have access to e-readers of one form or another (as stated in the first presentation I’ve covered), module teams need to be mindful of the opportunities and challenges that these devices can offer.  Although this is slightly away from my home discipline and core subject, I certainly feel that there is work to be done to further understand what these challenges and opportunities might be.  I’m sure that there has been a lot more work carried out than I am aware of.  If you know of any studies that are relevant, please feel free to comment below.

Video recordings of these presentations are available through the university Stadium website.

Permalink 1 comment (latest comment by Jonathan Vernon, Wednesday, 5 Mar 2014, 23:38)
Share post
Christopher Douce

Gresham College: Designing IT to make healthcare safer

Visible to anyone in the world

On 11 February, I was back at the Museum of London.  This time, I wasn’t there to see juggling mathematicians (Gresham College) talking about theoretical anti-balls.  Instead, I was there for a lecture about the usability and design of medical devices by Harold Thimbleby, who I understand is from Swansea University.

Before the lecture started, we were subjected to a looped video of a car crash test; a modern car from 2009 was crashed into a car built in the 1960s.  The result (and later point) was obvious: modern cars are safer than older cars; continual testing and development makes a difference.  Even though there have been substantial improvements, Harold made a really interesting point.  He said, ‘if bad design was a disease, it would be our 3rd biggest killer’.

Computers are everywhere in healthcare.  Perhaps introducing computers (or mobile devices) might be able to help?  This might well be the case, but there is also the risk that hospital staff might end up spending more time trying to get technology to do the right things than dealing with more important patient issues.  There is an underlying question of whether a technology is appropriate or not.

This blog post has been pulled directly from the notes that I made during the lecture.  If you’re interested, I’ve provided a link to the transcript of the talk, which can be found at the end.

Infusion pumps

Harold showed us pictures of a series of infusion pumps.  I didn’t know what an infusion pump was.  Apparently it’s a device that is a bit like an intravenous drip, but you program it to dispense a fluid (or drug) into the bloodstream at a certain rate.  I was very surprised by the pictures: every infusion pump looked different from the others, and these differences were quite shocking.  They each had different screens and different displays.  They were different sizes and had different keypad layouts.  It was clear that there was little in the way of internal or external consistency.  Harold made an important point: that they were ‘not designed to be readable, they were designed to be cheap’ (please forgive my paraphrasing here).

We were regaled with further examples of interaction design terror.  A decimal point button was placed on an arrow key; it was clear that there was no appropriate mapping between the button and its intended task.  Pushing a help button gave little in the way of help to the user.

We were told of a human factors analysis study where six nurses were required to use an infusion pump over a period of two hours (I think I’ve noted this down correctly).  The conclusion was that all of the nurses were confused.  Sixty percent of the nurses needed hints on how to use the device, sixty percent were confused by how the decimal point worked (in this particular example) and, strikingly, sixty percent entered the wrong settings.

We’re not talking about trivial mistakes here; we’re talking about mistakes where users may be fundamentally confused by the appearance and location of a decimal point.   Since we’re also talking about devices that dispense drugs, small errors can become life threateningly catastrophic.

Calculators

Another example of a device where errors can become significant is the common hand-held calculator.  Now, I was of the opinion that modern calculators were pretty idiot proof, but it seems that I might well be the idiot for assuming this.  Harold gave us an example where we had to calculate percentages of the world population.  Our hand-held calculator simply threw away zeros without telling us, without giving us any feedback.  If we’re not thinking, and since we implicitly trust that calculators carry out calculations correctly, we can easily assume that the answer is correct too.  The point is clear: ‘calculators should not be used in hospitals, they allow you to make mistakes, and they don’t care’.
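I don’t know exactly which calculation Harold used, but the underlying failure mode is easy to reproduce: fixed-precision arithmetic silently discards digits once numbers get big enough, with no error and no feedback.  A minimal sketch in Python (the numbers here are my own, chosen only to show the effect):

```python
# Double-precision floats carry roughly 15-16 significant decimal digits.
# Beyond that, digits are thrown away silently -- no warning, no feedback.
population = 10_000_000_000_000_000  # 1e16, at the limit of float precision

# Adding 1 changes nothing: the '+ 1' simply vanishes.
assert float(population) + 1 == float(population)

# The same silent rounding affects everyday decimal fractions:
assert 0.1 + 0.2 != 0.3  # the sum is actually 0.30000000000000004
```

This is exactly the kind of mistake a distracted user would never notice, because the device gives no indication that anything has been lost.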

Harold made another interesting point: when we use a calculator we often look at the keypad rather than the screen.  We might have a mental model of how a calculator works that is different to how it actually responds.   Calculators that have additional functions (such as a backspace, or delete last keypress buttons) might well break our understanding and expectations of how these devices operate.  Consistency is therefore very important (along with the visibility of results and feedback from errors).

There was an interesting link between this Gresham lecture and the lecture by Tony Mann (blog summary), which took place in January 2014.  Tony made exactly the same point that Harold did.  When we make mistakes, we can very easily blame ourselves rather than the devices that we’re using.  Since we hold this bias, we’re also reluctant to raise concerns about the usability of the devices and equipment that we’re using.

Speeds of Thinking

Another interesting link was that Harold drew upon research by Daniel Kahneman (Wikipedia), explicitly connecting the subject of interface design with the subject of cognitive psychology.  Harold mentioned one of Kahneman’s recent books entitled: ‘Thinking Fast and Slow’, which posits that there are two cognitive systems in the brain: a perceptual system which makes quick decisions, and a slower system which makes more reasoned decisions (I’m relying on my notes again; I’ve got Daniel’s book on my bookshelves, amidst loads of others I have chalked down to read!)

Good design should take account of both the fast and the slow system.  One really nice example was with the use of a cashpoint to withdraw money from your bank account.  Towards the end of the transaction, the cashpoint begins to beep continually (offering perceptual feedback).  The presence of the feedback causes the slower system to focus attention on the task that has got to be completed (which is to collect the bank card).   Harold’s point is simple: ‘if you design technology properly we can make the world better’.

Visibility of information

How do you choose one device or product over another?  One approach is to make usually hidden information more visible to those who are tasked with making decisions.  A really good example of this is the energy efficiency ratings on household items, such as refrigerators and washing machines.  A similar rating scheme is available on car tyres too, exposing attributes such as noise, stopping distance and fuel consumption.  Harold’s point was: why not create a rating system for the usability of devices?

Summary

The Open University module M364 Fundamentals of Interaction Design highlights two benefits of good interaction design: an economic argument (good usability can save time and money) and a safety argument.

This talk clearly emphasised the importance of the safety argument and emphasised good design principles (such as those created by Donald Norman), such as visibility of information, feedback of action, consistency between and within devices, and appropriate mapping (which means that buttons that are pressed should do the operation that they are expected to do).

Harold’s lecture concluded with a number of points that relate to the design of medical devices.  (Of which there were four, but I’ve only made a note of three!)  The first is that it’s important to rigorously assess technology, since this way we can ‘smoke out’ any design errors and problems (evaluation is incidentally a big part of M364).  The second is that it is important to automate resilience, or to offer clear feedback to the users.  The third is to make safety visible through clear labelling.

It was all pretty thought provoking stuff which was very clearly presented.  One thing that struck me (mostly after the talk) is that interactive devices don’t exist in isolation; they’re always used within an environment.  Understanding the environment, and the way in which communications occur between the different people who work within it, is also important (and there are different techniques that can be used to learn more about this).

Towards the end of the talk, someone else asked the very question that I had in mind: ‘is it possible to draw inspiration from the aviation industry and apply it to medicine?’  It was a very good question.  I’ve read (in another OU module) that an aircraft cockpit can be used as a way to communicate system state to both pilots.  Clearly, this is the subject of on-going research, and Harold directed us to a site called CHI Med (computer-human interaction).

Much food for thought!  I came away from the lecture feeling mildly terrified, but one consolation was that I had at least learnt what an infusion pump was.  As promised, here’s a link to the transcript of the talk, entitled Designing IT to make healthcare safer (Gresham College). 

Permalink Add your comment
Christopher Douce

Bletchley Park archive course

Visible to anyone in the world

At the end of January, I took a day off my usual duties and went to an event called the ‘Bletchley Park archive course’.  I heard about the course through the Bletchley Park mailing list.  As soon as I received the message telling me about it I contacted the organisers straight away, but unfortunately, I was already too late: there were no longer any spaces on the first event.  Thanks to a kind-hearted volunteer, I was told about the follow-up event.

This blog post is likely to be the first of a number of blog posts about Bletchley Park, a place that is significant not only in terms of Second World War intelligence gathering and analysis, but also in the history of computing.  It’s a place I’ve been to a couple of times, but this visit had a definite purpose: to learn more about the archives and what they might be able to tell a very casual historian of technology, like myself.

I awoke at about half six in the morning, which is the usual time when I have to travel to Milton Keynes and found my way to my local train station.  The weather was shocking, as it was for the whole of January.  I was wearing sturdy boots and had donned a raincoat, as instructed by the course organisers.  Two trains later, I was at Euston Station, ready to take the relatively short journey north towards Milton Keynes, and then onto the small town of Bletchley, just one stop away.

Three quarters of an hour later, after walking through driving rain and passing what appeared to be a busy building site, I had found the room where the ‘adult education’ course was to take place.

Introduction and History

The day was hosted by Bletchley Park volunteer Susan Slater.  Susan began by talking about the history of the site that was to ultimately become a pivotal centre for wartime intelligence.  Originally belonging to a financier, the Bletchley Park manor house and adjoining lands were put up for auction in 1937.

Bletchley was a good location; it was pretty inconspicuous.  It was also served by two railway lines: one that went to London, and another that ran from east to west, connecting the universities of Oxford and Cambridge.  Not only was it well served in terms of transport, the railway also offered other kinds of links; it was possible to connect to telecommunication lines that I understand ran next to the track.  Importantly, it was situated outside of London (and away from the never-ending trials of the Blitz).

Susan presented an old map and asked us what we thought it was.  It turned out to be a map of the telegraph system during the time of the British Empire; red wires criss-crossed the globe.  The telegraph system can be roughly considered to be a ‘store and forward’ system.  Since it was impossible (due to distances involved) to send a message from England to, say, Australia, directly, messages (sent in morse code) were sent via a number of intermediate stations (or hubs). 

Susan made the point that whoever ran the telecommunication hubs was also able to read all the messages that were transferred through them.  If you want your communications to be kept secret, the thing to do is to encode them in some way.  Interestingly, Susan also referred to Edward II: there was a decree in around 1324 (if I understand this correctly!) that stated ‘all letters coming from or going to parts overseas [could] be seized’.  Clearly, the contemporary debates about the interception of communications have very deep historical roots.

We were introduced to some key terms.  A code is a representation of letters and words by other letters and words.  A cypher is a scheme in which letters are replaced with other letters.  I’ve also noted that if something is formulaic (or predictable), then it can become breakable (which is why you want to hide artefacts of language: certain characters in a language are statistically more frequent than others, for example).  The most secure way to encode a message is to use what is called a one-time pad (Wikipedia): an encoding key that is used only once and then thrown away.
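As a sketch of the one-time pad idea (using modern byte-wise XOR rather than the letter tables of the period; the function name is my own invention), the message is combined with a random key of the same length, and applying the same key again recovers it:

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # The key must be truly random, at least as long as the message,
    # and -- crucially -- never used twice.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

key = secrets.token_bytes(14)
ciphertext = otp(b"ATTACK AT DAWN", key)
assert otp(ciphertext, key) == b"ATTACK AT DAWN"  # XOR is its own inverse
```

Because every key is random and used once, the ciphertext carries none of the statistical artefacts of language mentioned above; this is why the scheme is unbreakable when used correctly.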

An Enigma machine (Wikipedia), which sat at the front of the classroom, was an electro-mechanical implementation of an encoding mechanism.  Susan outlined its design to us: it had a keyboard like a typewriter, a plug board (to replace one letter with another), three or four rotors that each had as many positions as there were characters (and which moved every time you pressed a key), and wiring within the rotors that changed the ‘letters’ even further.
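To get a feel for how a rotor works, here is a toy single-rotor machine in Python.  This is my own drastic simplification (no plug board, no reflector, just one stepping rotor); the wiring string is, I believe, that of the historical Enigma rotor I.  The important behaviour is that the rotor advances on every keypress, so repeated letters encrypt differently:

```python
import string

ALPHA = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # wiring of the historical Enigma rotor I

def toy_rotor(text: str, start: int, decrypt: bool = False) -> str:
    out = []
    for step, ch in enumerate(text):
        pos = (start + step) % 26              # the rotor steps on every keypress
        entry = (ALPHA.index(ch) + pos) % 26   # where the signal enters the rotor
        if decrypt:
            exit_ = ROTOR.index(ALPHA[entry])  # run the wiring backwards
        else:
            exit_ = ALPHA.index(ROTOR[entry])  # run the wiring forwards
        out.append(ALPHA[(exit_ - pos) % 26])
    return "".join(out)

ciphertext = toy_rotor("AAAA", start=0)
assert len(set(ciphertext)) > 1  # repeated letters come out differently
assert toy_rotor(ciphertext, start=0, decrypt=True) == "AAAA"
```

The stepping is what makes the cipher polyalphabetic, and it is why frequency analysis alone could not break it.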

Second session: how it all worked

After a swift break, we dived straight into the second session, where we were split into two teams.  One team had to encrypt a message (using the Enigma machine), and the second team had to use the same machine to decrypt the same message (things were made easier since the ‘decrypting side’ knew what all the machine settings were).   I think my contribution was to either press a letter ‘F’ or a letter ‘Q’ – I forget!  Rotors turned and lights lit up.  The seventy-something year old machine still did its stuff.

What follows are some rough notes from my notebook (made quickly during the class).  We were told that different parts of the German military used different code books (and the Naval Enigma machine was different to the other Enigma machines).  Each code book lasted for around six weeks.  The code book contained information such as the day, the rotor positions, the starting point of the rotors and the plug board settings; everything you needed to make understandable messages totally incomprehensible.

The challenge was, of course, to uncover what the settings of an Enigma machine were (so messages could be decrypted).  A machine called the Bombe (Wikipedia) was invented to help with the process of figuring out what the settings might be.  When settings were (potentially) uncovered, they were tested by entering them into a machine called the Typex (which was, in essence, a version of an Enigma machine) along with the original message, to see if plain text (an unencrypted message) appeared.

The Enigma wasn’t the only machine that was used to encrypt (and decrypt) messages. Enigma (as far as I understand) was used for tactical communications.  Higher level strategic communications used in the German high command were transmitted using the Lorenz cypher.  This more complicated machine contained a paper tape reader which allowed the automatic transmission of messages, dispensing with the need for a morse code operator.

In terms of the scale of the operation at Bletchley Park, we were told that three thousand Enigma messages and forty Lorenz messages were being decoded every day.  To help with this, there were 210 Bombe machines to help with the Enigma codes, and a machine that is sometimes described as ‘the world’s first electronic computer’: Colossus.  At its peak, there were apparently ten thousand workers (three quarters of whom were women), running three shifts.

Bombe Demo

After a short break, we were gently ushered downstairs to one of the museum exhibits; a reconstruction of a Bombe machine.  This was an electro-mechanical device that ‘sped up’ the process of discovering Enigma machine settings.  Two operators described how it worked and then turned it on.  It emitted a low whirring and clicking noise as it mechanically went through hundreds of combinations.

As the Bombe was running, I had a thought.  I wondered how you might go about writing a computer program, or a simulation, to do pretty much the same thing.  The machine operators talked about the use of something called a ‘code map’, which helped them to find a route towards the settings.  I imagined an interactive smartphone or tablet app that allowed you to play with your own version of a Bombe, to get a feel for how it would work...  There could even be a virtual Enigma machine that you could play with; you could create a digital playground for budding cryptographers.
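My imagined program might look something like this sketch (all names are my own, and the cipher is a stand-in stepping Caesar shift rather than a real Enigma).  The Bombe-like part is the search: try every possible setting and keep those whose decryption contains a ‘crib’, a guessed fragment of the plaintext:

```python
import string

ALPHA = string.ascii_uppercase

def toy_encrypt(text: str, start: int) -> str:
    # Stand-in cipher: a Caesar shift that steps on every letter,
    # loosely mimicking a single rotor (no plug board, no reflector).
    return "".join(ALPHA[(ALPHA.index(c) + start + i) % 26]
                   for i, c in enumerate(text))

def toy_decrypt(text: str, start: int) -> str:
    return "".join(ALPHA[(ALPHA.index(c) - start - i) % 26]
                   for i, c in enumerate(text))

def search_settings(ciphertext: str, crib: str) -> list[int]:
    # Brute force, Bombe-style: keep every starting setting whose
    # decryption contains the crib.
    return [s for s in range(26) if crib in toy_decrypt(ciphertext, s)]

ct = toy_encrypt("WEATHERREPORTFOLLOWS", start=7)
assert 7 in search_settings(ct, "WEATHER")
```

The real Bombe did something far cleverer than exhaustive search, using contradictions to eliminate whole families of settings at once, but the crib-driven idea is the same.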

Of course, there’s no such thing as an original thought: a Bombe simulator has already been written by the late Tony Sale (who reconstructed the Colossus machine), and a quick internet search revealed a bunch of Enigma machine simulators.  One burning question is how we might make the best use of these tools and resources.

Archive Talk

The next part of the day was all about the archive; the real reason I signed up for this event.  I have to confess that I didn’t really know what to expect and this sense of uncertainty was compounded by having a general interest rather than having a very specific research question in mind.

The archive is run by the Bletchley Park Trust.  GCHQ, the Government Communications Headquarters, is the custodian of the records that have come from Bletchley Park.  I understand that GCHQ is going to use Bletchley Park as its ‘reading room’, having lent it around one hundred and twenty thousand documents for a period of fifty years.

By way of a very general introduction, a number of samples from the archive were dotted around our training room.  These ranged from Japanese language training aids (and a hand-written Japanese-English dictionary), forms used to help with the decryption of transmissions, through to samples of transmissions that were captured during the D-Day landings.

Apparently, there’s a big multi-stage project under way to digitise the archive.  The first stage is to have the artefacts professionally photographed.  This is followed by (I believe) storing the documents in some kind of on-line repository.  Volunteers may then be needed to help create metadata (or descriptions) for each repository item, to enable them to be found by researchers.

Tour

The final part of the day was a tour.  As I mentioned earlier, I’ve been on a couple of Bletchley Park tours, but this was unlike any of the earlier tours I had been on before.  We were all given hard hats and told to don high visibility jackets.  We were then ushered into the driving rain.

After a couple of minutes of trudging, we arrived at a building that I had first seen when I entered the site.  The building (which I understand was known as ‘hut 3’) was to become a new visitors’ centre.  From what I remember, the building used to house one of the largest punched card archives in Europe, known as Deb’s delight (for a reason that completely escapes me).  It was apparently used to cross-reference material (and I’m writing in terrible generalisations here, since I really don’t know very much!)

Inside, there was no real lighting, and dust from work on the floors hung in the air.  There was a strong odour of glue or paint.  Stuff was clearly happening.  Internal walls had been stripped away to reveal a large open plan area which would become an ideal exhibition space.  Rather than being a wooden prefabricated ‘hut’, we were walking through a substantial brick building.

Minutes later, we were directed towards two other huts that were undergoing restoration.  These were the wooden ones.  It was obvious that these buildings had lacked any kind of care and attention for many years, and workmen were busy securing the internal structure.  Avoiding lights and squeezing past tools, we snaked through a series of claustrophobic corridors, passing through what used to be the Army Intelligence block and then onto the Navy Intelligence block.  These were the rooms in which real secrets became clear.   Damp hung in the air, and mould could be seen creeping up some of the old walls.  There was clearly a lot of work that needed to be done.

Final thoughts

Every time I visit Bletchley Park, I learn something new.  This time, I became more aware of what happened in the different buildings, and I certainly learnt more about the future plans for the archive.  Through the talks that took place at the start of the day, I also learnt of a place called the Telegraph museum (museum website), which can be found at Porth Curno, Cornwall.   When walking through the various corridors to the education room, I remember a large poster that suggested that all communication links come to Bletchley Park, and that Bletchley is the centre of everything.

When it comes to a history of computing, it’s impossible to separate the history of the computer from the history of telecommunications.  In Bletchley Park, communications and computing are fundamentally intertwined.  There’s another aspect, which is that computing (and computing power) has led to the development of new forms of communication.  Before I go any further forward in time (from, say, 1940 onwards), there’s a journey I have to make back in time: a diversion to discover more about telecommunications, and a good place to start is by learning more about the history of the telegraph system.

I’ll be back another day (ideally when it’s not raining) to pay another call to Bletchley Park, and will also drop in to The National Museum of Computing, which occupies the same site.

Permalink 1 comment (latest comment by Rebecca Kowalski, Thursday, 13 Feb 2014, 14:52)
Christopher Douce

Gresham College Lecture: Notations, Patterns and New Discoveries (Juggling!)

Visible to anyone in the world

On a dark winter’s evening on 23 January 2014, I discovered a new part of London I had never been to before.  Dr Colin Wright gave a talk entitled ‘notations, patterns and new discoveries’ at the Museum of London.   The subject was intriguing in a number of different ways.  Firstly, it was all about the mathematics of juggling (which represented a combination of ideas that I had never come across before).  Secondly, it was about notations.

The reason why I was ‘hooked’ by the notation part of the title is that my home discipline is computer science.  Computers are programmed using notation systems (programming languages), and when I was doing some research into software maintenance and object-oriented programming I discovered a series of fascinating papers about something called the ‘cognitive dimensions of notations’.  Roughly put, these were all about how we can efficiently work with (and think about) different types of notation system.

In its broadest sense, a notation is an abstraction or a representation.  It allows us to write stuff down.  Juggling (like dance) is an activity that is dynamic, almost ethereal; it exists in time and space, and then it can disappear or stop in an instant.  Notation allows us to write down or describe the transitory.  Computer programming languages allow us to describe sets of invisible instructions and sequences of calculations that exist nowhere except within digital circuits.  When we’re able to write things down, it turns out that we can more easily reason about what we’ve described, and make new discoveries too.

It took between eight and ten minutes to figure out how to get into the Museum of London.  It sits in the middle of a roundabout that I’ve passed a number of times before.  Eventually, I was ushered into a huge cavernous lecture theatre, which clearly suggested that this was going to be quite ‘an event’.  I was not to be disappointed.

Within minutes of the start of the lecture, we heard the names of famous mathematicians: Gauss and Leibniz.  One view was that ‘truths (or proofs) should come from notions rather than notations’.  Colin, however, had a different view: that there is an interplay between notions (or ideas) and notations.

During the lecture, I made a note of the following: a notation is a ‘specialist terminology [that] allows rapid and accurate communication’.  Colin then moved on to ask the question, ‘how can we describe a juggling pattern?’  This led to the creation of an abstraction that could describe the movement of juggling balls.

Whilst I was listening, I thought, ‘this is exactly what computer programmers do; we create one form of notation (a computer program), using another form of notation (a computer language) – the computer program is our abstraction of a problem that we’re trying to solve’.  Colin introduced us to juggling terms (or high level abstractions), such as the ‘shower’, ‘cascade’ and ‘Mills’ mess’.  This led towards the more intellectually demanding domain of ‘theoretical juggling’ (with impossible numbers of balls).
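The notation Colin works with is commonly known as siteswap (he helped to develop it): each number records how many beats later the thrown ball is thrown again.  Two nice results fall straight out of the abstraction: a sequence is juggleable only if no two balls land on the same beat, and the average of the numbers is the number of balls.  A short sketch of both checks (function names are my own):

```python
def is_juggleable(throws: list[int]) -> bool:
    # A pattern is valid iff the landing beats (i + throw) mod n are all
    # distinct, i.e. no two balls arrive in the same hand at the same time.
    n = len(throws)
    return len({(i + t) % n for i, t in enumerate(throws)}) == n

def number_of_balls(throws: list[int]) -> int:
    # For any valid pattern, the average of the throws is the ball count.
    return sum(throws) // len(throws)

assert is_juggleable([3, 3, 3])          # the three-ball cascade
assert number_of_balls([5, 3, 1]) == 3   # a three-ball trick
assert not is_juggleable([5, 4, 3])      # every throw lands on the same beat
```

This is a lovely example of the lecture’s theme: once the pattern is written down, properties that are invisible in the air become simple arithmetic.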

My words can’t really do the lecture justice.  I should add that it is one of those lectures where you would learn more by listening to it more than once.  Thankfully, for those who are interested, it was recorded, and it is available on-line (Gresham College).

Whilst I was witnessing all these great tricks, one thought crossed my mind: ‘how much time did you have to spend figuring out all this stuff and learning all these juggling tricks?!  Surely there was something better you could have done with your time!’  (Admittedly, I write this partially in jest and with jealousy, since I can’t catch and I fear that doing ‘a cascade’ with three balls is, for me, a theoretical impossibility.)

It was a question that was implicitly answered by considering the importance of pure mathematics.  Doing and exploring stuff only because it is intellectually interesting may potentially lead to a real world practical use – the thing is that you don’t know what it might be and what new discoveries might emerge.  (A good example of this is number theory leading to the practical application of cryptography, which is used whenever we buy stuff over the internet). 

All in all, great fun.  Recommended.

Permalink Add your comment
Christopher Douce

Gresham College Lecture: User error – why it’s not your fault

Visible to anyone in the world

On 20 January 2014 I found the time to attend a public lecture in London that was all about usability and user error. The lecture was presented by Tony Mann, from the University of Greenwich.  The event was in a group of buildings just down the street from Chancery Lane underground station.  Since I was keen on this topic, I arrived twenty minutes early only to find that the Gresham College lecture theatre was already full to capacity.  User error (and interaction design), it seems, was apparently a very popular subject!

One phrase that I’ve made a note of is that ‘we blame ourselves if we cannot work something’, that we can quickly acquire feelings of embarrassment and incompetence if we do things wrong or make mistakes.  Tony gave us the example that we can become very confused by the simplest of devices, such as doors. 

Doors that are well designed should tell us how they should be used: we rely on visual cues to tell us whether they should be pushed or pulled (which is called affordance), and if we see a handle, then we regularly assume that the door should be pulled (which is our application of the design rule of ‘consistency’).  During this part of Tony’s talk, I could see him drawing heavily on Donald Norman’s book ‘The Psychology of Everyday Things’ (Norman’s work is also featured within the Open University module, M364 Fundamentals of Interaction Design).

I’ve made a note of Tony saying that when we interact with systems we take information from many different sources, not just the most obvious.  An interesting example was the Kegworth air disaster (Wikipedia), which occurred because the pilots had turned off the wrong engine, drawing on experience gained from a different but similar aircraft.

Another really interesting example was the case where a pharmacy system was designed in such a way that drug names could be no more than 24 characters in length.  This created a situation where different drugs (which had very similar names, but had different effects) could be prescribed by a doctor in combinations which could potentially cause fatal harm to patients.  Both of these examples connect perfectly to the safety argument for good interaction design.  Another argument (that is used in M364) is an economic one, i.e. poor interaction design costs users and businesses both time and money.
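I don’t know the exact drug pair from Tony’s example, but the failure mode is easy to illustrate: two distinct names that agree in their first 24 characters become indistinguishable once truncated.  (The names below are chosen purely for illustration, not taken from the talk.)

```python
LIMIT = 24  # the field length from Tony's example

# Two distinct drug names sharing a long common prefix (illustrative only):
drug_a = "methylprednisolone sodium succinate"
drug_b = "methylprednisolone sodium phosphate"

assert drug_a != drug_b
# After truncation, the system can no longer tell them apart:
assert drug_a[:LIMIT] == drug_b[:LIMIT]
```

The deadly detail is that the truncation happens silently: the doctor sees a valid-looking name, with no hint that the distinguishing suffix has been cut off.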

Tony touched upon further issues that are also covered in M364.  He said, ‘we interact best [with a system] when we have a helpful mental model of a system’.  Our mental models determine our behaviour; humans (generally) have good intuition when interacting with physical objects, and it is hard to discard the mental models that we form.

Tony argued that it is the job of an interaction designer to help us create a useful mental model of how a system works; if there’s a conflict between what a design tells us and how we think something works, we can very easily get into trouble very quickly.  One way to help with this is to make use of metaphor.  Tony Mann: ‘a strategy is to show something that we understand’, such as a desktop metaphor or a file metaphor on a computer.  I’ve also paraphrased the following interesting idea: a ‘designer needs to both think like a computer and think like a user’.

One point was clearly emphasised: we can easily choose not to report mistakes.  This means that designers might not always receive important feedback from their users.  Users may too easily think, ‘that’s just a stupid error that I’ve made…’  Good design, it was argued, prevents errors (which is another important point that is addressed in M364).  Tony also introduced the notion of resilience strategies: things that we do to help us avoid making mistakes, such as hanging our scarf in a visible place so we remember to take it home after we’ve been somewhere.

The three concluding points were: we’re always too ready to blame ourselves when we make a blunder, that we don’t help designers as often as we ought to, and that good interaction design is difficult (because we need to consider different perspectives).

Tony’s talk touched upon wider (and related) subjects, such as the characteristics of human error and the ways that systems could be designed to minimise the risk of mistakes arising.  If I were to be very mean and offer a criticism, it would be that there was perhaps more of an opportunity to talk about the ‘human’ side of error – but here we begin to step into the domain of cognitive psychology (as well as engineering and mathematics).  This said, his talk was a useful and concise introduction to the importance of good interaction design.

Permalink Add your comment
Christopher Douce

Interaction design and user experience for motorcyclists

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 9 Feb 2014, 16:04

Has anyone ever uttered the following phrases: ‘it must be me!’ or ‘I must be stupid, I can’t work this system!’?  When you say those words, the odds are that the problems have little to do with you and everything to do with the system that you’re trying to use.

Making usable systems and devices is all about understanding different perspectives and thinking about compromises.  Firstly, there’s the user (and understanding what he or she wants to do using a system).  Secondly, there’s the task that has to be completed (and how a task might be connected to other tasks and systems).  Finally, there’s the question of the environment, i.e. the situations in which a product is going to be used.  If you fully understand all these aspects in a lot of depth and balance one aspect against another, then you’ll be able to design a system that is usable (of course, this is a huge simplification of the process of interaction design, but I’m sure that you get my point).

Parking a motorbike

A couple of months ago I took a course at my second favourite academic institution, CityLit.  Since it was pretty good weather (despite being January), I decided to ride my scooter into the middle of London and park in one of the parking bays that were not too far from the college.  The only problem was that the City of Westminster had introduced a charging scheme, and this was a system that I hadn’t used before.

This blog post is a polite rant about (and reflection on) the banal challenge of trying to pay Westminster council a grand total of one pound and twenty pence.  It turns out that the whole exercise is an interesting example of interaction design, since it helps us to think about issues surrounding who the user is, the environment in which a system is used and the task that has to be completed.  Paying for parking sounds like a pretty simple task, doesn’t it?  Well, let me explain…

Expecting trouble

Having heard about the motorcycle parking rules in Westminster, I decided to do some research.  I was expecting a simple system where you texted your bike registration number and location code to a designated ‘parking’ telephone number, and through the magic of mobile telephony, one English pound was added to your monthly mobile phone bill and the same English pound was appropriated to Westminster Council.  Well, it turned out to be a bit more complicated than that.  Payments don’t come from your phone account but instead come from your credit card.  This means that you need to connect your phone number to your credit card number.

When you’ve found the motorbike registration site (which isn’t through a recognisable ‘Westminster Council’ URL), you get to create something called a ‘parking account’.  When logged in, you’re asked to enter the registration number of your vehicle.  In my case, since I’m pretty weird, I have two motorbikes: one that makes the inside of the garage look pretty, and another one (a scooter) that I sometimes use to zip around town on.   There are enough spaces to enter the registration codes for four different bikes. 

The thing is, I can’t remember the registration numbers for any of my bikes!  It turns out that I can hardly remember anything!  I can’t remember my phone number, I can’t remember my credit card number and I can’t remember two registration numbers.  I must be an idiot!  (Thankfully, I remembered my email address, which is something else you need – just make sure you know the password to access your account).

There was another oddity of the whole system.  After you’ve got an account, you log in using a PIN code, which is the last four numbers of your credit card.  I never use these four numbers!  Again, I don’t know what they are (unless I look them up).  I was starting to get a bit impatient.

Arriving at the parking bay

The ride to the middle of town was great.  It was too early in the day for most people, which meant that the streets were quiet.  After parking my bike, I started to figure out how to pay.  I looked at an information sign, which was covered in city grime, and immediately saw that it didn’t have all the information I needed.

I visited the parking website and discovered that you needed FOUR different numbers!  You needed a phone number, a location number (where your bike is parked), a day code (to indicate how long you’re parking your bike for), and the final four numbers of your registered credit card.  Thankfully, I had the foresight to save the parking telephone number in my phone, so I only had to send three numbers (but I would rather have avoided messing around with my wallet to fish out my credit card; it meant unzipping and then zipping up layers of protective clothing).

Coffee break

At last, I had done it.  I had sent a payment text.  To celebrate my success, I visited a nearby café for a coffee and a sit down.  About ten minutes later, I received a text message that confirmed that I had paid for parking ‘FOR THE WRONG BIKE!’ 

The text message confirmed that I had just paid for parking for my ridiculous bike rather than the sensible city scooter that I had just used.  Also, when I registered both bikes on the system, I entered the scooter registration first, since it would be the bike that I would be using most.  At this point, I had no idea whether the system had stupidly assumed that I had ridden either (or both) of my bikes to Westminster at the same time.  There was no clear way to choose one bike as opposed to the other.  Again, I felt like an idiot.

Then, I had a crazy thought – perhaps I ought to try to look at my ‘parking record’, since this way there might be a way to change the vehicle I was using.  I logged in to the magic system (through my smartphone), entering the last four digits of my credit card again, and found a screen that seemed to do what I wanted.  It encouraged me to enter start and end dates (what?), and then had a button entitled, ‘generate report’.  A report on what?  The number of toys found in Kinder Eggs that are considered to be dangerous?  I pushed the button.  Nothing happened.  I had no parking history despite having just sent a parking text.  Effective feedback is one of the most obvious and fundamental principles of good usability.

Chat

It took me around five minutes to walk to the college.  When I got there I discovered two other motorcycle parking bays that were just around the corner.  I then made a discovery: different bays seemed to have the same location ID.  It then struck me: perhaps the second number I had been entering into the phone was totally redundant!  Perhaps it’s the same code that is used all over London!

During my class I got chatting to a fellow biker.  After I had emoted about the minor trauma of trying to pay for the parking, my new biker friend said, ‘there’s an app for this…’  Again, I thought ‘why didn’t anyone tell me!’  So, during a break I found the right app and started a download.  After a couple of minutes of nothing happening, I was presented with the delightful message: ‘Error downloading: 504’.

Final thoughts

A really good interaction design principle is that you should always try to design systems which minimise what users need to remember (there’s a heuristic that has the title ‘recognition rather than recall’).  On this system, you needed to remember loads of different numbers and codes.  The task is pretty simple.  There is a fixed fee.  The only variables that you might want to enter are the length of the stay (in days) and the choice of vehicle.  But what happens if your phone runs out of charge and you want to use a friend’s phone to pay?  You’ll then have to make a telephone call to an operator, all for the sake of one pound twenty.

There’s also the environment to contend with.  I had to take gloves off, fumble around in my pockets for my mobile phone and then enter numbers.  The information sign was pretty small (and I can’t remember it mentioning anything about using an app).  I dread to think how difficult the process is if English isn’t your first language, and you don’t know that Westminster has bike parking fees.

One final thought is that one approach to learning more about the user experience is to observe users in the things that they do.  This is an approach that has drawn heavily from the social sciences, and on Open University modules such as M364 Interaction Design, subjects and techniques such as Ethnography are introduced.  Another approach to learning about user successes and failures is to search on-line, to learn about the problems other people have experienced.  Although this isn’t explicitly covered in M364, it is an interesting technique.

All this said, the second time that I needed to pay, I used the ‘pay by phone’ parking app.  The ‘504’ error message that I wrote about earlier had miraculously disappeared (why not a message that says, ‘please try again later’?) and I was able to download the app, press a couple of on-screen (virtual) buttons and enter the last four numbers of my credit card (again, a number that I haven’t yet memorised, since no other system asks me for it…).  I even managed to pay for the right bike, this time!

Christopher Douce

MOOCs - What the research says

Visible to anyone in the world
Edited by Christopher Douce, Friday, 3 Jan 2014, 10:23

On 29 November 2013 I bailed out of the office and went to a place called the London Knowledge Lab to attend a dissemination event about MOOCs.  Just in case you’re not familiar with the term, a MOOC (Wikipedia) is an abbreviation for Massive Open On-line Course.  The London Knowledge Lab was a place that I had visited a few years ago to attend an event about e-assessment (blog post).

This post is a quick, overdue, summary of the event.  For those who are interested, the London Knowledge Lab has provided a link to a number of pages (Institute of Education) that summarises some of the presentations in a lot more detail.  

Introductions

The event started with a brief presentation by Diana Laurillard, entitled The future potential of the MOOC.  During Diana’s presentation, I noted down a number of points that jumped out at me.

An important question to ask is: what problems is a MOOC going to solve?  Diana mentioned a UNESCO goal (UNESCO website) that states that every child should have access to compulsory education by 2015.  It’s also important to note that there is an increasing demand for higher education, but in the sector the current model is that there is one member of staff for every 25 students.  If the objective is to reach as many people as possible, we’re immediately faced with some fundamental challenges.  One thought is that perhaps MOOCs might be able to help with the demand for education.

But why should an institution create a MOOC in the first place?  There are a number of reasons.  A MOOC offers a taster of what you might expect from a particular course of study, it has the potential to enhance or sustain the reputation of the institution that provides (or supports) it, and it offers an opportunity to carry out research and development in the intersection between information technology and education.  One of the fundamental challenges is how best to create a sustainable business model.

A point to bear in mind is that there hasn’t (yet) been a lot of research about MOOCs.   Some MOOCs clearly attract a certain demographic, i.e. professionals who already have degrees; this was a point that was echoed a number of times throughout the day.

Presentations

The first presentation of the day was by Martin Hawksey who talked about a MOOC run by the Association for Learning Technology (ALT website).  I made a note that it adopted a ‘connectivist’ model (but I’m not quite sure I know what this means), and it was clear that different types of technology were used within this MOOC, such as something called FeedWordPress (which appears to be a content aggregator).

Yishay Mor, from the Open University Institute of Educational Technology, spoke about a MOOC that was all about learning design.  I’ve made a note that his MOOC adopted a constructionist (Wikipedia) approach.  This MOOC used a Google site as a spine for the course, and also used an OU-developed tool called CloudWorks (OU website) to facilitate discussions.

Yishay’s tips about what not to do included: don’t use homebrew technology (since scaling is important), and don’t assume that classroom experiences work on a MOOC; from the facilitator’s perspective the amount of interaction can be overwhelming.  An important note is that scaling might mean (in some instances) moving from a mechanical system to a dynamic system.

The third presentation of the day was by Mike Sharples who was also from the Open University.  Mike also works as an academic lead for FutureLearn, a UK-based MOOC platform that was set up as a partnership between the Open University and other institutions.  At the time of his presentation, FutureLearn had approximately 50 courses (or MOOCs?) running.

I’ve noted that the pedagogy is described as ‘a social approach to online learning’ and Mike mentioned the term social constructivism.  I’ve also made a note that Laurillard’s conversational framework was mentioned, and ‘tight cycles’ of feedback are offered.  Other phrases used to describe the FutureLearn approach include vicarious learning, conversational learning and orchestrated collaboration. 

In terms of technology, Moodle was not used due to the sheer number of potential users.  The architecture of Moodle, it was argued, just wouldn’t be able to cope or scale.  Another interesting point was that the software platform was developed using an agile process and has been designed for desktop computers, tablets and smartphones. 

Barney Graner, from the University of London, described a MOOC that was delivered within Coursera (Coursera website).  I have to confess to taking two different Coursera courses, so this presentation was of immediate interest (although I found that the content was very good, I didn’t manage to complete either of them due to time pressures).  The course that Barney spoke of was 6 weeks long, and required between 5 and 10 hours of study per week.  All in all, 212 thousand students were registered and 9% of those completed.  Interestingly, 70% were said to hold a higher degree and the majority were employed.  Another interesting point was that if students paid a small fee to take something called a ‘signature track’, this apparently had a significant impact on retention statistics.

Matthew Yee-King from Goldsmiths gave a presentation entitled ‘metrics and systems for peer learning’.  In essence, Matthew spoke about how metrics can be used on different systems.  An important question that I’ve noted is, ‘how do we measure difference between systems?’ and ‘how do we measure if peer learning is working?’

The final presentation of the day, entitled ‘exploring and interacting with history on-line’ was by Andrew Payne, who was from the National Archive (National Archive education).  Andrew described a MOOC that focused on the use of archive materials in the classroom.  A tool called Blackboard Collaborate (Blackboard website) was used for on-line voice sessions, the same tool used by the Open University for many of their modules.

Towards the end of the day, during the start of a discussion period, I noted a number of key issues for further investigation.  These included: pedagogy, strategy and technology.

Reflections

In some respects, this day was less about sharing hard research findings (since the MOOC is such a new phenomenon) and more about the sharing of practice and ‘war stories’.

Some messages were simple, such as, ‘it’s important to engineer for scale’.  Other points certainly require further investigation, such as, how best MOOCs might potentially help to reach those groups of people who could potentially benefit most from participating in study.  It’s interesting that such a large number of participants already have degree level qualifications.  You might argue that these participants are already experienced learners.

It was really interesting to hear that different MOOCs made use of different tools.  Although I’m more of an expert in technology than pedagogy, I feel that there is continuum between MOOCs (or on-line courses, in general) that offer an instructivist (or didactic) approach on one hand, and those that offer a constructivist approach on the other. Different software tools, of course, permit different pedagogies.   

Another (related) thought is that learners not only have to learn the subject that is the focus of a MOOC, but also the tool (or tools) through which the learning can be acquired.  When it comes to software (and those MOOCs that offer learners a range of different tools) my own view is that people use tools if they are sure that there is something in it for them, or the benefit of use outweighs the effort invested in learning it.

In some respects, the evolution of a MOOC is an exercise in engineering as much as it is an exercise in mass education.  What I mean is that we’re creating tools that tell us about what is possible in terms of large scale on-line education.  Some tools and approaches will work, whereas other tools and approaches will not.  By collecting war stories and case studies (and speaking with the learners) we can begin to understand how to best create systems that work for the widest number of people, and how MOOCs can be used to augment and add to more ‘traditional’ forms of education.

One aspect that developers and designers of MOOCs need to be mindful of is the need for accessibility.  Designers of MOOCs need to consider this issue from the outset.  It’s important to provide media in different formats and create simple interfaces that enable all users to participate in on-line courses.  None of the presenters, as far as I recall, spoke about the importance of accessibility.  A high level of accessibility is connected to high levels of usability.

Just as I was finishing writing up this quick summary, I received an email, which was my daily ‘geek news’ summary.  I noticed an article which had an accompanying discussion.  It was entitled: Are High MOOC Failure Rates a Bug Or a Feature? (Slashdot).  For those who are interested in MOOCs, it’s worth a quick look.

Christopher Douce

Disability History Month 2013 Launch Event

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 27 Nov 2013, 18:29

It took me a good few minutes to find my way out of Westminster underground station.  When I did finally emerge to the surface, I found the Houses of Parliament towering above me.  After a minute or so gathering my bearings, I was on my way.  I roughly knew where I was going: past Westminster Abbey, then take a turning down one of the adjacent roads.  As per usual, I got to the venue ridiculously early.  So early, in fact, that the organisers were still putting the chairs out (!)

The launch event for the 2013 disability history month was held on 19 November.  I attended a similar event in 2011 (blog post), which I found thought provoking, but last year my diary conspired against me.  There were a number of reasons to go along to this event: one is personal, and another is professional (a third reason could be considered to be political).

The day kicked off with Richard Raiser playing a clip from a recent two-part BBC documentary which described how the lives of people with disabilities had changed.  I did manage to see the first episode, which was about the care system, but I didn’t get to see the second episode (and I had missed downloading it on iPlayer).  The point was simple: we’re on the telly, and we’ve a right to be there.

Speakers

Just like the last launch event, there were a number of speakers.  The first speaker of the day was Kevin Courtney from the National Union of Teachers.  Representatives from unions featured heavily in the 2011 event, and this year was no exception.  Teachers are, of course, likely to encounter people with disabilities and they, of course, may have disabilities themselves.  Kevin drew our attention to some teaching resources that the unions had prepared for schools.

The second speaker of the day was Mike Oliver, who was introduced as a social model theorist.  By way of detail, the social model is a way of looking at disability where people are disabled not by their so-called impairments, but by the society that they inhabit.  Mike touched upon history before speaking about themes such as choice, control and independent living.  Mike underlined the significance of the current economic challenges.

The third speaker was Jan Walmsley, formerly from The Open University (an institution that now has over ten thousand students with disabilities).  Jan is a part of the Social History of Learning Disability research group (a research group that I hadn't heard of before).  The group was established in 1994 and one of its objectives is to share memories and experiences, by people and for people, by publishing life stories.

The two final speakers of the day were Jackie Downer and Kirsten Hearn.  Jackie described the importance of support workers and how technology can be a lifeline.  Kirsten gave an impassioned speech, emphasising the importance of rights, and echoing earlier points by saying that it was liberating that it ‘wasn’t me that was the problem, but the world’.

Plenary

One of the first points to be made was by Baroness Dame Campbell who emphasised the importance of political lobbying.  An audience member asked about the credibility of the social model, and whether we ought to be thinking in terms of a ‘post-social model’.  (The questioner mentioned the name of an academic called Tom Shakespeare).  This struck me as a difficult question to answer, and a quick internet search led me to a research paper (University of Leeds) that takes quite a bit of reading.  This question points us towards the growing discipline of disability studies.

Towards the end of the panel session, the issue of teaching (and teachers) was returned to.  I seem to remember a reference to the learning resources that were mentioned at the start of the speeches.  The point of these was simple: there is a potential to ‘educate out’ discrimination (or to normalise difference) at an early age.

An alternative perspective

The final speech of the day wasn’t really a speech at all.  It was a stand-up comedy performance by comedienne Liz Carr.  I hadn’t seen Liz before, but I had heard of her work through a comedy group called Abnormally Funny People.  Unfortunately, I didn’t make too many notes during this part of the event, since I was laughing too much, but Liz did reference a recent challenge to the government’s bid to abolish the Independent Living Fund (BBC Website).  I also remember a startling gag about the right to work assessments.  This, to me, was the kind of comedy that cuts quickly to an issue and makes us think.

Reflection

There was a palpable difference between the 2011 event that I attended and this event.  The biggest difference, of course, reflects the change in the UK political landscape; there were many references to government cuts and the ways that they affect people with disabilities.  We were encouraged to reflect on history and the lessons that it offers us.  We also needed to be mindful of ‘what used to be’; stories of change, difference and individuality are important to remember and to keep.  One thing I felt was a steely will to retain rights and fight for new ones.

Christopher Douce

Media Training – Milton Keynes, 19 November 2013

Visible to anyone in the world

‘You want me to go on the radio, right?  And talk about my subject…?  You’ve got to be joking.  I’m not doing it!’   This imaginary conversation helped me to make up my mind to sign up for what turned out to be a really interesting (and fun) media training course.

For reasons that I won’t go into, I always thought that I wouldn’t ever be in a position to go anywhere near a microphone.  After having taken a couple of courses as a student with the Open University, I started to realise that I had always learnt a huge amount from the audio materials that accompany some modules.  Even if I wasn’t ever going to be on the radio as a subject matter expert, there might (one day) be a possibility that I may be asked to record some podcasts that might find their way into some module materials; signing up to a short half day media training course seemed like a very good idea (at the time).

This isn’t going to be one of my longer blog posts.  The course was pretty hands on.  We were asked to carry out two mock interviews; one was face to face (in a pretend studio), and another over the telephone.  Despite the clear emphasis on the practical, there was also a bit of theory that was worth remembering.

Firstly, it’s essential to come across as a person, i.e. don’t talk like a scientist.  If you do, you just end up sounding defensive (or like a politician) – scientists (or engineers) are different.  Let something about ‘you’ come across – sharing the personal is okay.  In fact, you should expect ‘the personal’.  We were also told to think of an interview in terms of having a cup of coffee with a friend.

During the session I learnt a couple of interesting phrases.  One of them was ‘news values’.  In retrospect, what makes a news story newsworthy is pretty obvious, but it’s a phrase that allows you to articulate what aspects of a story might be interesting (or relevant) to listeners.

One point that recurred was the subject of control, i.e. whether an interview is controlled by the interviewer or the interviewee.  We were clearly told that it is certainly okay to take the initiative.  It is important to answer a direct question (it’s okay to say ‘yes’ or ‘no’, for example) before returning to the main subject or focus of the interview by using linking phrases.

What do we do if we’re asked about subjects or areas that are beyond our area of expertise?  In this case, we might say something like, ‘this is a complicated subject and for the purposes of this interview…’  There are many different audiences, and one audience is your academic peers.  Whilst it is important to acknowledge this group of listeners, it’s more important to consider the general listener.

Sometimes stating the obvious can really help.  When it comes to language, avoid acronyms and scientific or technical language that is specific to a subject (always consider the audience), and avoid language that is ambiguous (since you might come across as being evasive).

I’ve made a note of a quote that was used during the day.  It was by Alexander Graham Bell: ‘Before anything else, preparation is the key to success’.  Again, the point of this is pretty obvious: have a think about what you’re going to say before you get into the studio.

I found the whole experience both tough and interesting in equal measure.  I continue to have no immediate plans to step foot into a radio studio.  I am, however, slightly more aware of how things might work if I were ever called upon to make a recording.

My take away points are: expect the personal, think of it as a chat over a coffee, don’t use complicated language, and you should be free to take control: the interviewer has chosen to speak to you about your subject – you’re the expert in that situation, not the interviewer.

Christopher Douce

London Associate Lecturer development day, London, November 2013

Visible to anyone in the world

The Open University is divided into a number of regions and twice a year the London region runs a staff development event for its associate lecturers who live in and close to our capital.  This blog post is a brief summary of an event that took place on Saturday 16 November 2013.  My own role during the day was quite a modest one (I was only required to do a couple of introductions).  This meant that I was able to wear my ‘tutor hat’ for much of the day.

Challenges for ESL students

ESL is, of course, a common abbreviation for ‘English as a second language’.  From time to time I’m asked what the university is able to do to help students who struggle with English.  There are a couple of schools of thought about this.  One school of thought is that English and writing skills should be embedded within modules (this is certainly the case within computing and engineering modules).  Another school of thought is that there should be a particular course or module that is dedicated to writing (which is the approach that the science faculty takes).  There are, of course, pros and cons with either approach.  The aim of this session was to offer tutors useful guidance about different resources and materials that could be shared with students.  It also aimed to help tutors chat about different challenges they have faced.

One skill that was considered to be important was the reading of papers, and a point was made that this is something that could be practiced.  Reading is, of course, a prelude to writing.  Although some people might argue that university level academic writing is something that is done only within the university (or academic) context, it can also be argued that learning how to write in an academic way can benefit learners in other ways, i.e. when it comes to writing for business and commerce, or the ability to distil evidence and construct cohesive arguments.

One question that was raised was, ‘how do you offer feedback in instances where students may struggle to read suggestions?’  This was a very good question, and sometimes interventions, or special sessions to help students are necessary.

Our discussions about writing led onto other discussions about plagiarism and academic conduct.  Plagiarism is, of course, a word that has very negative connotations.  In some cultures, using the words of an authority may be considered to be a mark of respect.  On the other hand, developing the ability to write in one’s own words is a really important part of distance learning; it’s both important and necessary for students to demonstrate how they are able to evaluate materials. 

The university has very clear policies about plagiarism and academic practice, and this is something that I’ve blogged about previously.  (Academic practice conference: day 1 summary, day 2 summary). From the tutor’s perspective, it isn’t an easy task to address these issues thoroughly and sensitively.  One thing that tutors could do is to run an activity (which exposes issues that relate to academic conduct).  Tutors (or module teams) could show how things should be done, and then tutors could facilitate a discussion using on-line forums, for example.

Another discussion that I’ve noted was the use of the ‘voice’.  Different modules may have a preference as to whether students can or should write in the first person.   One of the arguments about writing in the third person is that it allows other voices to be more clearly exposed.

During the session, we were all encouraged to do a bit of group work.  We were given a sample of writing and we were asked, ‘what resource would you choose to share with your students to try to help them with their writing skills?’  This was a fun activity and it emphasised that there are a lot of resources that both students and tutors can draw on.

To underline this point of resources, there were sets of study skills booklets that were available in the presentation room.  These had the titles:  Studying with the OU – UK learning approach, Reading and Taking Notes, Preparing Assignments and Thinking Critically.  If you’re interested, these can be downloaded from the Skills for Study website.

Developing resources and pedagogy for OU Live

I arrived at this afternoon session slightly late, since I was having too much fun chatting to colleagues.  OU Live is a synchronous teaching and learning tool (which is a posh term to say that people can do things at the same time).  In essence, think ‘Skype with a whiteboard’.  It allows tutors to run on-line sessions with groups of students, offering both audio and text-chat channels.  From my own experience, running OU Live sessions can be pretty hard going, so I try to take every opportunity that I can (time permitting) to attend whatever training sessions the university offers.

This afternoon session was presented in two parts.  The first part was from the perspective of a science tutor (Catherine Halliwell), whereas the second part was from the perspective of a languages tutor.

Science perspective

I arrived in the session right at the moment when an important point was being made.  This was: ‘find a style of delivery that suits you’. It can be quite easy to use OU Live just to give ‘lectures’, but it is possible to use it to deliver dynamic interactive sessions.

One thing that tutors can do is to record their on-line sessions.  More students might use a recording of a session than are able to attend the live session itself.  One of the benefits of recordings is that they have the potential to become a very useful resource.  Tutors might, for example, refer students to sections of a recording when they start to revise for their exams.  Another thought is that tutors could refer to recordings explicitly when giving assignment feedback (guiding students to parts of a presentation where potentially difficult concepts have been explained).

Catherine mentioned that her faculty had trialled pairing tutors together to run a single OU Live session.  Her module, a third level chemistry module, has 10 hours of tuition time.  Each session was shared: one tutor would take the lead, and the other would act as a ‘wing man’.

Another aspect of OU Live pedagogy which can be easily overlooked is the importance of preparation.  Students can be asked to carry out certain activities before a session, such as completing one or more worksheets, or even performing observations with a view to sharing data.

Catherine also spoke about some features that I had never used, but had been (slightly) aware of.  One of these features was the ‘file transfer’ facility, which could be used by the tutor to send students sets of ‘unseen questions’, perhaps in the form of a Word document.  In some ways, this could be considered the electronic equivalent of giving everyone handouts.  (I can also see that this would be especially useful during programming sessions, where tutors might hand out working copies of computer code to all participants.)

We were given a number of very useful tips: make the first session as interactive as possible, and feel free to use a silly example.  Also, use things like voting, or drawing on a map.  Another thought is to turn the webcam on at the start so that the participants know who you are (you can turn it off after a few minutes, of course!)  Tutors should try their best to make their sessions friendly and fun.

There are a number of other points to bear in mind: some students can be reluctant to use the microphone, and this is okay.  Another approach (and one that I’ve heard of before) is to use OU Live as an informal drop-in session, where students are able to log in to have a chat with a tutor at a pre-arranged time.  It’s also important to take the time to look at a student’s profile to see whether there are any additional requirements that need to be taken into account.  Finally, because it’s possible to record a session, a tutor can always say, ‘I’m going to go through this bit quite quickly; because I’m recording this, you can always go back and play it back later if there’s anything that you miss’.

Languages perspective

The presentation from our language tutor was rather different.  We were given, quite literally, an A to Z tour of topics that relate to the use of OU Live, leaving us (and our facilitator), pretty breathless!

A couple of points that I’ve noted include the importance of developing routines and forming habits (in terms of running sessions at the same time).  It’s also a good idea to send group emails, both before and after sessions (so students are aware of what is going to happen).  In terms of preparation, it’s a good idea to get on-line around half an hour before a session, just to make sure that you don’t run into any technical problems; having been confronted with unexpected Java software updates in the past, I can confirm this is very sound advice.

During the question and answer session at the end of the afternoon, the issue of the recording of day schools also cropped up again.  Our tutors were very pragmatic about this: recording of OU Live sessions should happen, since it allows the creation of resources that all students can use (especially those who could not attend any of the sessions).  It is therefore important to let all students know that recording is going to take place either before events, or at the start of an event.

Reflections

There’s always something to pick up from these events.  There were two main things that I gained from this session.  The first was that the early discussions about language support consolidated what I already knew about the importance of academic conduct (and how the university procedures work).  Secondly, I picked up some tips about how to connect things together, i.e. connecting assignment feedback with the use of OU Live recordings.

The next event is to be held at the London School of Economics in March.  This event is likely to include a Mathematics Computing and Technology faculty specific session, which will be held in the afternoon.  The fine detail hasn’t yet been decided on, but this too is likely to be a good day.

Christopher Douce

Gresham College: A history of computing in three parts

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 15 Oct 2019, 15:47

After a week and a half of continual exam and assignment marking, I was relieved to finally be able to turn my attention to other matters (and get out of my house).  I had an idle question: I wondered whether there were any professors or lecturers in London who shared an interest in the history of computing or technology.  Rather than trawling through university web pages (which was the first idea that crossed my mind), I decided to ask the internet, searching for the words, ‘history computing lecturer London’.

One name was clearly at the top of the list, but it was something else a bit lower down the search results that immediately attracted my attention.  It was a series of lectures entitled, ‘a history of computing in three parts’.  My first reactions were, ‘it’s probably too late’ and, ‘you’ve probably got to pay a lot of money to go along to this gig’.  All this computer history stuff that I’m interested in has to be folded into my day job, which means that it’s easier to justify time but a whole lot harder to justify expenses.

After reading the paragraph that described the event, I cast my eye back to the heading.  I realised that the date of the lecture was TODAY!  The very same day I had done my Google search, Thursday 31 October!  After a few more clicks I discovered that the event was also FREE!  Behold, it was a miracle!  I looked at my calendar; the lecture started at four in the afternoon and provided that I managed to sort out some admin stuff and have a meeting with a colleague, I would probably have enough time.

The only fly in the ointment was that it was all booked up; there were no tickets remaining.  Who knew that the history of computers was such a popular subject?  No matter.  I was looking reasonably smart – I would try to talk my way in.

Lecture 1: Pictures of computers

After a few false starts I managed to find my way to a place called Gresham College (website); navigating my way out of Chancery Lane tube proved to be quite tricky. It is only in retrospect that I realised that this was one of those places in London that I really ought to have known about.  I just know that people who I speak to about this event will chuckle, slap their thigh and say, ‘oh yes, Gresham College...’ and then will look at me as if I’m some kind of idiot if I said that I had visited there ‘by accident’.

I strode purposefully down a long alleyway and was confronted by a smartly dressed gentleman who obviously had an important role to play.  I began my attack: ‘I’m, erm, here for the lecture…’, and was swiftly gestured towards a flight of stairs without a word.  I felt deflated!  I was expecting to have to fight my way into the lecture!  I soon found myself in an antechamber filled with men (and women) in anoraks looking at a projector screen, and noisily settled down for the first lecture, given by Martin Campbell-Kelly.

I joined the lecture at the point where people were being shown coloured photos of office equipment and pictures of steel filing cabinets.  The context was that computers are machines that allow us to process ever increasing amounts of data (and there’s a whole history of manual record keeping that we can easily overlook).  We were then told something about the history of the Rand Corporation followed by parts of the history of the computer company IBM.

On the subject of IBM, he mentioned someone called Eliot Noyes (Wikipedia).  Noyes was to IBM what Jonathan Ive (Wikipedia) is to Apple (if you’re into industrial design).  Martin mentioned that mainframe computers had a particular look; for a time there was a particular ‘design zeitgeist’.  I’ve made notes that Noyes used to look over catalogues from the Italian company Olivetti, and not only designed computers, but entire rooms.  We were shown photographs of various mock-ups.

The creation of physical prototypes reminded me of some themes that are mentioned in a couple of design modules, either Design Essentials or Design for Engineers.  Martin also made reference to the designer Norman Bel Geddes (Wikipedia).  He also showed us a whole host of other pictures of big machines, notably the ICL 2900 (Wikipedia) used in the Bankers’ Automated Clearing System (BACS).  (I have to confess to being dragged into the depths of the Wikipedia page about that particular ICL computer.  Should I confess to such a level of geekiness?  Probably not!)

Martin’s talk wasn’t really what I had expected, but I found it pretty interesting (and it was a shame I missed the first quarter of it).  I was surprised by the detail that he provided about manual filing systems, but I was also encouraged by the inclusion of information about designers.  The visual and industrial design aspect is an important part of computing history too.  Thinking back, one of my first computers had a very different aesthetic to the machines that I use today.  Function and fashion, combined with the wider perception of devices and machines, are perspectives that are inextricably linked.

After the lecture, it later dawned on me that I’ve actually read one of Martin’s books, ‘Computer: a history of the information machine’ which he co-authored with William Aspray.  It’s a pretty good read.  It covers a range of different strands; the pre-history, early electronic machines (such as the UNIVAC, which he touched on in his talk), before moving onto the emergence of the internet and software.  It’s tough to do everything but he has a good old go at it.

Lecture 2: Turing and his work

The second lecture of the day was by Professor Jonathan Bowen (website).  Jonathan talked about the life and work of Alan Turing (Wikipedia) and mentioned Andrew Hodges’s scholarly biography, ‘the enigma of intelligence’.

Jonathan spoke about three key areas of Turing’s work: his work relating to the fundamentals of computer science, his philosophical work on artificial intelligence, and his later work on morphogenesis (which now has strong connections to the field of bioinformatics).  He mentioned his birthplace, spoke about his PhD research which took place at Princeton University (with Alonzo Church as his doctoral supervisor), and also spoke about his work at Bletchley Park.  Other aspects of his life were touched on, such as his work at the National Physical Laboratory (NPL) in Teddington and his move to the University of Manchester.  During his time at the NPL, he worked on the design of a computer which then became the Pilot ACE (Wikipedia).  When he was at Manchester, he was familiar with the Manchester Mark 1, a development of the machine that had run the world’s first stored program (and don’t let any American tell you otherwise).

What I liked about Jonathan’s talk was its breadth.  He covered many different aspects of Turing’s life in a very short space of time.  He also spoke of the ambiguity regarding his death, echoing what Hodges had written in his biography of Turing.

At the end of his talk, we were directed to a set of web links that might be of interest to some.  Last year was the centenary of Turing’s birth, and there is a commemorative website that contains a whole host of different resources to celebrate this.  There is also a site that is maintained by his biographer, Andrew Hodges (turing.org.uk).  Interestingly, we were also directed to an on-line archive of documents which can be accessed by computer scientists, historians or anyone else who might be interested.

Lecture 3: The grand narrative of the history of computing

The headline act of the night was Doron Swade.  I know of Doron’s work from the Science Museum, where he headed up a project to construct a working version of Charles Babbage’s design for his Difference Engine number 2.  Babbage (for those who don’t know of him) was a Victorian inventor and raconteur whose lifelong quest was to design and build mechanical calculating machines.  During his life, he battled with his engineer, faced the challenge of securing money for his ideas, travelled around Italy and hosted some famous parties (and did a whole lot more).

The title of Doron’s lecture was an intriguing and demanding one.  Could there really be a grand narrative about the history of computing?  If so, what elements or ingredients might it contain?  Doron told us that the history of computing is an emerging field and then posed a similar question: ‘what strings [the different] pieces together?’  He also reassured us that a clear narrative appears to be emerging.

The narrative begins with methods for accounting and number systems, i.e. mechanisms to keep track of number.  We could consider the pre-history to comprise artefacts such as tally sticks or physical devices that can be used to ‘relieve or replace mental calculation’.  This led to the emergence of mechanisms that used moving parts, such as the abacus and the slide rule.  The next ‘chapter’ would comprise devices that embodied algorithms; their mechanisms carried out sequences or steps of calculations.  Here we have the work of Babbage and links to Hollerith (who was mentioned by Campbell-Kelly).

Doron then presented us with a challenge.  If we represent history in this way there is an implicit suggestion that there is a clear deterministic path from the past through to the present.  If I understand the point correctly, any narrative (or description of the past) is always going to be flawed, since there is so much more going on.  There could be situations in which nothing much happens.  A really interesting thought that Doron introduced was the idea of a ‘stored program’ being met with puzzlement and confusion, but this is an idea that distinctly defines what a computer is today.  (I haven’t made a word for word note of what Doron said, but this is something that has certainly stuck in my mind).

Another interesting point is that a serial narrative naturally excludes the parallel.  There is also an issue of reflexivity (to nick a posh word that I learnt from the social sciences): there is a relationship between history making machines and machines making history.  Linearity, it is argued, does history a disservice.  One way to overcome the challenge of linearity is to draw upon the stories of people.  These thoughts reminded me of a talk by Tilly Blyth, the current keeper of technologies at the Science Museum, about the forthcoming ‘information age’ gallery.  Tilly also emphasised the importance of personal narratives and cautioned against viewing history as a deterministic process.

One of the highlights of Doron’s talk was his ‘river diagram’ of the ‘history of computing’ (my ‘quotes’ at this point, since I don’t think I made a note of a ‘heading’).  Obviously, a picture is much better, but I’ll have a go at describing it succinctly.

In essence, the grand narrative comprises a bunch of different threads.  One thread that runs through it all is the history of calculation.  There is another thread about the history of communication.  In the middle, these threads are linked by ‘tributaries’ which relate to the subjects of automatic computation and information management.  These lead to another (current) thread of study which is entitled ‘electronic information age’.  I also made a note of a fabulous turn of phrase: the current electronic information age emerged from the ‘fusion chamber of solid state physics’.  Another part of the diagram relates to the different ways in which calculation or computation could be realised: mechanical, electromechanical or electronic.

I also made a quick note of what were considered to be the core ideas in computing: mechanical processes, digital logic, algorithms, systems architecture, software and universality (I’m not sure what this means, though) and the internal stored program.  A narrative, it was argued, comes from a splicing together of different threads.

Returning to Babbage, Doron said that ‘[he] burst out of nowhere and confounds us with schemes that are unprecedented’, proposing mechanical calculating machines the size of rooms.  Doron also spoke about Ada Lovelace’s description of Babbage’s design for his Analytical Engine, a machine that embodies many of the core ideas that are used in computing today: ‘a fetch execute cycle, transfer of memory from the processor, programmable, automatic execution, separation of program and memory’.

Doron ended with a question: ‘to what extent did this [Babbage’s work] influence modern computing?’  The answer is, ‘probably, not very much…’ (my quotes this time, rather than Doron’s), since many of Babbage’s discoveries and inventions were rediscovered and re-implemented as computing devices were realised in different forms, moving from the mechanical to the electrical.  Doron argued that because there is so much congruence between the different approaches, the ideas that have been rediscovered and re-implemented may well be really important and fundamental to the subject of computation.  To paraphrase from Doron’s book, Babbage isn’t so much a ‘great grandfather’ of computing, more of a ‘great uncle’.

Reflections

For me, Doron’s talk tied together aspects of the earlier talks.  Martin spoke about the history of information management and touched upon the electromechanical world of computing.  By describing the work of Turing, Jonathan spoke about and connected to the history of automatic computation.  One of the challenges that I’ve been grappling with is that there is so much history that is fundamentally interesting.  I’m interested in learning more, but it remains difficult to know which parts of a bigger picture to focus on. 

What I personally got from the day was confirmation that my interests in related subjects, such as communication technologies and the use, development and deployment of software (and algorithms), do indeed form an important piece of a ‘grand narrative’ in the history of computing and information technology.  Whilst I instinctively knew this to be true, Doron’s river diagram, for me, drew together different influences and connections in a very clear and obvious way.

Before heading home, I grabbed a brochure that had the title, ‘free public lectures’, vowing that I would have a good look through it to see what else was going on.  After saying a few goodbyes to people I left the basement room and walked up a flight of stairs.  In the intervening hours, it had become dark; time had passed and I hadn’t really noticed.  When I reached the street I reached into my inside pocket for my smartphone to see if I had any messages.  A light was flashing.  I didn’t have any messages, but I had a few alerts.  A theoretical Turing machine rendered into a physical device was alerting me to a comedy night that was to take place later on that week.  This was also a gentle reminder of how subtly technology had become entwined with my life.  Was I reliant on this little device?  That was a whole other question.

When I was heading home I asked myself, ‘how come I never knew this Gresham college place existed?’  Perhaps it is only one of those places that you hear about if you’re ‘in the know’.  London, for me, is gradually revealing some of its secrets.

Christopher Douce

Ada Lovelace Day: City University London, 15 October 2013

Visible to anyone in the world
Edited by Christopher Douce, Monday, 28 Oct 2013, 13:42

After a day of meetings and problem solving, I wandered down to the basement where my scooter was parked.  I had a rough idea of the route I had to follow; I needed to head south from Camden Town, navigate around Kings Cross and onto the Pentonville Road, pick up the A1 at Angel, and then try to find my way south.  Thanks to Google Streetview I had geekily rehearsed some of the trickier intersections – but I still ended up going the wrong way.

The reason for my Tuesday evening visit to the City University was to attend an event that was a part of a wider programme of events called the Ada Lovelace day (Finding Ada).  A website describes it as: ‘an international celebration of the achievements of women in science, technology, engineering and maths (STEM)’.  Okay, so I’m not a woman, but I’m fundamentally interested in two related subjects: the availability and accessibility of education to everyone, and the history of computing – so, it seemed a pretty cool event to go down and support.

Panel discussion

The event kicked off with a panel discussion.  The panel was introduced by Connie St Louis from City University.  The panel was a great mix of discussants from different sectors: the university sector, commercial sector and public sector.  Each discussant had a different story as to why they found science, technology or computing a fascinating subject.  

Whilst the subject of ‘coding’ (or the creation of computer programs) took centre stage, it was great to hear about photonics research (from Arti Agrawal) and about Prim Smith’s journey from programmer through to senior manager.  I particularly liked her description of how software can play a very important role in the provision of services in the public sector.  Vikki Read, from Unruly Media, said that ‘it was important to give everyone the opportunity [to code]’.

Coding demo

After the introductions and initial questions came to an end, we were given a taste of what ‘coding’ actually was.  In reality, this meant that we were shown what a ‘for loop’ looked like in a language called M-script, which is used in something called Matlab.  For those who don’t know anything about Matlab, it’s a very complicated piece of software (I’m not going to say much more than this!)  It’s something that is used by engineering professionals to tackle some really tough engineering problems.
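For anyone who hasn’t seen one before, a ‘for loop’ of the kind we were shown might look something like this (my own rough sketch, rather than the exact code from the demo):

```matlab
% Display the squares of the numbers 1 to 5
for k = 1:5
    fprintf('%d squared is %d\n', k, k^2);
end
```

Even a tiny fragment like this captures the essential idea: a block of instructions is repeated, with the variable k taking a different value on each pass.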

For me, there were two things that didn’t work quite so well in this section: if you’re going to introduce what coding is all about, Matlab wouldn’t have been my personal choice; and secondly, the coding demo was carried out by a man (which didn’t really seem to be in keeping with the day).  This said, we did get to see what M-script code looked like.

Doing a livecoding demo that is compelling and engaging is always going to be tough.  You’ve got to write instructions that are clear and understandable, and that do something interesting.  It’s not an easy task, and coders (in my humble opinion) only get into ‘the zone’ of coding (and appreciate the beauty and elegance of software) after a lot of hard work.

The Matlab demo was followed by a video presentation (YouTube) from code.org (website) which opened with the quote, ‘everybody in this country should learn how to program a computer... because it teaches you how to think’ (which, I think, is a good point).  I remember a quote from the video which goes something like, ‘software is about humanity’.  By writing code and considering abstractions (and how best to describe problems and situations to a computer), we need to reflect on our problems.  We also perpetually interact and work with software, whether we choose to or not.  It could even be argued that although software and programming have their foundations in mathematics and the sciences, the subject requires a huge amount of creativity.

One of the panel members later made the point that to be a scientist requires you to apply and use a huge amount of imagination.  The same, of course, can be said about software.

Question and answer session

The question and answer session was quite short and I haven’t taken too many notes during this part of the evening.  One of the questions asked was, ‘how difficult is coding?’  This one is difficult to answer easily since it depends on a number of different factors: the language, the problem that you’re trying to solve, and the level of motivation that you might have to solve it.  One other point that I do remember is a story about how one of the members of the panel gained her first job as an energy manager.  The short version of her answer was: it doesn’t hurt to be direct.

Reflections

This event was all about outreach and its objective was to inform and inspire, and this is something that is very tough to do in an hour.

Lovelace is a beguiling figure.  Her story is fascinating not only because of what is known about her, but also because of what is disputed.  You don’t have to dig too far into her story to read about rumours of horse racing, gambling, debts and family jewels.  This said, she was certainly way ahead of her time (as were Babbage’s attempts to build a computing machine) when she wrote about the way that machines could weave patterns with numbers.  Babbage was certainly indebted to her for translating (and adding to) Menabrea’s description of his idea of the Analytical Engine.

During this event I was expecting there to be stronger voices that more directly call for more women in science, technology and engineering subjects.  I can remember a distinct gender disparity from my own undergraduate days when I studied computer science and I can clearly see that this is continuing today when I drop into computing and engineering tutorials (but less so in design tutorials) to give our tutors a bit of moral support.  I’ll be the first to put my hand up and say that I don’t really understand the reasons why this should be the case.

To me, computing is not just cool, it is very cool.  In what other subject can you invent infinitely complex, interactive and unique universes out of nothing but numbers?  Not only is software the stuff of pure thought, but it is also a way to solve real-world problems (some of which were hinted at by one of the panel members).

Not only did I get lost getting to City University, I also got lost trying to leave the building.  After a couple of false starts, I finally made it to the exit and out into the cool autumn air.  Minutes later, I had fired up the scooter’s engine, practically oblivious to the fact that deep inside the machine was some software (in its engine management system) that was helping to propel me on my journey home.


