OU blog

Personal Blogs

Christopher Douce

e-Learning community event: mobile devices

Edited by Christopher Douce, Thursday, 20 Feb 2014, 12:01

Mobile devices are everywhere.  On a typical tube ride to the regional office in London, I see loads of different devices.  You can easily recognise the Amazon Kindle: the old type with buttons, and the more modern version with its touch screen.  Some passengers read electronic books on Android and Apple tablets.  Others study their smartphones with intensity, and I’m fascinated by what is becoming possible with the bigger-screen phones, such as the Samsung Note (or phablets, as I understand they’re called).  Technology is giving us both convenience and an opportunity to snatch moments of reading in the dead time of travel.

I have a connection with a module which is all about accessible online learning (H810 module description).  In the context of the module, accessibility is all about making materials, products and tools usable for people who have disabilities.  Accessibility can also be considered in a wider sense, in terms of making materials available to learners irrespective of their situation or environment.  In the most recent presentation of H810, the module team has made much of the learning material available in eBook or Kindle format.  Making materials available in these formats is potentially transformative, opening up opportunities to ‘snatch’ more moments of learning.

An event I attended on 11 February 2014, held in the university library, was all about sharing research and practice on the use of mobile devices.  I missed the first presentation, which was about using OU Live (an on-line real-time conferencing system) on tablet devices.  The other two presentations (which I’ve made notes about) explored two different perspectives: the perspective of the student, and the perspective of the associate lecturer (or tutor).

(It was also interesting to note that the event was packed to capacity; it was standing room only.  Mobile technology and its impact on learning seems to be a hot topic).

Do students study and learn differently using e-readers?

The first presentation I managed to pay attention to was by Anne Campbell, who had conducted a study about how students use e-readers.  Her research question (according to my notes) was whether users of these devices could perform deep reading (when you become absorbed and immersed in a text) and active learning, or whether learners get easily distracted by the technology.  Active learning can be thought of as carrying out activities such as highlighting, note taking and summarising – all the things that you used to be able to do with a paper-based text book and materials.

Anne gave us a bit of context.  Apparently half of OU postgraduate students use a tablet or e-reader, and most use it for studying.  Also, half of UK households have some kind of e-reader.  Anne also told us that there was very little research on how students study and learn using e-readers.  To try to fill this gap, Anne conducted a small research project into how students consume and work with electronic resources and readers.

The study comprised seventeen students.  Six were from the social sciences and eleven were studying science; participants covered a broad range of ages.  The study was a longitudinal diary study: whenever students used their devices, they were required to make an entry.  This was complemented with a series of semi-structured interviews.  Subsequently, a huge amount of rich qualitative data was collected and then analysed using a technique known as grounded theory.  (The key themes and subjects that are contained within the data are gradually exposed by looking at the detail of what the participants have said and have written.)

One of the differences between using e-readers and traditional text books is the lack of spatial cues.  We’re used to the physical size of a book, so it’s possible to (roughly) know where certain chapters are once we’re familiar with its contents.  It’s also harder to skim read with e-readers, but on the other hand this may force readers to read in more depth.  One comment I’ve noted is, ‘I think with the Kindle… it is sinking in more’.  This, however, isn’t true for all students.

I’ve also noted that there are clear benefits in terms of size.  Some text books are clearly very heavy and bulky; you need a reasonably sized bag to move them around from place to place, but with an e-reader, you can (of course) transfer all the books that you need for a module to the device.  Another advantage is that you can search for key phrases using an e-reader.  I’ve learnt that some e-readers contain a built-in dictionary (which means that readers can look up words without having to reach for a paper dictionary).  Other advantages include a ‘clickable index’ (which can help with navigation).  More implicit advantages include the ability to change the size of the text on the display, and the ability to use the ‘voice readout’ function of a mobile device (but I don’t think any participants used this feature).

I also noted that e-readers might not be as well suited to active learning for the reasons that I touched on above, although apparently it is possible to highlight passages and record notes within an e-book.

My final note of this session was, ‘new types of study advice needed?’  More on this thought later.

Perspectives from a remote and rural AL

Tamsin Smith, from the Faculty of Science, talked about how mobile technology helps her in her role as an associate lecturer.  I found the subject of this talk immediately interesting and was very keen to learn about Tamsin’s experiences.  One of the modules that Tamsin tutors on consists of seven health science books, so the size and convenience of e-readers can obviously benefit tutors as well as students.

On some modules, key documents such as assignment guides or tutor notes are available as PDFs.  If they’re not directly available, they can be converted into PDFs using freely available software tools.  When you have got the documents in this format, you can access them using your device of choice.  In Tamsin’s case, this was an iPad mini. 

On the subject of different devices, Tamsin also mentioned a new app called OU Anywhere, which is available for both iOS and Android devices.  After this talk, I gave OU Anywhere a try, downloading it to my smartphone.  I soon saw that I could access all the core blocks for the module that I tutor on, along with a whole bunch of other modules.  I could also access videos that were available through the DVD that was supplied with the module.  At first glance, this appeared to be pretty useful, and was something that I needed to spend a bit more time looking at.

Beyond the clear advantages of size and mobility, Tamsin mentioned other benefits.  These included the ability to highlight sections, to add notes, to save bookmarks and to perform searches.  Searching was highlighted as particularly valuable: tutors could, for example, search the module materials in the middle of a tutorial.

Through an internet connection, our devices allow access to the OU library, to on-line tutorials through OU Live (as covered during the first presentation that I missed), and to tutor group discussion forums, allowing tutors to keep track of discussions and support students whilst they’re on the move.  This said, internet access is not available everywhere, so the facility to download and store resources is a valuable necessity.  This, it was said, was the biggest change to practice: the ability to carry all materials easily and access them quickly.

One point that I did learn from this presentation is that there is an eTMA file handler available for the iPad (but not one that is officially sanctioned or supported by the university).

Final thoughts

What I really liked about Anne’s study was its research approach, particularly its use of a diary study (a technique that is touched on as a part of the M364 Interaction Design module).  This study aimed to learn how learning is done.  It struck me that some learners (including myself) might have to experiment with different combinations of study approaches and techniques to find out what works and what doesn’t.  Study technique (I thought) might be a judgement for the individual.

When I enrolled on my first postgraduate module with the Open University, I was sent a book entitled The Good Study Guide by Andrew Northedge (companion website).  It was one of those books where I thought to myself, ‘how come it’s taken me such a long time to get around to reading this?’, and, ‘if only I had read this as an undergraduate, I might perhaps have managed to get a higher score in some of my exams’.  It was packed full of practical advice on topics such as time management, using a computer to study, reading, making notes, writing and preparing for exams.

It was interesting to hear from Anne’s presentation that studying using our new-fangled devices is that little bit different.  Whilst on the one hand we lose some of our ability to put post-it notes between pages and see where our thumbs have been, we gain mobility, convenience and extra facilities such as searching.

It is very clear that more and more university materials can now be accessed using electronic readers.  Whilst this is likely to be a good thing (in terms of convenience), there are two main, interrelated issues that I think we need to bear in mind.

The first is a very practical issue: how do we get the materials onto our devices?  Two related questions are: how can we move our materials between different devices? and, how do we effectively manage the materials once we have saved them to our devices?  We might end up downloading a whole set of different files, ranging from different module blocks to assignments and other guidance documents.  It’s important to figure out a way to best manage these files: we need to be literate in how we use our devices.  (As an aside, these questions loosely connect with the nebulous concept of the Personal Learning Environment.)
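
To make the file-management point concrete, here is a minimal sketch (in Python, and purely illustrative: the folder locations and the ‘module code at the start of the filename’ convention are my own assumptions, not anything suggested at the event) of how a pile of downloaded files might be sorted into one folder per module:

```python
from pathlib import Path
import re
import shutil

downloads = Path.home() / "Downloads"       # assumed source folder
library = Path.home() / "OU-materials"      # assumed destination folder

# Hypothetical naming convention: files begin with a module code,
# e.g. 'H810_block2_notes.pdf' belongs in the H810 folder.
module_code = re.compile(r"^([A-Z]+\d+)[_-]")

for item in downloads.glob("*.pdf"):
    match = module_code.match(item.name)
    if match:
        target = library / match.group(1)
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(item), str(target / item.name))
```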

The second issue relates to learning.  In the first presentation, Anne mentioned the term ‘active learning’.  The Good Study Guide contains a chapter about ‘making notes’.  Everyone is different, but I can’t help but think that there’s an opportunity for ‘practice sharing’.  What I mean is that there’s an opportunity to share stories of how learners can effectively make use of these mobile devices, perhaps in combination with more traditional approaches for study (such as note taking and paraphrasing).  Sharing tips and tricks about how mobile devices can fit into a personalised study plan has the potential to show how these new tools can be successfully applied.

A final thought relates to the broad subject of learning design.  Given that half of all households now have access to e-readers of one form or another (as stated in the first presentation I’ve covered), module teams need to be mindful of the opportunities and challenges that these devices can offer.  Although this is slightly away from my home discipline and core subject, I certainly feel that there is work to be done to further understand what these challenges and opportunities might be.  I’m sure that there has been a lot more work carried out than I am aware of.  If you know of any studies that are relevant, please feel free to comment below.

Video recordings of these presentations are available through the university Stadium website.

Christopher Douce

MOOCs - What the research says

Edited by Christopher Douce, Friday, 3 Jan 2014, 10:23

On 29 November 2013 I bailed out of the office and went to a place called the London Knowledge Lab to attend a dissemination event about MOOCs.  Just in case you’re not familiar with the term, MOOC (Wikipedia) is an abbreviation for Massive Open On-line Course.  The London Knowledge Lab was a place that I had visited a few years ago to attend an event about e-assessment (blog post).

This post is a quick, overdue, summary of the event.  For those who are interested, the London Knowledge Lab has provided a link to a number of pages (Institute of Education) that summarise some of the presentations in a lot more detail.

Introductions

The event started with a brief presentation by Diana Laurillard, entitled The future potential of the MOOC.  During Diana’s presentation, I noted down a number of points that jumped out at me.

An important question to ask is: what problems is a MOOC going to solve?  Diana mentioned a UNESCO goal (UNESCO website) that states that every child should have access to compulsory education by 2015.  It’s also important to note that there is an increasing demand for higher education, but the sector’s current model is roughly one member of staff for every twenty-five students.  If the objective is to reach as many people as possible, we’re immediately faced with some fundamental challenges.  One thought is that perhaps MOOCs might be able to help with the demand for education.

But why should an institution create a MOOC in the first place?  There are a number of reasons.  Firstly, a MOOC offers a taster of what you might expect from a particular course of study; secondly, it has the potential to enhance or sustain the reputation of the institution that provides (or supports) it; and thirdly, it offers an opportunity to carry out research and development at the intersection between information technology and education.  One of the fundamental challenges is how best to create a sustainable business model.

A point to bear in mind is that there hasn’t (yet) been a lot of research about MOOCs.   Some MOOCs clearly attract a certain demographic, i.e. professionals who already have degrees; this was a point that was echoed a number of times throughout the day.

Presentations

The first presentation of the day was by Martin Hawksey, who talked about a MOOC run by the Association for Learning Technology (ALT website).  I made a note that it adopted a ‘connectivist’ model (but I’m not quite sure I know what this means), and it was clear that different types of technology were used within this MOOC, such as something called FeedWordPress (which appears to be a content aggregator).

Yishay Mor, from the Open University Institute of Educational Technology, spoke about a MOOC that was all about learning design.  I’ve made a note that his MOOC adopted a constructionist (Wikipedia) approach.  This MOOC used a Google site as a spine for the course, and also used an OU-developed tool called CloudWorks (OU website) to facilitate discussions.

Yishay’s tips about what not to do included: don’t use homebrew technology (since scaling is important), and don’t assume that classroom experiences work on a MOOC; from the facilitator’s perspective, the amount of interaction can be overwhelming.  An important note is that scaling might mean (in some instances) moving from a mechanical system to a dynamic system.

The third presentation of the day was by Mike Sharples, who was also from the Open University.  Mike also works as an academic lead for FutureLearn, a UK-based MOOC platform that was set up as a partnership between the Open University and other institutions.  At the time of his presentation, FutureLearn had approximately 50 courses (or MOOCs?) running.

I’ve noted that the pedagogy is described as ‘a social approach to online learning’ and Mike mentioned the term social constructivism.  I’ve also made a note that Laurillard’s conversational framework was mentioned, and ‘tight cycles’ of feedback are offered.  Other phrases used to describe the FutureLearn approach include vicarious learning, conversational learning and orchestrated collaboration. 

In terms of technology, Moodle was not used due to the sheer number of potential users.  The architecture of Moodle, it was argued, just wouldn’t be able to cope or scale.  Another interesting point was that the software platform was developed using an agile process and has been designed for desktop computers, tablets and smartphones. 

Barney Graner, from the University of London, described a MOOC that was delivered within Coursera (Coursera website).  I have to confess to taking two different Coursera courses, so this presentation was of immediate interest (although I found the content very good, I didn’t manage to complete either of them due to time pressures).  The course that Barney spoke of was six weeks long, and required between five and ten hours of study per week.  All in all, 212,000 students registered, and 9% of those (roughly 19,000 people) completed.  Interestingly, 70% were said to hold a higher degree and the majority were employed.  Another interesting point was that when students paid a small fee to take something called a ‘signature track’, this apparently had a significant impact on retention statistics.

Matthew Yee-King from Goldsmiths gave a presentation entitled ‘metrics and systems for peer learning’.  In essence, Matthew spoke about how metrics can be used on different systems.  Important questions that I’ve noted are, ‘how do we measure differences between systems?’ and ‘how do we measure whether peer learning is working?’

The final presentation of the day, entitled ‘exploring and interacting with history on-line’ was by Andrew Payne, who was from the National Archive (National Archive education).  Andrew described a MOOC that focused on the use of archive materials in the classroom.  A tool called Blackboard Collaborate (Blackboard website) was used for on-line voice sessions, the same tool used by the Open University for many of their modules.

Towards the end of the day, during the start of a discussion period, I noted a number of key issues for further investigation.  These included: pedagogy, strategy and technology.

Reflections

In some respects, this day was less about sharing hard research findings (since the MOOC is such a new phenomenon) and more about the sharing of practice and ‘war stories’.

Some messages were simple, such as, ‘it’s important to engineer for scale’.  Other points certainly require further investigation, such as how MOOCs might best help to reach those groups of people who could potentially benefit most from participating in study.  It’s interesting that such a large number of participants already have degree-level qualifications.  You might argue that these participants are already experienced learners.

It was really interesting to hear that different MOOCs made use of different tools.  Although I’m more of an expert in technology than pedagogy, I feel that there is a continuum between MOOCs (or on-line courses in general) that offer an instructivist (or didactic) approach on one hand, and those that offer a constructivist approach on the other.  Different software tools, of course, permit different pedagogies.

Another (related) thought is that learners not only have to learn the subject that is the focus of a MOOC, but also the tool (or tools) through which the learning can be acquired.  When it comes to software (and those MOOCs that offer learners a range of different tools), my own view is that people use tools if they are sure that there is something in it for them, or if the benefit of use outweighs the investment expended in learning something new.

In some respects, the evolution of a MOOC is an exercise in engineering as much as it is an exercise in mass education.  What I mean is that we’re creating tools that tell us about what is possible in terms of large scale on-line education.  Some tools and approaches will work, whereas other tools and approaches will not.  By collecting war stories and case studies (and speaking with the learners) we can begin to understand how to best create systems that work for the widest number of people, and how MOOCs can be used to augment and add to more ‘traditional’ forms of education.

One aspect that developers and designers of MOOCs need to be mindful of is the need for accessibility.  Designers of MOOCs need to consider this issue from the outset.  It’s important to provide media in different formats and create simple interfaces that enable all users to participate in on-line courses.  None of the presenters, as far as I recall, spoke about the importance of accessibility.  A high level of accessibility is connected to high levels of usability.

Just as I was finishing writing up this quick summary, I received an email, which was my daily ‘geek news’ summary.  I noticed an article which had an accompanying discussion.  It was entitled: Are High MOOC Failure Rates a Bug Or a Feature? (Slashdot).  For those who are interested in MOOCs, it’s worth a quick look.

Christopher Douce

Xerte Project AGM

Edited by Christopher Douce, Monday, 18 Feb 2013, 19:13

Xerte is an open source tool that can be used to create e-learning content that can be delivered through virtual learning environments such as Moodle.  This blog post is a summary of a meeting entitled Xerte Project AGM that was held at the East Midlands Conference Centre at the University of Nottingham on 10 October 2012.  The purpose of the day was to share information about the current release of the Xerte tool, to offer an opportunity for different users to talk to each other, and to allow delegates to gain some understanding about where the development of the tool is heading.

One of my main motivations for posting a summary of the event is to share some information about the project with my H810 Accessible online learning: supporting disabled students tutor group.  Xerte is regarded as a tool for creating accessible learning material - this means that the materials that are presented through (or using) Xerte can be consumed by people who have different impairments.  One of the activities that H810 students have to do is to create digital educational materials with a view to understanding what accessibility means and what challenges students may face when they begin to interact with digital materials.  Xerte is one tool that could be used to create digital materials for some audiences.

This post contains some further description of accessibility (what it is and what it isn't), a subject that was mentioned during the day.  I'll also say something about other approaches that can be used to create digital materials.  Xerte isn't the beginning and end of accessibility - no single tool can solve the challenge of creating educational materials that are functionally and practically accessible to learners.  Xerte is one of many tools that can be used to contribute towards the creation of accessible resources, which is something different and separate to accessible pedagogy.

Introductions

The day was introduced by Wyn Morgan, director of teaching and learning at Nottingham.  Wyn immediately touched upon some of the objectives of the tool and the project - to allow the simple creation of attractive e-learning materials.

Wyn's introduction was followed by a brief presentation by Amber Thomas, who I understand is the manager for the JISC Rapid Innovation programme.  Amber mentioned the importance of a connected project called Xenith, but more of this later.

Project Overview

Julian Tenney presented an overview of the Xerte project and also described its history.  As a computer scientist, I found that Julian's unexpected but very relevant introduction resonated strongly with me.  He mentioned two important and interesting books: Hackers, by Steven Levy, and The Cathedral and the Bazaar by Eric S Raymond.  Julian introduced us to the importance of open source software and described the benefit and strength of having a community of interested developers who work together to create something (in this case, a software tool) for the common good.

I made a note of a number of interesting quotes that can be connected to both books.  These are: 'always yield to the hands on' (which means, just get on and build stuff), 'hackers should be judged by their hacking', 'the world is full of interesting problems to be solved', and 'boredom and drudgery are evil'.  When it comes to the challenge of creating digital educational resources that can be delivered on-line, developers can be quickly faced with similar challenges time and time again.  The interesting and difficult problems lie with how best to overcome the drudgery of familiar problems.

I learnt that the first version of Xerte was released in 2006.  Julian mentioned other tools that can be used to create materials and touched upon the issue of both their cost and their complexity.  Continued development moved from a desktop based application to a set of on-line tools that can be hosted on an institutional web server (as far as I understand things).

An important point from Julian's introductory presentation, which I paraphrase, is that one of the constants of working with technology is continual change.  During the time between the launch of the original version of Xerte and the date of this blog post, we have seen the emergence of tablet devices and the increased use of mobile devices, such as smartphones.  The standalone version of Xerte currently delivers content using a technology called Flash (wikipedia), a product from Adobe.  According to the Wikipedia article just referenced, Adobe has no intention of supporting Flash for mobile devices.  Instead, Adobe has announced that it wishes to develop products for more open standards such as HTML 5.

This brief excursion into the domain of software technology deftly took us onto the point of the day where the delegates were encouraged to celebrate the release of the new versions of the Xerte software and toolkits.

New Features and Page Types

Ron Mitchell introduced a number of new features and touched upon some topics that were addressed later during the day.  Topics that were mentioned included internationalisation, accessibility and the subject of Flash support.  Other subjects that were less familiar to me included how to support authentication through LDAP (lightweight directory access protocol) when using the Xerte Online Toolkit (as opposed to the standalone version), some hints about how to integrate some aspects of the Xerte software with the Moodle VLE, and how a tool such as Jmol (a Java viewer for molecular structures) could be added to content that is authored through Xerte.

One of the challenges with authoring tools is how to embed either non-standard material or materials that were derived from third party sources.  I seem to remember being told about something called an Embed code which (as far as I understand things) enables HTML code to be embedded directly within content authored through Xerte.  The advantage of this is that you can potentially make use of rich third party websites to create interactive activities.
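
To illustrate the idea (and only the idea - the snippet and template below are invented, not how Xerte actually works internally): an embed code is just a fragment of HTML, typically an iframe copied from a third-party site, that the authoring tool passes straight through into the page it generates.  A minimal sketch in Python:

```python
# Invented example of the embed-code idea: a raw third-party HTML
# fragment is dropped, untouched, into an authored page.
embed_snippet = (
    '<iframe src="https://example.org/interactive-activity" '
    'width="640" height="480"></iframe>'
)

page_template = """<div class="page">
  <h2>{title}</h2>
  {embed}
</div>"""

print(page_template.format(title="Try the activity", embed=embed_snippet))
```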

Internationalisation

I understand internationalisation to be one of those words that is very similar to the term software localisation; it's all about making sure that your software system can be used by people in other countries.  One of the challenges with any software localisation initiative is to create (or harness) a mechanism to replace hardcoded phrases and terms with labels, and have them dynamically changed depending on the locale in which a system is deployed.  Luckily, this is exactly the kind of thing that the developers have been working on: a part of the project called XerteTrans.
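
As a sketch of the label-substitution idea (the label keys, the languages and the fallback rule below are all my own invention, not how XerteTrans actually stores its translations):

```python
# Minimal illustration of replacing hardcoded strings with labels that
# are resolved per locale, falling back to English when a translation
# is missing.  All keys and translations here are invented.
LABELS = {
    "en": {"next_page": "Next", "previous_page": "Back"},
    "nl": {"next_page": "Volgende", "previous_page": "Terug"},
}

def label(key: str, locale: str, fallback: str = "en") -> str:
    """Look up a user-interface label for a locale."""
    return LABELS.get(locale, {}).get(key) or LABELS[fallback][key]

print(label("next_page", "nl"))   # -> Volgende
print(label("next_page", "fr"))   # -> Next (falls back to English)
```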

Connector Templates

When I worked in industry, I created a number of e-learning objects that were simply 'page turners'.  What I mean is that you had a learning object with a pretty boring (but simple) structure - a learning object that was just one page after another.  At the time there wasn't any (easy) way to create a network of pages to take a user through a series of different paths.  It turns out that the new connector templates (which contain something called a connector page) allow you to do just this.

The way that things work is through a page ID.  Pages can have IDs if you want to add links between them.  Apparently there are a few different types of connector pages: linear, non-linear and some others (I can't quite make out my handwriting at this point!).  The principle of a connector template may well be something that is very useful.  It is a concept that seems significantly easier to understand than other e-learning standards and tools that have tried to offer similar functionality.
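
In essence, pages plus IDs plus links form a small graph.  Here is a toy sketch (in Python; the page names, the dictionary shape and the little traversal helper are all my own invention rather than Xerte's internal representation):

```python
# A network of pages: each page has an ID and lists the IDs it links to.
pages = {
    "intro":  {"title": "Introduction", "links": ["theory", "quiz"]},
    "theory": {"title": "Background", "links": ["quiz"]},
    "quiz":   {"title": "Check your understanding", "links": ["intro"]},
}

def reachable(start):
    """Return the set of page IDs a learner can reach from 'start'."""
    seen, stack = set(), [start]
    while stack:
        page_id = stack.pop()
        if page_id not in seen:
            seen.add(page_id)
            stack.extend(pages[page_id]["links"])
    return seen

print(reachable("intro"))  # all three pages can be reached
```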

A final reflection on this subject is that it is possible to connect sets of pages (or slides) together using PowerPoint, a very different tool that has been designed for a very different audience and purpose.

Xenith and HTML 5

Returning to earlier subjects, Julian Tenney and Fay Cross described a JISC-funded project called Xenith.  The aim of Xenith is to create a system that allows content authored using Xerte to be presented using HTML 5 (Wikipedia).  The motivation behind this work is to ensure that e-learning materials can be delivered on a wide variety of platforms.  When HTML 5 is used with toolkits such as jQuery, there is less of an argument for making use of Adobe Flash.  There are two problems with continuing to use Flash.  The first is that, due to a historic falling out between Apple and Adobe, Flash cannot be used on iOS (iPhone, iPad and iPod) devices.  Secondly, Flash has historically not been considered a very accessible technology.

Apparently, a Flash interface will remain in the client version of Xerte for the foreseeable future, but to help uncover accessibility challenges the Xenith developers have been working with JISC TechDis.  It was during this final part of the presentation that the NVDA screen reader was mentioned (which is freely available for download).

Accessibility

Alistair McNaught from TechDis gave a very interesting presentation about some of the general principles of technical and pedagogic accessibility.  Alistair emphasised the point that accessibility isn't just about whether or not something is generally accessible; the term 'accessibility' can be viewed as a label.  I also remember the point that the application of different types of accessibility standards and guidelines doesn't necessarily guarantee a good or accessible learning experience.

I made a note of the following words.  Accessibility is about: forethought, respect, pragmatism, testing and communication.  Forethought relates to the simple fact that people can become disabled.  There is also the point that higher educational institutions should be anticipatory.  Respect is about admitting that something may be accessible for some people but not for others.  A description of a diagram prepared for a learner who has a visual impairment may not be appropriate if it contains an inordinate amount of description, some of which may be superfluous to an underlying learning objective or pedagogic aim.  Pragmatism relates to making decisions that work for the individual and for the institution.  Testing of both content and services is necessary to understand the challenges that learners face.  Even though educational content may be accessible in a legislative sense, learners may face their own practical challenges.  My understanding is that all these points can be addressed through communication and negotiation.

It was mentioned that Xerte is accessible, but there are some important caveats.  Firstly, it makes use of Flash; secondly, the templates impose some restrictions; and thirdly, access depends on differences between screen readers and browsers.  It is the issue of the browser that reminds us that technical accessibility is a complex issue.  It is also dependent upon the design of the learning materials that we create.

To conclude, Alistair mentioned a couple of links that may be useful.  The first is the TechDis Xerte page.  The second is the Voices page, which relates to a funded project to create an 'English' synthetic voice that can be used with screen reading software.

For those students who are studying H810, I especially recommend Alistair's presentation which can be viewed on-line by visiting the AGM website.  Alistair's presentation starts at about the 88 minute mark.

Closing Discussions and Comments

The final part of the day gave way to discussions, facilitated by Inge Donkervoort, about how to develop the Xerte community site. Delegates were then asked whether they would like an opportunity to attend a similar event next year.

Reflections

One of the things I helped to develop when I worked in industry was a standards-compliant (I use this term with a degree of hand waving) 'mini-VLE'.  It didn't take off for a whole host of reasons, but I thought it was pretty cool!  It had a simple navigation facility and users could create a repository of learning objects.  During my time on the project (which predated the release of Xerte), I kept a relatively close eye on which tools I could use to author learning materials.  Two tools that I used were a Microsoft Word-based add-in (originally called CourseGenie), which allowed authors to create a series of separate pages that were then packaged together into a single zip file, and an old tool called Reload.  I also had a look at some commercial tools.

One of the challenges that I came across was that, in some cases, it wasn't easy to determine what content should be created and managed by the VLE and what content was created and managed by an authoring tool.  An administrator of a VLE can define titles and make available on-line communication tools such as forums and wikis and then choose to provide learners with sets of pages (which may or may not be interactive) that have been created using tools like Xerte.  Relating back to accessibility, even though content may be notionally accessible it is also important to consider the route in which end users gain access to content.  Accessible content is pointless if the environment which is used to deliver the content is either inaccessible or is too difficult to navigate.

Reflecting on this issue, there is a 'line' that exists between the internal world of the VLE and the external world of a tool that generates material that can be delivered through (or by) a VLE.  In some respects, I feel that this notional line is never going to be pinned down due to differences between the ways in which systems operate and the environments in which they are used.  Standards can play an important role in trying to define such issues and helping to make things work together, but different standards will undoubtedly place the line at different points.

During my time as a developer I also pondered the obvious question of, 'why don't we make available other digital resources, such as documents and PowerPoint files, to learners?'  Or, to take the opposite view of this question, 'why should I have to use authoring tools at all?'  I have no (personal) objections to using authoring tools to create digital materials.  The benefit of tools such as Xerte is that the output can be simple, direct and clear to understand.  The choice of the mechanisms used to create materials for delivery to students should be dictated primarily by the pedagogic objectives of a module or course of study.

And finally...

One thought did plague me towards the end of the day, and it was this: the emphasis of the day was primarily on technology; there was very little (if anything) about learning and pedagogy.  This can be viewed from two sides - understanding more about the situations in which a particular tool (in combination with other tools) can best be used, and secondly how users (educators or learning technologists) can best begin to learn about the tool and how it can be applied.  Some e-learning tools work better in some situations than others.  Also, educators need to know how to help learners work with the tools (and the results that they generate).

All in all, I had an enjoyable day.  I even recognised a fellow Open University tutor!  It was a good opportunity to chat about the challenges of using and working with technology and to become informed about what interesting developments were on the horizon and how the Xerte tool was being used.  It was also great to learn that a community of users was being established. 

Finally, it was great to see how the developers were directly tackling the challenge of constant change in technology, such as the emergence of tablet computers and new HTML standards.  Tackling such an issue head on whilst at the same time trying to establish a community of active open source developers can certainly help to build a sustainable long-term project.

Christopher Douce

eTeaching and Learning workshop

Edited by Christopher Douce, Sunday, 1 Feb 2015, 13:37

I attended a HEA eTeaching and Learning workshop at the University of Greenwich on 1st June 2011.  It is always a pleasure visiting the Greenwich University campus; it is probably (in my humble opinion) the most dramatic of all university campuses in London - certainly the only one that is situated within a World Heritage site.

My challenge was to find the King William building (if I remember correctly), which turned out to be a Wren-designed neo-classical building adjacent to one of the main roads.  Looking towards the river, all visitors were treated to a spectacular view of the Canary Wharf district.  Visitors were also treated to notes emanating from a nearby music school.

I first went to the eTeaching and Learning workshop back in 2008, where I presented some preliminary work about an accessibility project I was working on.  This time I was attending as an interested observer.  It was a packed day, comprising two keynotes and eight presentations.

Opening Keynote

The opening keynote was given by Deryn Graham (University of Greenwich).  Deryn's main focus was the evaluation of e-delivery (a term that I had not heard before, so I listened very intently).  The context for her presentation was a postgraduate course on academic practice (which reminded me of a two-year Open University course that sounds to have a similar objective).  Some of the students took the course through a blended learning approach, whereas others studied entirely from a distance.

The most significant question that sprang to my mind was: how should one conduct such an evaluation?  What should we measure, and what may constitute success (or difference)?  Deryn mentioned a number of useful points, such as Salmon's e-moderating model (and the difficulty that the first stages may present to learners), and also considered wider economic and political factors.  Deryn presented her own framework, which could be used to consider the effectiveness of e-delivery (or e-learning).

This first presentation inspired a range of different questions from the participants and made me wonder how Laurillard's conversational framework (see earlier blog post) might be applied to the same challenge of evaluation.  By way of a keynote, Deryn's presentation certainly hit the spot.

General Issues

The first main presentation was by Simon Walker, from the University of Greenwich.  The title of his paper was, 'impact of metacognitive awareness on learning in technology enhanced learning environments'.

I really liked the idea of metacognition (wikipedia) and I can directly relate it back to some computer programming research I used to study.  I can remember asking myself different questions whilst writing computer software, from 'I need to find information about these particular aspects...' through to, 'hmm... this isn't working at all, I need to do something totally different for a while'.  The research within cognitive science is pretty rich, and it was great to hear that Simon was aware of the work by Flavell, who defines metacognition as, simply, 'knowledge and cognition about cognitive phenomena'.

Simon spoke about some research that he and his colleagues carried out using LAMS (learning activity management system), which is a well known learning design tool and accompanying runtime environment.  An exploratory experiment was described: one group were given 'computer selected' tools to use (through LAMS), whereas the other group were permitted a free choice.  Following the presentation of the experiment, the notion of learning styles (and whether or not they exist, and how they might relate to tool choice - such as blogs, wikis or forums) was discussed in some detail.

Andrew Pyper from the University of Hertfordshire gave a rather different presentation.  Andrew teaches human-computer interaction, and briefly showed us a software tool that could be used to support the activity of computer interface evaluation though the application of heuristic evaluations. 

The bit of Andrew's talk that jumped out at me was the idea that instruction of one cohort might help to create materials that are used by another.  I seem to have made a note that student-generated learning materials might be understood in terms of the teaching intent (or the subject), the context (or situation) in which the materials are generated, their completeness (which might relate to how useful the materials are), and their durability (whether or not they age over time).

The final talk of the general section returned to the issue of evaluation (which connects to other issues of design and delivery).  Peiyuan Pan, from London Metropolitan University, drew extensively on the work of others, notably Kolb, Bloom, and Fry (who wrote a book entitled 'A Handbook for Teaching and Learning in Higher Education' - one that I am certainly going to look up).  I remember a quote (or a note) that is (roughly) along the lines of, '[the] environment determines what activities and interactions take place', which seems to have echoes of the conversational framework that I mentioned earlier.

Peiyuan described a systematic process for course and module planning.  His presentation is available on-line and can be found by visiting his presentation website.  There was certainly lots of food for thought here.  Papers that consider either theory or process always have a potential to impact practice.

Technical Issues

The second main section comprised three papers.  The first was by Mike Brayshaw and Neil Gordon from the University of Hull, who were presenting a paper entitled, 'in place of virtual strife - issues in teaching using collaborative technologies'.  We all know that on-line forums are spaces where confusion can reign and emotions can heighten.  There are also perpetual challenges, such as non-participation within on-line activities.  To counter confusion it is necessary to have audit trails and supporting evidence.

During this presentation a couple of different technologies were mentioned (and demoed).  It was really interesting to see an application of Microsoft SharePoint.  I had heard that it can be used in an educational context, but this was the first time I had witnessed a demonstration of a system that could permit groups of users to access different shared areas.  It was also interesting to hear that a system called WebPA was being used at Hull.  WebPA is a peer assessment system which originates from Loughborough University.

I had first heard about WebPA at an ALT conference a couple of years ago.  I consider peer assessment as a particularly useful approach since not only might it help to facilitate metacognition (linking back to the earlier presentation), but it may also help to develop professional practice.  Peer assessment is something that happens regularly (and rigorously) within software engineering communities.

The second paper entitled 'Increased question sharing between e-Learning systems' was presented by Bernadette-Marie Byrne on behalf of her student Ralph Attard.  I really liked this presentation since it took me back to my days as a software developer where I was first exposed to the world of IMS e-learning specifications.

Many VLE systems have tools that enable them to deliver multiple choice questions to students (and there are even projects that try to accept free text).  If institutions have a VLE that doesn't offer this functionality there are a number of commercial organisations that are more than willing to offer tools that will plug this gap.  One of the most successful organisations in this field is QuestionMark.

The problem is simple: one set of multiple-choice questions cannot easily be transferred from one system to another.  The solution is rather more difficult: each system defines a question (and question type) and correct answer (or answers) in slightly different ways.  Developers for one tool may use horizontal sliders to choose numbers (whereas others might not support this type of question).  Other tools might enable question designers to code extensive feedback for use in formative tests (I'm going beyond what was covered in the presentation, but you get my point!)

Ralph's project was to take QuestionMark questions (in their own flavour of XML) at one end and output IMS QTI at the other.  The demo looked great, but due to the nature of the problem, not all question types could be converted.  Bernadette pointed us to another project that predates Ralph's work, namely the JISC MCQFM (multiple-choice questions, five methods) project, which uses a somewhat different technical approach to solve a similar problem.  Whereas MCQFM is a web service that uses the nightmare of XSLT (wikipedia) transforms, I believe that Ralph's software parses whole documents into an intermediate structure from which new XML structures can be created.
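
To give a flavour of that parse-then-emit approach (with heavily simplified, invented XML on both sides - real QML and QTI are far richer than this, and the tag names below should not be taken as either specification):

```python
import xml.etree.ElementTree as ET

# An invented, QuestionMark-flavoured question.
qml = """<QUESTION ID="q1">
  <CONTENT>What does VLE stand for?</CONTENT>
  <ANSWER CORRECT="true">Virtual Learning Environment</ANSWER>
  <ANSWER CORRECT="false">Very Large Elephant</ANSWER>
</QUESTION>"""

# Step 1: parse the source into a neutral intermediate structure.
src = ET.fromstring(qml)
question = {
    "id": src.get("ID"),
    "prompt": src.findtext("CONTENT"),
    "choices": [(a.text, a.get("CORRECT") == "true")
                for a in src.findall("ANSWER")],
}

# Step 2: emit a (similarly simplified) QTI-flavoured item from it.
item = ET.Element("assessmentItem", identifier=question["id"])
ET.SubElement(item, "prompt").text = question["prompt"]
for text, correct in question["choices"]:
    choice = ET.SubElement(item, "simpleChoice",
                           correct=str(correct).lower())
    choice.text = text

print(ET.tostring(item, encoding="unicode"))
```

The appeal of an intermediate structure is that each new input or output format only needs one new parser or emitter, rather than a separate transform for every pairing of systems.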

As a developer (some years ago now), one of the issues that I came up against was that different organisations used different IMS specifications in different ways.  I'm sure things have improved a lot now, but whilst standardisation is likely to have facilitated the development of new products, real interoperability was always a problem (in the world of computerised multiple-choice questions).

The final 'technical' presentation was by John Hamer, from the University of Glasgow.  John returned to the notion of peer assessment, presenting a system called Aropa and discussing 'educational philosophy and case studies' (more information about this tool can be found by visiting the project page).  Aropa is designed to support peer review in large classes.  Two case studies were briefly described: one about professional skills, and the other about web development.

One thing is certain: writing a review (or conducting an assessment of student work) is  most certainly a cognitively demanding task.  It both necessitates and encourages a deep level of reflection.  I noted down a number of concerns about peer assessment that were mentioned: fairness, consistency, competence (of assessors), bias, imbalance and practical concerns such as time.  A further challenge in the future might be to characterise which learning designs (or activities) might make best use of peer assessment.

Pedagogical Issues

The subjects of collusion and plagiarism are familiar tropes to most higher education lecturers.  A paper by Ken Fisher and Dafna Hardbattle (both from London Metropolitan University) asks whether students might benefit if they work through a learning object which explains to learners what is and what is not collusion.  The presentation began with a description of a questionnaire study that attempted to uncover what academics understand collusion to be.

Ken's presentation inspired a lot of debate.  One of the challenges that we must face is the difference between assessment and learning.  Learning can occur through collaboration with others.  In some cases it should be encouraged, whereas in other situations it should not be condoned.  Students and lecturers alike have a tricky path to negotiate.

Some technical bits and pieces.  The learning object was created using a tool called Glomaker (generative learning object maker), which I had never heard of before.  This tool reminds me of another tool, Xerte, which hails from the University of Nottingham.  On the subject of code plagiarism, there is also a very interesting project called JPlag (demo report, found on HEA plagiarism pages).  The JPlag on-line service now supports more languages than its original Java.

The final paper presentation of the day was by Ed de Quincy and Avril Hocking, both from the University of Greenwich.  Their paper explored how students might make use of the social bookmarking tool Delicious.  Here's a really short summary of Delicious: it allows you to record your web favourites using a set of keywords that you choose, enabling you to easily find them again if you use different computers (it also allows you to share stuff with users of similar interests).

One way it can be used in higher education is in conjunction with course codes (which are often unique, or can be, if a code is combined with another tag).  After introducing the tool to users, the researchers were interested in finding out about common patterns of use, which tags were used, and whether learners found it a useful tool.
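
The underlying model is simple enough to sketch (in Python, with made-up data: each bookmark carries a set of free-text tags, and a course code such as H810 doubles as a retrieval key):

```python
# Invented bookmark data illustrating tag-based retrieval.
bookmarks = [
    {"url": "https://www.w3.org/WAI/", "tags": {"H810", "accessibility"}},
    {"url": "https://example.org/screen-readers", "tags": {"H810", "assistive-tech"}},
    {"url": "https://example.org/cookery", "tags": {"recipes"}},
]

def tagged(tag):
    """Return every bookmarked URL carrying the given tag."""
    return [b["url"] for b in bookmarks if tag in b["tags"]]

print(tagged("H810"))  # the two course-related links; the recipes page is excluded
```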

I have to say that I found this presentation especially interesting since I've used Delicious when tutoring on a course entitled accessible online learning: supporting disabled students which has a course code of H810, which has been used as a Delicious tag.  Clicking on the previous link brings up some resources that relate to some of the subjects that feature within the course.

I agree with Ed's point that a crowdsourced set of links makes a really good learning resource.  His research indicates that 70% of students viewed resources tagged by other students.  More statistics are contained within his paper.

My own confession is that I am an infrequent user of Delicious, mainly due to being forced down one browser route as opposed to another at various times, but when I have used it, I've found browser plug-ins to be really useful.  My only concern about using Delicious tags is that the validity of links can age very quickly, and it's up to a student to determine the quality of the resource that is linked to (but a metric saying 'n people have also tagged this page' is likely to be a useful indicator).

Closing Keynote

Malcolm Ryan from the University of Greenwich School of Education presented the final keynote entitled, 'Listening and responding to learners' experiences of technology enhanced learning'.  Malcolm asked a number of searching questions, including, 'do you believe that technology enhances or transforms practice?' and 'do you know what their experience is?'  Malcolm went on to mention something called the SEEL Project (student experience of e-learning laboratory) that was funded by the HEA.

The mention of this project (which I had not heard of before) reminded me of something called the LEX report (which Malcolm later went on to mention).  LEX is an abbreviation of: learner experience of e-learning.  Two other research projects were mentioned.  One was the JISC great expectations report, another was a HEFCE funded Student Perspectives on Technology report.  I have made a note of the finding that perhaps students may not want everything to be electronic (and there may be split views about mobile).  A final project that was mentioned was the SLIDA project which describes how UK FE and HE institutions are supporting effective learners in a digital age.

Towards the end of Malcolm's presentation I remember a number of key terms, and how these relate to individual projects.  Firstly, there is hearing, which relates to how technology should be used (the LEX report).  Listening relates to SEEL.  Responding connects to the great expectations report, and finally engaging, which relates to a QAA report entitled 'Rethinking the values of higher education - students as change agents?' (pdf).

Malcolm's presentation has directly pointed me towards a number of reports that perhaps I need to spend a bit of time studying whilst at the same time emphasising just how much research has already been done by different institutions.

Workshop Themes

At the end of these event blogs I always try to write something about what I think the different themes are (of course, my themes are likely to be different to those of other delegates!)

The first one that jumped out at me was the theme of theory and models, namely different approaches and ways to understand the e-learning landscape.

The second one was the familiar area of user generated content.  This theme featured within this workshop through creation of bookmarks and course materials.

Peer assessment was also an important theme (perhaps one of increasing importance?)  There is, however, a strong tension between peer assessment and plagiarism, and particularly the notion of collusion (and how to avoid it).

Keeping (loosely) with the subject of assessment, the final theme has to be evaluation, i.e. how can we best determine whether what we have designed (or the service that we are providing) is useful for our learners.

Conclusion

As mentioned earlier, this is the second e-learning workshop I have been to.  I enjoyed it!  It was great to hear so many presentations.  In my own eyes, e-learning is now firmly established.  I've heard it said that the pedagogy has still got to catch up with the technology (how to do the best with all of the things that are possible).

Meetings such as these enable practitioners to more directly understand the challenges that different people and institutions face.  Many thanks to Deryn Graham from the University of Greenwich and Karen Frazer from HEA.

Christopher Douce

1st International Aegis Conference


[Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards]

7-8 October 2010

It seems like a lot of time has passed between this blog post and my previous one.  I begin this entry with an explicit statement: less time will pass between this one and the next!

This post is all about an accessibility conference that I recently attended in Seville, Spain, on behalf of the EU4ALL project, in which the Open University has played an important part.  Before saying something about the themes of the Aegis conference and summarising some of the notes that I made during the presentations, I guess I ought to say something about the project (from an outsider's perspective).

Aegis is an EU-funded project whose name begins with a silent O (the O stands for Open).  It then continues with the first letters of the words Accessibility Everywhere: Groundwork, Infrastructure, Standards.  My understanding is that it aims to learn more about the design, development and implementation of assistive and accessible technologies, not only by carrying out what could be termed basic research, but also through the development and testing of new software.

Without further ado, here is a rough summary of my conference notes, complete with accompanying links.  I hope it is useful to someone!

Opening

After Evangelos Bekiaris presented the four cornerstones of the project (make things open, make things programmatically accessible, make sample applications and make things personalisable), Miguel Gonzalez Sancho outlined different EU research objectives and initiatives.  It was stated that 'there must be research in the area of ICT and accessibility, and this will continue'.

Pointers towards future research included an FP7 call relating to ICT for ageing and wellbeing.  Other subjects mentioned included the areas of 'tools and infrastructures for mainstream accessibility', 'intelligent and social computing for social interaction' (which would be interdisciplinary, perhaps drawing upon the social sciences) and 'brain-neuronal computer interfaces' (BNCI), as well as plans to develop collaborations with other parts of the world.

It was useful not only to get an overview of the domains that the funders are likely to be interested in, but also to be given a wealth of information-rich links that researchers could explore later.  The following links stood out for me: the EC ICT Research in FP7 site and the e-Inclusion activities page.

The Aegis Concept

Peter Korn from Oracle presented a very brief history of accessibility, drawing on the notion of building accessibility into the built environment. He presented a total of six steps, which I hope I have noted down correctly.

The first is to define what 'accessible' is. This may involve taking measurements, such as the width of doors, the tones of elevators, or the sounds that are made at road crossings. The second stage is to create standard building materials. Here you might have a building company creating standard door frames, or even making electronic circuits that produce consistent tones and noises (this is my own paraphrasing!). The third step is to create tools that tell us how best to combine these pieces; such tools may take the form of standardised instructions.

The remaining three steps are more about the use of the physical items. The fourth step is to choose where to place a building: ideally it should be situated close to public transport and in a convenient place. The fifth step is to go ahead and 'build' the building. The final step is all about dissemination: telling people about what has been created.

Peter drew a parallel between the process of creating physical accessibility and creating accessibility for ICT systems. There ought to be 'stock' components of interface elements (such as the Fluid component set), developers and designers should adhere to good practice guidelines (such as the WCAG guidelines), applications then need to be created (which is akin to going ahead and making our building), and then we need to tell others what we have done.

If my memory is serving me well, Peter then went on to talk about the different generations of assistive technologies. More information about the generations can be found by jumping to my earlier blog post. From my own perspective (as a technologist), all this history stuff is really interesting, but there's such a lot of it, especially when technology is moving on so quickly. Our current task is to understand the challenge of mobile devices and to learn how to develop tools and systems that remain optimally functional and accessible.

Other Projects

One of the great things about going to conferences (other than the cakes, of course) is the opportunity to learn about loads of other stuff that you had never heard of before. Blanca Alcanda from Technosite (Fundacion ONCE) spoke briefly about a number of projects, including T-Orienta (slideshare), Gametel (the development of accessible games) and INREDIS (self-adaptive interfaces).

Roundtable Discussion

Karel Van Isacker was our question master. He kicked off with a few killer questions (a number of which he tried to answer himself!) The panel comprised a journalist, industrialists, researchers and user representatives. The notable questions were: 'what are your opinions about the [Aegis] products that are being developed?', 'how are you going to make sure users know about the tools [that are being developed]?', 'what are the current barriers people face?', and 'can you say something about the quality of AT training in Europe?'

In many ways, these questions were addressed by many of the conference presentations as well as by the panel. The challenges identified included the continual necessity of maintenance and updates, the need for users to be more aware of the different types of technology that are available, the price of the technology itself and, when it comes to training, the fact of continual technological change.

After a short break the conference then split into two parallel sessions. I tended to opt for sessions that focussed on more general issues rather than those that related to particular technologies (such as mobile devices) or operating systems. This said, there is always a huge amount of crossover between the different talks.

Parallel session 1b (part 1)

It was good to see a clear presentation of a user centred design (UCD) methodology by Evangelos Bekiaris. Evangelos described user research techniques such as interviews, questionnaires and something called contextual enquiry. His talk reminded me of materials that are presented through the Open University course Fundamentals of Interaction Design (a course which I wholeheartedly recommend!)

My colleague Carlos Velasco from FIT, Germany, gave a very concise outline of early web software before introducing us to WCAG (W3C). Carlos then went on to summarise some interesting research from something called the 'technology penetration report', where it was discovered that out of 1.5 million websites, 65% of them use Javascript (which is known to yield challenges for some assistive technologies). The prevalence of Javascript relates to the increasing application and development of Rich Internet Applications (or RIAs, such as Google Maps, for instance). The characteristics of RIAs include the presentation of engaging UIs and asynchronous content retrieval (getting many bits of 'stuff' at the same time). All these developments led to the creation of the WAI-ARIA guidelines (Accessible Rich Internet Applications).
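
(As an aside, here is a tiny sketch of my own, not something from Carlos's talk, of the kind of problem that WAI-ARIA addresses: content that arrives asynchronously is easy for a screen reader to miss unless the region it lands in is marked up appropriately. The element id and message text below are invented for illustration.)

```typescript
// My own illustrative sketch: marking a dynamically updated region with
// WAI-ARIA so that screen readers announce asynchronous changes.
function makeLiveRegion(id: string): HTMLElement {
  const region = document.createElement("div");
  region.id = id;
  region.setAttribute("role", "status");       // identify the widget type
  region.setAttribute("aria-live", "polite");  // announce changes without interrupting
  document.body.appendChild(region);
  return region;
}

// Simulate the asynchronous content retrieval typical of a RIA.
const searchStatus = makeLiveRegion("search-status");
setTimeout(() => {
  // Without aria-live, this silent DOM change could go unnoticed by AT.
  searchStatus.textContent = "12 results loaded";
}, 1000);
```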

Carlos argued that it was once relatively straightforward to test earlier types of web application, since the pages themselves didn't change. You could just send the pages to a 'page analysis server' or system (perhaps like Imergo), which might then present a report, perhaps in a formal language like EARL (W3C). Due to the advent of RIAs, the situation has changed. The accessibility of a system very much depends on the state that it is in, and this can change. Testing web accessibility has therefore changed into something more closely resembling traditional usability testing.

A higher level question might be, 'having an application or product that is accessible is all very well, but do people have access to the assistive technology (AT) that enables websites to be used?' Other related questions include, 'if people have access to AT, do they use it? If not, why not?' These were the questions that Karel Van Isacker aimed to address.

Karel began by saying that different definitions within Europe lead to different estimates of the number of people with disabilities. He told us that the AT supplier market is rather fragmented: there are many suppliers in different countries, and there are also substantial differences in terms of how purchases of AT equipment can be funded. He went on to suggest that different countries applied different models of disability (medical, social and consumer) to different market segments.

Some of the challenges were clear: people are often unaware of the solutions that best meet their ICT needs, users of AT are often given only rudimentary training (some people may have a computer that they have used just once), and many users discard their AT due to low levels of satisfaction.

Parallel session 1b (part 2)

Francesca Cesaroni began the next part of the afternoon by describing a set of projects that related to the broad theme of user requirements. These included the VISIOBOARD project (which related to eye tracking) and the CAALYX project (Complete Ambient Assisted Living Experiment).

Harry Geyskens then went on to consider the following question from the perspective of someone with a visual impairment: 'how can I use a device in a way that is as comfortable and safe as it is for a non-disabled person?' Harry then presented the design for all principles (wikipedia): a product must be equitable in use, be flexible, be simple and intuitive, provide perceptible information, be tolerant of error, and permit usage through low physical effort.

Begona Pino gave an interesting presentation about video game systems and how they could potentially be used by different groups, whilst clearly expressing a call for design simplicity.

The final talk of the day was given by yours truly, in which I tried to present a summary of a four-year project called EU4ALL in twenty minutes. To summarise, the aim of EU4ALL is to consider how to enhance the provision of accessible systems and services in higher education through the creation of a small number of prototype systems. A copy of my presentation and accompanying paper can be found by visiting the OU knowledge network site (a version will eventually be deposited into the Open Research Online system).

Day 2 keynote

Gregg Vanderheiden kicked off day 2 with a keynote entitled 'Roadmap for building a global public inclusive infrastructure'. Gregg imagined a future where user interfaces adapt to the needs of individual users. Rather than presenting a complicated set of controls, a system (a PC or mobile device) may present a more simplified user interface. Gregg pointed us to a project called NPII (National Public Inclusive Infrastructures). It was good to learn that some of the challenges that Gregg mentioned, specifically security and ways to gather user preferences, were also lightly echoed in the earlier EU4ALL presentation.
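
(To make the idea a little more concrete, here is a hypothetical sketch of my own, not anything that Gregg presented: a stored profile of preferences that a device might apply to adapt and simplify its interface. All of the names and values are invented.)

```typescript
// A hypothetical sketch of a preference profile driving interface
// adaptation.  All names and values are invented for illustration.
interface UserPreferences {
  fontScale: number;         // 1.5 means 150% text size
  highContrast: boolean;
  simplifiedLayout: boolean; // hide advanced controls
}

function applyPreferences(prefs: UserPreferences): void {
  document.documentElement.style.fontSize = `${prefs.fontScale * 100}%`;
  document.body.classList.toggle("high-contrast", prefs.highContrast);
  document.body.classList.toggle("simple-ui", prefs.simplifiedLayout);
}

// The same profile could, in principle, follow the user from device to device.
applyPreferences({ fontScale: 1.5, highContrast: true, simplifiedLayout: true });
```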

Parallel session 2a: Rich RIA!

RIA is an abbreviation for Rich Internet Application. The canonical examples of RIAs are, of course, Google Maps and Gmail. Web application development techniques (such as AJAX, wikipedia) that were pioneered by Google and other organisations have now found their way into a myriad of other web products. From their inception, RIAs proved to be troublesome for users of assistive technologies.

Jutta Treviranus gave a talk with an intriguing title: 'changing the world - on a tiny budget'. She began by saying that being on-line and connected is no longer optional: digital exclusion can lead to social exclusion. (The best bargain is often, in my experience, one that you can find through a web browser.) I made a note of some parts of her talk that jumped out at me: 'laws work when they are clear, simple, consistent and stable', but 'laws cannot create a culture of change'. Also, perhaps we need to move from the case where one size fits all (universal design) to the case where one size fits one (personalised design, which may be facilitated through technology).

Being an engineer, I was struck by Jutta's quote from computer scientist Alan Kay: 'the best way to predict the future is to invent it'. It's not too difficult to relate this quote back to the Aegis theme of openness and open source software (OSS): freedom of code has the potential to enable the freedom of invention.

The first session was concluded by Dominique Hazael-Massieux from the W3C mobile web initiative (W3C). The challenges of accessibility now reach much further than the increasingly quaint desktop PC. They now sit within the hands and pockets of users.

One early approach to dealing with the explosion of new devices was to provide separate websites: one for mobile devices, another for 'traditional' computers. This approach yields the inevitable challenge of maintaining both. Dominique told us about HTML 5 (wikipedia) and mentioned that it has the potential to help with site navigation and to make it easier for developers (and end users) to work with rich media.

The remainder of the day was mainly focused upon WAI-ARIA. I particularly enjoyed Mike Squillace's presentation, which returned to the challenge of testing rich internet applications. Mike presented some work that attempted to codify the WCAG rules into executable Javascript which could then be used within a test engine. Jan Richards, from OCAD, Canada, presented the Fluid project.
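
(Here is my own rough sketch of the general approach, not Mike's actual code: a WCAG-style rule expressed as an executable check that a test engine can run, and re-run, against the current state of a page. The Rule and Violation shapes are invented; the example rule loosely follows the WCAG requirement for text alternatives.)

```typescript
// A sketch of the idea: WCAG-style rules as executable checks.
interface Violation { element: Element; message: string; }
type Rule = (root: ParentNode) => Violation[];

// Example rule: every img element should carry an alt attribute.
const imagesHaveAltText: Rule = (root) =>
  Array.from(root.querySelectorAll("img"))
    .filter((img) => !img.hasAttribute("alt"))
    .map((img) => ({ element: img, message: "img element lacks an alt attribute" }));

// Because a RIA's accessibility depends on its current state, the engine
// would re-run the rules after each state change rather than once per page.
function runEngine(rules: Rule[], root: ParentNode = document): Violation[] {
  return rules.flatMap((rule) => rule(root));
}

console.log(runEngine([imagesHaveAltText]));
```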

Parallel session 3b: Standardisation and valorisation

I split my time in the final afternoon between the two parallel sessions, visiting the standardisation session first, then moving on to the coordination strand halfway through. There were presentations that described the process of standardisation and its importance in the process of accessibility. During this session Loic Martinez presented his work on the creation of a tool to support the development of accessible software.

Parallel session 3a: Coordinating research

The final session of the conference yielded a mix of presentations, ranging from descriptions of physical centres that people could visit through to another presentation about the EU4ALL project, given by my colleague from Madrid. This second EU4ALL presentation outlined a number of proposed prototype accessibility information services. Our two presentations complemented each other very well: my presentation outlined (roughly) an accessibility framework, whereas this second presentation offered an alternative perspective on how the framework might be applied and used within an institution.

Summary

One of the overriding themes was the necessity not only to make assistive technology available but also to make sure that the right kind of technology is selected, and to ensure that users are given ample opportunity to learn how to use it. If you are given a car and you have never driven before, you shouldn't just get in and start driving: it takes time to learn the controls, to build confidence and to learn about the different places you might want to go (and besides, it's dangerous!). To risk stretching a metaphor too far, this is a bit like assistive technology: it takes time to understand what controls you have at your disposal and where you would like to travel. As Karel pointed out in his talk, far too much technology sits unused in a box.

Another theme of the conference concerned the solidity of 'this box'. Rather than having everything in 'a box' or installed on your computer (or mobile device), perhaps another idea is to use technology 'on demand' from 'the cloud' (aka the internet). Tools may have the potential to be liberated, but this depends on other types of technological 'groundwork' being in place, i.e. good, fast and reliable connectivity.

Finally, another theme (and one that is pretty fundamental) is the issue of usability and simplicity. The ease of use of systems will continue to be a perpetual challenge due to the differences between people, tasks and contexts (where the person is, and where the task takes place). Whilst universal design offers much possibility in terms of making products for the widest possible audience, there is also much opportunity to continue to explore the notion of interface, product and system personalisation. Through simplicity comes accessibility, and vice versa.

All in all, a very interesting couple of days. I came away feeling that there was a strong and vibrant community committed to creating useful technologies to help people in their daily lives. I also came away feeling that there is so much more to do, and a stronger feeling that whilst technology can certainly help there are many other complementary actions that need to be taken before technology can even begin to play a part.

The latest project newsletter (available at the time of writing) can now be downloaded (pdf).

See also Second International Education for All Conference (blog post), 24 October 2009.

Christopher Douce

Learning technology and return on investment

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:13

Image of a man playing a tired old piano in front of a building site that has warning signs.  One says, 'danger, men working overhead'.

A couple of years ago I attended a conference held by the Association for Learning Technology (ALT).  I remember a riveting keynote that asked the members of the audience to consider not only the design and the structure of new learning technologies, but also the benefits that they can offer institutions, particularly in terms of costs.  I remember being reminded that educational technology need not be 'high technology'.  A technology could be a bunch of cards tied to a desk chair!

I came away with a message that it is important to try to assess the effectiveness of systems that we construct, and not to get ahead of ourselves in terms of building systems that might not do the job they set out to do, or in fact may even end up solving the wrong problem.

When attending the learning technologies exhibition a couple of months ago the notions of 'return on investment' and 'efficiency' were never too far away.  E-learning, it is argued, can help companies to develop training capacities quickly and efficiently and allow company employees to prepare for change.

I might have been mistaken but I think I may have heard the occasional presenter mentioning some figures.  The presenters were saying things like, 'if you implement our system in this way, it can help you save a certain number of millions of pounds, dollars or euros per year'.  Such proclamations made me wonder, 'how do you go about measuring the return on investment into e-learning systems?' (or any other learning technology system, for that matter).

I do not profess to be an expert in this issue by any means.  I am aware that many others (who have greater levels of expertise than I do) have blogged and written about this issue at some length.  I hope that sharing my own meagre notes on the subject might make a small contribution to the debate.

Measuring usefulness

Let's say you build a learning technology or an e-learning tool.  You might create an on-line discussion forum or even a hand-held classroom response system.  You might have created it with a particular problem (or pedagogic practice) in mind.  When you have built it, it might be useful to determine whether the learning technology helps learners to learn and acquire new understandings and knowledge.

One way of conducting an evaluation is to ask learners about their experience.  This can allow you to understand how the technology worked, whether it was liked, what themes were learnt and what elements of a system, product or process might have inspired learners.

Another way to determine whether a technology is effective is to perform a test.  You could take two groups, one that has access to the new technology and another that does not, and see which group performs better.  Of course, such an approach can open up a range of interesting ethical issues that need to be carefully negotiated.
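
(By way of illustration, here is a small sketch of my own, using invented scores: the comparison boils down to asking whether the difference between the two groups' mean scores is large relative to its standard error.  A real study would need a proper significance test, and the ethical care mentioned above.)

```typescript
// Compare a group using the new technology against a control group.
const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const variance = (xs: number[]) => {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
};

const withTool = [68, 74, 71, 80, 77, 69];     // hypothetical test scores
const withoutTool = [65, 70, 66, 72, 68, 64];

// Welch's t statistic: the difference in means scaled by its standard error.
const t = (mean(withTool) - mean(withoutTool)) /
  Math.sqrt(variance(withTool) / withTool.length +
            variance(withoutTool) / withoutTool.length);

console.log(`difference in means: ${(mean(withTool) - mean(withoutTool)).toFixed(1)}, t = ${t.toFixed(2)}`);
```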

Dimensions of measurement

When it comes to large e-learning systems, the questions that uncover learners' experience relate to a 'low level' understanding of how learning technologies are used and applied.  Attempting to measure the success of a learning technology or e-learning system for a whole department or institution could be described as a 'high level' understanding.  It is this 'high level' of understanding that relates to the theme of how much money a system may help to save (or cost) an organisation.

Bearing in mind that organisations are very unlikely to carry out experiments, how is it possible to determine how much return on investment an e-learning system might give you?  This is a really tough question to answer since it depends totally on the objectives of a system.  The approach taken to measure the return on investment of a training system is likely to be different to one that has been used to instil institutional values or create new ways of working (which may or may not yield employee productivity improvements).

When considering the issue of e-learning systems that aim to train (I'm going to try to steer clear of the debates around what constitutes training and what constitutes education), the questions that you might ask include:

  • What are the current skills (or knowledge base) of your personnel?
  • What are the costs inherent in developing a training solution?
  • What are the costs inherent in rolling this out to those who need access?

Another good question to ask is: what would you have to do to find out the same information if you had not invested in the new technologies?  A related question is: would there be any significant travel costs attached to finding out the information?  And would it be possible to measure the amount of disruption that might take place if you had to ask other people for the information that you require?

These questions relate to actions that can be measured.  If you can put a measure on the costs of finding out key pieces of information before and after the implementation of a system, you might be able to edge towards figuring out the value of the system that you have implemented.  What, for example, is the cost of running the same face-to-face training course every year as opposed to creating a digital equivalent that is supported by a forum and an on-line moderator?  You need to factor in issues such as how much time it might take for a learner to take the course.  Simply providing an on-line course is not enough.  Its provision needs to be supported and endorsed by the organisation that has decided to sponsor the development of e-learning.
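
(To make the arithmetic concrete, here is a back-of-envelope sketch of that face-to-face versus on-line comparison.  Every figure is invented; the point is only the shape of the calculation.)

```typescript
// ROI = (costs avoided - costs incurred) / costs incurred.
const faceToFaceCostPerYear = 40_000; // venue, trainer, travel (hypothetical)
const developmentCost = 60_000;       // one-off cost of building the on-line course
const moderationCostPerYear = 8_000;  // supporting the forum with a moderator
const years = 3;

const oldCost = faceToFaceCostPerYear * years;  // what we would have spent
const newCost = developmentCost + moderationCostPerYear * years;
const roi = (oldCost - newCost) / newCost;

console.log(`over ${years} years: ${oldCost - newCost} saved, ROI = ${(roi * 100).toFixed(0)}%`);
```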

The above group of points represents a rather simplistic view.  The introduction of a learning technology system may also facilitate the development of new opportunities that were perhaps not previously possible.  'Up skilling' (or whatever it is called) in a limited amount of time may enable employees to respond to a business opportunity that could not have been exploited without the application of e-learning.

Other themes

Learning technologies are not only about the transmission of information (and knowledge) between a training department and their employees.  They can also have a role to play in facilitating the development of a common culture and strengthening bonds between work groups.

Instances of success (or failure) can be shared between fellow employees.  Details of new initiatives or projects may be disseminated through a learning system.  The contents of the learning system may, in turn, gradually change as a result of such discussions.

The wider cultural perspectives that surround the application of learning technologies are, in my humble opinion, a lot harder to quantify.  It's hard to put a value on the use of a technology to share information (and learning experiences) with your co-workers.

Related resources

A quick search takes me to the wikipedia definition of ROI and I'm immediately overwhelmed by detail that leaves my head spinning.

Further probing reveals a blog entitled ROI and Metrics in eLearning by Tony Karrer, who kindly provides a wealth of links (some of which were updated in April 2008).  I have also uncovered a report entitled What Return on Investment does e-Learning Provide? (dated July 2005) (pdf), prepared by SkillSoft and found on a site called e-Learning Centre.

Summary

The issue of return on investment for e-learning and learning technology appears, at a cursory glance, to be quite difficult to understand thoroughly.  Attaching numbers to the costs and benefits of any learning technology is difficult to do well or precisely.  This can be partly attributed to the nature of software: often, so much of the cost (whether in terms of administration or maintenance) or benefit (the ability to find things out more quickly and to collaborate with new people) is hidden amongst detail that needs to be made explicit before it can be successfully understood.

When it comes to designing, building and deploying learning technology systems, the idea of return on investment is undoubtedly a useful concept, but those investing in systems should consider issues beyond the immediately discoverable costs and benefits since there are likely to be others lurking in the background just waiting to be discovered.

Acknowledgements: Image licensed under creative commons, liberated via Flickr from Mr Squee.  Piano busker in front of building site represents two types of investments: a long term investment (as represented by the hidden building site), and a short term investment (using the decrepit piano to busk).  The unheard music represents hidden complexities.

