Christopher Douce

eTeaching and Learning workshop

I attended a HEA eTeaching and Learning workshop at the University of Greenwich on 1st June 2011.  It is always a pleasure visiting the Greenwich University campus; it is probably (in my humble opinion) the most dramatic of all university campuses in London - certainly the only one that is situated within a World Heritage site.

My challenge was to find the King William building (if I remember correctly), which turned out to be a Wren-designed neo-classical building adjacent to one of the main roads.  Looking towards the river, visitors were treated to a spectacular view of the Canary Wharf district, and to notes drifting from a nearby music school.

I first went to the eTeaching and Learning workshop back in 2008, where I presented some preliminary work from an accessibility project I was working on.  This time I was attending as an interested observer.  It was a packed day, comprising two keynotes and eight presentations.

Opening Keynote

The opening keynote was given by Deryn Graham (University of Greenwich).  Deryn's main focus was the evaluation of e-delivery (a term that I had not heard before, so I listened very intently).  The context for her presentation was a postgraduate course on academic practice (which reminded me of a two-year Open University course that seems to have a similar objective).  Some of the students took the course through a blended learning approach, whereas others studied entirely at a distance.

The most significant question that sprang to my mind was: how should one conduct such an evaluation?  What should we measure, and what might constitute success (or difference)?  Deryn mentioned a number of useful points, such as Salmon's e-moderating model (and the difficulty that its first stages may present to learners), and also considered wider economic and political factors.  Deryn presented her own framework which could be used to consider the effectiveness of e-delivery (or e-learning).

This first presentation inspired a range of different questions from the participants and made me wonder how Laurillard's conversational framework (see earlier blog post) might be applied to the same challenge of evaluation.  By way of a keynote, Deryn's presentation certainly hit the spot.

General Issues

The first main presentation was by Simon Walker, from the University of Greenwich.  The title of his paper was, 'impact of metacognitive awareness on learning in technology enhanced learning environments'.

I really liked the idea of metacognition (wikipedia) and I can directly relate it back to some computer programming research I used to study.  I can remember asking myself different questions whilst writing computer software, from 'I need to find information about these particular aspects...' through to 'hmm... this isn't working at all, I need to do something totally different for a while'.  The research within cognitive psychology is pretty rich, and it was great to hear that Simon was aware of the work of Flavell, who defines metacognition as, simply, 'knowledge and cognition about cognitive phenomena'.

Simon spoke about some research that he and his colleagues carried out using LAMS (learning activity management system), which is a well known learning design tool and accompanying runtime environment.  An exploratory experiment was described: one group was given 'computer selected' tools to use (through LAMS), whereas the other group was permitted a free choice.  Following the presentation of the experiment, the notion of learning styles (whether or not they exist, and how they might relate to tool choice - such as blogs, wikis or forums) was discussed in some detail.

Andrew Pyper from the University of Hertfordshire gave a rather different presentation.  Andrew teaches human-computer interaction, and briefly showed us a software tool that could be used to support the activity of computer interface evaluation through the application of heuristic evaluations.

The bit of Andrew's talk that jumped out at me was the idea that instruction of one cohort might help to create materials that are used by another.  I seem to have made a note that student-generated learning materials might be understood in terms of the teaching intent (or the subject), the context (or situation) in which the materials are generated, their completeness (which might relate to how useful the materials are), and their durability (whether or not they age over time).

The final talk of the general section returned to the issue of evaluation (and connected it to the issues of design and delivery).  Peiyuan Pan, from London Metropolitan University, drew extensively on the work of others, notably Kolb, Bloom, and Fry (who wrote a book entitled 'A handbook for teaching and learning in higher education' - one that I am certainly going to look up).  I remember a quote (or a note) that ran (roughly) along the lines of '[the] environment determines what activities and interactions take place', which seems to echo the conversational framework that I mentioned earlier.

Peiyuan described a systematic process for course and module planning.  His presentation is available online and can be found by visiting his presentation website.  There was certainly lots of food for thought here.  Papers that consider either theory or process always have the potential to impact practice.

Technical Issues

The second main section comprised three papers.  The first was by Mike Brayshaw and Neil Gordon from the University of Hull, who presented a paper entitled 'in place of virtual strife - issues in teaching using collaborative technologies'.  We all know that on-line forums are spaces where confusion can reign and emotions can heighten.  There are also perpetual challenges, such as non-participation within on-line activities.  To counter confusion it is necessary to have audit trails and supporting evidence.

During this presentation a couple of different technologies were mentioned (and demoed).  It was really interesting to see an application of Microsoft SharePoint.  I had heard that it can be used in an educational context, but this was the first time I had witnessed a demonstration of a system that could permit groups of users to access different shared areas.  It was also interesting to hear that a system called WebPA was being used in Hull.  WebPA is a peer assessment system which originates from Loughborough University.

I first heard about WebPA at an ALT conference a couple of years ago.  I consider peer assessment to be a particularly useful approach since not only might it help to facilitate metacognition (linking back to the earlier presentation), but it may also help to develop professional practice.  Peer assessment is something that happens regularly (and rigorously) within software engineering communities.

The second paper, entitled 'Increased question sharing between e-Learning systems', was presented by Bernadette-Marie Byrne on behalf of her student, Ralph Attard.  I really liked this presentation since it took me back to my days as a software developer, when I was first exposed to the world of IMS e-learning specifications.

Many VLE systems have tools that enable them to deliver multiple choice questions to students (and there are even projects that try to accept free text).  If institutions have a VLE that doesn't offer this functionality there are a number of commercial organisations that are more than willing to offer tools that will plug this gap.  One of the most successful organisations in this field is QuestionMark.

The problem is simple: one set of multiple choice questions cannot easily be transferred from one system to another.  The solution is rather more difficult: each system defines a question (and question type) and correct answer (or answers) in slightly different ways.  Developers of one tool may use horizontal sliders to choose numbers (whereas other tools might not support this type of question at all).  Other tools might enable question designers to code extensive feedback for use in formative tests (I'm going beyond what was covered in the presentation, but you get my point!)

Ralph's project was to take QuestionMark questions (in their own flavour of XML) at one end and output IMS QTI at the other.  The demo looked great, but due to the nature of the problem, not all question types could be converted. Bernadette pointed us to another project that predates Ralph's work, namely the JISC MCQFM (multiple-choice questions, five methods) project, which uses a somewhat different technical approach to solve a similar problem.  Whereas MCQFM is a web-service that uses the nightmare of XSLT (wikipedia) transforms, I believe that Ralph's software parses whole documents into an intermediate structure from where new XML structures can be created.
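
To make the shape of the problem concrete, here is a minimal sketch of the parse-into-an-intermediate-structure approach.  I should stress that the element names below are invented stand-ins: the real QuestionMark (QML) and IMS QTI schemas are far richer than this, and a real converter has to cope with many question types.

```python
import xml.etree.ElementTree as ET

# A hypothetical, much-simplified question in a QuestionMark-like XML
# flavour. The element names are invented for illustration only.
SOURCE = """
<QUESTION type="MC">
  <CONTENT>Which planet is the largest?</CONTENT>
  <ANSWER correct="no">Mars</ANSWER>
  <ANSWER correct="yes">Jupiter</ANSWER>
</QUESTION>
"""

def parse_to_intermediate(xml_text):
    """Parse the source flavour into a neutral Python structure."""
    root = ET.fromstring(xml_text)
    return {
        "stem": root.findtext("CONTENT"),
        "choices": [(a.text, a.get("correct") == "yes")
                    for a in root.findall("ANSWER")],
    }

def emit_qti_like(question):
    """Emit a QTI-flavoured document from the intermediate structure."""
    item = ET.Element("assessmentItem")
    ET.SubElement(item, "prompt").text = question["stem"]
    for idx, (text, correct) in enumerate(question["choices"]):
        choice = ET.SubElement(item, "simpleChoice",
                               identifier=f"choice{idx}",
                               correct=str(correct).lower())
        choice.text = text
    return ET.tostring(item, encoding="unicode")

print(emit_qti_like(parse_to_intermediate(SOURCE)))
```

Once questions live in a neutral structure like this, each new format becomes one more parser or emitter, rather than another pairwise transform of the kind the XSLT route requires.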

As a developer (some years ago now), one of the issues that I came up against was that different organisations used different IMS specifications in different ways.  I'm sure things have improved a lot since, but whilst standardisation is likely to have facilitated the development of new products, real interoperability was always a problem (in the world of computerised multiple-choice questions).

The final 'technical' presentation was by John Hamer, from the University of Glasgow.  John returned to the notion of peer assessment by presenting a system called Aropa and discussing 'educational philosophy and case studies' (more information about this tool can be found by visiting the project page).  Aropa is designed to support peer review in large classes.  Two case studies were briefly described: one about professional skills, and the other about web development.

One thing is certain: writing a review (or conducting an assessment of student work) is most certainly a cognitively demanding task.  It both necessitates and encourages a deep level of reflection.  I noted down a number of concerns about peer assessment that were mentioned: fairness, consistency, competence (of assessors), bias, imbalance and practical concerns such as time.  A further challenge in the future might be to characterise which learning designs (or activities) might make best use of peer assessment.

Pedagogical Issues

The subjects of collusion and plagiarism are familiar to most higher education lecturers.  A paper by Ken Fisher and Dafna Hardbattle (both from London Metropolitan University) asked whether students might benefit if they work through a learning object which explains what is and what is not collusion.  The presentation began with a description of a questionnaire study that attempted to uncover what academics understand collusion to be.

Ken's presentation inspired a lot of debate.  One of the challenges that we must face is the difference between assessment and learning.  Learning can occur through collaboration with others.  In some cases it should be encouraged, whereas in other situations it should not be condoned.  Students and lecturers alike have a tricky path to negotiate.

Some technical bits and pieces: the learning object was created using a tool called Glomaker (generative learning object maker), which I had never heard of before.  This tool reminded me of Xerte, which hails from the University of Nottingham.  On the subject of code plagiarism, there is also a very interesting project called JPlag (demo report, found on the HEA plagiarism pages).  The JPlag on-line service now supports more languages than its original Java.

The final paper presentation of the day was by Ed de Quincey and Avril Hocking, both from the University of Greenwich.  Their paper explored how students might make use of the social bookmarking tool Delicious.  Here's a really short summary of Delicious: it allows you to save your web favourites online using a set of keywords that you choose, enabling you to easily find them again if you use different computers (it also allows you to share links with users who have similar interests).

One way it can be used in higher education is to use it in conjunction with course codes (which are often unique, or can be, if a code is combined with another tag). After introducing the tool to users, the researchers were interested in finding out about common patterns of use, which tags were used, and whether learners found it a useful tool.

I have to say that I found this presentation especially interesting since I've used Delicious when tutoring on a course entitled 'accessible online learning: supporting disabled students', whose course code, H810, has been used as a Delicious tag.  Clicking on the previous link brings up some resources that relate to some of the subjects that feature within the course.
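
As an aside, tag feeds could also be read programmatically.  Here is a minimal sketch, assuming the public per-tag JSON feed that Delicious exposed at the time of writing (both the feed URL and its field names are from memory, and the service may well have changed since):

```python
import json
import urllib.request

# Historical Delicious per-tag JSON feed; "H810" is the course-code tag
# mentioned above. The URL and the "d" (description) and "u" (URL) field
# names reflect the feed as it existed at the time and may no longer work.
FEED = "http://feeds.delicious.com/v2/json/tag/H810"

def bookmarks_for_tag(url=FEED):
    with urllib.request.urlopen(url) as response:
        return json.load(response)

for post in bookmarks_for_tag():
    print(post["d"], "->", post["u"])
```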

I agree with Ed's point that a crowdsourced set of links makes a really good learning resource.  His research indicates that 70% of students viewed resources tagged by other students.  More statistics are contained within his paper.

My own confession is that I am an infrequent user of Delicious, mainly due to being forced down one browser route as opposed to another at various times, but when I have used it, I've found the browser plug-ins to be really useful.  My only concern about using Delicious tags is that links can go stale very quickly, and it's up to a student to determine the quality of the resource that is linked to (but a metric saying 'n people have also tagged this page' is likely to be a useful indicator).

Closing Keynote

Malcolm Ryan from the University of Greenwich School of Education presented the final keynote entitled, 'Listening and responding to learners' experiences of technology enhanced learning'.  Malcolm asked a number of searching questions, including, 'do you believe that technology enhances or transforms practice?' and 'do you know what their experience is?'  Malcolm went on to mention something called the SEEL Project (student experience of e-learning laboratory) that was funded by the HEA.

The mention of this project (which I had not heard of before) reminded me of something called the LEX report (which Malcolm later went on to mention).  LEX is an abbreviation of: learner experience of e-learning.  Two other research projects were mentioned.  One was the JISC great expectations report, another was a HEFCE funded Student Perspectives on Technology report.  I have made a note of the finding that perhaps students may not want everything to be electronic (and there may be split views about mobile).  A final project that was mentioned was the SLIDA project which describes how UK FE and HE institutions are supporting effective learners in a digital age.

Towards the end of Malcolm's presentation I remember a number of key terms, and how these relate to individual projects.  Firstly, there is hearing, which relates to how technology should be used (the LEX report).  Listening relates to SEEL.  Responding connects to the great expectations report; and finally there is engaging, which relates to a QAA report entitled 'Rethinking the values of higher education - students as change agents?' (pdf).

Malcolm's presentation pointed me towards a number of reports that I need to spend a bit of time studying, whilst at the same time emphasising just how much research has already been carried out by different institutions.

Workshop Themes

At the end of these event blogs I always try to write something about what I think the different themes are (of course, my themes are likely to be different to those of other delegates!)

The first one that jumped out at me was the theme of theory and models, namely different approaches and ways to understand the e-learning landscape.

The second one was the familiar area of user generated content.  This theme featured within this workshop through the creation of bookmarks and course materials.

Peer assessment was also an important theme (perhaps one of increasing importance?)  There is, however, a strong tension between peer assessment and plagiarism, particularly the notion of collusion (and how to avoid it).

Keeping (loosely) with the subject of assessment, the final theme has to be evaluation, i.e. how can we best determine whether what we have designed (or the service that we are providing) is useful for our learners?

Conclusion

As mentioned earlier, this is the second e-learning workshop I have been to.  I enjoyed it!  It was great to hear so many presentations.  In my own eyes, e-learning is now firmly established.  I've heard it said that the pedagogy has still got to catch up with the technology (how to do the best with all of the things that are possible).

Meetings such as these enable practitioners to more directly understand the challenges that different people and institutions face.  Many thanks to Deryn Graham from the University of Greenwich and Karen Frazer from HEA.

Using and teaching mobile technologies for ICT and computer science

I attended an event entitled Mobile Technologies - The Challenge of Learner Devices Delivering Computer Science, held at Birmingham City University last week and organised by the Information and Computer Sciences (ICS) Higher Education Academy (HEA) subject centre.

This blog post aims to present a summary of proceedings as well as my own reflections on the day. If any of the delegates or presenters read this (and have any comments), then please feel free to post a reply to add to or correct anything that I've written. I hope these notes might be useful to someone.

Keynote

The day was kicked off by John Traxler from the University of Wolverhampton. Just as any good keynote should, John asked a number of searching questions. The ones that jumped out at me were whether information technology (or computers) had accelerated the industrialisation of education, and whether mobile technologies may contribute to this.

John wondered about the changing nature of technology ownership. On one hand universities maintain rooms filled with computers that students can use, but on the other hand students increasingly have their own devices, such as laptops or mobile phones. 

John also pointed us towards an article in the Guardian, published in July 2010 about teenagers and technology which has a rather challenging subtitle. Mobility and connectedness, it is argued, has now become a part of our identity.

One thing John said jumped out at me: 'requiring students to use a VLE is like asking them to wear a school uniform'. This analogy points towards a lot of issues that can be unpacked. Certainly, a VLE has the potential to present institutional branding, and a uniform suggests that things might be done in a particular way. But a VLE also has the potential to be an invaluable source of information, ensuring that we know what we need to know to navigate around an institution.

For those of us who had to wear school uniforms, very many of us customised them as much as we possibly could without getting told off for breaking the rules. Within their constraints, it would be possible to express individuality whilst conforming (to get an education). The notion of customisation and services also has a connection with the idea of a Personal Learning Environment (PLE) (wikipedia), which, in reality, might exist somewhere in between the world of the mobile, a personal laptop and the services that an institution provides.

Session I

The first session was opened by Kathy Maitland from Birmingham City University. Kathy talked about how she used cloud computing to enable students using different hardware to access different software services. She spoke about the challenge of using different hardware (and operating system) platforms to access services, and the technical challenges of ensuring correct configuration.

John Busch from Queen's University Belfast made a presentation about how to record lectures using a mobile phone. It was great to see a (relatively) low-tech approach being used to make educational materials available to students. All John needed to share his lectures with a wider audience was a mid-range mobile phone, a tiny tripod, a desk to perch the phone on, and (presumably) a lot of hard-won experience.

John gave the audience a lot of tips about how to make the best use of technology, along with a result from a survey where he asked students how they made use of the recordings he made of his computer gaming lectures. 

A part of his talk was necessarily technical, where he spoke about different data encoding standards and which standard was supported by which mobile (or desktop) platform. One of the members of the audience pointed us to Encoding.com, a website that enables transcoding of digital media. The presentation gave way to interesting discussions about privacy. One of the things that I really liked about John's presentation was that it addressed 'mobile' from different perspectives at the same time: using mobile technology to produce content that may, in turn, be consumed by other mobile devices.
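
As an aside (this wasn't shown on the day), the transcoding step can also be done locally with a tool such as ffmpeg. A minimal sketch, driven from Python and targeting the H.264/AAC combination that most mobile and desktop players support:

```python
import subprocess

def transcode_for_mobile(source, target="lecture.mp4"):
    """Transcode a recording to H.264/AAC MP4, playable on most devices."""
    subprocess.run(
        ["ffmpeg", "-i", source,
         "-c:v", "libx264",          # widely supported video codec
         "-c:a", "aac",              # widely supported audio codec
         "-movflags", "+faststart",  # metadata first, so playback starts early
         target],
        check=True,
    )

# The filename is illustrative; assumes ffmpeg is installed.
transcode_for_mobile("lecture_raw.mov")
```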

Laura Crane, from Lancaster University, then gave an interesting presentation about using location, context and preference in VLE information delivery. Laura's main research question appeared to be: which is (potentially) more useful - information that is presented at a particular location, or information that is presented at a particular time?

This reminded me of some research that I had heard of a couple of years ago called context modelling. Laura mentioned a subject that was new to me, namely Situation Theory. Laura's talk was very well received and it inspired a lot of debate. Topics discussed included the nature of mobility research and the importance of personal or learner attributes on learning (such as learning styles). Discussions edged towards the very active area of recommender research (recommender system, Wikipedia), and out to wider questions of combining location, recommender and affective interfaces (interfaces or systems that could give recommendations or make suggestions depending on emotion). A great talk!

Darren Mundy and Keith Dykes gave a presentation about the WILD Project funded by JISC. WILD is an abbreviation for Wireless Interactive Lecture Demonstrator. The idea behind the project is one that is simple and compelling: how to make use of personal technology to enable students to make a contribution to lectures. By contribution, I mean allowing students to add comments and text to a shared PowerPoint presentation.

A lecturer prepares a PowerPoint presentation and, providing there is appropriate internet connectivity, includes a link to a WILD webpage, to which the students can send messages. This might be used to facilitate debate about a particular subject, but it also enables those learners who are more reluctant to contribute to 'speak up' by 'texting out'. We were also directed towards the project source code.
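
The pattern is simple enough to sketch. What follows is emphatically not the actual WILD implementation (the real source code is linked above), just a toy illustration of the idea: students post short messages against a slide number, and the lecturer's machine polls for them to display alongside the presentation.

```python
from collections import defaultdict
from flask import Flask, request, jsonify

app = Flask(__name__)
comments = defaultdict(list)  # slide number -> list of student messages

@app.route("/comments/<int:slide>", methods=["POST"])
def add_comment(slide):
    # A student's phone or laptop submits a short message for one slide.
    comments[slide].append(request.form["message"])
    return "", 204

@app.route("/comments/<int:slide>")
def get_comments(slide):
    # The lecturer's machine polls this to show messages next to the slide.
    return jsonify(comments[slide])

if __name__ == "__main__":
    app.run()
```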

During the talk, I was introduced to a word that I had never heard of before: prosumerism (but apparently Wikipedia had!). At the end of the talk, during the Q&A session, one delegate pointed us towards the SAP Twitter PowerPoint plug-in, which might be able to achieve similar things.

This last presentation of the morning really got me thinking about my own educational practice, and perhaps this is one of the really powerful aspects of using and working learning technology: it can have the potential to encourage reflection about what is and what is not possible, both inside and outside the classroom. I tutor on an undergraduate interaction design course with the Open University, where I facilitate a number of face to face sessions.

For various reasons my tutorials are not as well attended as they could be. Students may have difficulty travelling to a tutorial session, they may have family responsibilities, or they may even have jobs at the weekend. This is a shame, since I sense that some students would really benefit from these face to face sessions. The WILD presentation made me wonder whether those students who attend a face to face tutorial might be able to collectively author a summary PowerPoint that could then be shared with the group of students who were unable to attend. Interactivity, of course, has the potential to foster inclusivity and ownership. Simply put, the more a student does within a lecture (or puts into it), the more they may get out of it.

Session II

After lunch, the second session proved to be slightly more technical. The first half was merely a warm up!

The second session kicked off with a demonstration by Doug Belshaw. Doug works for JISCInfoNet. This part of JISC aims to provide information and products known as InfoKits which can be used by senior management to understand and appreciate a range of different education and technology issues. We were directed towards examples, such as effective practice in a digital age, and effective assessment in a digital age. A new kit, entitled JISC mobile and wireless technologies review, is currently in preparation.

Doug asked the audience to share information about any case studies. A number of projects were mentioned, along with a set of links. During the discussion part of the demo we were directed towards m.sunderland.ac.uk, which made me wonder whether the 'm.' prefix is a convention that I'm not aware of (and perhaps ought to be!) Something called iWebKit was also mentioned. Other projects included MyMobileBristol.com, a collaboration with Bristol University and Bristol City Council. For more information visit the m.bristol.ac.uk site.

There was also a mention of a service provided by Oxford University, m.ox.ac.uk (the project also has an accompanying press release). This service appears to have been developed in association with something called the Molly Project, which seems to be a mobile application development framework. There was a lot to take in!

Gordon Eccleston from Robert Gordon University in Aberdeen gave a fabulous presentation about his work teaching iPhone programming. Having remained steadfastly in the desktop world, and admitting to being a laggard on the mobile technology front, I found that Gordon answered many questions I have always had about how one might begin to write an iPhone application. Gordon introduced us to the iPhone software development kit, which I understand was free to universities. The software used to create apps is called Xcode. Having predominantly worked within a PC software development environment for more years than I would care to admit, a quick poke around the Apple tools website looked rather exciting; a whole new world of languages, terms and technologies.

Gordon had a number of views about the future of app development. He thought that XHTML 5, CSS 3 and accompanying technologies would have an increasingly important role to play. On a related note, the cross-platform mobile framework PhoneGap, which makes use of some of these same technologies, was mentioned during the following presentation. (Digging further into the web, there's a Wikipedia page called Multiple phone web based application framework, which might prove to be interesting.) There was also some debate about which mobile platform might dominate (and whether dominance may depend on how many Apple stores there are within a particular city or country!)

Gordon also briefly talked about some of the student projects he has been involved with. A notable example was an iPhone app for medical students to learn ophthalmology terms and concepts. There were some really good ideas here: how to create applications that bring direct benefit to learners, whilst students learn how such applications can be developed.

Karsten Lundqvist from the University of Reading offered some platform balance to the day by presenting his work teaching the development of Android applications. Karsten began his presentation by considering the different platforms - iPhone, RIM and Android - but the choice of platform was ultimately decided by the availability of existing hardware, namely PCs running Windows or Linux. In place of Xcode, Java with Eclipse was used. I seem to remember that students may have had some experience using C/C++ before attending the classes, but I can't quite remember.

The question and answer session was really interesting. One delegate asked Karsten whether he had heard of something called the Google Android App Inventor, another mobile software development platform. It was also interesting to hear about the different demo apps. Karsten showed us a picture of a phone in a mini-Segway cradle, demonstrating the concept of real-time control; there was also a reference to an app that may help people with language difficulties, and Karsten pointed us to his own website, where he has been developing a game template by means of a blog tutorial.

Towards the end of Karsten's session, I recalled an echo from the earlier HEA employability event, which explored computing forensics. One of the ideas coming from that event was that it might be a good idea for institutions to share forensic data sets. An idea posed within this event was that institutions might be able to share application ideas or templates, perhaps for different platforms. Some ideas might include fitness utilities, 'finding your way around' apps (very useful: I still remember being a confused fresher during my undergrad days!), simple game templates, and flash card apps to help students learn a number of different concepts.

Plenary

The plenary discussion was quite wide ranging, and is quite difficult to boil down to a couple of paragraphs. My own attempt at making sense of the day was to understand the key topics in terms of 'paired terms', which might be either subject dimensions or tensions (depending on how you look at it).

VLEs and apps: different software with different purposes, which connect to the idea of information and content. Information might be where to go to find a lecture theatre, or the location of a bank, and content is a representation of the course materials itself.

Ownership and provision: invariably students will have their own technology, but to what extent should an organisation provide technology to facilitate learning? Provision has been historically thought of in terms of rooms filled with computers, and necessarily conservative institutional IT provision (to make sure that everything keeps working). Entwined with these issues is the notion of legacy information and the need for institutions (and learners) to keep up with technology.

Development and usage: where does the information or content come from? To what extent might consumers of mobile information potentially participate in the development of their own content? Might this also create potential dangers for institutions and individuals? This is related to another tension of control, namely institutional versus individual control of information, content or technology.

Guidance and figuring things out: when it comes to learning, there is always a balance to be struck between providing guidance and leaving learners to find things out for themselves. On one hand, there may be certain apps that facilitate learning in their own right, apps that provide information, and apps that present content held within a VLE. One idea might be that we need a taxonomy of uses, for both an institution and an individual.

Industry and academia: a two way relationship. We must provide education (about mobile) that industry needs, and also make use of innovations coming from industry, but also we have a role to innovate ourselves and potentially feedback into industry. (I seem to recall quite a few delegates mentioning something called mCampus, but I haven't been able to uncover any information about it!)

Other discussion points that were raised included the observation that location-based information provision is new, and the need to interact with people is one of the things that is driving the development of technology. A broader question, posed by John Traxler was, 'does mobile have the potential to transform teaching and learning?' Learners, of course, differ very widely in terms of their experience and attitude to interactive products.

Accessibility, whether it be the availability of technology or the ability to perceive information through assistive technologies, is also a substantial issue. The wider organisational and political environment is also a significant factor when it comes to the development of mobile applications, and their subsequent consumption.

Footnote

All in all, a very enjoyable day! As I travelled into Birmingham from London on the train on the morning of the event, my eye caught what used to be the site of an old industrial centre. I had no idea what it used to be. I could see the foundations of what might have been a big factory or a depot. I was quite surprised to discover that the Millennium Point building overlooked the same area.

Walking to the train station for my return journey to London, I thought, 'wouldn't it be great if there was an app that could use your location to get articles and pictures about what used to be here before; perhaps there could be a timeline control which users could change to go back in time to see what was there perhaps twenty, thirty or even one hundred years before'. I imagined a personal time machine in the palm of your hand. I then recalled a mash-up between Google Maps and Wikipedia, and had soon uncovered something called Wikimapia.
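
It turns out that the first half of the idea can be sketched against a real, public API: Wikipedia's geosearch endpoint (part of the MediaWiki GeoData extension) returns articles near a coordinate. The coordinates below are illustrative; a real app would take them from the phone's GPS.

```python
import json
import urllib.parse
import urllib.request

def nearby_articles(lat, lon, radius_m=500):
    """Return titles of Wikipedia articles near a coordinate."""
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,
        "gslimit": 10,
        "format": "json",
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    request = urllib.request.Request(url, headers={"User-Agent": "geo-sketch/0.1"})
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return [page["title"] for page in data["query"]["geosearch"]]

# Roughly central Birmingham, near Millennium Point (illustrative values).
for title in nearby_articles(52.483, -1.888):
    print(title)
```

The timeline control would, of course, need historical content that Wikipedia only partly holds, which is where services like Wikimapia come in.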

Like so many of these passing ideas, there's no such thing as an original thought. What really matters is how such technology ideas are realised, and the ultimate benefit they may have for different sets of end users.

Enhancing Employability of Computing Students

I was recently able to attend the first Higher Education Academy (HEA) event that explicitly aimed to discuss how universities might enhance the employability of computing students.  The intention of this blog post is to present a brief summary of the event (HEA website) and to highlight some of the themes (and issues) that I took away from it.

The day was held at the University of Derby enterprise centre and was organised on behalf of the HEA Information and Computer Sciences subject group.  I had only ever been to one HEA event before, so I wasn't quite sure what to expect.  This said, the title of the workshop (or mini-conference) really interested me, especially after having returned to the higher education sector from industry.

The day was divided into two sets of paper presentations punctuated by two keynote speeches.  The afternoon paper session was separated into two streams: a placements workshop and a computing forensics stream.  Feeling that the placements workshop wasn't really appropriate for me, I decided to sit in on the computing forensics stream.

Opening Keynote

The opening address was given by Debbie Law, an account management director at Hewlett Packard.  As well as outlining the HP recruitment process (which sounds pretty tough!), Debbie mentioned that, through various acquisitions, there has been a gradual movement beyond technology (such as PCs and servers) through to the provision of services.  Businesses, it was argued, don't particularly care about IT, but they do care about what IT gives them.

So, what makes an employable graduate?  They should be able to do a lot!  They should be able to learn and to apply knowledge (completing a degree should go some way to demonstrating this).  Candidates should demonstrate their willingness to consider (and understand) customer requirements.  They should also demonstrate problem solving and analytical skills and be able to show a good awareness of the organisations in which they work.  They should be performance driven, show good attention to detail (a necessity if you have ever written a computer program!), be able to lead a team and be committed to continuous improvement and developing personal effectiveness. Phew!

I learnt something during this session (something that perhaps I should have already known about).  I was introduced to something called ITIL (Information Technology Infrastructure Library) (wikipedia).  ITIL was later spoken about in the same sentence as PRINCE (something I had heard about after taking M865, the Open University project management course).

First paper session

There were a few changes to the published programme.  The first paper was by McCrae and McKinnon: Preparing students for employment through embedding work-related learning.  It was at this point that the notion of employability was defined as 'a set of attributes, skills and knowledge that all labour market participants should possess to ensure they have the capability of being effective in the workplace - to the benefit of themselves, their employer and the wider economy'.  A useful reference is the Confederation of British Industry's Fit for the Future: preparing graduates for the world of work report (CBI, 2009).

The presentation went on to explore how employability skills (such as team working, business skills and communication skills) may be embedded within the curriculum using an approach called Work Related Learning (WRL).  The underpinning ideas relate to linking theory and practice, using relevant learning outcomes, widening horizons, carrying out active learning and taking account of cultural diversity.  A mixed methodology was used to determine the effectiveness of embedding WRL within a course.

The second paper, by Jing and Chalk, was entitled 'An initiative for developing student employability through student enterprise workshops'.  The paper outlined one approach to bridging the gap between university education and industry: a series of seminars, given over a twelve-week period by people who currently work within industry.  A problem was described whereby there were lower employment rates amongst computing graduates (despite alleged skills shortages), low enrolment on work placement years (sandwich years), and a lack of employability awareness (which also includes job application and interview skills).

The third presentation was by our very own Kevin Streater and Simon Rae from the Open University Business School.  Their paper was entitled 'Developing professionalism in New IT Graduates? Who Needs It?'  It addressed the notion of what it may mean to be an IT professional, encouraging us to look at the British Computer Society Chartered IT Professional status (CITP) (in addition to ITIL and PRINCE), and something called the Professional Maturity Model (which I had never heard of before).

Something else that I had never heard of before is the Skills Framework for the Information Age (SFIA).  By using this framework it was possible to uncover whether new subjects or modules may contribute to enhancing the degrees of undergraduates who may be studying to work within a particular profession.  Two Open University courses were mentioned: T122 Career Development and Employability, and T227 Change, Strategy and Projects at Work.

This final presentation of the morning was interesting since it asked us to question the notion of professionalism, and presented the viewpoint that the IT profession has a long way to go before it could be considered akin to some of the other more established professions (such as law, engineering and accountancy).

During the morning presentations I also remember a reference to E-Skills, which is the Sector Skills Council for Business and Information Technology, a government organisation that aims to help to ensure that the UK has the IT skills it needs.

Computing and Forensics Stream

This stream especially piqued my interest since I had once studied a postgraduate computing forensics course, M886, through the Open University a couple of years ago.

The first paper was entitled Teaching Legal and Courtroom Issues in Digital Forensics, by Anderson, Esen and Conniss.  As with so many different subjects, both academic and professional skills need to be applied and considered.  Academic education considers the communication of theories and the dissemination of knowledge, and learning how to think about problems in a critical way by analysing and evaluating different types and sources of information.

The second paper was about syllabus development, with an emphasis on practical aspects of digital investigation, by Sukhvinder Hara, who drew upon her extensive experience of working as a forensic investigator.

The third paper was about how a virtualised forensics lab might be established through the application of cloud computing.  I found this presentation interesting for two reasons: first, the interesting application of virtualisation, and second, a resonance with how parts of the T216 Cisco networking course are taught, where students are able to gain access to physical hardware located within a laboratory just by 'logging on' from their personal computer or laptop.

The final paper of the day was an enthusiastic presentation by David Chadwick who shared with us his approach of using problem-based learning and how it could be applied to computing forensics.

This final session of the day brought two questions to my mind.  The first related to the relationship between teaching the principles of computing forensics and the challenge of providing graduates who know the tools that are used within industry.  The second related to the general question of, 'so, how many computing forensics jobs are there?'

It struck me that a number of the forensics courses around the UK demonstrate the use of similar technologies.  I've heard two products mentioned on a number of occasions: EnCase (Wikipedia) and FTK (Wikipedia), both of which feature within the Open University M889 course.  If industry requires trained users of these tools, is it the remit of universities to offer explicit 'training' in commercial products such as EnCase?  Interestingly, the University of Greenwich, like the Open University (in the T216 course), enables students to study for industrial certification whilst at the same time acquiring credit points that can count towards a degree.

So, are there enough forensics jobs for forensics graduates?  You might ask a very similar question which also begs an answer: are there enough psychology jobs for the number of psychology graduates?  I've heard it said that studying psychology introduces students to the notion of evidence, different research methodologies and research designs.  It is a demanding subject that requires you to write in a very clear way.  Studying psychology teaches and develops advanced numeracy and literacy as much as it introduces the scientific method and the often confusing and complex nature of academic debate.

Returning to computing forensics, I sensed that there might not be as many jobs in the field as there are graduates, but it very much depends on what kind of job you might be thinking of.  Those graduates who took digital forensics courses might find themselves working as IT managers, network infrastructure designers or software developers, as opposed to purely within law enforcement.  Understanding the notion of digital evidence and how to capture it is an incredibly important skill, irrespective of whether or not a student becomes a fully fledged digital investigator.

Concluding Discussions

One of the best parts of the day was the discussion section.  A number of tensions became apparent.  One of the tensions relates to what a university should be and the role it should play within wider society.  Another tension is the differences that exist between the notions of training and education (and the role that universities play to support these two different aims).

Each organisation and area of industry will have a unique set of training and educational requirements.  There are, of course, more organisations than there are universities.  A particular industry may have a very specific training problem that necessitates the development of educational materials particular to its own context.  Universities, it can be argued, can only go so far in meeting very particular needs.

A related question, is of course, the difference between training and education.  When I worked in industry there were some problems that could be only solved by gaining task specific skills.  Within the field of software development this may be learning how to use a certain compiler or software tool set.  Learning a very particular skill (whilst building upon existing knowledge) can be viewed as training.  An engineer can either sit with a user manual and a set of notes and figure things out over a period of a month or two, or alternatively go on an accelerated training course and learn about what to do in a matter of days.

Education, of course, goes much deeper.  Education is not just about knowing how to use a particular set of tools; it's about knowing how to think about your tools (and their limits) and understanding how they may fit within the 'big scheme of things'.  Education is also about learning a vocabulary that enables you to begin to understand how to communicate with others who work within your discipline (so you can talk about your tools).

Within the ICT sector the pace of change continues to astonish me.  There was a time when universities in conjunction with research organisations led the development of computing and computer science.  Meanwhile, industry has voraciously adopted ICT in such a way that it pretty much pervades all our lives.

So, where does this leave degree level education when 'general' industry may be asking for effective IT professionals?  It would be naive to believe that the university sector can fully satisfy the needs of industry, since the nature of industry is so diverse.  Instead, we may need to consider how to offer education and learning (which the university sector is good at) in a way that leads towards the efficient consumption of training (which satisfies the needs of industry).  This argument implies that the university sector is for the 'common good', as opposed to being a mechanism that allows individuals to gain specialist topic-specific knowledge that can immediately lead to a lucrative career.  Becoming an ICT professional requires an ability to continually learn, due to perpetual innovation.  A university level education can provide a fabulous basis for an introduction to this rapidly changing world.

OU Disabled Student Services Conference

I've had a fun couple of days.  I recently attended the Open University's 2010 Disabled Student Services conference.  Okay, I admit, I probably gatecrashed the event, since I'm not a member of the DSS group, but it was certainly a very worthwhile thing to do.  On more than one occasion colleagues said to me, 'it's great to have someone like you here; we certainly need more faculty at these events'.

The overall objectives of the conference were to develop greater awareness of issues affecting the sector, to gain information about developments within the University, to gain a greater understanding of the needs associated with specific disabilities, to work towards more standardised delivery of services and, of course, to find out what each other does.

For me, this conference was all about learning about the different roles that people have, and what information needed to be shared to ensure that all our learners get the best possible service.

Tuesday 9th November

A conference is not really a conference without a keynote.  The first day kicked off with a keynote by Will Swann who is responsible for the development, promotion and evaluation of services to support the teaching of students.

The part of Will's presentation that jumped out at me was a concise presentation of the potential ramifications of changes to Higher Education funding.  One thing was clear: things are going to change, but we're not quite sure exactly how they will change.  The changes may affect those students who wish to study for personal development as opposed to choosing to study for purely economic and career development reasons.  Underneath is an interesting philosophical debate about what higher education is good for.  Essentially, Will asked us to consider the challenge of how to maintain effective provision of services in a world where change is a certainty.

First workshop

Following the keynote, we were led toward the first set of workshops.  I attended a workshop that perhaps had the longest title: Exceptional examination arrangements and special circumstances, policies and procedures.  The event was facilitated by Ilse Berry and Peter Taylor.  Peter is the chair of the subcommittee which makes decisions about very many exam-related things, such as whether individual students may be able to defer exams (due to changes in personal circumstances), or whether alternative examination arrangements could be organised.

I got a lot out of this first workshop: I gained more of an understanding of the procedures and policies, and the effect that the Disability Discrimination Act (now the Equality Act) has on these policies.  There was some debate about whether everyone knows everything they should know to best advise our students.  There was some discussion about Associate Lecturers, and I feel that I need to ask whether it might be possible to offer some internal staff development training to those who most closely work with students.

I also learnt quite a bit about the range of different examination arrangements that can be put in place.  I never knew that an exam could be taken over an extended period of time, for example.  It was all very thought provoking and showed me how much we try to collectively help.

Student session

I was unable to attend the afternoon event due to a meeting with a colleague in another department, but I was able to return to the conference in time to hear one of the student sessions.  Alex Wise, a student with dyslexia gave a very clear description of some of the challenges that he has faced during his educational career.  He also described some of the strategies and adaptations he both uses and has discovered.

Alex's presentation underlined a number of different points for me.  Firstly, the complexity and uniqueness of conditions such as dyslexia (I briefly studied the very broad subject of language processing when I was a postgrad student, but I was sorely missing a 'personal' perspective).  Secondly, the fact that effective strategies may only be discovered through a combination of hard won experience and trial and error.  A final point is that strategies and tools need not necessarily be high tech.

Wednesday 10th November

The second day (much like the first) was a delight.  In true academic style, I duly forgot which workshop I had signed up for, and was directed towards a session entitled, 'Sensory impairment: science course for screen readers, and D/eaf students and Openings courses - access for all?', presented by Jeff Bashton and Julie Morrison.  I was later to discover that it was two workshops for the price of one.  I had certainly chosen wisely.

Jeff works as a visual impairment advisor for the Open University.  He introduced the science project he is working on (which is a work in progress), and then he had a treat in store for all delegates.  One by one, we all donned blindfolds that Julie had given us, and we began to study two tactile diagrams (using only our touch).

I found both tactile diagrams unfathomable (which is, pretty much, an understatement!)  I could do nothing more than explore the boundaries of each diagram and get a rough understanding about its size and shape (and how the different elements were related spatially).  I couldn't make a jump from lines and bumps through to understanding a picture as a whole.  This, of course, was one of the points.  The tactile diagrams that I was presented with proved to be totally confusing without accompanying auditory descriptions.

Julie 'talked us through' each image (using our fingers!)  When I removed my blindfold, I was surprised by what I saw - it bore hardly any relationship to what I thought I was 'seeing'.

During the second part of the workshop Julie spoke about her work with British Sign Language users, presenting a small number of case studies to highlight the challenges that BSL users might face when trying to study.  For BSL users, English is, of course, a second language.  Julie's overview of the history of deaf education (and the role that Alexander Graham Bell played) was illuminating.  Thanks Julie!

This second workshop ended with a demonstration of how tough it can be to understand digital materials.  Taking a particularly accessible course as an example, Julie showed us a video without sound (it was an interview which had no subtitles or signing).  We then had a look at the transcript of the video.  The transcript contained all the peculiarities of expression that you find whenever you write down spoken language.  It was briefly suggested that different learners may benefit from different versions of the same materials, which is one of the ideas embedded within the EU4ALL project I worked on for a couple of years.

A fabulous afternoon...

I struggled to give a name to the section of the conference where the delightful and charming Francesca Martinez came to talk to us for an hour or so.  It was only after just under a week of wondering that I came up with this final subheading.

I'm not joking; Francesca had us all rolling in the aisles of the conference hall with laughter at a delicious mixture of political and observational stories.  There was, however, a serious tone that resonated clearly with the objectives of the conference: everyone is connected by a common thread of humanity, regardless of who we are and what personal circumstances we face.

Francesca is the best kind of comedian: one who makes us think about ourselves and the absurdities that we face.  I, for one, ended the day thinking to myself, 'I need to seize the day more'.  And seizing the day can, of course, mean making time to find out about new things (and having fun too, of course!)  Linking this to studying, it is more than possible to find an abundance of fun in learning, and to maintain optimism that the fun of the present may give rise to a fabulous future.

Themes

So, what were the overriding themes that I took away from the conference?  The first one was communication: we all need to talk to each other because internal policies (as well as external legislation) are subject to perpetual change and evolution.  Talk is an eternal necessity (which is what I continue to try to tell my colleagues when I sneak off to the cafe area...)

The second theme is that of information: advisors as well as students need information to make effective decisions about whether or not to take a course of study.  Accessibility, it was stated, isn't just a matter of making sure that materials are available in different formats.  It also relates to whether or not materials are study-able, and this goes back to whether, for example, individual learning activities are themselves accessible.

The final theme relates to challenges that are inherent within the changing political and economic climate.  Whilst education is priceless, it always has a financial cost.  Different ways to pay for education have the potential to affect the decision making of those who may wish to study for a wide range of different reasons (and not just to 'get a better job').

Consider, for example, a hypothetical potential student (who is incidentally fabulous) who might just 'try out' a module to find out if he or she likes it, and who then goes on to discover they are more than capable of degree level study.  A stumbling block to access is, of course, always going to be cost.  As mentioned in the conference keynote, there will be a need for creative solutions to ensure that all students continue to be presented with equal opportunities to study.

The DSS conference has shown me how much work goes on behind the scenes (and how much still needs to be done) to ensure equal opportunity to study remains a reality for all.

Christopher Douce

1st International Aegis Conference

Visible to anyone in the world

 Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards

7-8 October 2010

It seems like a lot of time has passed between this blog post and my previous one. I begin this entry with an explicit statement of: less time will pass between this one and the next!

This post is all about an accessibility conference that I recently attended in Seville, Spain, on behalf of the EU4ALL project in which the Open University has played an important part. Before saying something about the themes of the Aegis conference and summarising some of the notes that I made during some of the presentations, I guess I ought to say something about the project (from an outsider's perspective).

Aegis is an EU funded project that begins with a silent O (the O stands for Open). It then continues to use the first letters of the words Accessibility Everywhere: Groundwork, Infrastructure, Standards. My understanding is that it aims to learn more about the design, development and implementation of assistive and accessible technologies by not only carrying out what could be termed basic research, but also through the development and testing of new software.

Without further ado, here is a rough summary of my conference notes, complete with accompanying links.  I hope it is useful to someone!

Opening

After Evangelos Bekiaris presented the four cornerstones of the project (make things open, make things programmatically accessible, make sample applications and make things personalisable), Miguel Gonzalez Sancho outlined different EU research objectives and initiatives. It was stated that 'there must be research in the area of ICT and accessibility, and this will continue'.

Pointers towards future research included an FP7 call that related to ICT for ageing and well-being. Other subjects mentioned included the areas of 'tools and infrastructures for mainstream accessibility', 'intelligent and social computing for social interaction' (which would be interdisciplinary, perhaps drawing upon the social sciences) and 'brain-neuronal computer interfaces' (BNCI), as well as plans to develop collaborations with other parts of the world.

It was useful not only to get an overview of the domains that the funders are likely to be interested in, but also to be given a wealth of information rich links that researchers could explore later. The following links stood out for me: the EC ICT Research in FP7 site and the e-Inclusion activities page.

The Aegis Concept

Peter Korn from Oracle presented a very brief history of accessibility, drawing on the notion of 'building in' accessibility within the built environment. He presented a total of six steps, which I hope I have noted down correctly.

The first is to define what 'accessible' is. This may involve the taking of measurements, such as the width of doors and maybe the tones of elevators, or the sounds that are made at road crossings. The next (second) stage is to create standard building materials. Here you might have a building company creating standard door frames, or even making electronic circuits that produce consistent tones and noises (this is my own paraphrasing!). The third step is to create some tools that tell us how best to combine our pieces together. These tools may take the form of standardised instructions.

The next three steps are more about the use of the physical items. The fourth step is that you need to make a choice as to where to place a building. Ideally it should be situated close to public transport and in a convenient place. The fifth step is to go ahead and 'build' the building. The final step is all about dissemination: telling people about what has been created.

Peter drew a parallel between the process of creating physical accessibility and creating accessibility for ICT systems. There ought to be 'stock' components of interface elements (such as the Fluid component set), developers and designers should adhere to good practice guidelines (such as the WCAG guidelines), applications then need to be created (which is akin to going ahead and making our building), and then we need to tell others what we have done.

If my memory is serving me well, Peter then went on to talk about the different generations of assistive technologies. More information about the generations can be found by jumping to my earlier blog post. From my own perspective (as a technologist), all this history stuff is really interesting, but there's such a lot of it, especially when technology is moving on so quickly. Our current challenge is to begin to understand the challenge of mobile devices and learn about how to develop tools and systems that remain optimally functional and accessible.

Other Projects

One of the great things about going to conferences (other than the cakes, of course) is the opportunity to learn about loads of other stuff that you had never heard of before. Blanca Alcanda from Technosite (Fundacion ONCE) spoke briefly about a number of projects, including T-Orienta (slideshare), Gametel (the development of accessible games) and INREDIS (self-adaptive interfaces).

Roundtable Discussion

Karel Van Isacker was our question master. He kicked off with a few killer questions (a number of which he tried to answer himself!) The panel comprised a journalist, industrialists, researchers and user representatives. The notable questions were: 'what are your opinions about the [Aegis] products that are being developed?', 'how are you going to make sure users know about the tools [that are being developed]?', 'what are the current barriers people face?', and 'can you say something about the quality of AT training in Europe?'

In many ways, these questions were addressed by many of the conference presentations as well as by the panel. Challenges relating to the development of assistive technologies include the continual necessity of maintenance and updates; users ought to be more aware of the different types of technologies that may be available; the price of technology is significant; and one of the main challenges relating to training is the pace of continual technological change.

After a short break the conference then split into two parallel sessions. I tended to opt for sessions that focussed on more general issues rather than those that related to particular technologies (such as mobile devices) or operating systems. This said, there is always a huge amount of crossover between the different talks.

Parallel session 1b (part 1)

It was good to see a clear presentation of a user centred design (UCD) methodology by Evangelos Bekiaris. Evangelos described user research techniques such as interviews, questionnaires and something called contextual enquiry. His talk reminded me of materials that are presented through the Open University course Fundamentals of Interaction Design (a course which I wholeheartedly recommend!)

My colleague Carlos Velasco from FIT, Germany, gave a very concise outline of early web software before introducing us to WCAG (W3C). Carlos then went on to summarise some interesting research from something called the 'technology penetration report', where it was discovered that out of 1.5 million websites, 65% of them use JavaScript (which is known to yield challenges for some assistive technologies). The prevalence of JavaScript relates to the increasing application and development of Rich Internet Applications (or RIAs, such as Google Maps, for instance). The characteristics of RIAs include the presentation of engaging UIs and asynchronous content retrieval (getting many bits of 'stuff' at the same time). All these developments led to the creation of the WAI-ARIA guidelines (Accessible Rich Internet Applications).
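
By way of illustration, here is a minimal sketch (my own, not from the talk) of the kind of markup that WAI-ARIA introduces.  The aria-live attribute tells an assistive technology that the contents of a region may be updated asynchronously:

    <!-- A hypothetical search results region; without aria-live, a screen
         reader has no way of knowing that JavaScript has replaced its
         contents behind the user's back. -->
    <div id="results" role="region" aria-live="polite" aria-label="Search results">
      <!-- content fetched and inserted asynchronously -->
    </div>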

Carlos argued that it was once relatively straightforward to test earlier types of web application, since the pages themselves didn't change. You could just send the pages to a 'page analysis server' or system (perhaps like Imergo), which may then present a report, perhaps in a formal language like EARL (W3C). Due to the advent of RIAs, the situation has changed. The accessibility of a system very much depends on the state it is in, and this can change. Testing web accessibility has therefore changed into something more closely resembling traditional usability testing.

A higher level question might be, 'having an application or product that is accessible is all very well, but do people have access to the assistive technology (AT) that enables web sites to be used?' Other related questions include, 'if people have access to AT, do they use it? If not, why not?' These were the questions that Karel Van Isacker aimed to address.

Karel began by saying that different definitions within Europe lead to different estimates of the number of people with disabilities. He told us that the AT supplier market is rather fragmented: there are many suppliers in different countries and there are also substantial differences in terms of how purchases of AT equipment can be funded. He went on to suggest that different countries applied different models of disability (medical, social and consumer) to different market segments.

Some of the challenges were clear: people were often unaware of the solutions that best meet their ICT needs; users of AT are often given only rudimentary training (many people may even have a computer that they have used just once); and there is a high level of users discarding their AT due to low levels of satisfaction.

Parallel session 1b (part 2)

Francesca Cesaroni began the next part of the afternoon by describing a set of projects that related to the broad theme of user requirements. These included the VISIOBOARD project (which related to eye tracking) and the CAALYX project (Complete Ambient Assisted Living Experiment).

Harry Geyskens then went on to consider the following question from the perspective of someone with a visual impairment: 'how can I use a device in a comfortable and safe way that is as good as a non-disabled person's?' Harry then presented different design for all principles (wikipedia): that a product must be equitable in use, be flexible, be simple and intuitive, provide perceptible information, be tolerant of error, and permit usage through low physical effort.

Begona Pino gave an interesting presentation about video game systems and how they could potentially be used by different groups, whilst clearly expressing a call for design simplicity.

The final talk of the day was given by yours truly, where I tried to present a summary of a four year project called EU4ALL in twenty minutes. To summarise, the aim of EU4ALL is to try to consider how to enhance the provision of accessible systems and services in higher education through the creation of a small number of prototype systems. A copy of my presentation and accompanying paper can be found by visiting the OU knowledge network site (a version will eventually be deposited into the Open Research Online system).

Day 2 keynote

Gregg Vanderheiden kicked off day 2 with a keynote entitled 'Roadmap for building a global public inclusive infrastructure'. Gregg imagined a future where user interfaces adapt to the needs of individual users. Rather than presenting a complicated set of interfaces, a system (a PC or mobile device) may present a more simplified user interface. Gregg pointed us to a project called NPII (National Public Inclusive Infrastructures). It was good to learn that some of the challenges that Gregg mentioned, specifically security and ways to gather preferences, were also lightly echoed in the earlier EU4ALL presentation.

Parallel session 2a: Rich RIA!

RIA is an abbreviation for Rich Internet Application. The canonical examples of RIAs are, of course, Google Maps and Gmail. Web application development techniques (such as AJAX, wikipedia) that were pioneered by Google and other organisations have now found their way into a myriad of other web products. From their inception, RIAs proved to be troublesome for users of assistive technologies.

Jutta Treviranus gave a talk with an intriguing title: 'changing the world - on a tiny budget'. She began by saying that being on-line and being connected is no longer an option. Digital exclusion can lead to social exclusion. The best bargain is often, in my experience, one that you can find through a web browser. I made a note of some parts of her talk that jumped out at me, i.e., 'laws work when they are clear, simple, consistent and stable', but, 'laws cannot create a culture of change'. Also, perhaps we need to move from a case where one size fits all (universal design) to the case where one size fits one (personalised design, which may be facilitated through technology).

Being an engineer, I was struck by Jutta's quote from computer scientist Alan Kay: 'the best way to predict the future is to invent it'. It's not too difficult to relate this quote back to the Aegis theme of openness and open source software (OSS): freedom of code has the potential to enable the freedom of invention.

The first session was concluded by Dominique Hazael-Massieux from the W3C mobile web initiative (W3C). The challenges of accessibility now reach much further than the increasingly quaint desktop PC. They now sit within the hands and pockets of users.

One early approach to dealing with the explosion of new devices was to provide separate websites: one for mobile devices, another for 'traditional' computers. This approach yields the inevitable challenge of maintenance. Dominique told us about HTML 5 (wikipedia) and mentioned that it has the potential to help with site navigation and make it easier for developers (and end users) to work with rich media.

The remainder of the day was mainly focused upon WAI-ARIA. I particularly enjoyed Mike Squillace's presentation that returned to the challenge of testing rich internet applications. Mike presented some work that attempted to codify the WCAG rules into executable JavaScript which could then be used within a test engine. Jan Richards, from OCAD, Canada, presented the Fluid project.

Parallel session 3b: Standardisation and valorisation

I split my time in the final afternoon between the two parallel sessions, visiting the standardisation session first, then moving onto the coordination strand half way through. There were presentations that described the process of standardisation and its importance to accessibility. During this session Loic Martinez presented his work on the creation of a tool to support the development of accessible software.

Parallel session 3a: Coordinating research

The final session of the conference yielded a mix of presentations, ranging from descriptions of physical centres that people could visit through to another presentation about the EU4ALL project made by my colleague from Madrid. This second EU4ALL presentation outlined a number of proposed prototype accessibility information services. Our two presentations complemented each other very well: my presentation outlined (roughly) an accessibility framework, whereas this second presentation offered an alternative perspective on how the framework might be applied and used within an institution.

Summary

One of the overriding themes was the necessity to not only make assistive technology available to others, but also to make sure the right kind of technology is selected, and to ensure that users are given ample opportunity to learn how to use it. If you are given a car and you have never driven before, you shouldn't just get into it and start driving: it takes time to learn the controls, and it takes time to build confidence and to learn about the different places you might want to go to (and besides, it's dangerous!) To risk stretching a metaphor too far, this is a bit like assistive technologies: it takes time to understand what controls you have at your disposal and where you would like to travel to. As Karel pointed out in his talk: far too much technology sits unused in a box.

Another theme of the conference concerned the solidity of 'this box'. Rather than having everything in 'a box' or installed on your computer (or mobile device), perhaps another idea might be to use technology 'on demand' from 'the cloud' (aka, the internet). Tools may have the potential to be liberated, but this depends on other types of technology 'groundwork' being available, i.e. good, fast and reliable connectivity.

Finally, another theme (and one that is pretty fundamental) is the issue of usability and simplicity. The ease of use of systems will continue to be a perpetual challenge due to the continual differences between people, tasks and contexts (where the person is and where the task takes place). Whilst universal design offers much possibility in terms of making products for the widest possible audience, there is also much opportunity to continue to explore the notion of interface, product and system personalisation. Through simplicity comes accessibility, and vice versa.

All in all, a very interesting couple of days. I came away feeling that there was a strong and vibrant community committed to creating useful technologies to help people in their daily lives. I also came away feeling that there is so much more to do, and a stronger feeling that whilst technology can certainly help there are many other complementary actions that need to be taken before technology can even begin to play a part.

The latest project newsletter (available at the time of writing) can now be downloaded (pdf).

See also Second International Education for All Conference (blog post), 24 October 2009.

Christopher Douce

Understanding Moodle Themes

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:27

A section of a Moodle screen showing three icons: forums, lessons and resources

This post is about the journey that I followed trying to understand Moodle Themes.  Moodle Themes are pieces of programming magic that change the visual appearance of your Moodle installation.

If you download Moodle and play around with it, you might eventually arrive at a decision that it might be useful within your institution.  You might hold a meeting with senior management where you may say, 'I think it's a good idea if we try to use this thing called Moodle to host some of our courses'.  After answering some difficult questions about maintenance and development costs, your managers might say, 'okay, you've convinced us... let's give it a go, I'll give you a budget'.

Other than figuring out which operating system and database to use and where (and how) your instance of Moodle is to be hosted, one of the first development activities you will have to do is make sure that your Moodle system is 'on brand', i.e. its visual appearance should reflect the institution that you work for.

This is pretty much what I have to do.  I have to try and make my 'vanilla' (unmodified) version of Moodle blend in with a set of existing web pages that have been built as a part of a research project I'm working on.  Other development teams within my university have already done something similar with their production version of Moodle, but I need to tackle this problem myself.

I start with a couple of questions: what makes up a theme and how might you go about changing one or maybe even making a new one?

Resources galore

Before I could answer these questions I needed to find something to read, and it didn't take a lot of browsing to find a number of potentially useful resources.

The first page that I discovered was a link to over one hundred different themes thanks to the Database of Moodle Themes.  Perhaps I shouldn't be so surprised given the number of Moodle installations that are out there in the world.

I soon discovered the Themes documentation pages and a number of other related links, including a set of theme-related FAQs and a dedicated discussion forum.

The Themes documentation link (for a Themes novice) seems to be the most useful.  One of the sections says that themes can be delivered in zip files.  You download them, unzip them and place the contents in the /moodle/theme directory, and then click on some admin tools to activate it.  This sounds almost too easy!

Towards Code

Being someone who likes to view code I thought it might be useful to look at some of the magic that makes Moodle themes work.  To do this, I chose a random theme from the themes database and unzipped a folder to my desktop.  To begin to make sense of it properly, I felt that it might be a good idea to compare this random theme against one that already existed.  This made me ask, 'which theme is used by default?'

To answer this question, I logged onto my local instance of Moodle (which was running on my local machine, localhost) as an administrator.  After struggling to remember my username and password, I clicked on the Administration link, followed Appearance, Themes and then on the Theme Selector link (because I couldn't really make sense of the Theme Settings options).

I quite like the Theme Selector page.  It presents all the different themes that have been installed.  The current theme that is selected is highlighted by a black square.  The one that was selected by default (in the case of my installation - I cannot remember whether I changed it) was named standardwhite.

I delve into the Moodle code area, take a copy of standardwhite, place it alongside the one I have randomly downloaded and start poking around.

Looking at code

I noticed similarities and differences.  The similarities are that some of the filenames are the same.  I see two PHP files, styles and config, followed by two html files, header and footer.  There seems to be a CSS file (Wikipedia) in both themes (but the downloaded theme contains a few more than the default theme).  I also notice a graphics file called gradient in the default theme (which is a jpg), and a png graphics file in the other one.  A big difference was that the theme I have downloaded contains a directory which seems to contain a bunch of graphic files.

Before deciding I'm terribly confused, I decide to do one more thing: open up both of the PHP files to see what they contain.

In a config script, I see assignments to a variable called $THEME.  Different attributes appear to do different kinds of magic.  Looking in the styles script, a comment tells me that 'there should be no need to modify this file'.  It seems to do something that relates to the presentation of a CSS file.  That is good enough for me!
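
To give a flavour of what the config script does, here is a minimal sketch of the kinds of assignments it contains.  The variable names are recalled from the standard themes, so treat the exact set as an assumption; it varies between Moodle versions:

    <?php
    // A sketch of a Moodle 1.9-era theme config script (details assumed).
    $THEME->sheets = array('styles_layout', 'styles_color'); // the theme's own CSS files
    $THEME->standardsheets = true;  // also pull in the standard theme's CSS
    $THEME->parent = '';            // optionally inherit from another theme
    $THEME->custompix = false;      // true = use the theme's own pix/ icons
    ?>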

I have a quick peek into the header and the footer html files.  It looks like these are templates (of some kind) that are filled out using the contents of some PHP variables.  Obviously the pages that the Moodle code creates have a pretty well defined structure, and presumably this structure is documented somewhere.  This is perhaps something I might need to remember later.
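
For illustration, the header template has roughly the following shape.  This is a paraphrase from memory rather than the real file, and the variable names are assumptions:

    <?php // header.html - an illustrative paraphrase, not the real file ?>
    <html>
    <head>
      <title><?php echo $title ?></title>  <!-- filled in by Moodle -->
      <?php echo $meta ?>                  <!-- standard CSS and JavaScript -->
    </head>
    <body>
      <h1 class="headermain"><?php echo $heading ?></h1>
    <?php // ... the rest of the page follows; footer.html closes these elements ?>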

Return to the documentation

At this point, I think I roughly know what a Theme comprises: some magic scripts which define some variables (and some other stuff), some header and footer scripts which look a bit like templates, a CSS file of some kind, perhaps a graphic (which could be used by the CSS file?) and maybe a bunch of graphics that replace those that are used in Moodle by default.

If this is my current understanding, can I now find the documentation easier to understand?

I soon uncover two further pages: Make your own Theme and Creating a Custom Theme (the first link seems to be easier to understand).  A couple of clicks takes me to a documentation page called Theme config file which goes some way to explaining the variables that I have touched upon above.

The final comment in the Creating a Custom Theme page was instructive.  Other than saying that you can't change everything, it suggests that if you want to make your site look like an existing site, it might be a good idea to make use of a tool called Firebug, which is a plug-in for the Firefox browser.

With Firebug, you can browse to a web page of your choice and uncover what CSS definitions have been used to build its visual appearance.  I've used Firebug before, and mentioning that it is a good tool is certainly a good piece of advice.  The Moodle developers have also been kind enough to prepare a CSS FAQ which is certainly worth a look.

Although I could have tried to create a new theme from scratch, I'm in a lucky position since one of my colleagues has already created a customised theme for a custom instance of Moodle.

Towards testing things out

To test things out, I copy my customised theme into my local 'themes' directory and hit refresh on my browser.  I then select my newly installed theme and everything starts to go wrong.  The action of selecting a theme seemed to have rendered my local copy of Moodle useless, since only a tiny fraction of an HTML page is created (which I see by viewing the code the browser receives).

A problem seems to have arisen because the version of Moodle that I am using and the structure of the theme that I have transferred are not completely compatible with each other.  I need to go back to my default theme! But how do I do this? Where are the theme settings held?

My first guess is in the database.  I open up a front end to the MySQL database that is running on my PC, using a tool called SqlYog.  I eyeball the contents of the database to see if there's anything I can use.  I discover a 'config' table, but this doesn't tell me much.  I did, however, discover that there is a theme setting within individual courses as well as individual users.

I turn my attention towards the code, first looking within the themes directory, and I soon find myself fruitlessly searching through different libraries.  Finding a simple answer may necessitate spending quite a bit of time.

To get things working again, you sometimes have to cheat.  I renamed my theme to something totally different and refreshed my Moodle page.  Moodle then had no choice but to return to its default setting (which was, again, 'standardwhite').

Incrementally merging

I have two themes: one theme that I want to use but doesn't work (because it has been modified for a customised version of Moodle), and another theme that does work which I don't want to use.  When I'm faced with this situation, I try to get 'code to speak to me' by incrementally taking the one that works and making it look like the one that doesn't work.  I find I can really understand stuff when things stop working!

I begin by looking at the files and then the contents of the files.  The first thing that strikes me is that the header and footer files are quite different.  There seems to be quite a bit more happening in the customised theme when compared to the standard theme.  A step at a time, I move files across and test, beginning with the favicon, then the config file, and finally the pix directories.  I discover that both themes require the use of a CSS file that is contained within the standard theme directory.

The effect of moving files around seems to produce, more or less, what I was after.  The interactive 'side blocks' (particularly the show/hide buttons) are not presented as they should be, but further searching reveals a magic variable, allowuserblockhiding, that can be used to control this functionality.
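
For anyone facing the same problem, the fix amounted to a single setting.  Where exactly it lives may depend on your Moodle version; I'm recalling it as a site-wide configuration value rather than a theme one:

    // An assumption about the exact location; it can also be changed
    // through the site administration pages.
    $CFG->allowuserblockhiding = true;  // re-enable show/hide on side blocks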

Moodle version 2

A question to complete this post is: what is the situation regarding Moodle version 2.0?  This is a development that I have heard about for some time, but I have not heard any recent announcements regarding its expected release.  After a quick search, I reacquaint myself with something called the Moodle Roadmap.

This appears to state that there will be a beta release of V2 by the end of 2009, followed by some months of testing before a final release.  Judging by the planning document, there appears to be quite a lot more coding to do (nearly four hundred days of development time to go, so we should expect some delays).

I appreciate that giving opinion is certainly a lot easier than giving code, so I hope that Moodlers who read this section will forgive me.  I personally hope that the code for the next version is a lot cleaner.  Since the developers are forced to move to PHP version 5, I hope they will choose to adopt its object-oriented features, which can help developers to form clearer (less leaky) abstractions.
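
To illustrate what I mean, here is a toy example of my own (emphatically not Moodle code): PHP 5's visibility modifiers let a class hide its internal state behind a small, explicit interface, which is exactly the kind of thing that stops an abstraction from leaking.

    <?php
    // A toy PHP 5 class - not Moodle code - showing basic encapsulation.
    class ThemeRegistry {
        private $themes = array();         // internal state, hidden from callers

        public function register($name) {  // the only way to add a theme
            $this->themes[] = $name;
        }

        public function names() {          // a read-only view of the state
            return $this->themes;
        }
    }
    ?>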

In a perfect world, developers should be able to look at a screenful of code and be able to describe, more or less, what that section of code does without having to look at other code (providing, of course, they more or less have an understanding of what the product does).  From what I have seen so far in version 1.9, there is a long way to go, but I'm certainly looking forward to learning how things have moved on in version 2.0.

Wrap-up

It's great that the developers of Moodle have designed it in such a way that it is 'themeable' (if there is such a word).  In some respects, I was surprised to discover things were not as difficult as I had expected.  Whilst going directly to the code and looking at what is there may be a daunting challenge, it is one that I certainly recommend doing.

There's a whole lot more to the issue of Moodle themes.  I haven't touched upon the structure of Moodle pages and how they relate to elements in stylesheets, for example.  I'll leave this challenge for another day!

Christopher Douce

Considering Middleware and Service Oriented Architecture

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:25

A modified version of the service-oriented architecture diagram from the Wikipedia SOA page

I wrote the following notes some time ago as a way to share information about the subject of middleware and service-oriented architecture.  I think I began by asking the questions 'what is middleware?' and 'what can it do for us?', explicitly in the context of making information systems that can help to support the delivery of useful services to support learning.

I should add a disclaimer: some of the stuff that is presented here is quite technical and seems quite a long way away from my earlier posts that relate to accessibility, but there are connections in terms of understanding how to build information systems that can help an organisation to manage the delivery of accessibility services (such as the loan of assistive technology).

Beginning my search

I began by exploring a number of definitions.  I first attacked the notion of workflow (Wikipedia).  What does workflow mean?  Is it one of those terms that can have different meanings to different people?  I rather like the Wikipedia definition, which goes:

  • A workflow is a reliably repeatable pattern of activity enabled by a systematic organization of resources, defined roles and mass, energy and information flows, into a work process that can be documented and learned. Workflows are always designed to achieve processing intents of some sort, such as physical transformation, service provision, or information processing.

I then asked myself, 'how does the idea of workflow relate to the notion of middleware?' (I had heard they were connected, but wasn't quite sure how).  Again, the Wikipedia definition of middleware proved to be useful:

  • Middleware is the software that sits 'in the middle' between applications ... stretched across multiple systems or applications. ... The software consists of a set of enabling services that allow multiple processes running on one or more machines to interact across a network. This technology evolved to provide for interoperability in support of the move to client/server architecture. It is used most often to support complex, distributed applications. ... Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

So, these two ideas are connected.  Carrying out workflow may involve making use of a number of different services, which one might be able to call through some sort of middleware...

Further links

A little more digging pointed me in a number of other directions.  Clever people have proposed something called BPEL, an abbreviation for Business Process Execution Language.  Wikipedia is again useful:

  • WS-BPEL (or BPEL for short) is a language for specifying business process behavior based on Web Services. Processes in WS-BPEL export and import functionality by using Web Service interfaces exclusively.

On this page, there is a link to a blog post which is a very good primer and introduction.  It is a lot clearer than the Wikipedia page.

I found the following text to be useful:

  • In BPEL, a business process is a large-grained stateful service, which executes steps to complete a business goal. That goal can be the completion of a business transaction, or fulfilling the job of a service. The steps in the BPEL process execute activities (represented by BPEL language elements) to accomplish work. Those activities are centered on invoking partner services to perform tasks (their job) and return results back to the process.

Interestingly, it also contained the following:

  • As for limitations, BPEL does not account for humans in a process, so BPEL doesn't provide workflow - there are no concepts for roles, tasks and inboxes.

We are almost at the point where the same terms may be used to mean different things.  Perhaps there is a difference between what workflow is and what business processes are?  Michelson (the blog author) seems to equate workflow with 'things that people do'.  The point is that a wide definition of workflow can include things that BPEL does not.

At this point, I was wondering, 'if I have a process (say, a task that I have to complete), where half of the task has to be completed by a machine and the other half has to be completed by a person, then what technologies should I use?'  All is not lost.  The blog mentions there is something called BPEL4People (Wikipedia), and contains a link to an IBM whitepaper.

I've extracted some fragments that caught my eye:

  • The BPEL specification focuses on business processes ... But the spectrum of activities that make up general purpose business processes is much broader. People often participate in the execution of business processes ...

Following this, I stumbled across the following scenario:

  • Consider a service that takes place out-of-sight of the initiating process. In many circumstances, it may be immaterial as to whether the service is performed with or without user interaction, for example, a document translation service.

This made me wonder about my own involvement in the EU4ALL project, which is exploring processes that enable lecturers to order alternative formats, such as tactile maps or other kinds of materials, for instance.
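
Pulling these fragments together, a BPEL process for something like that document translation scenario has roughly the following shape.  This is a heavily simplified sketch: partner link types, variable declarations and the WSDL details are omitted, and all of the names are mine:

    <!-- Receive a request, invoke a partner service, reply with the result. -->
    <process name="TranslateDocument"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <sequence>
        <receive partnerLink="client" operation="translate"
                 variable="request" createInstance="yes"/>
        <invoke partnerLink="translationService" operation="translate"
                inputVariable="request" outputVariable="response"/>
        <reply partnerLink="client" operation="translate"
               variable="response"/>
      </sequence>
    </process>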

Application Servers

BPEL is represented using something called XML (Wikipedia), which is, of course (more or less) a text file that has lots of structure (created by the enthusiastic use of angled brackets).

BPEL is not the only way to represent or describe business processes (or workflow).  Another approach might be to use something called State Chart XML (SCXML), for instance.  There are probably loads of other data structures or standards you might use.

At this point, you might be asking, "okay, so there are these magic XML data structures that allow you to describe entire processes, but how do you make this stuff real so people can use it?"  The answer is to use something called an Application Server (Wikipedia).

Here, I am again lazy and quote from Wikipedia:

  • Application server products typically bundle middleware to enable applications to intercommunicate with dependent applications, like web servers, database management systems ...

Although an application server may be able to run middleware (and potentially sequence the order in which activities are carried out), we need to add interfaces so people can interact with it.

Always being the pragmatist, I asked myself another question, 'all this sounds like good fun, but where can I find one of these application servers that does all this magic stuff to manage our workflow and processes?'  I don't have a precise answer to this question, but I did find something called Apache ODE.

To quote from the project website,

  • Apache ODE (Orchestration Director Engine) executes business processes written following the WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long and short living process executions to orchestrate all the services that are part of your application.

Another distinction (as well as that between long and short running processes) is between processes that require human intervention (or actions) and those that can run on their own, such as executing a database query or sending messages to another part of a large organisation to request the availability of resources.

All this sounds great!  All I have to do now is to find some time to study this stuff further.

Other approaches

Whilst reading all this stuff, the purpose of other products that had never made sense to me started to become clear.  A couple of years ago, I had heard something called Biztalk mentioned, but never properly understood what it was.  Again, Wikipedia is useful, describing Biztalk (Wikipedia) as

  • a business process management (BPM) server. Through the use of "adapters" which are tailored to communicate with different software systems used in a large enterprise, it enables companies to automate and integrate business processes.

I've not looked into this very deeply, but it also seems that the House of Microsoft might have concocted something of their own called the Windows Workflow Foundation (Wikipedia) which I understand also connects to the topic of BPEL.

Of course, there's a whole other set of terms and ideas that I haven't even looked at.  These include technologies and ideas such as an enterprise service bus (ESB), message queues, message-oriented middleware (MOM), the list goes on and on...

A summary (of sorts)

The issue of service-oriented architecture design goes a lot deeper than simply creating a set of solitary web services running on different systems.  Designers need to consider how to ensure that messages are received successfully, how to address redundancy and how to measure or ensure performance.  The ultimate choice of architectural components and elements depends very much on your requirements, the boundaries of your organisation, your needs for communication and who you need to communicate with.

What I found surprising was the number of technologies that could potentially be used within the project that I'm currently working on.  The ultimate choice of technologies is likely to boil down to the key questions of: 'what do we know about right now?' and 'what is the best thing we can do?'

Footnote

I was going to add a footnote to one of the earlier sections, but because my notes have turned into a blog post, I've decided to put it here.

I like this stuff because it reminds me of two areas that always fight with each other: software maintenance and business process re-engineering.  Business practices can change more quickly than software systems.  The need for process flexibility (and abstraction) is one motivation that has driven the development of things like SOA and BPEL.

This stuff is also interesting because workflow is where the world of 'work' and the world of software nearly combine.  There is another dimension: would you like a computer telling you what to do?  Also, no matter how much we try to be comprehensive in our understanding of a particular institution there will always be exceptions.  Any resulting architecture should ideally try to accommodate change efficiently.

Middleware (in some senses) can have a role to play in terms of gathering information about the performance of services, i.e. how long it takes to carry out certain actions in response to a certain kind of request, and it has the potential to manage the delivery of interventions (such as issue escalation to supervisors) should service quality be at risk.

Acknowledgements

Blog image is a modified version of the one found on the Wikipedia SOA page.  I also cheekily consulted an O'Reilly book when I was preparing an earlier version of these notes, but I've long since returned it to the library (and I can't remember its title).

Christopher Douce

Second International Education for All Conference

Visible to anyone in the world
Edited by Christopher Douce, Thursday, 21 Nov 2019, 11:24

This blog post represents a review of the second International Education for All conference that I was lucky enough to attend in September 09.  I originally intended to post a review earlier, but mitigating circumstances (which I hope will become clear at the end) prevented this.

Like the first conference, held in '07, this conference was also hosted at the University of Warsaw.  The '07 conference represented a finale of an EU project, a collaboration between universities and other organisations from Germany, Poland, Estonia, Croatia and others (please forgive my poor memory!), but this one was slightly different.

Below I attempt to present a brief summary of some of the sessions that I attended.  There were three parallel sessions, so I had to choose carefully.  This is then followed by what I took away in terms of the themes of the conference.

Opening Keynote

The opening keynote was by Dianne Ryndak from the University of Florida.  Dianne explored the topic of inclusion, particularly the differences between generalised and specialised education.  Dianne explained how personalised support and learning activities could be provided as a part of general learning activities.

She went on to present a powerful description of two students: one who was educated within a segregated school, the other who was educated, with the provision of additional support, through a main stream (or general education) school.  I remember her saying that 'education is a service that goes to the student, not a place where the student goes' and that education should be 'only as special as necessary'.

Although Dianne's presentation primarily related to high school education, the themes she highlighted can be directly brought to bear on higher education too.  Technology can be used as a way to help with the inclusion of people with disabilities in mainstream education.  This said, teachers have an important role where they need to be viewed as collaborators as well as educators.

Three dimensional solid science models for tactile teaching materials

I sometimes like to visit museums.  One of the things that frustrates me is the sight of signs that say 'please do not touch!'  This strikes me as particularly unfair when I discover sculptures in art galleries.  Given that sculptors use their haptic sense when creating an object, it seems unfair to deny visitors the possibility of using this same modality.

Yoshinori Teshima, from the Digital Manufacturing Research Centre in Japan, gave an inspiring presentation where he showed a number of different models, ranging from abstract objects (such as polyhedra) through to hugely magnified representations of creatures that can only be seen under a microscope (imagine a microscopic monster the size of your fist!)

Yoshinori briefly spoke about the manufacturing methods, which included stereolithography and 3D printing using either plaster or nylon powder.  His relief-emphasised models of the Earth and Mars were fabulous.

It struck me that his models could be used by all students, regardless of visual abilities.  It also struck me that the ultimate use of such models within the classroom depends ultimately upon the skills and the expertise of the teacher.

Talking Tactile Tablet

I have heard about tactile tablets before.  This presentation demonstrated a product that was included within the assistive technology exhibition also hosted at the conference.

I came away with two points from this presentation.  Firstly, referring to three dimensional objects using two dimensional symbols is a skill that I take for granted.  Secondly, it is now possible for educators to author their own tactile materials.  We were shown how it was possible to create a small family tree.  Audio materials were recorded using Audacity (Wikipedia), which were then associated to positions on a tablet.  Corresponding tactile representations could be produced using embossers.

Opening Linux for the Visually Impaired

This presentation primarily focused upon a screen reader called Sue (Screenreader Usability Extensions) that was developed by the Study Centre for the Visually Impaired Students (SZS) based at the University of Karlsruhe, Germany.  This presentation reminded me a little of a presentation of the Orca screen reader, made at the Aegis project dissemination event that I wrote about a couple of months ago.

One of the interesting things about Sue was that it could be connected to both a refreshable braille display (Wikipedia) (visiting this page was interesting, since it mentioned a new type of refreshable display called a rotating-wheel display) and a screen magnifier at the same time.

Although Sue could be considered to be 'yet another screen reader', having multiple versions of similar products is undeniably, in my view, a good thing.  Competition between individual products, whether in terms of popularity or functionality, can help their development and enhancement.

Distance Education and Training Programme on Accessible Web Design

I was drawn to this presentation due to my involvement in the Open University Accessible Online Learning and Fundamentals of Interaction Design courses.  I was not to be disappointed.  There were some strong echoes with these current courses, but I should say the curriculum is perhaps complementary.

The course was developed as a part of a European project called Accweb, with the intention of creating distance learning materials that could address a need for a professional qualification or a certificate in accessible web design. The materials comprised eighteen units which could be delivered through the ATutor VLE, amounting to the equivalent of 60 points of study.

Key elements of the curriculum included:

  • Fundamentals of web accessibility
  • Guidelines and legal requirements
  • Assistive technologies
  • Accessible content creation (which included issues such as methodology, evaluation, rich internet applications and authoring tools)
  • Design and usability (themes from human-computer interaction)
  • Project development

The materials do not seem to be available through this site at the moment, but I hope they will be available in time.

Helping children to play using robots: IROMEC project experience

This presentation, by Francesca Caprino, outlined the IROMEC project, which is an abbreviation of Interactive Robotic Social Mediators as Companions and a sister project called the adapted robot project.  Francesca began by describing play: what it is, what it can do and considered the effect of play deprivation on the development of children.  Robots, it was argued, can help children with physical disabilities and other impairments participate in play activities.

The robots that were described were essentially toy robots that were modified to allow them to be controlled in different ways.  Future research objectives included the uncovering of new play scenarios, considering how to adapt or modify other robots, assessing the educative and therapeutic outcomes of robot-assisted play interventions, and developing associated guidelines.

Studying Sciences as a Blind Person: Challenges to AT/IT

Joachim Klaus introduced us to the Centre for the Visually Impaired Students, which seemed to have similarities to the Open University Office for Students with Disabilities (more information is available through the services for disabled students portal).

After presenting an overview of the centre, Joachim presented the ICC: International Camp on Computers and Communication.  The ICC 'tries to make young blind and visually impaired students aware what technology can do for them, which computer skills they have to have, where they should put efforts to enhance their technical skills, the level of mobility as well as their social skills'.  Using and learning to use assistive technologies can be difficult.  This international camp, which is held in different countries, can help people become more skilled at using assistive technologies, thus removing substantial barriers to access.

Improving the accessibility of virtual learning environments using the EU4ALL framework

During the conference, I gave a short talk about the EU4ALL project I am working on.  The presentation focussed on an architecture that the project has been creating and how it can improve the accessibility of services that are delivered to students.  The architecture takes into consideration a number of different stakeholders, each of which has a responsibility in helping to deliver accessibility.  My slides are available on-line through the Open University Knowledge Network (presentation information).

At the same time as my presentation, my colleague Elisabeth Unterfrauner from the Centre for Social Innovation, Technology and Knowledge, Vienna, was presenting her research on the socio-economic situation of students with disabilities, also carried out within the EU4ALL project.

Fostering accessibility through Design4All education in mainstream education

Continuing the theme of EU projects, Andrea Petz gave an interesting presentation about universal design, or Design4All.  Andrea has been involved with EDeAN, an abbreviation for the European Design for All e-Accessibility Network.  EDeAN has studied how D4ALL is covered or treated in different universities across Europe and has helped to guide the development of a masters programme.

Andrea began her presentation by saying that D4ALL is not design for the average, but for the widest possible group of users and went on to talk about the principles of universal design (Wikipedia):

  1. Equitable use
  2. Flexibility in use
  3. Accessible information
  4. Tolerance for errors
  5. Simple and intuitive
  6. Low physical effort
  7. Size and space for approach and use

Andrea pointed us towards a conference called the International Conference on Computers Helping People with Special Needs (ICCHP).

Inclusive science strategies

Greg Stefanich added two more to Andrea's list of principles, namely:

  1. Build a community of learners
  2. Create a positive instructional climate, i.e. one that is welcoming.

Greg also connected his talk to the theme of inclusive education by saying 'a person in a regular class room setting will have better relationships with the general public, family and be employed', and emphasised the view that the inclusion of all students will have no real consequence for other students.  An important point is to spend time getting to know the needs of individuals so they can be effectively supported.

Foreign language courses for students with hearing impairments

Ewa Domagała-Zyśk talked about her experience of teaching English to hearing impaired students.  Her presentation made me reflect on my own experience of using the virtual world SecondLife (although Ewa's presentation was mostly about why teaching foreign languages is a good thing and what some of the challenges might be).

Quite some time ago, I was adventurous enough to visit a couple of 'Polish speaking' virtual bars where I tried to interact, using text, with some of the 'regulars' and found myself totally lost.  This experience showed me that this was an interesting and predominantly unthreatening way to help to learn (and understand) written language. It did make me wonder whether virtual worlds (in combination with real-world assistance, of course) might be an interesting way to expose people to new languages.  The issue of whether such environments promote the creation of new dialects is, of course, a whole other issue.

Educational practices towards increasing awareness among academic teachers

Dagmara Nowak-Adamczyk (or one of her colleagues) spoke about the DARE project, a disability awareness project.  The project website states, 'The long-term objective of the project and the DARE Consortium is raising public awareness of disability and the way people with disabilities function in modern (knowledge-based) society'.  The project has produced some training materials which are currently being evaluated.

Access for science students with disabilities in an open distance learning institution in South Africa

I was particularly looking forward to this presentation since its title contained particularly interesting themes.  Eleanore Johannes, from the University of South Africa, spoke about different models of disability, introduced us to the Advocacy and Resource Centre for Students with Disabilities (ARCSWiD) office and spoke of some of the challenges that students face: funding, electricity, inaccessible web pages and training.  She also described a qualitative research project, taking place over a period of three years, that is exploring the experiences of science students with disabilities.

No contradiction: Design4All and assistive technologies

Michaela Freudenfeld introduced the INCOBS web portal that provides information about assistive technologies that are available in Germany.  The INCOBS portal has the objectives of carrying out market surveys of products and services, testing assistive devices and workplace technologies, evaluating the accessibility of software applications and offering seminars for facilitators and advisors.

Understanding educational needs of students with psychiatric disabilities

Enid Weiner, from the Counselling and Disability Services, York University, Canada, gave an impressive hour long talk on the theme of mental health difficulties.  The subtitle of her presentation was 'making the invisible more visible'.

Enid spoke about a number of interesting related themes, such as different illnesses, the effect of medication and the fact that some illnesses can have an episodic nature, which may cause the pace of learning to be slower or learning to take place over an extended period.  Accommodations are to be made on a case by case basis.

Enid emphasised the importance of a 'community of support' and said how important it is for educational providers not to 'get hung up' on an individual diagnosis and instead focus on individual accommodations i.e. what learners need to study.

Environmental influences on participation with disabilities: a Sri Lankan perspective

This was an interesting presentation since it presented a different perspective.  Samanmali Kularatne outlined the situation in Sri Lanka and then described a study that is aiming to explore the experiences of children with disabilities in mainstream schools.

The study adopts a qualitative approach.  Interviews are carried out in class rooms, which are then transcribed and subjected to thematic analysis.  The participants include children, teachers, parents and non-participative observers.  Themes identified included: attitudes, values and beliefs, support and relationships, products and technology, natural and built environment.

The conclusion was that inclusion is not really a reality and that there is a lack of resources.  The actions resulting from the study include further communication with educational authorities, an awareness programme for parents and the launch of a project to try to initiate more inclusion.

How AT can help with learning maths for people who are print disabled

The presentation and manipulation of mathematical notation is a perennial problem.  When some mathematical expressions are translated into spoken language, ambiguities can easily arise.  One proposed solution is to make use of the LaTeX (Wikipedia) language.
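
To see why, consider the expression 'a plus b over c'.  Spoken aloud, it could mean either of the following; written in LaTeX, the two are unambiguous:

    \frac{a+b}{c}      % (a + b) / c
    a + \frac{b}{c}    % a + (b / c)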

The challenge is not just technical.  Acceptance of technology relates to the willingness of learners to accept technological solutions and the effectiveness of teachers in communicating their potential benefits.  It also relies upon educators having both the time and willingness to understand different tools.

Considering different definitions of disability

Paweł Wdówik from the University of Warsaw Office for Persons with Disabilities spoke about the different definitions of disability and the differences that exist between primary and secondary education.  Paweł highlighted that, under the medical model, if you don't have a medical diagnosis then a disability is not likely to be recognised.  As a result, people who do have disabilities are likely to slip through the system.  Paweł emphasised that the views of the individual should always be fundamental.

Working towards inclusive education in Europe

Amanda Watkins from the European Agency for Development in Special Needs Education spoke about the agency's different projects and mentioned the UNESCO Salamanca statement.  She asked the question, 'how do we change our systems so they are inclusive from the beginning?' and made the point that inclusion needs to take account of all peoples and groups.  Amanda emphasised that inclusion is a process, not a state.

Closing Keynote

The final presentation of the conference was made by Yvonne Bonner, Reggio Emilia, Italy.  Yvonne presented a range of very thought-provoking images.  She asked the question, 'why work in this area?'  She answered philosophically (and, of course, I paraphrase) that by working in this area you are considering (and hopefully challenging others to enhance) the rights of all people.  It was a great way to close the conference.

Conference themes

One of the ways that this conference differed from the previous event was that there were more distinct themes.  Whereas the previous event focused quite a lot on an EU project that had just concluded, this conference, I felt, was slightly more wide ranging (but this could be because I was more attuned to what to look out for).

One of the more prominent themes was the debate surrounding inclusion and exclusion, or more specifically, how to help ensure that people with additional needs can be effectively brought into mainstream education.  This theme was clearly articulated within the opening and closing keynote speeches.  There are differences between countries (and the models of disability that are applied), and sharing understandings of definitions was certainly considered to be important.

Another theme was the need to listen to the individual.  I recall two projects that are aiming to learn more about the experience of individuals.

Quite a few of the presentations related to on-going projects and programmes.  Ideally I would have liked to go away with a fuller set of conference proceedings to help me remember what was said and recall the arguments that were presented.  Perhaps, as the conference series proceeds, this is something that the organisers may consider, hopefully without detracting from an underlying sense that delegates are happy to discuss, share and learn from each other's practice.

Addendum

After the conference ended I had some free time.  It was suggested that a fun thing to do would be to go hiking in the Tatra mountains.  I had been told a lot about a town called Zakopane, how it once exerted a strong draw for artists and Bohemians, and how a special cheese was likely to be sold everywhere in the town.  I was not to be disappointed.  As well as offering extraordinary mountains and restaurants, the town ensured that tourists walking down the main street would stumble across cheese purveyors (Wikipedia) every thirty or so metres.

My choice of words is no accident; instead, I was embroiled in one.  Rather than figuratively stumbling across cheese sellers, I literally stumbled down the side of a mountain and broke my arm (although I should add that the stumble was a relatively modest one of about forty or so centimetres).  Visiting the accident and emergency ward was an experience, where a sharp-witted paramedic, upon hearing that I was from the UK, asked, 'were you walking on the wrong side of the path?'  There were more jokes, but their charm has long since worn away!

The upshot of the accident was that I found myself temporarily disabled, my dominant arm immobilised in plaster.  It all came as a bit of a shock.  Simple tasks suddenly became trials of patience and took considerably longer, if I could figure out how to do them at all.  Shoe laces became a liability and shirt buttons became almost impossible.  I had to go about cancelling events and activities that were scheduled for the time after I was to return to the UK.

Upon returning to the UK, my motivation levels nosedived.  I quickly recalled the universal design principles when doors became difficult to open and jars and cans a frustrating challenge.  I had also entered the UK health system, but had no real sense of whom I should ask for information.  Whilst monolithic institutions are there to help, individuals are necessary to provide support and peace of mind.

Despite my frustrations, I still had an overwhelming desire to do stuff.  Although I wasn't able to travel to a weekend meeting I was scheduled to attend, one of my colleagues from the University of Leeds helped me to participate.  Using Skype text chat, it was possible to take part in a short group discussion activity, which helped to lift my mood.  This showed me what a positive impact technology can have by providing effective alternative ways of communicating.  It also distinctly underlined one of the conference themes: the importance (and power) of inclusion.

I think I'm on the mend now, but I understand it's going to take some time.  All in all, the trip (both to the conference, and down part of a mountain) was an interesting experience.  I've certainly learnt a few things.

Christopher Douce

Aegis Project: Open accessibility everywhere

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:20

Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards

I recently attended a public dissemination event that was held by the AEGIS project, hosted by the European headquarters of the company that developed the Blackberry, Research in Motion.

The Aegis project strapline contains three interesting keywords: groundwork, infrastructure and standards.  When I heard about the project from a colleague, I was keen to learn what lay hidden underneath these words and how they connect to the subject of accessibility.

The Aegis project website states that it 'seeks to determine whether 3rd generation access techniques will provide a more accessible, more exploitable and deeply embeddable approach in mainstream ICT (desktop, rich Internet and mobile applications)' and goes on to say that the project will explore these issues through the development of an Open Accessibility Framework (or OAF).  This framework, it is stated, 'provides embedded and built-in accessibility solutions, as well as toolkits for developers, for "engraving" accessibility in existing and emerging mass-market ICT-based products'.  The website also states that the users of assistive technologies will be placed at the centre of the project.

The notion of the 'generations' of access techniques is an interesting concept that immediately jumped out at me when reading this description (i.e. what is the third generation and what happened to the other two generations?), but more of this a little later on.

Introductory presentations

The dissemination day began with a couple of contextualising presentations that outlined the importance of accessibility.  A broad outline of the project was given by the project co-ordinator, who emphasised the point that the development of accessibility requires the co-operation of a large number of different stakeholders, ranging from expert users of assistive technology (AT) to tutors and developers.

There was a general call for those who are interested in the project to 'become involved' in some of the activities, particularly with regards to understanding different use cases and requirements.  I'm sure the project co-ordinator will not be offended if I provide a link to the project contacts page.

AT Generations

The next presentation was made by Peter Korn of Sun Microsystems who began by emphasising the point that every hour (or was it every second?) hundreds of new web pages are created (I forget the exact figure he presented, but the number is a big one).  He then went on to outline the three generations of assistive technologies.

The first generation of AT could be represented by the development of equipment such as the Optacon (wikipedia), an abbreviation for Optical to Tactile Converter.  This was the first time I had heard of this device, and it represented the first 'take away' lesson of the day.  The Wikipedia page looks to be a great summary of its development and its history.

One thing the Optacon lacked was an explicit link to a personal computer.  The development of the PC gave way to a new, second generation of AT that served a wider group of potential users.  This generation saw the emergence of specialist AT software vendors, such as companies who develop products like screen readers and screen magnifiers.  Since computer operating systems continue to develop and hardware is continually changing (in terms of increases in processing power), unique pressures are placed on the users of assistive technology.

For some AT systems to operate successfully, developers had to apply a number of clever tricks.  Imagine a brand new application package, such as a word processing program, that had been developed for the first generation of PCs.

The developers of such an application would not be able to write code in such a way that allows elements of the display to be presented to users of assistive technology.  One solution for AT vendors was to rely on tricks such as the reading of 'video memory' to convert on-screen displays into a form that could be presented to users with visual impairments using synthetic speech.

The big problem of this second generation of AT is that when there is a change to the underlying operating system of a computer it is possible that the 'back door' routes that assistive technologies may use to gain access to information may become closed, making AT systems (and the underlying software) rather brittle.  This, of course, leads to a potential increase in development cost and no end of end user frustration.

The second generation of AT is said to have existed between the late 1980s and the early 2000s.  The third generation of AT aims to overcome these challenges: operating systems and other applications begin to provide a series of standardised Accessibility Application Programming Interfaces (AAPIs).

This means that different suppliers of assistive technology can write software that uses a consistent interface to find out what information could be presented to an end user.  An assistive technology, such as a screen reader, can ask a word processor (or any other application) questions about what could be presented.  An AAPI could be considered as a way that one system can ask questions about another.
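To make this concrete, here is a minimal sketch using Java's built-in accessibility API (the javax.accessibility package, which the Java Access Bridge mentioned below exposes to Windows-based AT).  The 'Save' button is simply an invented example; a real screen reader would perform this kind of interrogation through the platform's AAPI rather than through in-process calls:

```java
import javax.accessibility.AccessibleContext;
import javax.swing.JButton;

public class AccessibilityDemo {
    public static void main(String[] args) {
        // A Swing component describes itself to assistive technology
        // through its AccessibleContext.
        JButton saveButton = new JButton("Save");
        saveButton.getAccessibleContext()
                  .setAccessibleDescription("Saves the current document");

        // An assistive technology can now 'ask questions' about the
        // component through the same standardised interface.
        AccessibleContext context = saveButton.getAccessibleContext();
        System.out.println("Name: " + context.getAccessibleName());
        System.out.println("Role: " + context.getAccessibleRole());
        System.out.println("Description: " + context.getAccessibleDescription());
    }
}
```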

Other presentations

Whilst an API can, in some respects, represent one type of standard, there are a whole series of other standards, particularly those from the International Organization for Standardization (ISO) (and other standards bodies), that can provide useful guidance and assistance.  A further presentation outlined the complex connections between standards bodies and underlined the connection to the development of systems and products for people with disabilities.

A number of presentations focussed on technology.  One demonstration used a recent release of the OpenSolaris operating system (which makes use of the GNOME desktop system) to demonstrate how the Orca screen reader can be used in conjunction with application software such as OpenOffice.

With all software systems, there is often loads of magic stuff happening behind the scenes.  To illustrate some of this magic (like the AAPI being used to answer questions), a Gnome application called Accerciser was used.  This could be viewed as a software developer utility.  It is intended to help developers to 'check if an application is providing correct information to assistive technologies'.

OpenOffice can be extended (as far as I understand) using the Java programming language.  Java can be considered as a whole software development framework and environment.  It is, in essence, a virtual machine (or computer) running on a physical machine (the one that your operating system runs on).

One of the challenges that the developers of Java had to face was how to make its user interface components accessible to assistive technology.  This is achieved using something called the Java Access Bridge.  This software component, in essence, 'makes it possible for a Windows based Assistive Technology to get at and interact with the Java Accessibility API'.

On the subject of Java, one technology that I had not heard of before is JavaFX.  I understand this to be a Java based language that has echoes of Adobe Flash and Microsoft Silverlight about it, but I haven't had much time to study it.  The 'take home' message is that rich internet applications (RIA) need to be accessible too, and having a consistent way to interface with them is in keeping with the third generation approach to assistive technologies.

Another presentation made use of a Blackberry to demonstrate real time texting and show the operation of an embedded screen reader.  A point was made that the Blackberry makes extensive use of Java, which was not something that I was aware of.  There was also a comment about the importance of long battery life, an issue that I have touched upon in an earlier post.  I agree, there is nothing worse than having to search for power sockets, especially when you rely on technology.  This is even more important if your technology is an assistive technology.

Towards the fourth generation

Gregg Vanderheiden gave a very interesting talk where he mentioned different strategies that could be applied to make systems accessible, such as making adaptations to an existing interface, providing a parallel interface (for example, you can carry out the same activities using a keyboard or a mouse), or providing an interface that allows people to 'plug in' or make use of their own assistive technology.  One example of this might be to use a software interface through an API, or to use a hardware interface, such as a keyboard, through the use of a standard interface such as USB.

Gregg's talk made me think about an earlier question that I had asked during the day, namely 'what might constitute the fourth generation of assistive technologies?'  In many respects this is an impossible question to answer since we can only identify generations when they have passed.  The present and especially the future will always remain perpetually (and often uncomfortably) fuzzy.

One thought that I had regarding this area firmly connects to the areas of information pervasiveness and network ubiquity.  Common household equipment such as central heating systems and washing machines remains resolutely unfathomable to many of us.  I have heard researchers talking about the notion of 'networked homes', where it is possible to control your heating system (or any other device) through your computer.

I remember hearing a comment from a delegate at the Open University ALPE project workshop, who said, 'the best assistive technologies are those that benefit everyone, regardless of disability, such as optical character recognition'.  But what of a home of networked household goods which can potentially offer their own set of wireless accessible interfaces?  What benefit can such products provide for users who do not have an immediate need for an accessible interface?

The answer could lie with increasing awareness of the subject of energy consumption and management.  Washing machines, cookers and heating systems all consume energy.  Exposing information about the energy consumption of different products could allow households to manage energy expenditure more effectively.  In my view, the need for 'green' systems and devices may facilitate the development and introduction of products that could potentially contain lightweight device-level accessibility APIs.

Further development directions

One of the most interesting themes of the day was the idea of the accessibility API that has made the third generation of assistive technologies what they are today.  A minor comment that featured during the day was the question of whether we might be able to make our software development tools and environments accessible.  Since accessibility and usability are intrinsically connected, a related question arises: 'are the current generation of accessibility APIs as usable as they could be?'

Put another way, if the accessibility APIs themselves are not as usable as they could be, this might reduce the number of software developers who may make use of them, potentially reducing the accessibility of end user applications (and frustrating the users who wish to make use of assistive technologies).

Taking this point, we might ask, 'how could we test (or study) the accessibility of an API?'  Thankfully, some work has already been carried out in this area and it seems to be a field of research that is becoming increasingly active.  A quick search yields a blog post which contains a whole host of useful resources (I recommend the Google TechTalk that is mentioned).  There is, of course, a presentation on this subject that I gave at an Open University conference about two years ago, entitled Connecting Accessibility APIs.

It strikes me that a useful piece of research would be to explore how to conduct studies that evaluate the usability of the various accessibility APIs and whether they might be improved in some way.  We should do whatever we can to try to smooth the development path for developers.  Useful tools, in the form of APIs, have the potential to facilitate the development of useful and accessible products.

And finally...

Towards the end of the day delegates were told about a site called RaisingTheFloor.net (RTF).  RTF is described as a consortium of organizations, projects and individuals from around the world 'that is focused on ensuring that people experiencing disabilities, literacy problems, or the effects of aging are able to access and use all of the information, resources, services and communities available on or through the Web'.  The RTF site provides a wealth of resources relating to different types of assistive technologies, projects and stakeholders.

We were also told about an initiative that is a part of Aegis, called the Open Accessibility Everywhere Group (OAEG).  I anticipate that more information about OAEG will be available in due course.

I also heard about the BBC MyWebMyWay site.  One of the challenges for all computer users is learning and knowing about the range of different ways in which your system can be configured and used.  Sites like this are always a pleasure to discover.

Summary

It's great to go to project dissemination events.  Not only do you learn about what a project aims to achieve, but the presentations can often inspire new thoughts and point you toward new (and interesting) directions.  As well as learning about the Optacon (which I had never heard of before), I also enjoyed the description of the different generations of assistive technologies.  It was also interesting to witness the various demonstrations and be presented with a teasing display of the complexities that very often lie hidden within the operating system of your computer.

The presentations helped me to connect the notions of the accessibility API and pervasive computing.  It also reminded me of some research themes that I still consider to be important, namely, the usability of accessibility APIs.  In my opinion, all these themes represent interesting research directions which have the fundamental potential of enhancing the accessibility and usability of different types of technologies.

I wish the AEGIS project the best of luck and look forward to learning more about their research findings.

Acknowledgements

Thanks are extended to Wendy Porch who took the time to review an earlier draft of this post.

Christopher Douce

Formative e-assessment dissemination day

Visible to anyone in the world
Edited by Christopher Douce, Monday, 19 Nov 2018, 10:40

A couple of weeks ago I was lucky enough to be able to attend a 'formative e-assessment' event that was hosted by the London Knowledge Lab.  The purpose of the event was to disseminate the results of a JISC project that had the same title.

If you're interested, the final project report, Scoping a Vision for Formative e-Assessment is available for download.  The slides for this event are also available, where you can also find Elluminate recordings of the presentations.

This blog post is a collection of randomly assorted comments and reflections based upon the presentations that were made throughout the day.  They are scattered in no particular order.  I offer them with the hope that they might be useful to someone!

Themes

The keynote presentation had the subtitle, 'case stories, design patterns and future scenarios'.  These words resonated strongly with me.  Being a software developer, the notion of a design pattern (wikipedia) is one that was immediately familiar.  When you open the Gang of Four text book (wikipedia) (the book that defines them), you are immediately introduced to the 'architectural roots' of the idea, which were clearly echoed in the first presentation.

The idea of a pattern, especially within software engineering, is one that is powerful since it provides software developers with an immediate vocabulary that allows effective sharing of complex ideas using seemingly simple sounding abstractions.  Since assessment is something that can be described (in some sense) as a process, it was easy to understand the objective of the project and see how the principle of a pattern could be used to share ideas and facilitate communication about practice.
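As a small illustration (my own, not one from the presentation), consider the Observer pattern from the Gang of Four book.  Saying 'Observer' to another developer conveys, in a single word, the whole notify-all-the-listeners structure sketched below:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal Observer pattern: a Subject notifies every attached
// Observer when something of interest happens.
interface Observer {
    void update(String event);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void attach(Observer observer) {
        observers.add(observer);
    }

    void notifyObservers(String event) {
        for (Observer observer : observers) {
            observer.update(event);
        }
    }
}

public class PatternDemo {
    public static void main(String[] args) {
        Subject marking = new Subject();
        marking.attach(event -> System.out.println("Tutor sees: " + event));
        marking.attach(event -> System.out.println("Student sees: " + event));
        marking.notifyObservers("assignment marked");
    }
}
```

In my reading, a formative e-assessment pattern aspires to do for teaching practice what names like Observer do for code: compress a well-understood structure into a shared word.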

Other terms that jumped out at me were 'case stories' and 'scenarios'.  Without too much imagination it is possible to link these words to the world of human-computer interaction.  In HCI, the path through systems can be described in terms of use cases, and the settings in which products or systems could be used can be explored through the deployment of descriptive scenarios and the sketching of storyboards.

Conversational framework

A highlight of the day, for me, was a description of Laurillard's conversational framework.  I have heard about it before but have, so far, not had much of an opportunity to study it in great detail.  Attending a presentation about it and learning about how it can be applied makes a conceptual framework become alive.  If you have the time, I encourage you to view the presentation that accompanies this event.

I'm not yet familiar enough with the model to summarise it eloquently, but I should state that it allows you to consider the role of teachers and learners, the environment in which the teacher carries out the teaching, and the space where a learner can carry out their own work.  The model also takes account of the conversations (and learning) that can occur between peers.

Representation of the conversational framework, which presents spaces for teacher, learner and other practice, and links between teacher, student and peers, indicating the types of conversations that can facilitate learning

During the presentation, I noted (or paraphrased) the words: 'the more iterations through the conversational model you do, the higher the quality of the learning you will obtain'.  Expanding this slightly, you could perhaps restate this by saying, 'the more opportunities to acquire new ideas, reflect on actions and receive feedback, the more familiar a learner will become with the subject that is the focus of study'.

In some respects, I consider the conversational framework to be a 'meta model' in the sense that it can (from my understanding) take account of different pedagogical approaches, as well as different technologies.

Links to accessibility

Another 'take away' note that I made whilst listening to the presentation was, 'learning theories are not going to change, but how these are used (and applied) will change, particularly with regards to technology'.

It was at this point when I began to consider my own areas of research.  I immediately began to wonder, 'how might this model be used to improve, enhance or understand the provision of accessibility?'  One way to do this is to consider each of the boxes and the arrows that are used to graphically describe the framework.  Many of the arrows (those that are not labelled as reflections) may correspond to communications (or conversations) with or between actors.  These could be viewed as important junctures where the accessibility of the learning tools or environments being applied needs to be considered.

Returning to the issue of technology, peers, for instance, may share ideas by posting comments to discussion forums.  These comments could then be consumed by other learners (through peer learning) and potentially permit a reformulation or strengthening of understandings.

Whilst learning technologies can permit the creation of digital learning spaces, such as those available through the application of virtual learning environments, designers of educational technologies need to take account of the accessibility of such systems to ensure that they are usable for all learners.

One of my colleagues is one step ahead of me.  Cooper writes, in a recent blog post, 'Laurillard uses [her framework] to analyse the use of media in learning. However this can be further extended to analyse the accessibility of all the media used to support these different conversations.'  The model, in essence, can be used to understand not only whether a particular part of a course is accessible (the term 'course' is used loosely here), but also to highlight whether there are some aspects of a course that may need further consideration to ensure that it is as fully inclusive as it could be.

Returning to the theme of 'scenario', one idea might be to use a series of case studies to further consider how the framework might be used to reason about the accessibility status of a course.

Connections

There may be quite a few more connections lurking underneath the terms that were presented to the audience.  One question that I asked myself was, 'how do these formative assessment patterns relate to the idea of learning designs?' (a subject that is the focus of a number of projects, including Cloudworks, enhancements to the Compendium authoring tool, the LAMS learning activity management system and the IMS learning design specification).

A pattern could be considered as something that could be used within a part of a larger learning design.  Another thought is that perhaps individual learning designs could be mapped onto specific elements of the conversational model.  Talking in computing terms, it could represent a specific instantiation (or instance).  Looking at it from another perspective, there is also the possibility that pedagogical patterns (whether e-assessment or otherwise) may provide inspiration to those who are charged with either constructing new or using existing learning designs.

Summary

During the course of the day, the audience were directed, on a number of occasions, to the project Wiki.  One of the outcomes of the project was a literature review, which can be viewed on-line.

I recall quite a bit of debate surrounding the differences between guidelines, rules and patterns.  I also see links to the notion of learning designs.  My understanding is that, depending on what you are referring to and your personal perspective, it may be difficult to draw clear distinctions between each of these ideas.

Returning to the issue of the conversational model being useful to expose accessibility issues, I'm glad that others before me have seen the same potential connection and I am now wondering whether there are other researchers who may have gone even further in considering the ways that the framework might be applied.

In my eyes, the idea of e-assessment patterns and the notion of learning designs are concepts that can be used to communicate and share distilled best practice.  It will be interesting to continue to observe the debates surrounding these terms to see whether a common vocabulary of useful abstractions will eventually emerge.  If they already exist, please feel free to initiate a conversation.  I'm always happy to learn.

Acknowledgements

Thanks are extended to Diana Laurillard who gave permission to share the presentation slide featured in this post.

Christopher Douce

Green Code

Visible to anyone in the world
Edited by Christopher Douce, Friday, 3 Jan 2020, 18:34

Photograph of a beautiful young fern that is unfolding

It takes far too long for my desktop PC to finish booting up every morning.  From the moment I throw the power switch of my aging XP machine to the on position and click on my user name, I have enough time to walk to the kitchen, brew a cup of tea, do some washing and tidying up and drink half of my cup of tea (or coffee) before I can begin to load all the applications that I need before settling down to do some work.

I would say it takes over fifteen minutes from the point of power up to being able to do some 'real stuff'.  All this hanging around inevitably sucks up quite a bit of needless energy.  Even though I do have some additional software services installed, such as a database and a peer-to-peer TV application, I don't think my PC is too underpowered (it's a single core running just over a gigahertz with half a gig of memory).

Being of a particular age, I have fond memories of the time when you turned on a computer and the operating system (albeit a much simpler one) was almost instantly available. Ignoring the need to load software from cassettes or big floppy disks, you could start to issue commands and do useful stuff within seconds of powering up.

This is one of the reasons why I like my EEE netbook (wikipedia): if I have an idea for something to write or want to talk to someone or find something out, then I can turn it on and within a minute or two it is ready for use. (As an aside, I remember reading in Insanely Great by Steven Levy (Amazon) the issue of boot up time was an important consideration when designing the original Macintosh).

Green Code

These musings make me wonder about the notion of 'green code': computer software that is designed in such a way that it supports necessary functionality by demanding a minimal amount of processor or memory resources. Needless to say, this is by no means an original idea. It seems that other people are thinking along similar lines.

In a post entitled, Your bad code is killing my planet, Alistair Croll writes, 'Once upon a time, lousy coding didn't matter. Coder Joel and I could write the same app, and while mine might have consumed 50 percent of the machine's CPU whereas his could have consumed a mere 10 percent, this wasn't a big deal. We both paid for our computer, rackspace, bandwidth, and power.'

Croll mentions that software is often designed in terms of multiple levels of abstraction. He states that there can be a lot of 'distance and computing overhead between my code and the electricity of each processor cycle'. He goes on to write, 'Architecture choices, and even programming language, matter'. Software architecture choices do matter and abstractions are important.
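As a small, concrete illustration of the kind of choice Croll is alluding to (my own example, not his), the two loops below produce exactly the same string but demand very different amounts of processor work:

```java
// Two ways of building a long string in Java: the same result,
// but very different amounts of work for the processor.
public class GreenStrings {
    public static void main(String[] args) {
        int n = 50_000;

        // Naive: each += copies the whole string built so far,
        // so the loop does O(n^2) work in total.
        String slow = "";
        for (int i = 0; i < n; i++) {
            slow += "x";
        }

        // Frugal: StringBuilder appends in place, O(n) work.
        StringBuilder fast = new StringBuilder();
        for (int i = 0; i < n; i++) {
            fast.append("x");
        }

        System.out.println(slow.length() == fast.length()); // prints true
    }
}
```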

Green Maintenance

Making code that is efficient is only part of the story. Abstractions allow us to hide complexity. They help developers to compartmentalise and manage the 'raw thought stuff' which is computer code. Well designed abstractions can give software developers who are charged with working and maintaining existing systems a real productivity boost.

Code that is easier to read and work with is likely to be easier to maintain. Maintenance is important since some researchers report that maintenance accounts for up to 70% of the costs of a software project.

In my opinion, clean code equals green code. Green code is code that should be easy to understand, maintain and adapt.

Green Challenges

Croll, however, does have a point. Software engineers need to be aware of the effect that certain architectural choices may have on final system performance.

In times when IT budgets may begin to be challenged (even though IT may be perceived as technology that can help to create business and information process efficiencies), the request for an ever more powerful server may be frowned upon by those who hold the budgetary purse strings. You may be asked to do more with less.

This challenge exposes a fundamental computing dilemma: code that is as efficient as it could be may be difficult to understand and work with. Developers have to consider such challenges carefully and walk a careful path of compromise. Just as there is an eternal trade-off between the speed of a system and how much power is consumed, there are also difficult trade-offs to consider in terms of efficiency and clarity, along with the dimensions of system flexibility and functionality.

One of the reasons why Microsoft Vista is not considered to be popular is the issue of how resource hungry it is in terms of memory, processor speed and disk drive space. Microsoft, it seems, is certainly aware of this issue (InfoWorld).

Turning off some of the needless eye candy, such as neatly shaded three dimensional buttons, can help you to get more life out of your PC. This is something that Ted Samson mentions, before edging towards discussing the related point of power management.

Ted also mentions one of those well known laws of computing. He writes, 'just because there are machines out there that can support enormous system requirements doesn't mean you have to make your software swell to that footprint'. In other words, 'your processor and disk space needs expand to the size of your machine' (another way of saying 'your project expands to the amount of time you have available'!)

Power Budgets

Whilst I appreciate my EEE PC in terms of its quick boot up time, it does have an uncomfortable side effect: it also acts as a very effective lap warmer. Even more surprisingly, its batteries are entirely depleted within slightly over two hours of usage. This is terrible! A mobile device should not be tethered to a mains power supply. It also makes me wonder about whether its incessant demand for power is going to cut short the life of its batteries (which represent their own environmental challenge).

When working alongside electrical engineers, I would occasionally overhear them discussing power budgets, i.e. how much power would be consumed by the components of a larger electrical system. In terms of software, both laptop and desktop PCs offer a range of mysterious software interfaces that provide 'power management' functionality. This is an area of modern PCs that remains a perpetual mystery to me, and one that I have not substantially explored or studied. It is certainly something that I need to do something about.

Sometimes, the collaboration between software developers and hardware engineers can yield astonishing results. I again point towards the One Laptop per Child project. I remember reading some on-line discussions that described changes that were made to the Linux operating system kernel to make the OLPC device more power efficient. A quick search quickly throws up an Environmental Impact page.

The OLPC device, whether you agree with the objective of the OLPC project or not, has had a significant impact on the design of laptop systems. A second version of the device raises the possibility of netbooks using the energy efficient ARM processor (wikipedia) - the same processor that is used (as far as I understand) in the iPhone and iPod. I, for one, look forward to using a netbook that doesn't unbearably heat up my lap and allows me to do useful work without needlessly wasting time searching for power sockets.

My desktop computer (which was assembled by my own fair hands) produces a side effect that is undeniably useful during the winter months: it perceptibly heats up my room, almost allowing me to dispense with other forms of heating entirely (but I must add that a chunky jumper is often necessary). When I told someone else about this phenomenon, I was asked, 'big computer or small room?' The answer was, inevitably, 'small room' (and small computer).

Google

On a related note, I was recently sent a link to a YouTube video entitled Google container data centre tour. It was astonishing (and very interesting!) It was astonishing due to the sheer scale of the installation that was presented, and interesting in terms of the industrial processes and engineering that were described. It reminded me of a news item that was featured in the media earlier this year that related to the carbon cost of carrying out a Google search.

The sad thing about the Google data centre (and, of course, most power plants) is that most of the heat that is generated is wasted. I recently came across this article, entitled Telehouse to heat homes at Docklands. Apparently there are other schemes to use data centres for different kinds of heating.

Before leaving Google alone, you might have heard of a site called Blackle. Blackle takes the Google homepage and inverts it. The argument is that if everyone uses a black search page, large power savings can be made.

Mark Ontkush describes the story of Black Google and others in a very interesting blog post which also mentions other useful ideas, such as the use of Firefox extensions. Cuil is another search engine (pronounced 'cool') that embodies the same idea.

Carbon Cost of Spam

I recently noticed a news item entitled Spam e-mails killing the environment (ITWorld). Despite the headline having a passing resemblance to headlines that you would find on the Daily Mail, I felt the article was worth a look. It references a more sensibly titled report, The carbon footprint of email spam, published by McAfee.

The report is interesting, pointing towards the fact that we may spend a lot of time both reading and processing junk emails that end up in our inboxes. The article has an obvious agenda: to sell spam filters. An effective spam filter, it is argued, can reduce the amount of time that email users spend processing spam, thus helping to save the planet (a bit). Spam can fill up email servers, causing network administrators to use bigger disks. To be effective, email servers need to spend time (and energy) filtering through all the messages that are received. I do sense that more research is required.

Invisible Infrastructures

There is a further connection between the challenge of spam and the invisible infrastructure of the internet. Messages to your PC, laptop or mobile device pass through a range of mysterious switches, routers and servers. At each stage, energy is mysteriously consumed and paid for by an invisible set of financial transactions.

My own PC, I should add, is not as power friendly as it could be. It contains two hard disk drives: a main drive that contains the operating system, and a secondary drive that contains backup files and also a 'swap' area. The main reason for the second drive is to gain a performance boost.

Lower power PCs

After asking the question, 'how might I create an energy efficient PC?', I discovered an interesting article from Ars Technica entitled It's easy being green. It describes each of the components of a PC and considers how much power they can draw. The final page features a potential PC setup in the form of 'an extreme green box'.

It is, however, possible to go further. The Coding Horror blog presents one approach: use kit that was intended for embedded systems - a domain where power consumption is high on the design agenda. An article, entitled Building Tiny, Ultra Low Power PCs is a fun read.

Both articles are certainly worth a view. One other cost that should be considered, however, is the cost of manufacturing (and also recycling) your existing machine. I don't expect to change my PC until the second service pack for Windows 7 is released. It's going to be warming my room for quite some time, but perhaps the carbon consumption stats that relate to PC manufacture and disposal are out there somewhere and may help me to make a decision.

Concluding thoughts

Servers undeniably cost a lot of money not only in terms of their initial purchase price, but also in terms of how much energy they consume over their lifetime.

Efficient software has the potential to reduce server count, allowing more to be achieved with less. Developers should aspire to write code that is as efficient as possible, and take careful account of the underlying software infrastructures (and abstractions) that they use. At the heart of every software development lies a range of challenging compromises. It often takes a combination of experience and insight to figure out the best solution, but it is important to take account of change, since the majority of the lifetime of any software system is likely to be spent in the maintenance phase.

The key to computing energy reduction doesn't only rest with computer scientists, hardware designers and software engineers. There are wider social and organisational issues at play, as Samson hints at in an article entitled No good excuses not to power down PCs. The Open University has a two page OU Green computing guide that makes a number of similar points.

One useful idea is to quantify computer power in terms of megahertz per milliwatt (MPM) instead of millions of instructions per second (MIPS) - I should add that this isn't my idea and I can't remember where it came from! It might be useful to try to establish a new aspirational computing 'law'. Instead of constantly citing Moore's law, which states that the number of transistors should double every two years, perhaps we need to edge towards proposing a law that demands a reduction in power consumption whilst maintaining 'transactional performance'. In this era of multi-core multi-function processors, this is likely to be a tough call, but I think it's worth a try.
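To show what the measure would look like in practice (with entirely made-up figures of my own):

```latex
% Illustrative MPM calculations, invented numbers:
\mathrm{MPM}_{\text{desktop}}  = \frac{2000\ \text{MHz}}{50\,000\ \text{mW}} = 0.04 \\
\mathrm{MPM}_{\text{embedded}} = \frac{600\ \text{MHz}}{3\,000\ \text{mW}}  = 0.20
```

On this measure the 'slower' embedded part is five times better than the desktop part, which is exactly the kind of comparison such a law would encourage.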

One other challenge is whether it might be possible to crystallise what is meant by 'green code', and whether we can explore what it means by constructing good or bad examples. The good examples will run on low powered, slower hardware, whereas the bad examples are likely to be sluggish and unresponsive. Polling (constantly checking to see whether something has changed) is obviously bad. Ugly, inelegant, poorly structured and hard to read (and hard to change) code could also be placed in a box named 'bad'.
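A minimal Java sketch of the polling point (my own example): the consumer below sleeps inside a blocking call until work actually arrives, whereas the commented-out busy-wait would burn processor cycles, and therefore power, achieving nothing.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PollingVersusBlocking {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> events = new LinkedBlockingQueue<>();

        // 'Green' consumer: take() parks the thread until an event
        // arrives, consuming essentially no processor cycles.
        Thread consumer = new Thread(() -> {
            try {
                String event = events.take(); // blocks; no busy-waiting
                System.out.println("Handled: " + event);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // The 'bad' alternative (not run here) would spin:
        //   while (events.isEmpty()) { /* burn cycles, drain battery */ }

        events.put("key pressed");
        consumer.join();
    }
}
```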

A final challenge lies with whether it might be possible to explore what might be contained within a sub-discipline of 'green computing'. It would be interesting to see where this might take us.

Acknowledgements: Photograph of a fern, entitled 'Mandelbrot En Vert', has been liberated from Flickr. It is licensed under creative commons by GaijinSeb.

Christopher Douce

Retro learning technology

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:15

A photograph of a speak and spell toy

I was recently told about a conference called Game Based Learning.  Although I wasn't able to attend in person (and besides, my main research interests are perhaps somewhat tangential to the topic), the subject remains of perpetual interest.  One of my colleagues, Liam Green-Hughes, who was lucky enough to be there a couple of weeks ago, has written a comprehensive blog post describing some of the themes and presentations.

The appearance of this conference made me begin to reflect on my own experiences of what 'game based learning' means to me.  It soon struck me that many of my experiences are from an earlier generation of learning technologies (and I'm not just talking about books!)  This post represents a personal history (perhaps a 'retro' history) of using a number of different 'learning' games and devices.  I hope, for some, it evokes some fun memories!

Early mobile learning

My first exposure to mobile learning (or learning games) was through a device called Spelling B made by Texas Instruments (now more famous for their DSP processors).

The device presented a truly multi-modal experience.  Not only did it contain an alphabetic keyboard, it was also adorned with bright colours and came complete with a picture book.  Quite a proportion of the picture book was initially unfathomable since you had to translate the pictures into correctly spelt words.

For quite a long time after using the device, I continued to spell 'colour' incorrectly (or correctly, depending upon your point of view), and believed that all people with peaked hats (who were not witches) were Pilgrims (the historical importance of which was totally lost on a naive British child).

If you spelt some words correctly you got a chirpy bleeping (or buzzing) tune.  If you spelt something incorrectly, you were presented with a darker rasping noise.

After around two months of playing, it was game over.  I was able to spell, by rote, the one hundred and twenty seven words (or close to it), none of which probably had more than eight characters.

One Christmas I was lucky enough to graduate to the more elegantly designed and ultimately more mind blowing Speak and Spell (wikipedia).  It was astonishing to learn that something with batteries could speak to you!  Complete with integral handle, rugged membrane board (which you could spill your orange squash onto), an audio connection to prevent parents from going potty (like mine did), and a battery charge length that would put any modern laptop to shame.  In my view, the Speak and Spell is a design triumph and you might be able to see some similarities with the OLPC XO computer (if you squint hard enough).

To this day, I remember (without looking at simulations) its rhythmic incantations of 'that is correct! And now spell...' which punctuated my own personal successes and failures.

Learning technology envy

This era of retro learning technology didn't end with the Speak and Spell.  One Christmas, Santa gave a group of kids down my street a mobile device called the Little Professor, also from Texas Instruments.  Pre-dating the Nintendo DS by decades, this little hand-held beauty presented a true advance in learning technology.  When you got a calculation right, the Little Professor's moustache started to jump around in animated delight.  It also, incidentally, had one of those new fangled Liquid Crystal Display screens rather than the battery hungry LED readouts (but this was inconsequential in comparison to the hilarious moustache).

Learning technology envy was a real phenomenon.  I remember that the Texas Instruments Speak and Maths device was undoubtedly more exciting (and desirable to play with) than a lowly Little Professor.  Game based learning was a reality, and parent pester power was definitely at work.  For those kids who had parents who were well off, the zenith of learning technology envy (when I was growing up) manifested itself in the form of the programmable and mysterious Big Trak.

Big Trak inspired a huge amount of wonderment, especially for those of us who had never been near a Logo turtle.  The marketing was good, as the advertisements of the time (video) testify.  It presented kids with the opportunity to consider the challenge of creating stored programs.

Learning with the Atari

A number of my peers were Atari 2600 gamers.  As well as being enthralled at the prospect of blasting aliens, driving racing cars and battling with dragons represented by pixels the size of small coins, I gradually became aware of a range of educational games that some retailers were starting to sell.

A number of educational Sesame Street game cartridges were created, presumably in close collaboration with Atari.  I personally found them rather tedious and somewhat unexciting, but an interesting innovation was the presence of a specially designed 'Kids controller'.  Each game was provided with a colourful overlay which only presented the buttons that should be used.  (As was the case with some Atari games, the box art and instruction leaflets could arguably be more exciting than the game itself).

I have no real idea whether any substantial evaluations were carried out to assess either the user experience of these products, or whether they helped to develop motor control.

Behold!  The personal computer...

My first memory of an educational game that was presented through a personal computer was a business game that ran on the BBC Model B.  The scenario was that you were the owner of a candy floss store (an obvious attraction for kids) and you had to buy raw materials (sugar) and hope that the weather was good.  If it was good, you made money.  If it was bad, you didn't.  I must add I used this game when Thatcherism was at its peak.  This incidental memory connects with wider issues relating to the link between game deployment and wider educational policy, but I digress...

Whilst using the 'candy floss game' I don't have any recollection of having to spend extra money on petrol for the generator, or having to pay the council rent (or contend with price rises every year), but I'm sure there was a cost category called 'overheads' that featured somewhere.  I'm also pretty sure you could set your own prices for your products and be introduced to the notion of 'breaking even'.

The BBC Model B figured again in my later education when I discovered a business studies lab that was packed with the things, and an occasional Domesday machine, running on an exotically modified BBC Master 128.  The Domesday project pointed firmly towards the future.  I remember a lunch hour passing in a flash whilst I attempted to navigate through an animated three dimensional exhibition that represented a portal to a range of different encyclopaedic themes.

During business studies classes we didn't use the Beebs (as they were colloquially known) for candy floss stall simulations; instead we used a program called Edword (educational word processor), a spreadsheet and a simple database program.  When we were not using these applications, we were using the luxury of a disk drive to play Elite (wikipedia).  A magical galaxy drawn in glorious 3D wireframe taught us about commodity trading and law enforcement.

Sir Clive

For a while the Sinclair Spectrum (a firm favourite amongst my peers) was sold with a bonus pack of cassettes.  Two memorable ones included an application called Make-A-Chip, which allowed you to draw sets of logic gate circuits.  You couldn't do much more than create a binary adder, but messing around with it for a couple of hours really helped when it came to understanding these operators in real programming languages.

I also have recollections of using a simulation program that allowed you to become a fox or a rabbit (or something else) and forage for food.  As the seasons (and years) changed, the availability of food fluctuated due to 'natural' changes in population.  I never did get the hang of that game: it was a combination of my being too slow and the game not having sufficiently engaging feedback to be attractive.

Assessing their impact

If I was asked whether I learnt anything by using the beeping and speaking mobile devices that were mentioned earlier, I couldn't give you an honest answer.  I suspect I probably did.  The biggest challenge that researchers face isn't necessarily designing a new technology (a challenge that many people grossly underestimate), but understanding the effect that the introduction of a particular technology has, and ultimately, whether it facilitates learning.

There is, of course, also a social side to playing learning games (and I write this without any sense of authority!).  I remember my peers showing me how to use a Little Professor, and looking at an on-screen 'candy floss sales' graph and feeling thoroughly puzzled at what was being presented to me.  A crowd of us used to play that game.  Some players, I seem to remember, were more aggressive traders than others.

Towards the future

History has yielded an interesting turn of events.  I spent many a happy hour messing around on a BBC Model B (albeit mostly playing Elite) and later marvelled at its ultimate successor, the Acorn Archimedes (some of the time playing Zarch, written by the same author as Elite).

I remember my fascination at discovering it was possible to write programs using the BBC Basic programming language (version 5) without using line numbers.  Acorn Computers eventually folded but left a lasting legacy in the form of ARM (Acorn RISC Machine), a company that sells its processor designs, which have found their way into a whole range of different devices: phones, MP3 players and PDAs.

I recently heard that the designers of the One Laptop Per Child project were considering using ARM-based processors in the second generation designs.  This choice is driven by the need to pay careful attention to power consumption and the fact that the current processor will not be subject to further on-going development by AMD.  (One loosely connected cul-de-sac of history lies with the brief emergence of the inspirational and sadly ill-fated Apple eMate, which was also ARM-based.)

One dimension of learning games that is outside the immediate requirements of developing skills or knowledge in a particular subject, lies with their potential to engender motivation and enthusiasm.

It's surprising how long electrically powered educational video games and devices have been around.  It's also surprising that it was possible to do so much with so little (in terms of processing power and memory).  Educational gaming, I argue, should not be considered a revolutionary idea.  I personally look forward to learning about what new evolutionary developments are waiting for us around the corner and what they may be able to tell us about how we learn.

Christopher Douce

Learning technology and return on investment

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:13

Image of a man playing a tired old piano in front of a building site that has warning signs.  One says, 'danger, men working overhead'.

A couple of years ago I attended a conference held by the Association for Learning Technology (ALT).  I remember a riveting keynote that asked the members of the audience to consider not only the design and the structure of new learning technologies, but also the benefits that they can offer institutions, particularly in terms of costs.  I was also reminded that technology (in terms of educational technology) need not be something that is considered in terms of 'high technology'.  A technology could be a bunch of cards tied to a desk chair!

I came away with a message that it is important to try to assess the effectiveness of systems that we construct, and not to get ahead of ourselves in terms of building systems that might not do the job they set out to do, or in fact may even end up solving the wrong problem.

When attending the learning technologies exhibition a couple of months ago the notions of 'return on investment' and 'efficiency' were never too far away.  E-learning, it is argued, can help companies to develop training capacities quickly and efficiently and allow company employees to prepare for change.

I might be mistaken, but I think I heard the occasional presenter mentioning some figures.  The presenters were saying things like, 'if you implement our system in this way, it can help you save a certain number of millions of pounds, dollars or euros per year'.  Such proclamations made me wonder, 'how do you go about measuring the return on investment into e-learning systems?' (or any other learning technology system, for that matter).

I do not profess to be an expert in this issue by any means.  I am aware that many others (who have greater levels of expertise than myself) have both blogged and written about this issue at some length.  I hope that sharing my own meagre notes on the subject might make a small contribution to the debate.

Measuring usefulness

Let's say you build a learning technology or an e-learning tool.  You might create an on-line discussion forum or even a hand held classroom response system.  You might have created it with a particular problem (or pedagogic practice) in mind.  When you have built it, it might be useful to determine whether the learning technology helps learners to learn and acquire new understandings and knowledge.

One way of conducting an evaluation is to ask learners about their experience.  This can allow you to understand how a technology worked, whether it was liked, what themes were learnt and what elements of a system, product or process might have inspired learners.

Another way to determine whether a technology is effective is to perform a test.  You could take two groups, one that has access to the new technology and one that does not, and see which group performs better.  Of course, such an approach can open up a range of interesting ethical issues that need to be carefully negotiated.

Dimensions of measurement

When it comes to large e-learning systems, questions that uncover individual learners' experiences provide what might be called a 'low level' understanding of how learning technologies are used and applied.  Attempting to measure the success of a learning technology or e-learning system for a whole department or institution could be described as a 'high level' understanding.  It is this 'high level' understanding that relates to the theme of how much money a system may save (or cost) an organisation.

Bearing in mind that organisations are very unlikely to carry out experiments, how is it possible to determine how much return on investment an e-learning system might give you?  This is a really tough question to answer since it depends entirely on the objectives of a system.  The approach taken to measure the return on investment of a training system is likely to differ from the approach used for a system intended to instil institutional values or create new ways of working (which may or may not yield employee productivity improvements).

When considering the issue of e-learning systems that aim to train (I'm going to try to steer clear of the debates around what constitutes training and what constitutes education), the questions that you might ask include:

  • What are the current skills (or knowledge base) of your personnel?
  • What are the costs inherent in developing a training solution?
  • What are the costs inherent in rolling this out to those who need access?

Another good question to ask is: what would you have to do to find out the same information if you had not invested in the new technologies?  Related questions are: would there be any significant travel costs attached to finding out the information, and would it be possible to measure the amount of disruption caused by having to ask other people for the information that you require?

These questions relate to actions that can be measured.  If you can put a measure on the costs of finding out key pieces of information before and after the implementation of a system, you might be able to edge towards figuring out the value of the system that you have implemented.  What, for example, is the cost of running the same face-to-face training course every year as opposed to creating a digital equivalent supported by a forum and an on-line moderator?  You also need to factor in issues such as how much time it might take for a learner to take the course.  Simply providing an on-line course is not enough: its provision needs to be supported and endorsed by the organisation that has decided to sponsor the development of e-learning.
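To make this concrete, here is a back-of-the-envelope sketch in PHP.  Every figure is hypothetical, and the calculation deliberately ignores the softer costs mentioned above (learner time, disruption, ongoing maintenance) that a real evaluation would need to capture:

    <?php
    // Hypothetical annual cost of running the face-to-face course.
    $learners       = 200;   // employees trained per year
    $costperlearner = 450;   // trainer, venue and travel, per head
    $f2fannual      = $learners * $costperlearner;

    // Hypothetical e-learning equivalent: a one-off build cost plus the
    // annual cost of the supporting on-line moderator.
    $buildcost  = 30000;
    $moderation = 8000;

    // Crude payback period: years before the one-off build cost is
    // recovered by the annual saving.
    $annualsaving = $f2fannual - $moderation;
    printf("Annual saving: %d; payback after %.1f years\n",
        $annualsaving, $buildcost / $annualsaving);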

The points above represent a rather simplistic view.  The introduction of a learning technology system may also facilitate the development of new opportunities that were perhaps not previously possible.  'Up-skilling' (or whatever it is called) in a limited amount of time may enable employees to respond to a business opportunity that could not have been exploited without the application of e-learning.

Other themes

Learning technologies are not only about the transmission of information (and knowledge) between a training department and their employees.  They can also have a role to play in facilitating the development of a common culture and strengthening bonds between work groups.

Instances of success (or failure) can be shared between fellow employees, and details of new initiatives or projects may be disseminated through a learning system.  The contents of the learning system may gradually change as a result of such discussions.

The wider cultural perspectives that surround the application of learning technologies are, in my humble opinion, a lot harder to quantify.  It's hard to put a value on the use of a technology to share information (and learning experiences) with your co-workers.

Related resources

A quick search takes me to the wikipedia definition of ROI and I'm immediately overwhelmed by detail that leaves my head spinning.

Further probing reveals a blog entitled ROI and Metrics in eLearning by Tony Karrer who kindly provides a wealth of links (some of which were updated in April 2008).  I have also uncovered a report entitled What Return on Investment does e-Learning Provide? (dated July 2005) (pdf) prepared by SkillSoft found on a site called e-Learning Centre.

Summary

The issue of return on investment for e-learning and learning technology is one that appears, at even a cursory glance, to be quite difficult to understand thoroughly.  Attaching numbers to the costs and benefits of any learning technology is difficult to do well or precisely.  This can be partly attributed to the nature of software: so much of the cost (whether in terms of administration or maintenance) and benefit (the ability to find things out more quickly and to collaborate with new people) can be hidden amongst detail that needs to be made clearly explicit before it can be successfully understood.

When it comes to designing, building and deploying learning technology systems, the idea of return on investment is undoubtedly a useful concept, but those investing in systems should consider issues beyond the immediately discoverable costs and benefits since there are likely to be others lurking in the background just waiting to be discovered.

Acknowledgements: Image licensed under creative commons, liberated via Flickr from Mr Squee.  Piano busker in front of building site represents two types of investments: a long term investment (as represented by the hidden building site), and a short term investment (using the decrepit piano to busk).  The unheard music represents hidden complexities.

Permalink
Share post
Christopher Douce

Inclusive Digital Economy Network event

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 28 Jun 2023, 10:28

I recently attended an event that was hosted by the Inclusive Digital Economy Network.  The network, led by the University of Dundee, comprises a variety of groups who wish to collectively ensure that people are able to take advantage of digital technologies.

The event was led by Prof Alan Newell from Dundee.  Alan gracefully introduced a number of keynote speakers: the vice-chancellor of City University, the dean of Arts and Social Sciences, and representatives from the government and the funding body, the EPSRC.

Drama

One really interesting part of the day was the use of 'theatre' to clearly illustrate the difficulties that some people can have when using information technology.  I had heard about the use of drama when speaking to people from Dundee before, but this was the first time I was able to witness it.  In fact, I soon found out that I was going to witness a film premiere!

After the final credits had appeared, I was surprised to discover that two of the actors who played central roles in the film were in the audience.  The film was not the end of the 'theatre' event, it was the beginning.  The actors carried out an improvisation (using questions from the audience) that was based upon the roles we had been introduced to through the film.

The notion of drama and computing initially seemed to me to be a challenging combination, but any scepticism I had very quickly dissipated once the connections between the two areas became plainly apparent.  Drama and theatre rely on characters; computer systems and technologies are ultimately used by people.  The frustrations that people encounter when they are using computer systems manifest themselves in personal (and collective) dramas that might range from uttering the occasional expletive when your machine doesn't do what it is supposed to do, to calling up a call centre to harass an equally confused call centre operative.

The lessons of the 'computing' or 'user' theatre were clear to see: the users should be placed centre stage when we think about the design of information systems.  They may understand things in ways that designers of systems may not have imagined.  A design metaphor that might make complete sense to an architect may seem completely nonsensical to an end user who has a totally different outlook and background.  Interaction design techniques, such as creating end user personas, are powerful tools that can expose differences and help to create more usable systems.

Debates

I remember a couple of important (and interesting) themes from the day.  One theme (that was apparent to me) was an occasional debate about the necessity of involving users in the design of systems from the outset, to ensure that any resulting products and systems are inclusive (user-led design).  This connected to a call to 'keep the geeks from designing things'.  In my view, users must be involved with the creation of interactive systems, but the 'geeks' must be included too, since the geeks may imagine functionality that the users might not be aware exists.  This argument underlines the interdisciplinary nature of interaction design (wikipedia).

Much of the focus of the day was on how technology can support elderly people: how to create technologies and pedagogies that can promote digital inclusion.  Towards the end of the day there was a panel discussion with representatives from Help the Aged, a UK government organisation called the Technology Strategy Board, the BBC, OFCOM and the University of York.

Another theme that I remember relates to the cost of both computing and assistive technologies.  There was some discussion about the possibility of integrating internet access within set top boxes (and a couple of comments relating to the Digital Britain report that was recently published by the UK government).  There was also some discussion about the importance of universal design (wikipedia) and its tensions with personalised design (which connects to some of the themes underpinning the EU4ALL project).

Another recollection from the event was that some presenters stated that although there is much excellent work happening within the academic community (and within other organisations), some of the lessons learnt from research are often not taken forward into policy or practice.  This said, it may be necessary to combine the recommendations from a number of different research projects to obtain a rich and complete picture of a field before fully understanding how policy might be positively influenced.  The challenge is not only combining and understanding the results from different projects, but communicating the results.

Summary

Projects such as the Inclusive Digital Economy Network, from my outsider's perspective, attempt to bridge the gaps between different stakeholders and facilitate a free exchange of ideas and experiences that may point towards areas of investigation that can allow us to learn more about how digital technologies can make a difference to us all.

Acknowledgements: many thanks are extended to the organisers of the event – an interesting day!

Permalink
Share post
Christopher Douce

Source code accessibility through audio streams

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 28 Jun 2023, 10:28

A screenshot of some source code being edited by a software developer

One of my colleagues volunteers for the Open University audio recording project.  The audio recording project takes course material produced by course teams and makes audio (spoken) equivalents for people with visual impairments.  Another project that is currently underway is the digital audio project which aims to potentially take advantage of advances in technology, mobile devices and international standards.

Some weeks ago, my colleague tweeted something along the lines of 'it must be difficult for people with visual disabilities to learn how computer programs are written and structured' (I am heavily paraphrasing, of course!)  As soon as I read this tweet I began to think about two questions.  The first was: how do I go about learning how a fragment of source code works?  The second: what might be the best way to convert a function or a 'slice' of programming code into an audio representation that helps people to understand what it does and how it is structured?

Learning from source code

How do I learn how a fragment of source code works?  More often than not I view code through an integrated development environment, using it to navigate through the function (or functions) that I have to learn.  If I am faced with some code that is really puzzling I might reach for some search tools to uncover the connections between different parts of the system.

If the part of the code that I am looking at is quite small and extremely puzzling, I might go as far as grabbing a pen and paper and beginning to sketch out some notes, taking down some of the expressions that appear to be troubling and maybe splitting these apart into their constituent components.  I might even try to run the various code fragments by hand.  If I get really confused I might use the 'immediate' window of my development environment to ask my computer for some hints about the code I am currently examining.

When trying to understand some new source code my general approach is to try to have a 'conversation' with it, asking it questions and looking at it from a number of different perspectives.  In the psychology of programming literature some researchers have written about developers using 'top down' and 'bottom up' strategies.  You might have a high level hypothesis about what something does on one hand, but on the other, sections of code might help you to understand the 'bigger picture' or the intentions behind a software system.

In essence, I think understanding software is a really hard task.  It is harder and more challenging than many people seem to imagine.  Not only do you have to understand the language that is used to describe a world, but you also have to understand the language of the world that is described.  The world of the machine and the world of the problem are intrinsically and intimately connected through what can sometimes seem an abstract collection of words and symbols.  Your task, as a developer, is to make sense of two hidden worlds.

I digress slightly... If learning about computer programming code is a hard task, then it is likely to be harder still for people with visual impairments.  I cannot imagine how difficult it must be to be presented with a small computer program or a function that has been read out to you.  Much of the 'secondary notation', such as tabbing and white space, can easily be lost if there are no mechanisms to present it through another modality.  There is also the danger that your working memory may become quickly overwhelmed with the names of identifiers and unfamiliar-sounding functions.

Assistive technology for everyone

The tasks of learning the fundamentals of programming and learning about a particular program are different, yet related.  I have heard it said that people with disabilities are given real help when technologies are created that are useful to a wide audience.  A great example of this is optical character recognition.  Whilst OCR technology can save a great deal of typing, it has also created tools that enable people with low vision to scan and read their post.

Bearing the notion of 'a widely applicable technology' in mind, could it be possible to create a system that creates an interactive audio description that could potentially help with the teaching of some of the concepts of computer programming for all learners?

Whenever I read code I immediately begin to translate the notation of the code into my own 'internal' notation (using different types of memory, both internal and external - such as scraps of paper!) to iteratively internalise and make sense of what I am being presented with.  Perhaps equivalents of programming code could be created in a form that could be navigated.  Code is not something that you read in a linear fashion - code is something you work with.

If an interesting and useful (and interactive) audio equivalent of programming code could be created, then there might be the potential for these alternative forms to be useful to all students, not only to learners who necessarily require auditory equivalents.

Development directions

There are a number of tools that could help us to create what might amount to 'interactive audio descriptions of programming code'.  The first is the idea of plan or schema theory (wikipedia) – the notion that your understanding of something is drawn from previous experience.  Some theorists from the psychology of programming have extended and drawn upon these ideas, positing notions such as 'beacons': key lines of code that signal the presence of a familiar structure or plan.

Another is Green's Cognitive Dimensions framework (wikipedia).  Another area to consider looking at is the interesting sub-field of Computer Science Education research.  There must be other tools, frameworks and ideas that can be drawn upon.

Have you got a sec?

Another approach that I sometimes take when trying to understand something is to ask other, more experienced people for help.  I might ask the question, 'what does this section represent?' or, 'what does this section do?'  The answers from colleagues can be instrumental in helping me to understand the purpose behind fragments of programming code.

Considering browsing

I can almost imagine an audio code browser that has functionality allowing you to change between different levels of abstraction.  At one level, you may be able to navigate through sets of different functions and hear descriptions of what they are intended to do and what they expect to receive by way of parameters (which could be provided through comments).  At another level there may be summaries of groups of instructions, like loops, with descriptions that might sound like, 'a foreach loop that contains four other statements and a call to two functions'.  Finally, you may be able to tab into a group of statements to learn about what variables are manipulated, and how.
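To illustrate the idea (and only to illustrate it - the structure and all the names below are my own invention), a browser like this might walk a tree of code summaries and only speak down to a chosen level of detail, with 'tabbing in' amounting to raising that level:

    <?php
    // Speak a tree of code summaries down to $maxdepth levels deep.
    function describe_node($node, $depth, $maxdepth) {
        if ($depth > $maxdepth) {
            return;
        }
        $children = count($node['children']);
        echo str_repeat('  ', $depth) . $node['summary'];
        echo $children ? ", containing $children items\n" : "\n";
        foreach ($node['children'] as $child) {
            describe_node($child, $depth + 1, $maxdepth);
        }
    }

    // A hand-built summary of an imaginary function.
    $tree = array('summary' => 'function sendReminders, taking 1 parameter',
                  'children' => array(
                      array('summary' => 'a foreach loop',
                            'children' => array(
                                array('summary' => 'an assignment', 'children' => array()),
                                array('summary' => 'a call to sendEmail', 'children' => array())))));

    describe_node($tree, 0, 1);   // overview only; raise 1 to 2 to 'tab in'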

Of course this is all very technical stuff, and it could be stuff that has already been explored before.  If you know of similar (or related) work, please feel free to drop me a line!

Acknowledgement: random image of code by elliotcable, licenced under creative commons, discovered using Flickr.

Permalink
Share post
Christopher Douce

Exploring Moodle forums

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:08

A set of spanners loosely referring to moodle tools and debugging utilities

Following on from the previous post, this post describes my adventures into the Moodle forums source code.

Forums, I understand, can be activities (a Moodle term) that can be presented within individual weeks or topics. I also know that forums can be presented through blocks (which can be presented on the left or right hand side of course areas).

To begin, and remembering the success that I had when trying to understand how blocks work, I start by looking at what the database can tell me and quickly discover quite a substantial number of tables.  These are named: forum (obviously), forum_discussions, forum_posts, forum_queue, forum_ratings (ratings is not something that I have used within the version of Moodle that I am familiar with), forum_read, forum_descriptions, forum_subscriptions and forum_track_prefs.

First steps

Knowing what some of the data tables are called, I put aside my desire to excitedly eyeball source code and sensibly try to find some documentation.

I begin by having a look at the database schema introduction page (Moodledocs), but find nothing that immediately helps.  I then discover an end user doc page that describes the forum module (and the different types of forum that are on offer in Moodle).  I then uncover a whole forum documentation category (Moodledocs) and I'm immediately assaulted by my own lack of understanding of the capabilities system (which I'll hopefully blog about at some point in the future – one page that I'll take note of here is the forum permissions page).

From the forums category page I click on the various 'forum view pages', which hints that there are some strong connections with user settings.

Up to this point, what have I learnt?

I have learnt that Moodle permits only certain users to carry out certain actions within Moodle forums.  I have also learnt that Moodle forums have different types.  These, I am led to believe (according to this documentation page), are: standard, single discussion, each person posts one discussion, and question and answer.  I'm impressed: I wasn't expecting so much functionality!

So, can we discover any parallels with the database structures?

The forum table contains fields named: course, type, name, description, followed by a whole bunch of other fields I don't really understand.  The course field associates a forum with a course (I'm assuming that somewhere in the database there will be some data that connects the forum to a particular part or section of a course), and the type field (which is, interestingly, an enumerated type) can hold data values that roughly represent the forum types that were mentioned earlier.

A brief look at the code

I remember that the documentation I uncovered told me that 'forums' is a module.  In the 'mod' directory I notice a file called view.php.  Other interesting files are named: post.php, lib.php, search.php and discuss.php.  View.php seems to be one big script with a big case statement in the middle.  Post.php looks similar, but has a beguiling sister called post_form, which happens to be a class.  Lib, I discover, is a file of mystery that contains functions and fragments of SQL and HTML.  Half of the search file seems to retrieve input parameters, and discuss is commented as, 'displays a post, and all the posts below it'.

Creating test data

To learn more about the data structures I decide to create some test data by creating a forum and making a couple of posts.  I open up an imaginatively titled course called 'test' and add an equally imaginatively titled forum called 'test forum'.  When creating the forum I'm asked to specify a forum type (the options are: single simple discussion, Q and A forum, standard forum for general use).  I choose the standard forum and choose the default values for aggregate type and time period for blocking.  The aggregate type appears to be related to functionality that allows students to grade or rate posts.

When the forum is live, I then make a forum post to my test forum that has the title 'test post'.

Reviewing the database

The action of creating a new forum appears to have created a record in the forum table which is associated to a particular course, using the course id.  The act of adding a post to the test forum has added data to forum_discussions, where the name field corresponds to the title of my thread: 'test post'.  A link is made with the forum table through a foreign key, and a primary key keeps track of all the discussions held by Moodle.

The forum_posts table also contains data.  This table stores the text that is associated with a particular post.  There is a link to the discussion table through a discussion id number.  Other tables that I looked at included forum_queue (not quite sure what this is all about yet), forum_ratings (which probably stores stuff depending on your forum settings), and forum_read, which simply stores an association between user id, forum id, discussion id and post id.

One interesting thing about forums is that they can have a recursive structure (you can send a reply to a reply to a reply and so on).  To gain more insight into how this works, I send a reply to myself which has the imaginative content, 'this is a test post 2'.

Unexpectedly, no changes are made to the forum_discussions table, but a new entry is added to the forum_posts table.  To indicate hierarchy, a 'parent' field is populated (where the parent relates to an earlier entry within the forum_posts table).  I'm assuming that the sequence of posts is represented by the 'created' field, which stores a numerical representation of the time.
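As a quick check of this reading of the parent field, a few lines of PHP can rebuild the reply hierarchy.  This is a minimal sketch assuming the 1.9-era global get_records() API; print_thread() is my own helper, not part of Moodle:

    // Fetch every post in one discussion, oldest first.
    $posts = get_records('forum_posts', 'discussion', $discussionid, 'created');

    // Print replies indented beneath the post they answer; top-level
    // posts appear to have a parent value of 0.
    function print_thread($posts, $parentid = 0, $depth = 0) {
        foreach ($posts as $post) {
            if ($post->parent == $parentid) {
                echo str_repeat('  ', $depth) . $post->subject . "\n";
                print_thread($posts, $post->id, $depth + 1);
            }
        }
    }

    if ($posts) {
        print_thread($posts);
    }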

Tracing the execution flow

These experiments have given me three questions to explore:

  1. What happens within the world of Moodle code when the user creates a new forum?
  2. What happens when a user adds a new discussion to a forum?
  3. What happens when a user posts a reply?

Creating a new forum

Creating a new forum means adding an activity.  To learn about what code is called when a forum is added, I click on 'add forum' and capture the URL.  I then give my debugger the same parameters (id, section, sesskey and add) and begin to step through the course/mod.php script.  The id number seems to relate to the id of the course, and the add parameter seems to specify the type of the activity or resource that is to be added.

I quickly discover a redirect to a script called modedit.php, where the parameters add=forum, type= (empty), course=4, section=1, return=0.  To further understand what is going on, I stop my debugger and start modedit.php with these parameters.

There is a call to the database to check the validity of the course parameter, fetching of a course instance, something about the capability, fetching of an object that corresponds to a course section (call to get_course_section in course/lib code).   Data items are added to a $form variable (which my debugger tells me is a global).  There is then the instantiation of a class called mod_forum_mod_form (which is defined within mod/forum/mod_form.php).  The definition class within mod_forum_mod_form defines how the forum add or modification form will be set out.  There is then a connection between the data held within $form and the form class that stores information about what information will be presented to the user.

After the forum editing interface is displayed, clicking the 'save and return to course' button (for example) causes a postback to the same script, modedit.php.  Further probing around reveals a call to forum_add_instance within forum/lib.php (different activities will have different versions of this function) and forum_update_instance.  At the end of the button clicking operation there is then a redirect to a script that shows any changes that have been made.

The code to add a forum to a course will be similar (in operation) to the code used to add other activities.  What is interesting is that I have uncovered the classes and script files that relate to the user interface forms that are presented to the user.
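From what I saw in the debugger (paraphrasing from memory rather than quoting the source), the pattern that forum_add_instance follows boils down to something like this: stamp the record, insert it, and hand the new id back to modedit.php:

    // The rough shape of an activity's X_add_instance function; $forum
    // arrives pre-populated with the data gathered by mod_forum_mod_form.
    function forum_add_instance($forum) {
        $forum->timemodified = time();
        $forum->id = insert_record('forum', $forum);
        if (!$forum->id) {
            return false;   // modedit.php treats a false return as failure
        }
        return $forum->id;  // used to wire the forum into its course section
    }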

Adding a new discussion

A new discussion can be added by clicking on the 'Add a new discussion topic' button once you are within a forum.  The action of clicking on this button is connected to the forum/post.php script.  The main parameter associated with this action is the forum number (forum=7, for example).

It's important to note the use of the class mod_forum_post_form contained within post_form.php, which represents the structure of the form into which the user enters discussion information.

The code checks the forum id and then finds out which course it relates to.  It then creates the form class (followed by some further magic code that I quickly stepped through).

The action of clicking on the 'post to forum' button appears to send a post back (along with all of the contents of the form) to post.php (the same script used to create the form).  When this occurs, a message is displayed and then a redirect occurs to the forum view summary.  But where in the code is the database updated?  One way to find out is to begin with a search for the redirect.  Whilst browsing through the code I stumble across a comment that says 'adding a new discussion'.  The database appears to be updated through a call to forum_add_discussion.

Posting a reply to a discussion

The post.php script is also used to save replies to discussions (as well as adding new discussions) to the database.  When a user clicks on a discussion (from a list of discussions created by discuss.php), the links to send replies are represented by calls to post.php with a reply parameter (along with a post number, i.e. post.php?reply=4).  The action of clicking on such a link presents the previous message, along with the form where the user can enter a response.

Screen grab of user sending a reply to a forum discussion

To learn more about how this code works, I browse through the forums lib file and uncover a function called forum_add_new_post.  I then search for this in post.php and discover a portion of code that handles the postback from the HTML form.  I don't explore any further, having learnt (quite roughly) where various pieces of code magic seem to lie.

Summary

The post.php script does loads of stuff.  It weighs in at around seven hundred lines in length and contains some huge conditional statements.

Not only does post.php appear to manage the adding of new discussions to a forum, it also appears to manage the adding, editing and deletion of forum messages.  To learn how this script is structured I haven't been able to look at function definitions (because it doesn't contain any); instead I have had to read comments.  Comments, it has been said, can lie, whereas code always tells the truth.  More functions would have helped me to learn the structure of the post.php script more quickly.
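For instance (and this is my own sketch rather than Moodle's code - the handler names are invented), even a thin dispatch layer at the top of the script would signpost its responsibilities, with each handle_* function wrapping one of the existing branches:

    // A hypothetical, refactored top of post.php: one named function per
    // responsibility instead of one huge conditional.
    if ($reply = optional_param('reply', 0, PARAM_INT)) {
        handle_reply($reply);             // replying to an existing post
    } else if ($edit = optional_param('edit', 0, PARAM_INT)) {
        handle_edit($edit);               // editing a post
    } else if ($delete = optional_param('delete', 0, PARAM_INT)) {
        handle_delete($delete);           // deleting a post
    } else if ($forum = optional_param('forum', 0, PARAM_INT)) {
        handle_new_discussion($forum);    // starting a new discussion
    }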

The creation of the user interfaces is partially delegated to the mod and post form classes.  Database updates are performed through the forum/lib.php file.  I like some of the function abstractions that are beginning to emerge, but any programming file that contains both HTML and SQL indicates there is more work to be done.  The reason for this aesthetic (and personal) opinion is simple: keeping these two types of code separate has the potential to help developers to become quickly familiar with where certain types of software actions are performed.  This, in turn, has the potential to save developer time.

One of the central areas of functionality that forum developers need to understand is how Moodle builds and uses forms.  This remains an area of mystery to me, and one that I hope to continue to learn about.  Another area that I might explore is how PHP has been used to implement different forum systems, so I can begin to get a sense of how PHP is written by different groups of developers.

Acknowledgements: Photograph licenced under creative commons by ciaron, liberated from Flickr.

Permalink
Share post
Christopher Douce

Forums 2.0

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 20 May 2014, 09:52

I like forums, I use them a lot.  I can barely remember a time when I didn't know what one was.  I think my first exposure to forums might have been through a dial-up bulletin board system (used in the dark ages before the internet, of course).  This was followed by a brief flirtation with usenet news groups.

When trying to solve some programming problems, I would more often than not search for a couple of keywords and then stumble across a multitude of different forums where tips, tricks and techniques might be debated and explored.  A couple of years ago I was introduced to the world of FirstClass forums (wikipedia) and then, more recently, to Moodle forums.  Discussions with colleagues have since led me towards the notion of e-tivities.

I have a confession to make: I use my email account for a whole manner of different things.  One of the things that I incidentally use my email account for is sending and receiving email!  I occasionally use email as a glorified 'todo' list (albeit one that has around a thousand items!)  If something comes in that is interesting and needs attention, I might click on an 'urgent' tick box so that I remember to look at the message again at a totally unspecified time in the future.  If it is something that must be bounded by time, I might drag the item into my calendar and ask my e-mail client to remind me about it at a specified time in the future (I usually ponder over this for around half a minute before choosing one of two options: remind me in a week's time, or remind me in a fortnight).

I have created a number of folders within my email client where I can store interesting stuff (which I very often subsequently totally forget about).  Sometimes, when working on a task, I might draft out some notes using my email editor and then store them in a vaguely titled folder.

The ‘saving of draft’ email doesn’t only become something that is useful to have when the door knocks or the telephone rings – email, to me, has gradually become an idea and file storage (and categorisation) tool that has become an integral part of how I work and communicate.  I think I have heard it said that e-mail is the internet’s killer application (wikipedia).  For me, it is a combined word processor, associative filing cabined, ideas processor and general communications utility.

Returning to the topic of forums… Forums are great, but they are very often nothing like email.  I can't often click and drag forum messages from one location into a folder or to a different part of the screen.  I can't add my own comments to other people's posts that only I can see (using my mail client I can save copies of email that other people send me).  On some forum systems I can't sort the messages using different criteria, or even search for keywords or phrases that I know were used at some point.

My forum related gripes continue: I cannot delete (or at least hide) the forum messages that I don't want to see any more.  On occasions I want to change the 'read status' from 'read' to 'unread' if I think that a particular subject being discussed might be useful to remember when I later turn to an assessment that I have to submit.  I might also like to take fragments of different threads and group them together in a 'quotation set', building a mini forum-centric e-portfolio of interesting ideas (this said, I can always copy and paste to email!)  If a forum were like a piece of paper where you could draw things at any point, I might want to put some threads on the left of the page (those points that I was interested in) and others on the right of the page (or vice versa).

I might want to organise the threads spatially, so that the really interesting points might be at the top, or the not so interesting points at the bottom – you might call this ‘reader generated threading!’  When one of my colleagues makes a post, there might be an icon change that indicates that a contribution has been made against a particular point.

I might also be able to save a thread (or posting) layout, depending on the assignment or topic that I am currently researching.  It might be possible to create a 'thread timeline' (I have heard rumours that Plurk might do something like this), where you see your own structured representation of one or more forums change over time.  Of course, you might even be able to share your own customised forumscape with other forum users.

An on-line forum is undoubtedly a space where learning can occur.  When we think about how we might further develop the notion of a forum we soon uncover the dimension of control.

Currently, the layout and format of a forum (and what you can ultimately do with it) is constrained by the design of the forum software and a combination of settings assigned by an administrator.  Allowing forum users to create their own customised view of a forum communication space may give learners tools to make sense of different threads of communication.  Technology can then be used to enable an end user to formulate a display that most effectively connects new and emerging discussions with existing knowledge.

This display (or forumscape) might also be considered as a mask.  Since many different discussions can occur on a single forum at the same time choosing the right mask may help salient information become visible.
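To make the idea slightly less hand-wavy, a forumscape might amount to little more than a per-user record of display hints hanging off each post.  The table and field names below are pure speculation on my part (no such table exists in Moodle), and I assume Moodle's $USER global and a $post object are in scope:

    // One row per (user, post) pair; a richer forum client would read
    // these hints back when laying out the discussion for that user.
    $entry = new stdClass();
    $entry->userid   = $USER->id;     // whose personal view this is
    $entry->postid   = $post->id;     // the post being annotated
    $entry->column   = 'left';        // interesting material on the left
    $entry->rank     = 1;             // vertical position; top = most interesting
    $entry->readflag = 'unread';      // reader-controlled read status
    $entry->note     = 'revisit for the assignment';   // private annotation
    insert_record('forumscape', $entry);   // hypothetical table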

The FirstClass system, with its multiple discussion areas and the ability to allow the end user to change the locations of forum icons on a ‘First Class’ desktop begins to step toward some of these ideas.

Essentially, I would like discussion forums to become more like my email client: I would like them to do different things for me.  I would like forum software to not only allow users to share messages.  I would like forum software to become richer and permit the information they display to the users be more malleable (and manageable).  I know this would certainly be something that would help me to learn!

Acknowledgements: Picture from Flickr taken by stuckincustoms, licensed under creative commons.

Permalink 1 comment (latest comment by Sam Marshall, Thursday, 5 Feb 2009, 12:30)
Share post
Christopher Douce

How Moodle block editing works: database (part 2)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:05

Pattern of old computer tapes intended to represent databases

This is a second blog entry about how Moodle manages its blocks (which can be found either at a site level or at a course level).  In my previous post I wrote about the path of execution I discovered within the main Moodle index.php file.  I discovered that the version of Moodle that I was using presented blocks using tables, and that blocks made use of some interesting object-oriented features of PHP to create the HTML code that is eventually presented to the end user.

This post has two objectives.  The first is to present something about the database structures that are used to store information about which blocks are stored where, and secondly to explore what happens when an administrator clicks on the various block editing functions.  The intention behind this post is to understand Moodle in greater detail to uncover a little more of how it has been designed.

Blocks revisited

Screen grab of the latest news block with moving and deletion editing icons

Blocks, as mentioned earlier, are pieces of functionality that can sit on the left hand or right hand borders of courses (or the main Moodle site page).  Blocks can present a whole range of functions ranging from news items through to RSS feeds.

Blocks can be moved around within a course page with relative ease by using the Moodle edit button.  Once you click on ‘edit’ (providing it is there and you have the appropriate level of permissions), you can begin to add, remove and move blocks around using a couple of icons that are presented.  Clicking on the left icon moves the block to the left hand margin, clicking the down arrow icon changes its vertical position and so on.

One of my objectives with this post is to understand what happens when these various buttons are clicked on.  What I am hoping to see are clearly defined functions which will be called something like moveBlockUp, moveBlockDown or deleteBlock.

Perhaps with future versions it might be possible to have a direct manipulation interface (wikipedia) where, rather than having buttons to press, users will be able to drag blocks around to rapidly customise course displays.  Proposing ideas and problems to be solved is a whole lot easier than going ahead and solving them.  Also, to happily prove there's no such thing as an original thought, I have recently uncovered a Moodle documentation page: it seems that this idea has been floating around since 2006.

Before I delve into trying to uncover how each of the Moodle block editing buttons work, it is worthwhile spending some time to look at how Moodle remembers what block is placed where.  This requires looking at the database.

Remembering block location

I open up my database manipulation tool (SqlYog) and begin to browse through the database tables that are used with Moodle.  I quickly spot a bunch of tables that contain the name block.  One that seems to be particularly relevant is a table called block_instance.

The action of creating a course (and adding blocks to it) seems to create a whole bunch of records in the block_instance table.  Block_instance appears to be the table that Moodle uses to remember which block should be displayed, and where.

The below graphic is an excerpt from the block_instance data table:

Fragment of the block_instance datatable showing a number of different fields

The field weight seems to relate to the vertical order of blocks on the screen (I initially wondered whether it related to, in some way, some kind of graphical shading, thinking of the way that HTML uses the term weight).  Removing a block from a course seems to change the data within this table.

The blockid field seems to link each entry within block_instance to data items held within the block table:

Fragment of the blocks table, showing the field headings and the data items

The names held within the name field (such as course_summary) are connected to the programming code that relates to a particular block.  The cron (and the lastcron) relate to regular processes that Moodle must execute.  With the default installation of Moodle everything is visible, and at the time of writing I have no idea what multiple means.

Returning to block_instance, does the pageid field relate to the id used in the course?  Looking at the course table seems to add weight to this hypothesis.
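Putting this reading of the tables to the test, a couple of lines of PHP (using the era's get_records() and get_record() functions, and the table and field names as I have read them above) should list a page's blocks from top to bottom:

    // Fetch the block instances for one page, ordered by the weight
    // field, and resolve each blockid against the block table.
    $instances = get_records('block_instance', 'pageid', $course->id, 'weight');
    foreach ($instances as $instance) {
        $block = get_record('block', 'id', $instance->blockid);
        echo $block->name . ' (weight ' . $instance->weight . ")\n";
    }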

I continue my search for truth by rummaging around in the Moodle documentation, discovering a link to the database schema and uncovering some Block documentation that I haven't seen before (familiarity with material is a function of time!)  This provides a description of the block development system as described by the original developer.

Knowing that these two tables are used to store block location, my question from this point onwards is: how do these tables get updated?

Database updates

To answer this question I applied something that I call 'the law of random code searching': if you don't know what to look for and you don't know how things work, carry out a random code search to see what the codebase tells you.  Using my development environment I search to find out where the block_instance data table is updated.

Calls to the database seem to be spread out over a number of files: blocks, lib, accesslib, blocklib, moodlelib, and chat/lib (amongst others).  This seems to indicate that there is quite a lot of coupling between the different sections of code (which is probably a bad thing when it comes to understanding the code and carrying out maintenance).

Software comprehension is sometimes an inductive process.  Occasionally you just need to read through a code file to see if it can yield any clues about its design, its structure and what it does.  I decided to try this approach for each of the files my search results window pointed to:

Accesslib
Appears to handle access control (or permission management) for parts of Moodle.  The comments at the top of the file mention the notion of a 'context' (which is a badly overloaded word) but provide no clue as to the sense in which it is used.  The only real definition that I can uncover is the database description documentation, which states, 'a context is a scope in Moodle, for example the whole system, a course, a particular activity'.  In AccessLib there are some hardcoded definitions for different contexts, i.e. CONTEXT_SYSTEM, CONTEXT_USER, CONTEXT_COURSECAT and so on.

The link to the blocks_instance database lies within a huge function called create_context which updates a database table of the same name.  I’ve uncovered a forum explanation that sheds a little more light onto the matter, but to be honest, the purpose of these functions is going to take some time to uncover.  There is a clue that the records held within the context table might be cached for performance reasons.  Moving on…

Moodlelib

Block_instance is mentioned in a function named remove_course_contents, which apparently 'clears out a course completely, deleting all content but don't delete the course itself'.  When this function is called, modules and blocks are removed from the course.  Moodlelib is described as the 'main library file of miscellaneous general-purpose Moodle functions' (??), but there is a reference towards another library called weblib, which is described as 'functions that provide web output'.

Blocks
A comment at the top of the blocks.php file states that it ‘allows the admin to configure blocks (hide/show, delete and configure)’.  There is some code that retrieves instances of a block and then deletes the whole block (but in what ‘context’ this is done, at the moment it’s not clear).

Blocklib
The file contains the lion's share of references to the block_instance database table.  It is said to include 'all the necessary stuff to use blocks in course pages' (whatever that means!)  At the top there are some constants for actions corresponding to moving a block around a course page.  Database calls can be found within blocks_delete_instance, blocks_have_content, blocks_print_group and so on.  The blocks_move_block function seems to adjust the contents of the database to take account of movement.  There also appears to be some OO-type magic going on that I'm not quite sure about.  Perhaps the term 'instance' is being used in too many different ways.  I would agree with the coder: blocklib does all kinds of 'stuff'.

Lib files
References to block_instance can be found in the lib files for three different blocks: chat, lesson and quiz.  The functions that contain the call to the database relate to the removal of an 'instance' of these blocks.  As a result, records from the block_instance table are removed when these functions are called.

So, what have I learnt by reading all this stuff?  I've seen how the database stores things, learnt that there is a slippery notion of a course context (and mysterious paths), and I now know the names of some files that do the block editing work, though I'm not quite sure how they do it.  There is quite a lot of complexity that has not yet been uncovered and understood.

Digressions

I have a cursory glance through the lib folder to see what else I can discover and find an interestingly named script file entitled womenslib.php.  Curious, I open it and see a redirect to a wikipedia page.  The Moodle developers obviously have a sense of humour but unfortunately mine had failed!  This minor diversion was unwelcome (humour failure exception), costing me both time and ‘head’ space!

Bizarrely, I also uncover a seemingly random list of words (wordlist.txt) that begins: 'ape, baby, camel, car, cat, class, dog, eat …' etc.  Wondering whether one of the developers had attended the famous Dali school of software engineering, I searched for a file reference to this mysterious 'wordlist'.

It appeared that our mysterious list of words was referenced in the lib\setup.php file, where a path to the wordlist was stored in what I assumed to be a Moodle configuration variable.  How might this file be used?  It appears it is used within a function called generate_password.

Thankfully the developers have been kind enough to say where they derived some of their inspiration from.  The presence of the wordlist is explained by the need for a function that creates pronounceable automatically generated passwords (but perhaps only in English?)
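The idea is simple enough to sketch independently of Moodle's actual implementation (which I haven't reproduced here): pick random words from the list and glue them together, so that the result can be spoken aloud:

    <?php
    // A minimal, non-Moodle sketch of wordlist-based password generation.
    function wordlist_password($wordlistpath, $numwords = 2) {
        $words = file($wordlistpath, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        $password = '';
        for ($i = 0; $i < $numwords; $i++) {
            $password .= $words[array_rand($words)];  // e.g. 'camel' . 'class'
        }
        return $password . rand(0, 9);  // a digit for a little extra entropy
    }

    echo wordlist_password('lib/wordlist.txt');  // e.g. 'camelclass7'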

This was all one huge digression.  I pulled myself together just enough to begin to uncover what happens when a user clicks on either the block move up, down, or delete buttons when a course is running in edit mode.

Button click action

Returning to the task in hand, I add two blocks (both in the right hand column, one situated on top of the other) to my local Moodle site with a view to understanding the function code that contributes to the moveBlockUp and deleteBlock functionality.

Screen grab of two blocks placed in the right hand column of a course in editing mode

I take a look at the links that correspond to the move up and the delete icons.  I notice that the action of clicking sends a bunch of parameters to the main Moodle index.php.  The parameters are sent via get (which means they are sent as part of the hypertext link).  They are: instanceid (which comes straight out of the block_instance table), sesskey (which reminds me, I really must try to understand how Moodle handles sessions (wikipedia) at some point), and a blockaction parameter (which is either moveup or delete in this scenario).

The question here is: what happens within index.php?  Luckily, I have a debugger that will be able to tell me (or, at least, help me!)

I log in as an administrator through my debugger.  When I have established a session, I add some breakpoints to the index.php code and launch it using the parameters for 'move activity upwards'.

Index.php begins to execute, and a call to page_create_object is made; it looks like a new object is created.  An initialisation function within the page_base class is called (contained within pagelib).  A blocks_setup function is called and the block positions from the block_instance database table are retrieved.  After some further tracking I end up at a function called blocks_execute_url_action.  The instanceid is retrieved and a call is made to blocks_execute_action, where the block action (moveup or delete) is passed in as a parameter along with the block instance record that has just been retrieved from the database.

In blocks_execute_action a 'mother of all switch statements' makes a decision about what should be done next.  After some checks, two update commands are issued to the database through the update_record function, updating weight values (to change the order of the respective blocks).  With all the database changes complete, a page redirect occurs to index.php.  Now that the database has the correct representation of where each block should be situated, index.php can go ahead and display them.
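Stripped of its checks, the moveup case seems to boil down to a weight swap between two neighbouring records, followed by the redirect.  This is a hedged paraphrase rather than Moodle's actual code (the helper name is mine):

    // Swap the vertical positions of a block instance and the block
    // above it; the redirect to index.php then redraws the page.
    function swap_block_weights($instance, $neighbour) {
        $tmp               = $instance->weight;
        $instance->weight  = $neighbour->weight;
        $neighbour->weight = $tmp;
        update_record('block_instance', $instance);
        update_record('block_instance', $neighbour);
    }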

Is the same mechanism used for course pages?

A very cursory understanding tells me that the course/view.php script has quite a lot to do with the presentation of courses, and at this point gathering an understanding of it is proving to be elusive.  Let’s see what I can find.

Screen grab of a course page running in edit mode, showing the block editing icons

Initially it does seem that the index.php script controls the display of a Moodle site, and the course/view.php script controls the course display.  Moving the mouse over the 'move block up' icons reveals a hyperlink to the view.php script with get parameters of: id (which corresponds to the course number held within the course data table), instance id (which corresponds to a record within the block_instance table), and sesskey and blockaction parameters (as with index.php).

To get a rough understanding of how things work, I do something similar to before: open up a session through my debugger and launch view.php with this bunch of parameters.  The view.php source is striking: it doesn't seem to be very long, nor does it produce any HTML, so it looks like there's something subtle going on.

In view.php there are some parameter safety checks, followed by some context_instance magic and checking of the session key, followed by calls to the familiar page_create_object (mentioned in the earlier section).  Blocks_setup is then called, followed by blocks_get_by_page_pinned and blocks_get_by_page, which ask the database which blocks are associated with this particular page (which is a course page).

As before, there is a call to blocks_execute_url_action, which updates the database to carry out the action that the administrator clicked on.  At the end of the database update there is a redirect.  Instead of going to index.php, the redirect is to view.php, along with a single parameter that corresponds to the course id.

This raises the question: what happens after the view.php redirect?

Redirect to view.php

View.php makes a call to the database to get the data that corresponds to the course id number it has been given.  There is then a check to make sure that the user requesting the page is logged into Moodle, and eventually our old friends page_create_object and blocks_setup are called; but this time, since no buttons have been clicked on, we don't redirect to another page after updating the database.

Towards the end of view.php we can begin to see some magic that produces the HTML that will be presented to the user.  There is a call to print_header.  There is then a script include (using the PHP keyword 'require') which creates the bulk of the page that is presented to the user, building the HTML to present the individual blocks.  When running within my debugger, the script course/format/weeks/format.php was included.  The script that is chosen depends on the format of the course.  When complete, view.php adds the footer and the script ends.
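The include itself appears to amount to a one-liner, with the course's format field selecting the script (I'm paraphrasing what I stepped through rather than quoting it exactly):

    // A course using the 'weeks' format pulls in course/format/weeks/format.php,
    // which builds the centre column and the blocks on either side of it.
    require($CFG->dirroot . '/course/format/' . $course->format . '/format.php');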

Summary

So, what have I learnt from all this messing about?

It seems that (broadly speaking) the code used to move blocks around on the main Moodle site is also used to move blocks around on a course page; perhaps this isn't too surprising (but it is reassuring).  I still have no idea what 'pinned blocks' means or what the corresponding data table is for, but I'm sure I'll figure it out in time!

Another thing that I have learnt is that the view course and the main index.php pages are built in different ways.  As a result, if I ever need to change the underlying design or format of a course, I now know where to look (not that I ever think this is something that I’ll need to do!)

I have seen a couple of references to AJAX (MoodleDocs) but I have to confess that I am not much wiser about what AJAX style functionality is currently implemented within the version of Moodle I have been playing with.  Perhaps this is one of those other issues that will become clearer with time (and experience).

One thing, however, does strike me: the database and the user interface components are very closely tied together (closely coupled), which may make change difficult in some cases.  One of the things that I have on my perpetual 'todo' list is to have a long hard look at the Fluid Project, but other activities must currently take precedence.

This pretty much concludes my adventure into the world of Moodle blocks. There’s a whole load of Moodle related stuff that I hope to look at (and hopefully describe) at some point in the future: groups, roles, contexts, and forums.  Wish me luck!

Acknowledgements: Image from lifeontheedge, licensed under Creative Commons.

Permalink
Share post
Christopher Douce

How Moodle block editing works: displaying a block (part 1)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:58

A photograph of a car engine

One of the great things about Moodle (other than the number of databases it can use!) is the way that courses can be easily created and edited.  One of its best features is the edit button that can be found at the top of many pages.  Administrators and course managers can push this button and quickly add and remove functionality to redesign a course or a site within seconds.

This blog post is the first in a series of two (but might even extend to three) that aims to answer the question of: how does the Moodle block editing magic work?  To answer this question I found that it was useful to split this big question into a number of smaller questions.  These are: how are blocks presented to the user?, how are block layouts stored in the Moodle database?, and what happens when the user clicks on the edit button and makes changes to the layout of a site or a course?

There are two reasons for wanting to answer to these questions.  The first is that knowing something about this key part of Moodle might help me to understand more about its architecture which might help me in the future if I have to make any changes as a part of the EU4ALL project.  The second is pure curiosity, particularly regarding the database tables and structures - I would like to know how they work.

There are two broad approaches that I could take to answer these questions: look at things from the top down, or from the bottom up.  I could either look at how the user interfaces are created, or I could have a look at the database to see if I can find data tables that might be used to store data that is used when the Moodle user interface is created.  In the end I used a combination of top down and bottom up approaches to understand a bit of what is going on.

This post will present what I have learnt about how Moodle presents blocks.  The second post will be about what I have found out about the database and how it works (in relation to Moodle blocks) and what happens when you click on the various block editing icons.

There will be loads of detail that will remain unsaid, and I'll be skipping over lots of code subtleties that I haven't yet fully understood!  I'll also be opinionated, so advance apologies to any Moodle developers who I might inadvertently offend.  I hope my opinions are received in the positive spirit in which they are intended.

Introducing blocks

Blocks are bits of functionality which sit on either side of a Moodle site or course.  They can do loads of stuff: provide information to students about their assignment dates, and provide access to discussion forums.  When first looking at Moodleworld, I had to pause a moment to distinguish between blocks, resources and activities.  Blocks, it might be argued, are pieces of functionality that can support your learning, whilst activities and resources may be a central part of your learning (but don’t quote me on that!)

Screen grab of an administrator editing a Moodle course

Not long after starting looking at the blocks code, I discovered a developer page on the subject.  This was useful.  I soon found out that apparently there are plans to improve the block system for the next version of Moodle.  The developers have created an interestingly phrased bug to help guide the development of the next release.  This said, all the investigations reported here relate to version 1.9+, so things may very well have moved on.

Looking at Index

Blocks can be used in at least two different ways: on the main Moodle site area (which is seen when you enter a URL which corresponds to a Moodle installation) and within individual courses.  I immediately suspect that there is some code that is common between both of them.  To make things easy for myself, I’ve decided (after a number of experiments) to look at how blocks are created for a Moodle site.

To start to figure out how things work the first place that I look at is the index.php file.  (I must confess that I actually started to try to figure out what happened when you click on the editing button, but this proved to be too tough, so I backtracked…)

So, what does the index.php file do?  I soon discover a variable called $PAGE and ask the innocuous question: why are some Moodle variables in UPPERCASE and others in lowercase?  I discover an answer in the Moodle coding guidelines: anything that is in uppercase appears to be a global variable.  I try to find a page that describes the purpose of the different global variables, but I fail, instead uncovering a reference to session variables, leaving me wondering what the $PAGE class is all about.
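
To make the convention concrete, here is a minimal sketch of how an uppercase global gets used.  This is entirely my own illustration, not verbatim Moodle code, and the 'title' property is assumed purely for the example.

<?php
// Minimal sketch of the uppercase-globals convention (not real Moodle
// code; the 'title' property is assumed purely for illustration).
$PAGE = new stdClass();
$PAGE->title = 'My Moodle site';

function print_page_heading() {
    global $PAGE;   // uppercase globals must be pulled into local scope
    echo '<h1>' . $PAGE->title . '</h1>';
}

print_page_heading();   // prints <h1>My Moodle site</h1>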

Pressing on I see that there are some functions that seem to calculate the width of the left and the right hand areas where the blocks are displayed.  There is also some code that seems to generate some standard HTML for a header (showing the Moodle instance title and login info).

The index page then changes from PHP to HTML and I'm presented with a table.  This surprises me a little.  Tables shouldn't really be used for formatting and should instead only be used to present data.  It seems that the table is used to format the different portions of the screen, dividing it into a left hand bunch of columns, a centre part where stuff is displayed, and a right hand column.

It appears that the code that co-ordinates the printing of the left and right blocks is very similar, the only difference being the parameters that indicate whether things appear on the left or on the right.

The index file itself doesn't seem to display very much, so the code that creates the HTML for the different blocks is obviously to be found in other parts of the Moodle codebase.

Seeding program behaviour

To begin to explore how different blocks are created I decide to create some test data.  I add a single block to the front page of Moodle and position it at the top on the right hand side:

Screen shot showing empty news block

Knowing that I have one block that will be displayed, I can then trace through the code when the 'create right hand side' code is executed, using my NuSphere debugger to see what is called and when.

One thing that I'm rather surprised about is how much I use the different views that my debugger offers.  It really helps me to begin to learn about the structure of the codebase and the interdependencies between the different files and functions.

Trying to understand the classes

It soon becomes apparent that the developers are making use of some object-oriented programming features.  In my opinion this is exactly the right thing to do.  I hold the view that if you define the problem in the right way then its solution (in terms of writing the code that connects the different definitions together) can be easy, providing that you write things well (this said, I do come from a culture of Java and C#, and was brought up, initially, on a diet of Pascal).

After some probing around, two libraries seem to be immediately important to know about: weblib and blocklib.  The comment at the top of weblib describes it as a 'library of all general-purpose Moodle PHP functions and constants that produce HTML output'.  Blocklib is described as 'all the necessary stuff to use blocks in course pages'.

In index, there is a call to a function called blocks_setup (blocks, I discover, can be pinned true, pinned both, or pinned false - block pinning is associated with lessons, something that I haven't studied).  This function appears to call another function named blocks_get_by_page (passing it the $PAGE global), which returns a data structure that contains two arrays: one called l and the other called r.  I'm assuming here that the array data has been pulled from the database.
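
As a note to myself, the shape of the structure seems to be something like the sketch below.  The field names come from my debugging session, so treat them as approximate; this also assumes Moodle's libraries are already loaded, as they are in index.php.

<?php
// Sketch of the structure that blocks_setup()/blocks_get_by_page()
// appear to return (field names are approximate; assumes Moodle's
// libraries are loaded, as in index.php).
$pageblocks = blocks_setup($PAGE);    // pinning defaults to 'pinned false'
$left_blocks  = $pageblocks['l'];     // block records for the left column
$right_blocks = $pageblocks['r'];     // block records for the right column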

The function that follows is called blocks_have_content.  This function does quite a bit.  It takes the earlier data structure, translates the block number (which indicates which block is to be displayed on the page) into a block name through a database call, and then uses this name to instantiate an object whose class name is derived from the block name (by prepending 'block_' to the start).

There is something to be cautious about here: there is a dependency between the contents of the database (which are added to when the Moodle database is installed) and the name of the class.  If either one of these were to change the blocks would not display properly.

The class that corresponds to the news block is named 'block_news_items'.  This class is derived from (or inherits from) another class called block_base that is defined within the file moodleblock.class.php.  A similar pattern is followed with other blocks.
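
The pattern, as I understand it, looks roughly like the sketch below.  The class bodies are invented for illustration; the real block_base lives in moodleblock.class.php and each block class lives in its own directory.

<?php
// Sketch of the naming and inheritance pattern (class bodies invented;
// the real classes live in moodleblock.class.php and the blocks folder).
class block_base {
    public $content = null;
    public function get_content() { return null; }
}

class block_news_items extends block_base {
    public function get_content() {
        $this->content = new stdClass();
        $this->content->text = '<p>No news items yet.</p>';   // block HTML
        return $this->content;
    }
}

// The block name comes from the database; prepending 'block_' yields a
// class name, which PHP can instantiate from a string variable.
$blockname = 'news_items';
$classname = 'block_' . $blockname;
$block     = new $classname();
echo $block->get_content()->text;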

Is_empty()

Following the program flow, there is a call to a function called is_empty() within blocklib.php.  This code strikes me as confusing, since a function named is_empty should only be doing one thing.  is_empty appears to have a 'side effect' of storing the HTML for a block (obtained from a call to get_content) in a variable called 'content'.  Functions should only do what they say they do; anything else risks increasing the burden of program comprehension and maintenance.
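
To illustrate the point, here is my own reconstruction of the pattern being criticised (a sketch, not the real blocklib code): a question that also quietly does some work.

<?php
// My reconstruction (not the real blocklib code) of a predicate with a
// hidden side effect: it caches the block's HTML as it answers.
function is_empty($block) {
    if (!isset($block->content)) {
        $block->content = $block->get_content();   // the hidden side effect
    }
    return empty($block->content->text);
}
// A clearer design might be an explicit fetch_content() step followed
// by a side-effect-free is_empty() test.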

The Moodle codebase contains several versions of get_content, one for each of the different blocks that can be displayed.  The version that is called depends on which object Moodle is currently working with.  Since there is only one block, the get_content function within block_news_items is called.  It then returns some HTML that describes how the block will be presented.

This HTML is stored in the structure which originally described which block goes where.  If you look through the pageblocks variable, the HTML can be found by going to either the left or right array, looking in the 'obj' field, then going to 'content'.  In 'content' you will find a further field called 'text' that contains the HTML that is to be displayed.
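
In code terms, and assuming the $pageblocks structure from the earlier sketch, the path to the HTML looks something like this (the exact field names are as I found them in the debugger, so treat them as approximate):

<?php
// Digging the generated HTML out of the structure (continuing the
// earlier sketch; field names are approximate).
$html = $pageblocks['r'][0]->obj->content->text;   // first right-hand block
echo $html;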

When all the HTML has been safely stored away in memory it is almost ready to be printed (or presented to a web client). 

Calls to print_container_start() and print_container_end() delineate a call to blocks_print_group.  Within this function there is a database call to check whether the block is visible, and then a call to _print_block() is made.  This is a member function of a class, as the leading underscore indicates.  _print_block() can be found within the moodleblock.class file.  This function (if you are still following either me or the code!) makes a call to the print_side_block function (one of those general-purpose PHP functions) contained within weblib.php.
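
Written out flat, the call chain I traced looks like this.  It is an outline rather than runnable code, with arguments omitted:

<?php
// Outline of the print path, as traced (arguments omitted):
//
// index.php
//   print_container_start();
//   blocks_print_group(...);        // blocklib.php: checks block visibility
//     $block->_print_block();       // moodleblock.class.php
//       print_side_block(...);      // weblib.php: emits the block's HTML
//   print_container_end();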

Summary and towards part 2

I guess my main summary is that to create something that is simple and easy to use can require quite a lot of complicated code.

My original objective was to try to understand the mechanisms underpinning the editing and customising of courses (particularly blocks), but I have not really looked at the differences between how blocks are presented within the course areas and how they are presented on the main site.  Learning about how things work has been an interesting exercise.  One point that I should add is that, from an accessibility perspective, the use of tables for layout purposes should ideally be avoided.

What is great is that there is some object-oriented code beginning to appear within the Moodle codebase.  What is confusing (to me, at least) is the way that some data structures can be so readily changed (or added to) by PHP.  I hold the opinion that stronger data types can really help developers to understand the code that they are faced with, since types constrain the actions that can be carried out.  Stronger typing can also help development by giving your tools more of an opportunity to help you (through autocomplete or intellisense options), but these opinions probably reflect my earlier programming background and experience.

On the subject of data types, the next post in this series will be about how the Moodle database stores information about the blocks that are seen on the screen.  Hopefully this might fill in the gaps of this post where the word 'database' is mentioned.

Acknowledgement: Picture by zoologist, from Flickr.  Licensed under Creative Commons.

Christopher Douce

Learning Technologies 2009

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:49

Conference logo

Yesterday I went to the Learning Technologies exhibition held at Kensington Olympia, London.  This is the third time I have been to this event.  The first time I went (back in 2004) was because I also attended a related exhibition called BETT which is hosted a couple of weeks earlier.

The two shows have different audiences: BETT is more focussed towards the schools and government funded education sector whereas the Learning Technologies exhibition focuses more on education (or training) software, services and systems for private sector companies (but there is much cross over, of course).  Every year there seems to be a conference that is linked to the exhibition but I have so far never been able to attend.

Last year

Last year I came away from the exhibition having learnt a few new things.  I learnt that there was a range of products called competency management systems, which enable corporations to learn about what their employees know (and how this maps to individual training courses).  I also learnt about the release of new mobile learning systems.  The prevailing theme of last year's exhibition seemed to be the concept of Rapid E-learning (more of this later).

My objective for this visit was to determine whether there were any new themes (or innovations) in learning technologies that are emerging from the commercial sectors.  I also had one eye on the subject of accessibility and the extent to which Moodle was beginning to feature in the commercial e-learning sphere.

Themes

Whilst walking around the exhibition I asked a number of exhibitors whether they thought there were any differences between this year's exhibition and the previous year's.  Two main themes seemed to dominate.  The first is the application of web 2.0 ideas into learning systems.  The second is the idea of informal learning.  Both of these themes were, perhaps unsurprisingly, reflected in articles in the free magazine that came with admission.  I also picked up on a number of other themes too.  These are listed below.

Web 2.0

The notion of web 2.0 (or the 'participatory web') seemed to feature quite heavily.  Given the amount of discussion this label has generated, this perhaps isn't surprising.  It was interesting to see that an article written by the current Open University vice-chancellor was given a mention in the exhibition and conference magazine.

One comment that I heard from the exhibitors is that there is a wider acceptance of the use of blogs and wikis.  One vendor I spoke to was called Infinity Learning.  Infinity were presenting their 'learning portal' product, which provided functionality to allow learners to rate and review courses.  It was interesting since it featured a recommendation system akin to the one Amazon uses when it offers you products that other people have bought.  I presume this will expose the learning pathways that other employees or learners have followed, allowing water cooler discussions about which learning activities were helpful to become more explicit.

Informal learning

I have to confess, I do struggle with understanding the concept of informal learning, but the exhibition magazine points me in the direction of a related blog post.  There are a couple of links within that post that might be useful.

One vendor connected e-learning and informal learning by describing an approach where large quantities of digital resources are placed on-line, allowing employees to gain access to useful information as and when it is required, so that gaps in knowledge about procedure or practice can be filled.

Informal learning, in this sense, can be connected to some of the other themes that could be found within the exhibition, specifically ‘bite sized’ or on-demand learning (which may or may not incorporate product simulations).

Gaming

There seemed to be a bit of a buzz about gaming, but I didn’t get a sense that this was one of the big topics of the show.  When speaking to one exhibitor, gaming was mentioned in the same sentence as virtual worlds.

Rapid e-learning

The idea of rapid e-learning initially puzzled me when I first came across it last year.  I soon realised that rapid e-learning is facilitated by tools that allow e-learning designers to create their own in-house courses without having to go to professional e-learning content development companies (of which there are many).

Last year, the word at the exhibition was that rapid e-learning tools were causing the decline in the price of bespoke e-learning contracts.  Every exhibitor that had a rapid e-learning tool seemed to have their own learning management system of some kind.  When it comes to industry standards (in the e-learning world), the one that is most often mentioned is SCORM (wikipedia).

Bite sized e-learning

Bite sized e-learning seems to relate primarily to e-learning objects that are quite small.  You might use informal learning and bite sized learning in the same sentence.  These might be small 'mini courses' that give you instruction about how to carry out a particular task or operation within your institution.  This is also related to the next theme: simulations.  (As an aside, I'm assuming that a bite sized piece of e-learning doesn't last more than ten or twenty minutes, but this wasn't a question that I really asked).

Simulations

A number of vendors were selling tools that enable you to build simulations of any IT system that your organisation might have deployed.  Simulations can be used to either train up new employees, or to offer 'bite sized' reminder courses that can help to guide employees through the features of a large system that might not be used very often.

The presence of these products did make me wonder about how the provision of simulation recording (and development) systems might stack up against quick and easy to use open source tools such as Wink (but this exposes a dimension of simulation systems that has illustration at one end and involvement at the other).

Competency Management

I love this term!  It has such a positive feel to it!

Like last year, there were some vendors selling systems that attempted to bridge the gap between human-resources systems and training delivery systems.  I know very little about human resource management systems, but I can see that linking them to LMS systems that deliver different kinds of learning might be useful.  When asking about the different personnel management systems on the market, Oracle seemed to be the one mentioned most frequently, having acquired Peoplesoft (wikipedia).

Content Development

I stumbled across the term 'workflow management' a couple of times.  I can see the purpose of using an e-learning material workflow management system: a company needs to draw upon the skills and abilities of different people within an organisation, some of whom might be external contractors.  I find the area of workflow management systems interesting since they can really take advantage of the fact that IT systems are exceptionally good at remembering stuff about who did what and when.

Moodle

Moodle cropped up a couple of times.  Kineo, a company based in Brighton in the UK was offering a cut-price hosted solution for a period of twelve months.  As a part of the package they appeared to be offering customising (or branding) of the Moodle instance to match the identity of your institution, and some training.  Sadly, all the guys at Kineo were way too busy to have a chat with me!

The second big Moodle related find was a product called Moomis, marketed by Aardpress.  Moomis is apparently a Moodle 'plug-in' that can add Continuing Professional Development (CPD) and competency management functionality (my favourite term) to make Moodle more flavoursome for the more commercially inclined.

Accessibility

Since e-learning materials appear to be often created using rapid e-learning tools, the accessibility of the resulting material is likely to be partially dependent upon the structure of the digital resources that are generated.  I didn't have much of a chance to quiz vendors about this issue, but well known UK companies such as Epic and Brightwave are known to appreciate the importance of accessibility.

On another note, I was interested to discover the presence of Texthelp, a company who produce a tool called Read&Write Gold (they also produce the BrowseAloud system, which can be used in conjunction with the main Open University website).  They kindly gave me a quick demo and said that they had just released a new version which incorporates new synthetic voices and updated dictionaries.

I also discovered the presence of the UK Council for Access and Equality, a not for profit organisation.

The downturn

The Learning Technologies exhibition seemed to be as busy as it was last year – it was certainly buzzing with visitors.  I asked a couple of people about their opinions about the current concerns about 'the downturn' and received a mixed set of responses.  Some companies, it was reasoned, were choosing to bring their training spend 'in-house', choosing to use rapid e-learning tools (but this was in line with some of the trends I felt were at the exhibition last year).

Other companies seemed to state that they had been affected, whereas others had a deliberate strategy of going after public sector projects.  One of the presentations that I briefly attended contained the argument that organisations should make use of learning technologies to ensure that employees are able to perform as efficiently as possible.  On-demand 'bite sized' e-learning will certainly help when it comes to carrying out complex infrequent tasks.

And finally

I also discovered the presence of a project called Next Generation Learning, a campaign sponsored by Becta.

As well as noticing the presence of organisations like the British Computer Society, I also noticed an organisation called the e-Learning Network (which appears to be a partner of the Association for Learning Technology), and was duly informed that associate membership was free.  Might be worth a look.

Summary

I quite like the Learning Technologies exhibition (I might even be able to attend the conference one day).  It's a good way to find out (very roughly and quickly) what's happening in the wider e-learning industry. 

It's interesting to see that vendors offer a portfolio of different services, often including content creation, tool development, managed learning environment provision and system hosting.  The concept of 'web 2.0' (whatever that means) seems to be a salient theme this year, and it was notable to see the substantial use of the term informal learning.  It'll be interesting to see how the exhibition looks next year.

Acknowledgements: thanks to all those exhibitors who I spoke to!

Christopher Douce

Personalising museum experience

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:48

Thyssen-Bornemisza museum, Madrid

Last year was a fun year.  At one point I found I had a number of hours to kill before I caught an onward travel connection.  Since I was travelling through a city, I decided to spend some of that time visiting museums.

I have to confess I really like museums.  My favourite type is science and engineering museums. I really like looking at machines, mechanisms and drawings, learning about the people and situations that shaped them.  I also like visiting art museums too, but I will be the first to confess that I do find some of the exhibits that they can contain a little difficult to understand.

Starting my exploration

I stepped into the Thyssen-Bornemisza museum (wikipedia) with mild trepidation, not really knowing what I was letting myself in for.  After the entrance area I discovered a desk that was renting audio guides.  Since I felt that I might gain something from the use of an audio guide (and since I was travelling alone, it could offer me some company), I decided to rent one for a couple of hours.

With my guide in hand I started to wander around the gallery.  The paintings appeared to be set out in a very particular and deliberate way.  The gallery designer was obviously trying to tell me something about the history of art (of which I know next to nothing).  The paintings gradually changed from impressionism, to modernism, through to paintings that I could only describe as thoroughly abstract (some of which I thoroughly liked!)

Extending my guide

I remember stopping at a couple of paintings at the impressionist section.  The disembodied voice of my guide was telling me to pay attention to the foreground, and the background: particular details were considered to be important.  I was given some background information, about where the painter was working and who he was working with.

On a couple of occasions I felt that I had been told a huge amount of detail, but I felt that none of it was sticking.  I didn't have a mental framework around which to store these new facts that I was being presented with.  Art history students, on the other hand, might have less trouble.

What I did discover is that some subjects interested me significantly more than others.  I wanted to know which artists were influenced by others.  I wanted to hear a timeline of how they were connected.

I didn't just want my guide to tell me about what I was looking at, I wanted my audio guide to be a guide, to be more like a person who would perhaps direct me to things that I might be interested in looking at or learning about.  I wanted my audio guide to branch off on an interesting anecdote about the connections between two different artists, about the trials and tribulation of their daily lives.  I felt that I needed this functionality not only to uncover more about what I was seeing, but also to help me to find a way to structure the information that I was hearing.

Alternative information

Perhaps my mobile device could present a list of topics or themes that related to a particular painting.  It might display the name of the artist, some information about the scene that was being depicted, and perhaps some keywords that correspond to the type under which it could be broadly categorised.

Choosing these entries might direct you to related audio files or perhaps other paintings.  A visitor might be presented with words like, 'you might want to look at this painting by this artist', followed by some instructions about where to find the painting in the gallery (and its unique name or number).

If this alternative sounded interesting (but it wasn't your main interest) you might be able to store this potentially interesting diversion into a 'trail store', a form of bookmark for audio guides.

Personalised guides

Of course, it would be much better if you had your own personal human guide, but there is always the fear of sounding like an idiot if you ask questions like, 'so, erm, what is impressionism exactly?', especially if you are amongst a large group of people!

There are other things you could do too.  Different visitors will take different routes through a gallery or museum.  You might be able to follow the routes (or footsteps) that other visitors have taken.

Strangers could name and store their own routes and 'interest maps'.  You could break off a route half way through a preexisting 'discovery path' and form your own.  This could become, in essence, a form of social software for gallery spaces.  A static guide might be able to present user generated pathways through gallery generated content.

Personal devices

One of the things I had to do when I explored my gallery was exchange my driving licence for a piece of clumsy, uncomfortable mobile technology.  It was only later that it struck me that I had a relatively high tech piece of mobile technology in my pocket: a mobile phone. 

To be fair, I do hold a bit of fondness for my simple retro Nokia device, but I could imagine a situation where audio guides are not delivered by custom pieces of hardware, but are instead streamed directly to your own hand held personal device.  Payment for a 'guide' service could be made directly through the phone.  Different galleries or museums may begin to host their own systems, where physical 'guide access posters' give visitors instructions about how to access a parallel world of exploration and learning.

Rather than using something that is unfamiliar, you might be able to use your own headphones, and perhaps use your device to take away souvenirs (or information artefacts) that relate to particular exhibits.  Museums are, after all, so packed with information, it is difficult to 'take everything in'.  Your own device may be used to augment your experience, and remind you of what you found to be particularly interesting.

Pervasive guides

If each user has their own device, it is possible that this device could store a representation of their own interests or learning preferences.  Before stepping over the threshold of a museum, you might have already told your device that you are interested in looking at a particular period of painting.  A museum website might be able to offer you some advice about what kinds of preferences you might choose before your visit.

With the guide that I used, I moved between the individual exhibits by entering exhibit numbers into a keypad.  Might there be a better, less visible way to tell the guide device which exhibits are of interest?

In museums like the Victoria and Albert and the Natural History Museum, it takes many visits to explore the galleries and exhibits.  Ideally a human guide would remember what you might have seen before and what interests you have.  Perhaps a digital personalized guide may be able to store information about your previous visits, helping you to remember what you previously studied.  A digital system might also have the power to describe what has changed in terms of exhibits if some time has elapsed between your visits.  A gallery may be able to advertise its own exhibits.

Challenges

These thoughts spring from an idealised vision of what a perfect audio (or mobile) guide through a museum or gallery might look like.  Ideally it should run on your own device, and ideally it should enable you to learn and allow you to take snippets or fragments of your experience away with you.  In some senses, it might be possible to construct a museum exhibit e-portfolio (wikipedia), to store digital mementoes of your real-world experiences.

There are many unsaid challenges to realise a pervasive personalized mobile audio guide.  We need to understand how to best create material that works for different groups of learners.  In turn, we need to understand how to best create user models (wikipedia) of visitors.

Perhaps one of the biggest challenges may lie with the creation of a standards-based interoperable infrastructure that might enable public exhibition spaces to allow materials and services to be made available to personal hand held devices.

Acknowledgement: image from Flickr by jonmcalister, licensed under Creative Commons.

Christopher Douce

Database abstraction layers and Moodle

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:47


One of the great things about Moodle is that it can be used with a number of different database systems.  It can use popular open source databases such as MySQL or Postgres, or commercial offerings from Oracle or Microsoft. 

The great thing about offering this level of flexibility is that it can make the adoption of Moodle into an existing IT infrastructure a whole lot easier.  If you have an IT department which is Microsoft centric, then adopting Moodle or slotting it into an existing IT infrastructure might not cause too much upset.  Similarly, if your IT department uses Linux and has a dedicated database server that runs Postgres, offering choice of back end technologies can make things easier for system administrators.

This post is all about exploring how Moodle works with so many different database systems.  The key phrase that I am starting with is database abstraction layer.  Wikipedia defines a database abstraction layer as 'an application programming interface which unifies the communication between a computer application and databases'.  In some cases, a database abstraction layer can also help to maintain good application performance by caching important data, avoiding the need to repeatedly request data from a database engine.

Here are my questions: how does a Moodle developer save stuff to and get stuff from a database? Does Moodle have a database abstraction layer?  If it does, how might it work?  Finally, are there other database abstraction layers or mechanisms out there that could be used?  Let’s begin with the first question.

Getting stuff in and out

What instructions or mechanisms do developers use to get data into and out of Moodle, or a database that Moodle is using?  My first port of call is the Moodle documentation.  After a couple of clicks I find something called the Moodle Database Abstraction Layer.  This looks interesting but way too complicated (and initially confusing) for me to understand in one go.  What I'm interested in is an example.

I turn to the Moodle codebase and, using my development environment, I perform a text search (or grep) for the word SELECT, which I know to be a frequently used part of the SQL database language that underpins most relational database systems, and browse through the results.  I quickly uncover a function called get_record_sql, which seems to be the way to send SQL language commands to a database.

Another search reveals that the function is defined within a file called dmllib.php.  This library is said to contain all the Data Manipulation Language functions used to interact with the DB.  Comments within the file report that the functions are 'generic' and work with a number of different databases.  A link to a documentation page is also provided, but it seems to describe functions that relate to the development version of Moodle, not the version that I am using (version 1.9).

It seems that functions named get_record_sql, get_record_select and update_record (amongst others) are all used to write to and read from a database that is used with Moodle.  To write new Moodle modules requires a developer to know a vocabulary of abstraction functions. 
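
To make this concrete, here is a hedged sketch of how these functions appear to be called.  The table, field and condition values are just examples, and the code assumes that Moodle 1.9's libraries have been loaded (as they would be in any module):

<?php
// Example calls to the 1.9-era DML functions named above (table and
// condition values are illustrative; assumes Moodle's libraries are
// loaded, as in any module's code).
global $CFG;
$course  = get_record_sql('SELECT * FROM ' . $CFG->prefix . 'course WHERE id = 1');
$courses = get_records_select('course', 'visible = 1');   // all visible courses
$course->fullname = 'A renamed course';
$ok = update_record('course', $course);                   // write changes back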

The second question can be answered relatively easily: Moodle does seem to have a database abstraction layer.  Judging from the documentation it seems to have two different types of abstraction layers: one for the usage of a database, another for the creation of database structures.  I’ll try to write something about this second type in another post.

How does it work?

How does the Moodle abstraction layer work?  How does it act as an intermediary between the Moodle application and the chosen database engine? There seems to be a magic global variable called $db, and the abstraction layer code seems to be replete with comments about something called ADOdb.  Is ADOdb the magic that speaks to the different databases?

Another search for the phrase '$db =’ yields a set of interesting results, including files contained within a folder called adodb (lib/adodb).  This seems to be a database access library for PHP.  I uncover a link to the ADOdb sourceforge project from where the code originated and I’m rudely confronted with some sample code.
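
The sample code shows the library being used on its own, along these lines (the driver name, credentials and table are placeholders):

<?php
// ADOdb used directly, following the project's own examples (driver
// name, credentials and table name are placeholders).
require_once 'adodb/adodb.inc.php';
$db = ADONewConnection('mysqli');                // choose a driver by name
$db->Connect('localhost', 'user', 'password', 'moodle');
$rs = $db->Execute('SELECT id, fullname FROM mdl_course');
while ($rs && !$rs->EOF) {
    echo $rs->fields['fullname'] . "\n";
    $rs->MoveNext();
}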

At this point, it seems that Moodle uses two different layers to 'get' and 'set' data.  It begins with the Moodle-world functions (the data manipulation language functions).  Calls are then passed to ADOdb, where they are magically ushered towards the database.

Other questions come to mind, such as: why did the Moodle developers choose ADOdb?  This question does not have an answer that can be easily uncovered.

Other abstraction layers

A quick glance at two of my PHP books points towards different database (or data) abstraction layers.  My copy of Programming PHP, for example, emphasises the use of a library called PEAR DB (named after the PHP Extension and Application Repository).  Clicking this previous link tells me that the PEAR DB library has since been replaced by something called MDB2.  My PHP Cookbook, on the other hand, emphasises the use of PDO, which is a part of PHP 5 (a version of the PHP engine that the Moodle community has only relatively recently adopted).
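
For comparison, the same kind of query written against PDO looks something like this (connection details and table name are placeholders):

<?php
// The equivalent query using PDO, which ships with PHP 5 (connection
// details and table name are placeholders).
$pdo  = new PDO('mysql:host=localhost;dbname=moodle', 'user', 'password');
$stmt = $pdo->prepare('SELECT id, fullname FROM mdl_course WHERE id = ?');
$stmt->execute(array(1));
$course = $stmt->fetch(PDO::FETCH_OBJ);
echo $course->fullname;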

So, why did the Moodle developers choose ADOdb when there are all these other mechanisms on offer?  I haven't managed to uncover a forum discussion that explains the precise motivation for the choice.  Moodle release notes go back to May 2005, but the earliest forum discussion I can find that relates to ADOdb dates back to 2002.  Perhaps the choice could be put down as a happy accident of history, and one that has facilitated amazing database interoperability.

One thing is clear: PDO is the (relatively) new kid on the 'database abstraction' block, and other software developers are asking the interesting (and difficult to answer) question of 'ADOdb or PDO: which is better?'  In trying to answer this question myself, I uncovered a slideshare presentation and a blog post that tries to compare the two technologies by using benchmarks to see which is faster.  PDO, it seems, is a central part of PHP 5 and has been written in 'native code', which might explain why it is reported as being faster.

The debates about which database interface technology is better are interesting but don't directly arrive at a clear conclusion.  Different technologies may do similar things in slightly different ways, and sometimes a choice of one or the other may boil down to what the programmers have used in the past.  Unpicking the subtle advantages and disadvantages of each approach needs lots of time and determination.  And when you have an answer, affecting a change may be difficult.

Future developments?

I recently uncovered a really interesting Moodle forum discussion on the topic of database abstraction (amongst other things).  Subjects included differences between various database systems, the possibility of using stored procedures, the difficulty of mapping object-oriented data structures to relational database engines and so on.  All great fun for computer scientists and application developers!

One thing that bugs me about the Moodle database abstraction layer is that it is very shallow.  It requires module developers to know a lot about things that ideally they shouldn't need to know about.  To add courses and modules, you have to know a little about the structure of the Moodle database and how to work with it.  There is very little code that separates the world of SQL statements (passed on to databases using DML and ADOdb) and the interfaces that are presented to users.

It could be argued that adding additional layers of abstraction to more firmly manage data flow between Moodle application code and the database would place additional demands on the Moodle programmers.  In turn, this could make it harder for occasional contributors, particularly those working within academic institutions, to make contributions to the code base.  I strongly disagree with this argument.  Creating a more sophisticated (or layered) database abstraction approach may open up the possibility of making more effective use of the functions of different database engines, and may make the Moodle code base easier to understand (if the abstractions are designed correctly).
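
By way of illustration, and this is entirely my own sketch rather than anything in the Moodle codebase, an extra layer might look something like this: calling code asks for domain objects, and the table and field names live in exactly one place.

<?php
// An illustrative extra layer (entirely my own sketch, not Moodle code):
// calling code talks to this class and never sees table or field names.
// Assumes Moodle 1.9's DML functions are available.
class course_repository {
    public function find($id) {
        // the only place that knows the table and its key field
        return get_record('course', 'id', $id);
    }
    public function rename($id, $newname) {
        $course = $this->find($id);
        $course->fullname = $newname;
        return update_record('course', $course);
    }
}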

One way to consider how the abstraction layer might be improved is to look at how other open source projects solve the same problem.  I was recently told about the Drupal database abstraction layer.  One useful activity might be to investigate its design and learn about what decisions have helped to guide its development.

Summary

Databases can be a tough subject.  Creating an application that can work with different database engines effectively and efficiently is a big software engineering challenge.  This challenge, on the other hand, can make things a lot easier for those people who are responsible for the management and operation of IT services.  Providing application choice can increase the opportunities for an application to be used.

What is certain is that the database abstraction mechanisms that are currently used in Moodle will change as Moodle evolves and database engines are updated.  At the time of writing work is underway to further develop the Moodle database abstraction layer.  I look forward to seeing how it changes.

Image acknowledgement: pinksherbert, from Flickr.

Christopher Douce

Big wins in accessibility?

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 13:33

A sign in a field which says: no visitor access beyond this point

In 2004 a report was published by the Disability Rights Commission (now known as the Equality and Human Rights Commission) that explored the state of website accessibility.  The DRC report, which is also summarised by the on-law website, analysed one thousand different web sites and evaluated their accessibility against the WCAG 1.0 guidelines.  It was concluded that 81% of the sites that were surveyed failed to reach the lowest level of accessibility (level A).

This statistic is surprising because it is such an alarmingly high figure.  This causes me to ask a closely related question: what does not being able to access websites mean?  One answer is that it can mean some people being unable to access goods, services and information.  It may also mean not being able to use tools that can be used to communicate with others.

Another question (and perhaps this is not a 'million dollar' question, but a 'multi-million dollar' question) is: what could we do to reduce this figure?  The DRC report presents a set of very sensible recommendations for different stakeholders: support service providers, assistive technology providers, operating system developers, website developers and owners, and developers of checking tools.

An alternative vision?

I think there is another approach that we could use.  The world-wide-web would not be what it is today without open source software (OSS).  You could even consider OSS to be the web's backbone.  OSS powers the programming languages used to create open source operating systems (Linux).  These operating systems can play host to open source web servers (Apache), which in turn can offer functionality through open-source software development frameworks built using open-source programming languages.

Some open source developments are more popular than others.  There may be a whole range of reasons that might contribute to success or popularity.  Usually it amounts to a vigorous development community and the fact that a product happens to solve a precise problem very well.

The 81% figure mentioned earlier relates only to web sites.  Many open source software developments are created especially to make it easier for other developers to build and manage different types of end-user facing web-based applications.

If we take the argument that there are open source software packages that are used to power web sites, and acknowledge the fact that some open source applications are likely to be more popular than others, we could argue that by improving the accessibility of certain web frameworks we might be able to reduce that 81% figure.

Of course, there is a difference between making changes to a software framework to make it more accessible to users, and making the materials that are presented using a framework more accessible.  Rather than tackling these two issues together, let's just think about choosing software frameworks.

Choosing frameworks to explore

I use the web for loads of things.  I use it to both write and consume blogs.  I also use the web to buy stuff (especially around Christmas time!)  Very occasionally I might poke my head into on-line discussion forums, especially those that discuss programming or software development related topics.  I also browse to news portals (such as the BBC), and find myself on various information exchanges. 

In essence, I use the web for a whole range of different stuff.  If I take each of my personal 'web use cases', I can probably find an open source application that supports each of these tasks.  Let’s begin with the most obvious.  Let’s begin with blogs.

Blogging tools

Here, I have two questions: how accessible are blogging tools (to both read and write entries), and what blogging tools are out there?

I don’t know the answer to the first question, but I suspect that their accessibility could be improved.  On some sites you are presented with a whole range of different adverts and links.  Headings and tagging may be mysterious.  The blog editing tools may present users with a range of confusing icons and popups.  This is a topic ripe for investigation.

But what tools are out there?  A quick exploration of Wikipedia takes you to an article called Weblog software.  Immediately we are overwhelmed with a list of free and open source software.  But which are the most popular?  A quick poke around reveals two popular contenders for accessibility evaluation: Movable Type and WordPress.

A related question is: how many blogs do these systems collectively represent?  WordPress, for example, claims to be used with 'hundreds of thousands of sites' (and seen by tens of millions of people everyday), and reported 3.8 million downloads in 2007.  These are impressive figures.

Content management systems

Blogs are often referred to in the same sentence as a broader category of web software known as content management systems (or CMS for short).  As always, a quick probe around in Wikipedia reveals an interesting page entitled List of Content Management Systems. It appears there are loads of them!

CMS systems are used for different things.  You might use a CMS to more easily manage a static website that represents the 'store front' of a company or organisation (or a brochureware site, as I believe they might be known).  If used in this way, a CMS can make the task of making updates a lot easier: you might not need a web designer to modify HTML code or add new files.  Some CMS systems contain integrated blog tools.  As well as representing a store front, there might be a 'product' or 'service blog' to provide information to customers about new developments.

You might also use a CMS as an information portal.  A charity might use a CMS to provide fact sheets or articles on a particular subject.  A CMS may also provide additional functionality such as discussion forums, allowing users to share points of view on particular subjects.

A simple question is: which are the most popular open source content management systems?  This simple question is not easy to answer.  It strikes me that you have to be closely involved with the world of content management systems to begin to answer it effectively.  This said, a couple of systems jump out at me, all of which seem to have funny names.  Three systems that I have directly heard of are Joomla!, Mambo and Drupal.  Other interesting systems include TangoCMS and PHPNuke.

Unfortunately it is difficult to get a clear and unambiguous picture of how many web sites are created by these systems.  You cannot always tell, by looking at the code of a website, which content management system it has been created with.  This said, some research has been performed to explore other measures of popularity, such as downloads and search engine ranking values.  (Waterandstone Open Source CMS market share report - 5mb PDF)

What is certain is that exploring the accessibility of one content management system may have a positive impact on a wider set of websites.

Shopping

E-commerce isn’t the preserve of on-line megastores like Amazon.  Small specialist shops selling anything from diet pet food through to hi-fi speaker cables have the potential to become global 'clicks-and-mortar' retailers.

Some content management systems can be extended by installing additional 'blocks' to add e-commerce functionality.  There is also a category of software that could be loosely described as shopping cart software (there is also a Wikipedia shopping software comparison page for the curious).  Further probing uncovers a category entitled Free electronic commerce software.

Following the links to the osCommerce website, some interesting claims can be revealed.  It is stated that over fourteen thousand shops using this one platform have been voluntarily added to a directory of on-line businesses.

I also clicked on another shopping site provider: CubeCart. Although not an open source platform, CubeCart claims that it is used by a 'million store owners around the world'.  It is interesting to note that accessibility is not one of its selling points.

Community sites or forums

Content management systems have begun to step on the toes of what might be considered to be an older category of web software: community or on-line discussion forums.  As ever, Wikipedia is useful, offering a comparison page. Whatever your interest, there will be a forum on the web in which you can share opinions and experience with others.  Forums should be accessible too.

Summary

Creating a web site, or a web based application is hard work (in my opinion).  There is so much to think about: information architecture, graphical design, HTML coding, databases, CSS files.  To help you, there are loads of software development frameworks that can help out.  Many of these frameworks are open source, which means you can modify software so it can match your precise needs.

Another great thing about open source software is that if you find a framework that does not generate HTML code that is as accessible as it could be, any improvements that you make have the potential to affect a wider community of both developers and end users.

What is not clear, however, is the precise extent of the accessibility of some of the software frameworks that have been presented here.  Whilst it is true that accessibility is more than a matter of changing or correcting programming code, exploring some of these projects in depth may be one way to increase the accessibility and on-line experience for the benefit of all web users.

Acknowledgements

Posting image by chough, from Flickr, licensed under Creative Commons.

Christopher Douce

Reflections on learning object granularity

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 1 Sep 2020, 07:57


I first discovered the notion of learning object granularity when I was tasked with creating my first learning object.  I was using an authoring tool that allowed you to describe (or tag) absolutely anything.  This was a revelation!  My tool allowed me to assign descriptions to individual photographs and sets of navigable pages that could contain any type of digital media you could imagine.  You could also assign descriptions to an entire learning object.  Not only was I struggling with how to use the fields (title, description, keywords) that I had to complete, it was also difficult to know where I should stop!

Terms of reference

There are a significant number of terms here that beg further explanation.  The idea of a learning object (wikipedia) is one that is slippery: it varies depending upon who you speak to.  I see a learning object as one or more digital resources that have the potential to provide useful information to, or serve a useful function for, a consumer.  Consumers, I argue, are both the end-users (learners), and those who might use learning objects within a course of study.

An alternative definition might be: a set of learning resources that can be used together to help a learner achieve a defined set of learning objectives.  I think I prefer this second definition.  It feels a little more precise, but there are few words that allude to how large a learning object might be.

Benefits of learning objects

One of the often cited benefits of learning objects is that they have the potential to be reused.  A digital resource taken from one learning situation could be reused (or repurposed) in another.  The benefits could include an increase in the quality of the resulting material and possible savings in terms of time and money.

Learning objects are sometimes held within mysterious instruments called repositories.  If existing materials are taken and modified, they could later be returned to a repository and placed back into circulation so that other people can use and modify them, thus creating a virtuous cycle.  One problem with placing material in a repository is that if your repository contains tens of thousands of individual objects, finding what you want (to solve your particular teaching need) can become difficult (as well as tedious).

Metadata (wikipedia) has the ability to 'augment’ textual searching, potentially increasing the quality of search results.  Metadata also has the ability to offer you additional information or guidance about what an object might contain and how it might have been used, allowing you to make judgements regarding its applicability in your own teaching context.

There is a paradox: the more granular (or mutable) a learning object is, the more easily it can be reused, but the less useful it is likely to be.  The larger a learning object is, the more useful it is to an individual user (since it may attempt to satisfy a set of learning objectives), and the less likely it could be transferred or 'repurposed' to different learning and teaching contexts or situations.  Furthermore, the smaller the learning object, the more moral fibre one needs to successfully create correct (and relevant) metadata.

Repurposing

'Repurposing' is a funny word. I understand it to mean that you take something that already exists and modify it so it becomes useful for your own situation. I think repurposing is intrinsically difficult.  I don't think it's hard in the sense that it's difficult to change or manipulate different types of digital resources (providing you already have skills to use the tools to effect a change).  I think it's hard because of the inherent dependencies that exist within an object.  You have to remember to take care of all those little details. 

I consider repurposing akin to writing an essay. To write a really good essay you have to first understand the material, secondly understand the question that you are writing about, and then finally, understand who you are writing it for.  If you write an essay that consists of paragraphs which have been composed in such a way that they could be used in other essays, I sense you will end up with an essay that is somewhat unsatisfactory (and rather frustrating to read).

There is something else that can make learning object repurposing difficult.  Learning objects are often built with authoring tools.  Some tools begin with a source document and then spit out a learning object at the other end.  The resulting object may (or may not) contain the source from which it was created.  This is considered to be 'destructive' (or one way) 'authoring', where the resulting material is difficult to modify.

Even if we accept that reuse is difficult, there are other reasons why it is not readily performed.  One reason is that there is no real sense of prestige in using other people's materials (but you might get some credit if you find something that is particularly spectacular!).  Essentially, employers don't pay people to repurpose learning materials; they pay people to convey useful and often difficult ideas to learners in a way that is understandable.  There is no reward structure or incentive to reuse existing material or build material that can be easily reused.  Repurposing takes ingenuity and determination, but in the end result much of this may be hidden.

There is a final reason why people may like to create and use their own learning resources rather than reuse the work of others.  The very act of creating a resource allows one to acquire an intimate understanding of the very subject that you are intending to teach.  Creating digital resources is a creative act.  Learning object construction can be constructivism put to work in preparing for teaching.

Considering metadata granularity

The terms 'aggregate' (or 'composite') and 'atomic' are sometimes used when talking about learning objects.  An atomic object, quite simply, is an object that cannot be decomposed.  An atomic object might well be an image or a sound file, whereas an aggregate object might be a content package or a SCORM object.

In my opinion, many aggregate objects should be considered and treated as atomic objects since it could be far too difficult, complex and expensive to treat them in any other way.  I hold this view since learning objects are ultimately difficult to reuse and repurpose for the reasons presented earlier, but this should not detract from the creation and use of repositories.  Repositories are useful, especially if their use is supported by organisational structures and champions.

I hold the view that metadata should match the size of the resource that it describes.  There should be metadata that describes an object in terms of overall learning objectives.  Lower-level metadata can be used to add additional information to an atomic object (such as an image file) that cannot be directly gained from examining its properties or structure (such as using an algorithm to determine its type).

In essence, tagging operations for aggregate and atomic object types must be simple, economic and pragmatic.  If you need to do some tagging to add additional information to a resource (a pragmatic decision), the tagging operation should be simple, and in turn, should be cost effective.

High and low-level metadata tagging

The purpose of high-level tagging, the description of a high-level aggregate object, should be obvious.  Consider a book.  A book has metadata that describes it so that it can be found within a library with relative ease (of course, things get more complicated when we consider more complex artefacts, such as journals!).

Low-level (or lower-level) metadata may correspond to descriptions of individual pages or images (I should stress at this point that my experience in this area comes from the use of software tools rather than any substantial period of formal education!).  Why would one want to 'tag' these smaller items, especially if it costs time and money?  One reason is to provide additional functionality.

Metadata helps you to do stuff, in just the same way that storing a book's title and list of authors helps you to find it within a library.  Within the EU4ALL project, metadata has the potential to allow you to say that one page (which may contain an audio file) is conceptually equivalent to another page (which contains a textual equivalent).

By describing the equivalence relationships between different resources, the user's experience can be optimised to their preferences.  There is also the notion of adaptability: for example, whether a resource can be dynamically changed so that it can be efficiently consumed on the device from which it is accessed (this might be a mobile device, a PC, or a PC running assistive technology).
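As an aside, it might be useful to sketch what such an equivalence could look like.  The IEEE LOM standard has a 'relation' element whose vocabulary includes values such as 'hasformat', which seems a natural fit for pointing from one resource to an equivalent in another format.  The fragment below is purely illustrative: the identifiers are invented, and the metadata model that EU4ALL actually uses may well differ.

```xml
<!-- A hypothetical LOM fragment: an audio page declares a textual alternative. -->
<lom xmlns="http://ltsc.ieee.org/xsd/LOM">
  <general>
    <identifier>
      <catalog>example</catalog>
      <entry>page-12-audio</entry>      <!-- invented identifier -->
    </identifier>
  </general>
  <relation>
    <kind>
      <source>LOMv1.0</source>
      <value>hasformat</value>          <!-- 'this resource exists in another format' -->
    </kind>
    <resource>
      <identifier>
        <catalog>example</catalog>
        <entry>page-12-transcript</entry>  <!-- the textual equivalent -->
      </identifier>
    </resource>
  </relation>
</lom>
```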

Moving forwards

One of the biggest challenges within EU4ALL is to ensure that the user interface to an adaptable learning technology system is coherent, consistent and understandable.  By addressing accessibility concerns, all users could potentially benefit.  Learners could be presented with an interface, or a sign, indicating that alternatives are available at certain points during a learning activity, should any be found to exist.  Presenting alternatives in a way that does not disrupt learning, yet remains flexible by permitting users to change their preferences, is a difficult task.

Creating metadata is difficult and tiresome (not to mention expensive), and as a result mistakes can easily be introduced.  Some researchers (slideshare) have been exploring whether it is possible to automatically generate metadata from information about the context in which resources are deployed; in fact, it appears to be the subject of a recent research tender.  But although metadata languages are intended to be used by machines, humans will ultimately be the final consumers of metadata.

Summary

The notion of a learning object is something that is difficult to define.  Speak to ten different people and you are likely to get ten different answers.  I hold the view that the most useful learning object is an aggregate (or composite) learning object. 

Just as the idea of a learning object can be defined in different ways, the notion of granularity can also have different definitions.  The IEEE LOM standard offers four levels of 'aggregation', ranging from level 1, which refers to 'raw media data' (or media objects), through individual lessons (level 2) and sets of lessons (i.e. courses, level 3), to sets of courses which could lead to a formal qualification or certificate (level 4).  I hold the opinion that metadata should match the size of a 'learning object'.  Otherwise you might end up in a situation where you have to tag everything 'in case it might be used', which is likely to be expensive.
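For completeness, the aggregation level lives in the 'general' category of a LOM record.  A fragment for a single lesson might look something like the sketch below (this is taken from my reading of the LOM XML binding, not from any real repository):

```xml
<general>
  <aggregationLevel>
    <source>LOMv1.0</source>
    <value>2</value>  <!-- 1 = raw media, 2 = a lesson, 3 = a course, 4 = a set of courses -->
  </aggregationLevel>
</general>
```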

High-level metadata (in my opinion) is great for describing larger objects stored within repositories, whereas low-level metadata can be used to describe the adaptability and similarity properties of smaller resources, which opens up the possibility of delivering learning resources that match individual users' needs and preferences.

Acknowledgements

Posting image: from cocoi_m, from Flickr.  Thanks go to an anonymous reviewer whose comments have been very instructive, and to all those on the EU4ALL project.  The opinions presented here are my own rather than those of the project (or the Open University).

Christopher Douce

Using OpenLearn resources with Moodle

Visible to anyone in the world
Edited by Christopher Douce, Monday, 29 Apr 2019, 13:38

OpenLearn logo

One of the things that we need to do in the EU4ALL project is to create a prototype. To show the operation of a prototype, we need to show how content can be personalised. To show content personalisation happening we need some content. Luckily, the OpenLearn project is at hand to provide some Open Educational Resources (wikipedia) that we may be able to use.

The OpenLearn project provides learning materials in a number of formats, ranging from native OU XML files, raw HTML files and IMS content packages through to RSS feeds and Moodle backup formats. This post is all about finding the most effective way to transfer OpenLearn content to Moodle (and uncovering the best approach to use for ongoing development work).

Using a Moodle Course Backup

The sample content that I'm going to use is a learning package about the Forth Road Bridge (openlearn). This learning package (for want of a better term) is interesting since it contains a couple of different resources, including a video, a transcript in the form of a PDF file, and some HTML pages.

In terms of loading the package into Moodle, I thought the easiest route would be to import the Moodle backup file. The course backup facility allows Moodle users to make copies of entire courses (and their setup) for safe keeping; if inadvertent changes are made, a user then has the option of restoring (Moodle documentation) a course to their Moodle installation.

Others at the OU have blogged about similar issues, providing a more comprehensive description of how to set up an Eee netbook to allow users to view OpenLearn material whilst on the move. This post takes (more or less) a similar approach, but focuses more on the different OpenLearn file types.

After downloading an OpenLearn Moodle backup course, I logged into Moodle as an administrator then clicked around on the 'course' menu options to see what I could find. It wasn't immediately clear what to do, so I went to the documentation for help. I found quite a few things.

I discovered that I needed to use the course administration block, but this could only be accessed from within a course. It was apparent that before I could import a course, I first needed to create one.

After creating an empty course (using all the default settings), the course administration block duly appeared. From faint memories of playing with this part of Moodle a couple of years ago, I remembered that restore was a two-step process: first you upload the backup file, then you click on a restore link to start the restore process.

After trying to upload my backup package I was presented with a message that read 'a required parameter (id) was missing' ('what on earth does this mean?' I wondered). I then noticed that my OpenLearn zip file (because it contained a video) was bigger than the maximum upload file size that Moodle supports. Obviously I needed to change a setting somewhere.

The first place I looked was the Moodle system configuration file, config.php, but this didn't tell me much. I then delved into the area of my computer that contained the PHP installation and found a file called php.ini.

After a quick search, I discovered two settings which might explain the maximum file size that Moodle had told me about. I subsequently changed the upload_max_filesize directive, setting it to 32MB, restarted my web server and refreshed my browser. As if by magic, the maximum file size that Moodle allows had changed.
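For anyone trying the same thing, the relevant php.ini directives are along these lines (I only changed upload_max_filesize, but post_max_size is the other setting that can get in the way; note too that Moodle's own 'maxbytes' site setting can also cap uploads):

```ini
; php.ini - both directives limit uploads;
; post_max_size needs to be at least as large as upload_max_filesize
upload_max_filesize = 32M
post_max_size = 32M
```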

When I tried the upload again, everything seemed to work okay (but I should say that the error message I was originally presented with does need some attention).

When the upload from the local file store to a Moodle folder was complete, I could see an adjacent 'Restore' button, which I clicked. I was then presented with a question: 'Later in this process you will have a choice of adding this backup to an existing course or creating a completely new course – do you want to continue?' In my situation I initially wanted to do the latter, but was forced to do the former. I clicked yes to continue.

I was then presented with a list of actions that had been carried out (creating temporary structure, deleting old data, and so on), with no button or option to click on afterwards once it appeared that everything had finished. Obviously things were not working as they should, so I carried out a further web search for answers.

I discovered the following from the Moodle backup and restore FAQ: 'Attempting to restore a course to an older version of Moodle than the one the course was backed up on can result in the restore process failing to complete'.

So, what version was I using, and what version was the OpenLearn backup created with? To find the version of your Moodle installation, you go to the site administration menu (when logged in as an administrator) and click on Environment. I soon discovered that I was using version 1.9+. I extracted the contents of the OpenLearn Moodle backup file and, according to the first few lines of an XML file I found inside, it might have been created with version 1.9. It seemed I might be in a spot of trouble.

Getting Moodle Restore working

All was not lost, however. After some random searches I found a forum discussion in which Fred had a suggestion: replace a more recent programming file with an older version (which can be downloaded from the Moodle code repository). I renamed my 'restorelib.php' to 'backup restorelib.php' and downloaded the version he suggested.

After replacing the file and restarting the restore process, magic began to happen and messages were displayed on the screen. I was then presented with a course restore screen, where a drop-down box offered the options: restore to a new course (what I wanted to do initially), restore to an existing course after deleting it first, or restore to an existing course by adding data to it. I chose 'existing course deleting it first', carelessly ignored everything else (which had been automatically ticked), and faithfully clicked on continue. I was then presented with a list of courses to overwrite (this surprised me, since I thought I was automatically going to overwrite the course from which I had clicked the 'restore' option). Ignoring the warning that 'this process can take a long time', I clicked on 'restore this course now!'


It didn't take a long time, and a minute or so later, I could happily browse through (and edit) my newly imported OpenLearn courses. Fred saved the day!

But what of the other OpenLearn file options? I'll steer clear of the 'Unit Content XML', the 'OU XML Package' and IMS Common Cartridge for now and instead focus on some of the others.

IMS Content Package

IMS publishes specifications that aim to make learning technology systems interoperate with one another. One of the specifications they have published is the content package (CP). A CP is essentially a bunch of files contained within a zip file. Inside the zip file there is something called a manifest file, which is, more or less, a table of contents that is read by a VLE/LMS such as Moodle.
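To give a flavour of what this table of contents looks like, here is a heavily trimmed, hypothetical imsmanifest.xml; the identifiers and filenames are invented, and a real OpenLearn manifest contains rather more:

```xml
<manifest identifier="example-manifest"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <organizations default="org1">
    <organization identifier="org1">
      <title>Example unit</title>
      <item identifier="item1" identifierref="res1">
        <title>Introduction</title>   <!-- an entry in the table of contents -->
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- maps the item above onto actual files inside the zip -->
    <resource identifier="res1" type="webcontent" href="section0.html">
      <file href="section0.html"/>
    </resource>
  </resources>
</manifest>
```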

In Moodle, a CP can be added as a resource (interactive components are called activities). I create a new course, set it to use a topic format, and choose to upload my CP into the first topic. When this is done, I browse to the newly added resource and Moodle tells me that it is about to deploy the CP (meaning, uncompress its contents and read the table-of-contents file). When this is complete, I can navigate through the different pages of my material.


One of the differences between this format and the Moodle format is that the content is much more difficult to change. You have to use special tools, such as Reload, to edit the manifest file, and HTML editors (and other similar tools) to change the contents of individual pages. Also, there is no direct way to include VLE-supported interactive tools such as wikis, blogs or online discussion forums in the middle of the material, other than by using the navigation mechanisms that the VLE provides (this will hopefully become a bit clearer later on).

SCORM

SCORM is an industry standard for the sharing of e-learning materials. SCORM makes use of IMS content packaging and defines an interface between the learning material and the VLE that is used to present the material.

This interface allows the VLE to record information such as whether the user has viewed all the pages of a SCORM resource, and allows the material to store interaction state in the VLE (such as answers to formative questions) and to retrieve information from the VLE, such as the name of the current user (allowing partial customisation of the learning experience).
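As a sketch of what this interface amounts to in practice: a SCORM 1.2 package looks for an 'API' object that the hosting VLE exposes, and then makes calls along the following lines (a simplified illustration rather than production code; real packages discover the API object by walking up the window hierarchy):

```typescript
// Minimal sketch of the SCORM 1.2 runtime calls a package makes to its host VLE.
declare const API: {
  LMSInitialize(arg: string): string;
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
};

API.LMSInitialize("");                                    // begin the session
const name = API.LMSGetValue("cmi.core.student_name");    // ask the VLE who the learner is
API.LMSSetValue("cmi.core.lesson_status", "completed");   // report progress back to the VLE
API.LMSCommit("");                                        // ask the VLE to persist the data
API.LMSFinish("");                                        // end the session
```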

IMS content packages created by OpenLearn can also be viewed using the Moodle SCORM player (but I don't know if there are any problems doing this!).


SCORM originated from a US government initiative called Advanced Distributed Learning (wikipedia). As a result, it reflects its training origins. Like IMS CP, it does not directly support the inclusion of interactive activities that are provided by a VLE (other than the activities that are contained within the boundaries of a content package).

In Moodle, there are two ways to present SCORM resources. The first is to add one as an 'activity'. The other is to create a course that uses the SCORM format. Rather than having individual weeks or topics, a single SCORM package occupies centre stage. Around it, it is possible to create Moodle-supported activities, such as forums. Here I have created a Moodle wiki, allowing consumers of the OpenLearn course to share links about bridge engineering (!), for example.


The ways that Moodle presents IMS packages and SCORM objects (or SCOs – sharable content objects) are similar but subtly different, making me wonder about the underlying source code. When I have time I'll explore the code development history to see whether they are related in any way.

Plain Zip

One of the simplest formats that OpenLearn supports is called plain zip.

Unzipping a 'plain zip' file reveals all the resources for a course (images, video and transcripts), along with two types of HTML file: an index file (similar to the Moodle course summary screen presented earlier) and a set of content pages. The content pages have their own navigation links, i.e. page 1 is connected to page 2, and so on. SCORM, on the other hand, provides its own mechanism for navigating between resource pages, generated from the information contained within the manifest file.

Two other things are provided in the plain zip package: a Creative Commons deed (describing licensing terms) and a formatting stylesheet. If you want, you can change the fonts and colours of the content pages by editing the stylesheet. Double-clicking on any of the HTML files within the package displays the material directly in a browser.
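As a trivial illustration, a couple of extra rules appended to the stylesheet are enough to restyle every content page at once (the selector here is deliberately generic; the class names in the real stylesheet will differ):

```css
/* Hypothetical additions to the package stylesheet. */
body {
  font-family: Georgia, serif;   /* change the font across all pages */
  color: #333333;                /* and the text colour */
  background-color: #fffff0;
}
```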

So, how can a plain zip OpenLearn package be used in Moodle? Is it possible?

The answer is that it is possible, and it's quite easy, but the end result is obviously not as 'integrated' as the other approaches. First of all, I create a new course, give it an obvious name, and set it to use the 'topics' format. Then I transfer my OpenLearn zip package to Moodle: I click on the Files menu (from the administration block, whilst logged in as an administrator) and upload the zip file to the course (each course has its own file area). When the file has been uploaded, I unzip it. After pressing the course edit button, I can then add a link.

From the resource menu I click on 'link to a file or website'. Here I select 4ROAD_1_section0.html. This is the first content file in a sequence of four. It is the file that presents the learning objectives to a learner.


I turn editing off to see the effect of what I have done. Clicking on the new link takes you to the first page of the OpenLearn content, which provides further navigation links that allow you to access all the other resources.

One thing that should be noted is that you have not uploaded the resource into a directory on the web server that anyone can access; only people with legitimate access rights can get at these files.

These approaches rely on content being downloaded from the OpenLearn site to Moodle. Are there any other ways to tell your students about the OpenLearn content through Moodle?

RSS

The final way that I will describe uses RSS (wikipedia). RSS is most commonly associated with blog syndication, and can be described as an XML data structure that contains links to interesting material. OpenLearn also provides RSS feeds for individual courses. If you take a copy of an RSS feed link, you can use it within other tools. One of those tools is Moodle.
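Underneath, an RSS feed is just a small XML document along these lines (a hand-written sketch with placeholder URLs; real OpenLearn feeds contain more elements):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example OpenLearn unit</title>
    <link>http://www.example.org/unit</link>   <!-- placeholder URL -->
    <description>Links to the sections of a unit</description>
    <item>
      <title>Section 1: Introduction</title>
      <link>http://www.example.org/unit/section1</link>
    </item>
  </channel>
</rss>
```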

Moodle can make use of activities, resources and blocks. Blocks are pieces of functionality that can surround courses; they can be added, deleted and moved around. One of the blocks that Moodle provides is an RSS block.

Using the course I created earlier, I add a new block, paste in the RSS feed link that I gathered from the OpenLearn course, tick a tickbox and confirm something. As if by magic, my new block is populated with the contents of the OpenLearn course I have just told it about.


Clicking on one of these links takes you directly to the OpenLearn site, where you can access the material directly. The advantage of this approach is that you don't have to do very much, and the material is always up to date.

There is an outstanding question that this section of the post raises: might it be possible to create a Moodle activity (or resource?) called an 'RSS feed' that could be placed within the main body of a course? This way, educators would be able to quickly and efficiently group together different OpenLearn (or other OER) resources. Furthermore, it would make it possible to group different 'blog reading or reviewing' activities together, perhaps culminating in a forum discussion or even an online audio conference at a pre-arranged time. But here I'm starting to digress...

Further information

After having completed (more or less) the first section of this post, I discovered an OpenLearn course entitled Re-using, Remixing and Creating Content, which provides further information about the different file types and how they can be manipulated.

Conclusions

There are a number of different ways to use OpenLearn content in Moodle. Each of them differs in terms of how much work you have to do and how the end result appears. Taking a personal perspective, which might be the best approach to use within my project?

What I want is flexibility: the ability to change a course and add an additional category of resource to the middle of it, should this be required. Since I'm going to be using Moodle as my main research tool, it makes sense to make use of the Moodle course format. I can then make use of the Moodle tools (should this be necessary) and move resources and sections around with relative ease.


