Aegis Project : Open accessibility everywhere

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:20

I recently attended a public dissemination event that was held by the AEGIS project, hosted by the European headquarters of the company that developed the Blackberry, Research in Motion.

The Aegis project has a strapline containing three interesting keywords: groundwork, infrastructure and standards.  When I heard about the project from a colleague, I was keen to learn what lay hidden underneath these words and how they connect to the subject of accessibility.

The Aegis project website states that it 'seeks to determine whether 3rd generation access techniques will provide a more accessible, more exploitable and deeply embeddable approach in mainstream ICT (desktop, rich Internet and mobile applications)' and goes on to say that the project will explore these issues through the development of an Open Accessibility Framework (or OAF).  This framework, it is stated, 'provides embedded and built-in accessibility solutions, as well as toolkits for developers, for "engraving" accessibility in existing and emerging mass-market ICT-based products'.  It goes on to state that the users of assistive technologies will be placed at the centre of the project.

The notion of the 'generations' of access techniques is an interesting concept that immediately jumped out at me when reading this description (i.e. what is the third generation, and what happened to the other two?), but more on this a little later on.

Introductory presentations

The dissemination day began with a couple of contextualising presentations that outlined the importance of accessibility.  A broad outline of the project was given by the project co-ordinator, who emphasised the point that the development of accessibility requires the co-operation of a large number of different stakeholders, ranging from expert users of assistive technology (AT) to tutors and developers.

There was a general call for those who are interested in the project to 'become involved' in some of the activities, particularly with regards to understanding different use cases and requirements.  I'm sure the project co-ordinator will not be offended if I provide a link to the project contacts page.

AT Generations

The next presentation was made by Peter Korn of Sun Microsystems who began by emphasising the point that every hour (or was it every second?) hundreds of new web pages are created (I forget the exact figure he presented, but the number is a big one).  He then went on to outline the three generations of assistive technologies.

The first generation of AT could be represented by the development of equipment such as the Optacon (wikipedia), an abbreviation for Optical to Tactile Converter.  This was the first time I had heard of this device, and it represented the first 'take away' lesson of the day.  The Wikipedia page looks to be a great summary of its development and its history.

One thing the Optacon lacked, however, was an explicit link to a personal computer.  The development of the PC gave rise to a second generation of AT that served a wider group of potential users.  This generation saw the emergence of specialist AT software vendors, such as companies who develop products such as screen readers and screen magnifiers.  Since computer operating systems are continuing to develop and hardware is continually changing (in terms of increases in processing power), this places unique pressures on the users of assistive technology.

For some AT systems to operate successfully, developers have had to apply a number of clever tricks.  Imagine a brand new application package, such as a word processing program, that had been developed for the first generation of PCs.

The developers of such an application would not have been able to write code in a way that allowed elements of the display to be presented to users of assistive technology.  One solution for AT vendors was to rely on tricks such as reading 'video memory' directly, converting the on-screen display into a form that could be presented to users with visual impairments using synthetic speech.

The big problem of this second generation of AT is that when there is a change to the underlying operating system of a computer it is possible that the 'back door' routes that assistive technologies may use to gain access to information may become closed, making AT systems (and the underlying software) rather brittle.  This, of course, leads to a potential increase in development cost and no end of end user frustration.

The second generation of AT is said to have existed between the late 1980s and the early 2000s.  The third generation of AT aims to overcome these challenges as operating systems and other applications began to provide a series of standardised Accessibility Application Programming Interfaces (AAPIs).

This means that different suppliers of assistive technology can write software that uses a consistent interface to find out what information could be presented to an end user.  An assistive technology, such as a screen reader, can ask a word processor (or any other application) questions about what could be presented.  An AAPI could be considered as a way that one system could ask questions about another.
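To make this question-and-answer idea concrete, here is a minimal sketch (my own example, not one from the event) using Java's built-in accessibility API: any Swing component can be asked for its accessible name and role through its AccessibleContext, which is exactly the kind of consistent interface an assistive technology would use.

```java
import javax.accessibility.AccessibleContext;
import javax.swing.JButton;

public class AccessibleQuery {
    public static void main(String[] args) {
        // A Swing component exposes its accessible information through
        // an AccessibleContext -- Java's flavour of an accessibility API.
        JButton save = new JButton("Save");
        AccessibleContext ctx = save.getAccessibleContext();

        // An assistive technology, such as a screen reader, asks the
        // component questions rather than scraping the screen.
        System.out.println("Name: " + ctx.getAccessibleName()); // prints "Name: Save"
        System.out.println("Role: " + ctx.getAccessibleRole());
    }
}
```

A screen reader never needs to know how the button is drawn; it simply asks the component what it is and what it is called.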

Other presentations

Whilst an API can, in some respects, represent one type of standard, there are a whole series of other standards, particularly those from the International Organization for Standardization (ISO) (and other standards bodies), that can provide useful guidance and assistance.  A further presentation outlined the complex connections between standards bodies and underlined the connection to the development of systems and products for people with disabilities.

A number of presentations focussed on technology.  One demonstration used a recent release of the OpenSolaris operating system (which makes use of the GNOME desktop system) to demonstrate how the Orca screen reader can be used in conjunction with application software such as OpenOffice.

With all software systems, there is often a lot of magic happening behind the scenes.  To illustrate some of this magic (like the AAPI being used to answer questions), a GNOME application called Accerciser was used.  This could be viewed as a software developer utility.  It is intended to help developers to 'check if an application is providing correct information to assistive technologies'.

OpenOffice can be extended (as far as I understand) using the Java programming language.  Java can be considered as a whole software development framework and environment.  It is, in essence, a virtual machine (or computer) running on a physical machine (the one that your operating system runs on).

One of the challenges that developers of Java had to face was how to make its user interface components accessible to assistive technology.  This is achieved using something called the Java Access Bridge.  This software component, in essence, 'makes it possible for a Windows based Assistive Technology to get at and interact with the Java Accessibility API'.

On the subject of Java, one technology that I had not heard of before is JavaFX.  I understand this to be a Java based language that has echoes of Adobe Flash and Microsoft Silverlight about it, but I haven't had much time to study it.  The 'take home' message is that rich internet applications (RIA) need to be accessible too, and having a consistent way to interface with them is in keeping with the third generation approach to assistive technologies.

Another presentation made use of a Blackberry to demonstrate real time texting and show the operation of an embedded screen reader.  A point was made that the Blackberry makes extensive use of Java, which was not something that I was aware of.  There was also a comment about the importance of long battery life, an issue that I have touched upon in an earlier post.  I agree, there is nothing worse than having to search for power sockets, especially when you rely on technology.  This is even more important if your technology is an assistive technology.

Towards the fourth generation

Gregg Vanderheiden gave a very interesting talk where he mentioned different strategies that could be applied to make systems accessible, such as making adaptations to an existing interface, providing a parallel interface (for example, you can carry out the same activities using a keyboard or a mouse), or providing an interface that allows people to 'plug in' or make use of their own assistive technology.  One example of this might be to use a software interface through an API, or to use a hardware interface, such as a keyboard, through the use of a standard interface such as USB.

Gregg's talk made me think about an earlier question that I had asked during the day, namely 'what might constitute the fourth generation of assistive technologies?'  In many respects this is an impossible question to answer since we can only identify generations when they have passed.  The present and especially the future will always remain perpetually (and often uncomfortably) fuzzy.

One thought that I had regarding this area firmly connects to the area of information pervasiveness and network ubiquity.  Common household equipment such as central heating systems and washing machines often remains resolutely unfathomable to many of us.  I have heard researchers talking about the notion of 'networked homes', where it is possible to control your heating system (or any other device) through your computer.

I remember hearing a comment from a delegate who attended the Open University ALPE project workshop who said, 'the best assistive technologies are those that benefit everyone, regardless of disability, such as optical character recognition'.  But what of a home of networked household goods which can potentially offer their own set of wireless accessible interfaces?  What benefit can such products provide for users who do not have the immediate need for an accessible interface?

The answer could lie with increasing awareness of the subject of energy consumption and management.  Washing machines, cookers and heating systems all consume energy.  Exposing information about the energy consumption of different products could allow households to manage energy expenditure more effectively.  In my view, the need for 'green' systems and devices may facilitate the development and introduction of products that could potentially contain lightweight device level accessibility APIs.
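To sketch what such a lightweight device-level API might look like, here is a purely hypothetical interface: every name in it is my own invention for illustration and belongs to no real standard or product. The point is that a single interface could serve both a screen reader and a household energy monitor.

```java
// Hypothetical sketch only: these names are invented for illustration
// and are not part of any real standard or product.
interface NetworkedAppliance {
    String accessibleName();    // what a screen reader would announce
    String accessibleStatus();  // current state, in plain language
    double currentPowerWatts(); // exposed for household energy management
}

class WashingMachine implements NetworkedAppliance {
    public String accessibleName() { return "Washing machine"; }
    public String accessibleStatus() { return "Spin cycle, 12 minutes remaining"; }
    public double currentPowerWatts() { return 450.0; }
}

public class HomeEnergyDemo {
    public static void main(String[] args) {
        NetworkedAppliance appliance = new WashingMachine();
        // The same questions serve an accessible interface and an
        // energy management dashboard alike.
        System.out.println(appliance.accessibleName() + ": "
                + appliance.accessibleStatus()
                + " (" + appliance.currentPowerWatts() + " W)");
    }
}
```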

Further development directions

One of the most interesting themes of the day was the idea of the accessibility API that has made the third generation of assistive technologies what they are today.  A minor comment that featured during the day was the question of whether we might be able to make our software development tools and environments accessible.  Since accessibility and usability are intrinsically connected, this raises the question: 'are the current generation of accessibility APIs as usable as they could be?'

Put another way, if the accessibility APIs themselves are not as usable as they could be, this might reduce the number of software developers who may make use of them, potentially reducing the accessibility of end user applications (and frustrating the users who wish to make use of assistive technologies).

Taking this point, we might ask, 'how could we test (or study) the accessibility of an API?'  Thankfully, some work has already been carried out in this area and it seems to be a field of research that is becoming increasingly active.  A quick search yields a blog post which contains a whole host of useful resources (I recommend the Google TechTalk that is mentioned).  There is, of course, a presentation on this subject that I gave at an Open University conference about two years ago, entitled Connecting Accessibility APIs.

It strikes me that a useful piece of research would be to explore how to conduct studies that evaluate the usability of the various accessibility APIs and whether they might be improved in some way.  We should do whatever we can to try to smooth the development path for developers.  Useful tools, in the form of APIs, have the potential to facilitate the development of useful and accessible products.

And finally...

Towards the end of the day delegates were told about a site called RaisingTheFloor.net (RTF).  RTF is described as a consortium of organizations, projects and individuals from around the world 'that is focused on ensuring that people experiencing disabilities, literacy problems, or the effects of aging are able to access and use all of the information, resources, services and communities available on or through the Web'.  The RTF site provides a wealth of resources relating to different types of assistive technologies, projects and stakeholders.

We were also told about an initiative that is a part of Aegis, called the Open Accessibility Everywhere Group (OAEG).  I anticipate that more information about OAEG will be available in due course.

I also heard about the BBC MyWebMyWay site.  One of the challenges for all computer users is learning and knowing about the range of different ways in which your system can be configured and used.  Sites like this are always a pleasure to discover.

Summary

It's great to go to project dissemination events.  Not only do you learn about what a project aims to achieve, but the presentations can often inspire new thoughts and point you toward new (and interesting) directions.  As well as learning about the Optacon (which I had never heard of before), I also enjoyed the description of the different generations of assistive technologies.  It was also interesting to witness the various demonstrations and be presented with a teasing display of the complexities that very often lie hidden within the operating system of your computer.

The presentations helped me to connect the notions of the accessibility API and pervasive computing.  It also reminded me of some research themes that I still consider to be important, namely, the usability of accessibility APIs.  In my opinion, all these themes represent interesting research directions which have the fundamental potential of enhancing the accessibility and usability of different types of technologies.

I wish the AEGIS project the best of luck and look forward to learning more about their research findings.

Acknowledgements

Thanks are extended to Wendy Porch who took the time to review an earlier draft of this post.


Formative e-assessment dissemination day

Edited by Christopher Douce, Monday, 19 Nov 2018, 10:40

A couple of weeks ago I was lucky enough to be able to attend a 'formative e-assessment' event that was hosted by the London Knowledge Lab.  The purpose of the event was to disseminate the results of a JISC project that had the same title.

If you're interested, the final project report, Scoping a Vision for Formative e-Assessment is available for download.  The slides for this event are also available, where you can also find Elluminate recordings of the presentations.

This blog post is a collection of randomly assorted comments and reflections based upon the presentations that were made throughout the day.  They are scattered in no particular order.  I offer them with the hope that they might be useful to someone!

Themes

The keynote presentation had the subtitle, 'case stories, design patterns and future scenarios'.  These words resonated strongly with me.  Being a software developer, the notion of a design pattern (wikipedia) is one that was immediately familiar.  When you open the Gang of Four text book (wikipedia) (the book that defines them), you are immediately introduced to the 'architectural roots' of the idea, which were clearly echoed in the first presentation.

The idea of a pattern, especially within software engineering, is one that is powerful since it provides software developers with an immediate vocabulary that allows effective sharing of complex ideas using seemingly simple sounding abstractions.  Since assessment is something that can be described (in some sense) as a process, it was easy to understand the objective of the project and see how the principle of a pattern could be used to share ideas and facilitate communication about practice.

Other terms that jumped out at me were 'case stories' and 'scenarios'.  Without too much imagination it is possible to link these words to the world of human-computer interaction.  In HCI, the path through systems can be described in terms of use cases, and the settings in which products or systems could be used can be explored through the deployment of descriptive scenarios and the sketching of storyboards.

Conversational framework

A highlight of the day, for me, was a description of Laurillard's conversational framework.  I have heard about it before but have, so far, not had much of an opportunity to study it in great detail.  Attending a presentation about it and learning about how it can be applied makes a conceptual framework become alive.  If you have the time, I encourage you to view the presentation that accompanies this event.

I'm not yet familiar enough with the model to summarise it eloquently, but I should state that it allows you to consider the role of teachers and learners, the environment in which the teacher carries out the teaching, and the space where a learner can carry out their own work.  The model also takes account of the conversations (and learning) that can occur between peers.

During the presentation, I noted (or paraphrased) the words: 'the more iterations through the conversational model you do, the higher the quality of the learning you will obtain'.  Expanding this slightly, you could perhaps restate this by saying, 'the more opportunities to acquire new ideas, reflect on actions and receive feedback, the more familiar a learner will become with the subject that is the focus of study'.

In some respects, I consider the conversational framework to be a 'meta model' in the sense that it can (from my understanding) take account of different pedagogical approaches, as well as different technologies.

Another 'take away' note that I made whilst listening to the presentation was, 'learning theories are not going to change, but how these are used (and applied) will change, particularly with regards to technology'.

It was at this point when I began to consider my own areas of research.  I immediately began to wonder, 'how might this model be used to improve, enhance or understand the provision of accessibility?'  One way to do this is to consider each of the boxes and arrows that are used to graphically describe the framework.  Many of the arrows (those that are not labelled as reflections) may correspond to communications (or conversations) with or between actors.  These could be viewed as important junctures where the accessibility of the learning tools or environments being applied needs to be considered.

Returning to the issue of technology, peers, for instance, may share ideas by posting comments to discussion forums.  These comments could then be consumed by other learners (through peer learning) and potentially permit a reformulation or strengthening of understandings.

Whilst learning technologies can permit the creation of digital learning spaces, such as those available through the application of virtual learning environments, designers of educational technologies need to take account of the accessibility of such systems to ensure that they are usable for all learners.

One of my colleagues is one step ahead of me.  Cooper writes, in a recent blog post, 'Laurillard uses [her framework] to analyse the use of media in learning. However this can be further extended to analyse the accessibility of all the media used to support these different conversations.'  The model, in essence, can be used to understand not only whether a particular part of a course is accessible (the term 'course' is used loosely here), but also to highlight whether there are some aspects of a course that may need further consideration to ensure that it is as fully inclusive as it could be.

Returning to the theme of 'scenario', one idea might be to use a series of case studies to further consider how the framework might be used to reason about the accessibility status of a course.

Connections

There may be quite a few more connections lurking underneath the terms that were presented to the audience.  One question that I asked myself was, 'how do these formative assessment patterns relate to the idea of learning designs?' (a subject that is the focus of a number of projects, including Cloudworks, enhancements to the Compendium authoring tool, the LAMS learning activity management system and the IMS learning design specification).

A pattern could be considered as something that could be used within a part of a larger learning design.  Another thought is that perhaps individual learning designs could be mapped onto specific elements of the conversational model.  Talking in computing terms, it could represent a specific instantiation (or instance).  Looking at it from another perspective, there is also the possibility that pedagogical patterns (whether e-assessment or otherwise) may provide inspiration to those who are charged with either constructing new or using existing learning designs.

Summary

During the course of the day, the audience were directed, on a number of occasions, to the project Wiki.  One of the outcomes of the project was a literature review, which can be viewed on-line.

I recall quite a bit of debate surrounding the differences between guidelines, rules and patterns.  I also see links to the notion of learning designs too.  My understanding is that, depending on what you are referring to and your personal perspective, it may be difficult to draw clear distinctions between each of these ideas.

Returning to the issue of the conversational model being useful to expose accessibility issues, I'm glad that others before me have seen the same potential connection and I am now wondering whether there are other researchers who may have gone even further in considering the ways that the framework might be applied.

In my eyes, the idea of e-assessment patterns and the notion of learning designs are concepts that can be used to communicate and share distilled best practice.  It will be interesting to continue to observe the debates surrounding these terms to see whether a common vocabulary of useful abstractions will eventually emerge.  If they already exist, please feel free to initiate a conversation.  I'm always happy to learn.

Acknowledgements

Thanks are extended to Diana Laurillard who gave permission to share the presentation slide featured in this post.


Green Code

Edited by Christopher Douce, Friday, 3 Jan 2020, 18:34

It takes far too long for my desktop PC to finish booting up every morning.  From the moment I throw the power switch of my aging XP machine to the on position and click on my user name, I have enough time to walk to the kitchen, brew a cup of tea, do some washing and tidying up and drink half my cup of tea (or coffee), before I can begin to load all the other applications that I need to open before settling down to do some work.

I would say it takes over fifteen minutes from the point of power up to being able to do some 'real stuff'.  All this hanging around inevitably sucks up quite a bit of needless energy.  Even though I do have some additional software services installed, such as a database and a peer-to-peer TV application, I don't think my PC is too underpowered (it's a single core running just over a gigahertz with half a gig of memory).

Being of a particular age, I have fond memories of the time when you turned on a computer, the operating system (albeit a much simpler one) was almost instantly available. Ignoring the need to load software from cassettes or big floppy disks, you could start to issue commands and do useful stuff within seconds of powering up.

This is one of the reasons why I like my EEE netbook (wikipedia): if I have an idea for something to write or want to talk to someone or find something out, then I can turn it on and within a minute or two it is ready for use. (As an aside, I remember reading in Insanely Great by Steven Levy (Amazon) the issue of boot up time was an important consideration when designing the original Macintosh).

Green Code

These musings make me wonder about the notion of 'green code': computer software that is designed in such a way that it supports necessary functionality by demanding a minimal amount of processor or memory resources. Needless to say, this is by no means an original idea. It seems that other people are thinking along similar lines.

In a post entitled, Your bad code is killing my planet, Alistair Croll writes, 'Once upon a time, lousy coding didn't matter. Coder Joel and I could write the same app, and while mine might have consumed 50 percent of the machine's CPU whereas his could have consumed a mere 10 percent, this wasn't a big deal. We both paid for our computer, rackspace, bandwidth, and power.'

Croll mentions that software is often designed in terms of multiple levels of abstraction. He states that there can be a lot of 'distance and computing overhead between my code and the electricity of each processor cycle'. He goes on to write, 'Architecture choices, and even programming language, matter'. Software architecture choices do matter and abstractions are important.

Green Maintenance

Making code that is efficient is only part of the story. Abstractions allow us to hide complexity. They help developers to compartmentalise and manage the 'raw thought stuff' which is computer code. Well designed abstractions can give software developers who are charged with working on and maintaining existing systems a real productivity boost.

Code that is easier to read and work with is likely to be easier to maintain. Maintenance is important since some researchers report that maintenance accounts for up to 70% of the costs of a software project.

In my opinion, clean code equals green code. Green code is code that should be easy to understand, maintain and adapt.

Green Challenges

Croll, however, does have a point. Software engineers need to be aware of the effect that certain architectural choices may have on final system performance.

In times when IT budgets may begin to be challenged (even though IT may be perceived as technology that can help to create business and information process efficiencies), the request for an ever more powerful server may be frowned upon by those who hold the budgetary purse strings. You may be asked to do more with less.

This challenge exposes a fundamental computing dilemma: code that is as efficient as it could be may be difficult to understand and work with. Developers have to consider such challenges carefully and walk a careful path of compromise. Just as there is an eternal trade-off between the speed of a system and how much power is consumed, there are also difficult trade-offs to consider in terms of efficiency and clarity, along with the dimensions of system flexibility and functionality.
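A tiny illustration of this trade-off (my own example, not one from the articles discussed): two ways of computing the average of two integers. The first makes its intent obvious; the second is the classic bit-twiddling version that avoids intermediate overflow for non-negative inputs, at a real cost in readability.

```java
public class TradeOff {
    // Clear version: the intent is obvious at a glance, but the
    // intermediate sum (a + b) can overflow for very large values.
    static int averageClear(int a, int b) {
        return (a + b) / 2;
    }

    // "Efficient" version: shared bits plus half the differing bits.
    // No intermediate overflow for non-negative inputs, but the
    // reader has to stop and think about why it works.
    static int averageTricky(int a, int b) {
        return (a & b) + ((a ^ b) >> 1);
    }

    public static void main(String[] args) {
        System.out.println(averageClear(6, 8));  // 7
        System.out.println(averageTricky(6, 8)); // 7
    }
}
```

Both are 'correct' in everyday use; the question is which version the maintainer who inherits the code in five years' time would rather find.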

One of the reasons why Microsoft Vista is not considered to be popular is the issue of how resource hungry it is in terms of memory, processor speed and disk drive space. Microsoft, it seems, is certainly aware of this issue (InfoWorld).

Turning off some of the needless eye candy, such as neatly shaded three dimensional buttons, can help you to get more life out of your PC. This is something that Ted Samson mentions, before edging towards discussing the related point of power management.

Ted also mentions one of those well known laws of computing. He writes, 'just because there are machines out there that can support enormous system requirements doesn't mean you have to make your software swell to that footprint'. In other words, 'your processor and disk space needs expand to the size of your machine' (another way of saying 'your project expands to the amount of time you have available'!)

Power Budgets

Whilst I appreciate my EEE PC in terms of its quick boot up time, it does have an uncomfortable side effect: it also acts as a very effective lap warmer. Even more surprisingly, its batteries are entirely depleted within slightly over two hours of usage. This is terrible! A mobile device should not be tethered to a mains power supply. It also makes me wonder about whether its incessant demand for power is going to cut short the life of its batteries (which represent their own environmental challenge).

When working alongside electrical engineers, I would occasionally overhear them discussing power budgets, i.e. how much power would be consumed by the components of a larger electrical system. In terms of software, both laptop and desktop PCs offer a range of mysterious software interfaces that provide 'power management' functionality. This is something that I have not substantially explored or studied. For me, this is an area of modern PCs that remains a perpetual mystery. It is certainly something that I need to do something about.

Sometimes, the collaboration between software developers and hardware engineers can yield astonishing results. I again point towards the One Laptop per Child project. I remember reading some on-line discussions that described changes that were made to the Linux operating system kernel to make the OLPC device more power efficient. A quick search quickly throws up an Environmental Impact page.

The OLPC device, whether you agree with the objective of the OLPC project or not, has had a significant impact on the design of laptop systems. A second version of the device raises the possibility of netbooks using the energy efficient ARM processor (wikipedia) - the same processor that is used (as far as I understand) in the iPhone and iPod. I, for one, look forward to using a netbook that doesn't unbearably heat up my lap and allows me to do useful work without needlessly wasting time searching for power sockets.

My desktop computer (which was assembled by my own fair hands) produces a side effect that is undeniably useful during the winter months: it perceptibly heats up my room, almost allowing me to dispense with other forms of heating completely (but I must add that a chunky jumper is often necessary). When I told someone else about this phenomenon, I was asked, 'big computer or small room?' The answer was, inevitably, 'small room' (and small computer).

On a related note, I was recently sent a link to a YouTube video entitled Google container data centre tour. It was astonishing (and very interesting!) It was astonishing due to the sheer scale of the installation that was presented, and interesting in terms of the industrial processes and engineering that were described. It reminded me of a news item that was featured in the media earlier this year that related to the carbon cost of carrying out a Google search.

The sad thing about the Google data centre (and, of course, most power plants) is that most of the heat that is generated is wasted. I recently came across this article, entitled Telehouse to heat homes at Docklands. Apparently there are other schemes to use data centres for different kinds of heating.

Before leaving Google alone, you might have heard of a site called Blackle. Blackle takes the Google homepage and inverts it. The argument is that if everyone uses a black search page, large power savings can be made.

Mark Ontkush describes the story of Black Google and others in a very interesting blog post which also mentions other useful ideas, such as the use of Firefox extensions. Cuil is another search engine (pronounced 'cool') that embodies the same idea.

Carbon Cost of Spam

I recently noticed a news item entitled Spam e-mails killing the environment (ITWorld). Despite the headline having a passing resemblance to headlines that you would find on the Daily Mail, I felt the article was worth a look. It references a more sensibly titled report, The carbon footprint of email spam, published by McAfee.

The report is interesting, pointing towards the fact that we may spend a lot of time both reading and processing junk emails that end up in our inbox. The article has an obvious agenda: to sell spam filters. An effective spam filter, it is argued, can reduce the amount of time that email users spend processing spam, thus helping to save the planet (a bit). Spam can fill up email servers, causing network administrators to use bigger disks. To be effective, email servers need to spend time (and energy) filtering through all the messages that are received. I do sense that more research is required.
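Out of curiosity, the shape of such an estimate is easy to sketch. Every figure below is an invented placeholder, not a number from the McAfee report:

```python
# Back-of-the-envelope estimate of the energy cost of reading spam.
# Every number here is a hypothetical placeholder, not a figure from
# the McAfee report.

SECONDS_PER_SPAM = 3          # time to spot and delete one message
SPAM_PER_USER_PER_DAY = 50    # messages that reach the inbox
PC_POWER_WATTS = 100          # desktop PC power draw while reading email

def daily_spam_energy_wh(users):
    """Watt-hours spent per day, across 'users' people, just deleting spam."""
    seconds = users * SPAM_PER_USER_PER_DAY * SECONDS_PER_SPAM
    return PC_POWER_WATTS * seconds / 3600

# With these made-up numbers, a million users burn through
# several megawatt-hours a day just clearing their inboxes.
print(round(daily_spam_energy_wh(1_000_000)), "Wh per day")
```

Even with generous error bars, the arithmetic suggests why the report argues that filtering spam before it reaches the inbox matters.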

Invisible Infrastructures

There is a further connection between the challenge of spam and the invisible infrastructure of the internet. Messages to your PC, laptop or mobile device pass through a range of mysterious switches, routers and servers. At each stage, energy is consumed and paid for through an invisible set of financial transactions.

My own PC, I should add, is not as power friendly as it could be. It contains two hard disk drives: a main drive that contains the operating system, and a secondary drive that contains backup files and also a 'swap' area. The main reason for the second drive is to gain a performance boost.

Lower power PCs

After asking the question, 'how might I create an energy efficient PC?', I discovered an interesting article from Ars Technica entitled It's easy being green. It describes each of the components of a PC and considers how much power each can draw. The final page features a potential PC setup in the form of 'an extreme green box'.

It is, however, possible to go further. The Coding Horror blog presents one approach: use kit that was intended for embedded systems - a domain where power consumption is high on the design agenda. An article, entitled Building Tiny, Ultra Low Power PCs is a fun read.

Both articles are certainly worth a view. One other cost that should be considered, however, is the cost of manufacturing (and also recycling) your existing machine. I don't expect to change my PC until the second service pack for Windows 7 is released. It's going to be warming my room for quite some time, but perhaps the carbon consumption stats relating to PC manufacture and disposal are out there somewhere and may help me to make a decision.

Concluding thoughts

Servers undeniably cost a lot of money not only in terms of their initial purchase price, but also in terms of how much energy they consume over their lifetime.

Efficient software has the potential to reduce server count, allowing more to be achieved with less. Developers should aspire to write code that is as efficient as possible, and take careful account of the underlying software infrastructures (and abstractions) that they use. At the heart of every software development lies a range of challenging compromises. It often takes a combination of experience and insight to figure out the best solution, but it is important to take account of change, since the majority of the lifetime of any software system is likely to be spent in the maintenance phase.

The key to computing energy reduction doesn't only rest with computer scientists, hardware designers and software engineers. There are wider social and organisational issues at play, as Samson hints at in an article entitled No good excuses not to power down PCs. The Open University has a two page OU Green computing guide that makes a number of similar points.

One useful idea is to quantify computer power in terms of megahertz per milliwatt (MPMs) instead of millions of instructions per second (MIPS) - I should add that this isn't my idea and I can't remember where it came from! It might be useful to try to establish a new aspirational computing 'law'. Instead of constantly citing Moore's law, which observes that the number of transistors on a chip doubles roughly every two years, perhaps we need to edge towards proposing a law that demands a reduction in power consumption whilst maintaining 'transactional performance'. In this era of multi-core multi-function processors, this is likely to be a tough call, but I think it's worth a try.
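As a toy illustration of the megahertz per milliwatt idea (the figures are invented for illustration, not real benchmark data):

```python
# Toy comparison of 'megahertz per milliwatt' (MPM) for two hypothetical
# processors. The clock speeds and power draws are invented placeholders.

def mpm(clock_mhz, power_watts):
    """Megahertz delivered per milliwatt consumed."""
    return clock_mhz / (power_watts * 1000)

desktop_cpu = mpm(clock_mhz=3000, power_watts=90)   # a power-hungry desktop part
embedded_cpu = mpm(clock_mhz=400, power_watts=0.5)  # a low-power embedded part

# By this (admittedly crude) metric, the embedded part wins comfortably,
# even though its raw clock speed is far lower.
assert embedded_cpu > desktop_cpu
```

The metric is crude (clock speed is a poor proxy for useful work), but it does capture the aspiration: more computation per unit of energy.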

One other challenge is whether it might be possible to crystallise what is meant by 'green code', and whether we can explore what it means by constructing good or bad examples. The good examples will run on low powered, slower hardware, whereas the bad examples are likely to be sluggish and unresponsive. Polling (constantly checking to see whether something has changed) is obviously bad. Ugly, inelegant, poorly structured and hard to read (and hard to change) code could also be placed in a box named 'bad'.
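The polling point can be made concrete with a small sketch. The first function below burns CPU cycles in a busy loop; the second blocks until it is woken by the scheduler:

```python
# Two ways to wait for a flag to be set by another thread: a busy-polling
# loop that burns CPU (the 'bad' box), and an event wait that lets the
# scheduler idle the thread (the 'good' box).
import threading
import time

done = threading.Event()

def wasteful_wait():
    while not done.is_set():   # spins, consuming power, until the flag flips
        pass

def frugal_wait():
    done.wait()                # blocks; the thread consumes (almost) nothing

worker = threading.Thread(target=frugal_wait)
worker.start()
time.sleep(0.1)                # simulate some other work happening
done.set()                     # wake the waiting thread
worker.join()
```

Both functions produce the same observable result; only the energy bill differs, which is precisely why 'green code' is hard to spot from behaviour alone.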

A final challenge lies with whether it might be possible to explore what might be contained within a sub-discipline of 'green computing'. It would be interesting to see where this might take us.

Share post

Retro learning technology

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:15

I was recently told about a conference called Game Based Learning.  Although I wasn't able to attend in person (and besides, my main research interests are perhaps somewhat tangential to the topic), the subject remains of perpetual interest.   One of my colleagues, Liam Green-Hughes, who was lucky enough to be there a couple of weeks ago has written a comprehensive blog post describing some of the themes and presentations.

The appearance of this conference made me begin to reflect on my own experiences of what 'game based learning' means to me.  It soon struck me that much of my experiences are from an earlier generation of learning technologies (and I'm not just talking about books!)  This post represents a personal history (perhaps a 'retro' history) of using a number of different 'learning' games and devices.  I hope, for some, it evokes some fun memories!

Early mobile learning

My first exposure to mobile learning (or learning games) was through a device called Spelling B made by Texas Instruments (now more famous for their DSP processors).

The device presented a truly multi-modal experience.  Not only did it contain an alphabetic keyboard, it was also adorned with bright colours and came complete with a picture book.  Quite a proportion of the picture book was initially unfathomable since you had to translate the pictures into correctly spelt words.

For quite a long time after using the device, I continued to spell 'colour' incorrectly (or correctly, depending upon your point of view), and believed that all people with peaked hats (who were not witches) were Pilgrims (the historical importance of which was totally lost on a naive British child).

If you spelt some words correctly you got a chirpy bleeping (or buzzing) tune.  If you spelt something incorrectly, you were presented with a darker rasping noise.

After around two months of playing, it was game over.  I was able to spell, by rote, the one hundred and twenty seven words (or close to it), none of which probably had more than eight characters.

One Christmas I was lucky enough to graduate to the more elegantly designed and ultimately more mind blowing Speak and Spell (wikipedia).  It was astonishing to learn that something with batteries could speak to you!  Complete with integral handle, rugged membrane board (which you could spill your orange squash onto), an audio connection to prevent parents from going potty (like mine did), and a battery charge length that would put any modern laptop to shame.  In my view, the Speak and Spell is a design triumph and you might be able to see some similarities with the OLPC XO computer (if you squint hard enough).

To this day, I remember (without looking at simulations) its rhythmic incantations of 'that is correct! And now spell...' which punctuated my own personal successes and failures.

Learning technology envy

This era of retro learning technology didn't end with the Speak and Spell.  One Christmas, Santa gave a group of kids down my street a mobile device called the Little Professor, also from Texas Instruments.  Pre-dating the Nintendo DS by decades, this little hand-held beauty presented a true advance in learning technology.  When you got a calculation right, the little professor's moustache started to jump around in animated delight.  It also incidentally had one of those new-fangled Liquid Crystal Display screens rather than the battery hungry LED readouts (but this was inconsequential in comparison to the hilarious moustache).

Learning technology envy was a real phenomenon.  I remember that a Texas Instruments Speak and Maths device was undoubtedly more exciting (and desirable to play with) than a lowly Little Professor.  Game based learning was a reality, and parent pester power was definitely at work.  For those kids who had parents who were well off, the zenith of learning technology envy (when I was growing up) manifested itself in the form of the programmable and mysterious Big Trak.

Big Trak inspired a huge amount of wonderment, especially for those of us who had never been near a logo turtle.  The marketing was good, as the advertisements of the time (video) testify.  It presented kids with the opportunity to consider the challenge of creating stored programs.

Learning with the Atari

A number of my peers were Atari 2600 gamers.  As well as being enthralled at the prospect of blasting aliens, driving racing cars and battling with dragons represented by pixels the size of small coins, I gradually became aware of a range of educational games that some retailers were starting to sell.

A number of educational Sesame Street game cartridges were created, presumably in close collaboration with Atari.  I personally found them rather tedious and somewhat unexciting, but an interesting innovation was the presence of a specially designed 'Kids controller'.  Each game was provided with a colourful overlay which only presented the buttons that should be used.  (As was the case with some Atari games, the box art and instruction leaflets could arguably be more exciting than the game itself.)

I have no real idea whether any substantial evaluations were carried out to assess either the user experience of these products, or whether they helped to develop motor control.

Behold!  The personal computer...

My first memory of an educational game that was presented through a personal computer was a business game that ran on the BBC Model B.  The scenario was that you were the owner of a candy floss store (an obvious attraction for kids) and you had to buy raw materials (sugar) and hope that the weather was good.  If it was good, you made money.  If it was bad, you didn't.  I must add I used this game when Thatcherism was at its peak.  This incidental memory connects with wider issues relating to the link between game deployment and wider educational policy, but I digress...

Whilst using the 'candy floss game' I don't have any recollection of having to spend extra money on petrol for the generator or having to pay the council rent (or contend with issues of price rises every year), but I'm sure there was a cost category called 'overheads' that featured somewhere.  I'm also pretty sure you could set your own prices for your products and be introduced to the notion of 'breaking even'.

The BBC Model B figured again in my later education when I discovered a business studies lab that was packed with the things, and an occasional Domesday machine, running on an exotically modified BBC Master 128.  The Domesday project pointed firmly towards the future.  I remember a lunch hour passing in a flash whilst I attempted to navigate through an animated three dimensional exhibition that represented a portal to a range of different encyclopaedic themes.

During business studies classes we didn't use the Beebs (as they were colloquially known) for candy floss stall simulations, but instead we used a program called Edword (educational word processor), a spreadsheet and a simple database program.  When we were not using these applications, we were using the luxury of a disk drive to play Elite (wikipedia).  A magical galaxy drawn in glorious 3D wireframe taught us about commodity trading and law enforcement.

Sir Clive

For a while the Sinclair Spectrum (a firm favourite amongst my peers) was sold with a bonus pack of cassettes.  One memorable title was an application called Make-A-Chip which allowed you to draw sets of logic gate circuits.  You couldn't do much more than create a binary adder, but messing around with it for a couple of hours really helped when it came to understanding these operators in real programming languages.
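The sort of circuit that Make-A-Chip let you build translates directly into code. Here is a half adder and a full adder expressed with the same gates (XOR, AND and OR):

```python
# A binary adder built from the gates Make-A-Chip offered: XOR produces
# the sum bit, AND the carry. Chaining two half adders (plus an OR for
# the carries) gives a full adder - roughly the limit of what the
# program let you draw.

def half_adder(a, b):
    """Add two bits; return (sum, carry)."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry; return (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 1 + 1 with a carry in of 1: sum bit 1, carry bit 1 (binary 11, i.e. 3)
assert full_adder(1, 1, 1) == (1, 1)
```

Chaining full adders bit by bit gives a ripple-carry adder, which is essentially what the Spectrum's screen-sized circuit diagrams amounted to.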

I also have recollections of using a simulation program that allowed you to become a fox or a rabbit (or something else) and forage for food.  As the seasons (and years) changed, the availability of food fluctuated due to 'natural' changes in population.  I never did get the hang of that game: it was a combination of being too slow and not having sufficiently engaging feedback for it to be attractive.

Assessing their impact

If I was asked whether I learnt anything by using the beeping and speaking mobile devices that were mentioned earlier, I couldn't give you an honest answer.  I suspect I probably did, and the biggest challenge that researchers face isn't necessarily designing a new technology (which is a challenge that many people grossly underestimate), but understanding the effect that the introduction of a particular technology has, and ultimately, whether it facilitates learning.

There is, of course, also a social side to playing learning games (and I write this without any sense of authority!).  I remember my peers showing me how to use a Little Professor, and looking at an on-screen 'candy floss sales' graph and feeling thoroughly puzzled at what was being presented to me.  A crowd of us used to play that game.  Some players, I seem to remember, were more aggressive traders than others.

Towards the future

History has yielded an interesting turn of events.  I spent many a happy hour messing around on a BBC Model B (albeit mostly playing Elite) and later marvelled at its ultimate successor, the Acorn Archimedes (some of the time playing Zarch, written by the same author as Elite).

I remember my fascination at discovering it was possible to write programs using the BBC Basic programming language (version 5) without using line numbers.  Acorn Computers eventually folded but left a lasting legacy in the form of ARM (Acorn Risc Machine), a company that sells processor designs which have found their way into a whole range of different devices: phones, MP3 players, PDAs.

I recently heard that the designers of the One Laptop Per Child project were considering using ARM-based processors in the second generation designs.  The reason for this choice is directed by the need to pay careful consideration to power consumption and the fact that the current processor will not be subject to further on-going development by AMD. (One loosely connected cul-de-sac of history lies with the brief emergence of the inspirational and sadly ill fated Apple eMate, which was also ARM-based).

One dimension of learning games that is outside the immediate requirements of developing skills or knowledge in a particular subject, lies with their potential to engender motivation and enthusiasm.

It's surprising how long electricity powered educational video games and devices have been around for.  It's also surprising that it was possible to do so much with so little (in terms of processing power and memory).  Educational gaming, I argue, should not be considered a revolutionary idea.  I personally look forward to learning about what new evolutionary developments are waiting for us around the corner and what they may be able to tell us about how we learn.

Share post

Learning technology and return on investment

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:13

A couple of years ago I attended a conference held by the Association of Learning Technology (ALT).  I remember a riveting keynote that asked the members of the audience to consider not only the design and the structure of new learning technologies, but also the benefits that they can offer institutions, particularly in terms of costs.  I remember being reminded that technology (in terms of educational technology) need not be something that is considered in terms of 'high technology'.  A technology could be a bunch of cards tied to a desk chair!

I came away with a message that it is important to try to assess the effectiveness of systems that we construct, and not to get ahead of ourselves in terms of building systems that might not do the job they set out to do, or in fact may even end up solving the wrong problem.

When attending the learning technologies exhibition a couple of months ago the notions of 'return on investment' and 'efficiency' were never too far away.  E-learning, it is argued, can help companies to develop training capacities quickly and efficiently and allow company employees to prepare for change.

I might have been mistaken but I think I may have heard the occasional presenter mentioning some figures.  The presenters were saying things like, 'if you implement our system in this way, it can help you save a certain number of millions of pounds, dollars or euros per year'.  Such proclamations made me wonder, 'how do you go about measuring the return on investment into e-learning systems?' (or any other learning technology system, for that matter).

I do not profess to be an expert in this issue by any means.  I am aware that many others (who have greater levels of expertise than I do) have both blogged and written about this issue at some length.  I hope that sharing my own meagre notes on the subject might make a small contribution to the debate.

Measuring usefulness

Let's say you build a learning technology or an e-learning tool.  You might create an on-line discussion forum or even a hand held classroom response system.  You might have created it with a particular problem (or pedagogic practice) in mind.  When you have built it, it might be useful to determine whether the learning technology helps learners to learn and acquire new understandings and knowledge.

One way of conducting an evaluation is to ask learners about their experience.  This can allow you to understand how it worked, whether it was liked, what themes were learnt and what elements of a system, product or process might have inspired learners.

Another way to determine whether a technology is effective is to perform a test.  You could take two groups, one that has access to the new technology and one that does not, and see which group performs better.  Of course, such an approach can open up a range of interesting ethical issues that need to be carefully negotiated and conquered.
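As an illustration of the shape such a comparison might take (the test scores below are invented for the purpose, and a real evaluation would need far more participants and a proper significance test):

```python
# A minimal sketch of comparing two groups of learners: one group used
# the new technology, the other did not. The scores are invented for
# illustration only; drawing real conclusions would require a larger
# sample and a statistical significance test.
from statistics import mean, stdev

with_technology = [72, 68, 81, 75, 79, 70]
without_technology = [65, 70, 62, 68, 71, 64]

difference = mean(with_technology) - mean(without_technology)
print(f"mean difference: {difference:.1f} points "
      f"(spread: {stdev(with_technology):.1f} vs {stdev(without_technology):.1f})")
```

Even this toy version hints at the ethical issue mentioned above: the 'without' group is deliberately denied something you suspect may help them.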

Dimensions of measurement

When it comes to large e-learning systems the questions that can uncover learners' experience can relate to a 'low level' understanding of how learning technologies are used and applied.  Attempting to measure the success of a learning technology or e-learning system for a whole department or institution could be described as a 'high level' understanding.  It is this 'high level' of understanding that relates to the theme of how much money a system may help to save (or cost) an organisation.

Bearing in mind that organisations are very unlikely to carry out experiments, how is it possible to determine how much return on investment an e-learning system might give you?  This is a really tough question to ask since it depends totally on the objectives of a system.  The approach taken to measure the return on investment of a training system is likely to be different to one that has been used to instil institutional values or create new ways of working (which may or may not yield employee productivity improvements).

When considering the issue of e-learning systems that aim to train (I'm going to try to steer clear of the debates around what constitutes training and what constitutes education), the questions that you might ask include:

• What are the current skills (or knowledge base) of your personnel?
• What are the costs inherent in developing a training solution?
• What are the costs inherent in rolling this out to those who need access?

Another good question to ask is: what would you have to do to find out the same information if you had not invested in the new technologies?  A related question is: would there be any significant travel costs attached to finding out the information?  And would it be possible to measure the amount of disruption that might take place if you have to ask other people for the information that you require?

These questions relate to actions that can be measured.  If you can put a measure on the costs of finding out key pieces of information before and after the implementation of a system, you might be able to edge towards figuring out the value of the system that you have implemented.  What, for example, is the cost of running the same face to face training course every year as opposed to creating a digital equivalent supported by a forum with an on-line moderator?  You need to factor in issues such as how much time it might take for a learner to take the course.  Simply providing an on-line course is not enough.  Its provision needs to be supported and endorsed by the organisation that has decided to sponsor the development of e-learning.
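A deliberately simplistic sketch of such a comparison, using invented figures, might look like this:

```python
# A simplistic cost comparison between repeating a face-to-face course
# every year and building an on-line equivalent once, then moderating it.
# All figures are invented placeholders; a real ROI model would include
# many more factors (learner time, infrastructure, disruption).

FACE_TO_FACE_PER_YEAR = 20_000    # trainer, venue, travel, per annual run
ONLINE_BUILD_COST = 45_000        # one-off development of the e-learning course
ONLINE_MODERATION_PER_YEAR = 5_000

def cumulative_cost(years, build, per_year):
    """Total spend after a number of years."""
    return build + years * per_year

# Find the year in which the on-line course becomes the cheaper option.
year = 1
while cumulative_cost(year, ONLINE_BUILD_COST, ONLINE_MODERATION_PER_YEAR) >= \
      cumulative_cost(year, 0, FACE_TO_FACE_PER_YEAR):
    year += 1
print(f"on-line course breaks even in year {year}")
```

With these made-up numbers the on-line course only pays for itself after a few years, which is exactly why the supporting and endorsing mentioned above matters: an unused course never breaks even.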

The above group of points represents a rather simplistic view.  The introduction of a learning technology system may also facilitate the development of new opportunities that were perhaps not previously possible.  'Up skilling' (or whatever it is called) in a limited amount of time may enable employees to respond to a business opportunity that could not have been exploited without the application of e-learning.

Other themes

Learning technologies are not only about the transmission of information (and knowledge) between a training department and their employees.  They can also have a role to play in facilitating the development of a common culture and strengthening bonds between work groups.

Instances of success (or failure) can be shared between fellow employees. Details of new initiatives or projects may be disseminated through a learning system.  The contents of the learning system, as a result, may gradually change as a result of such discussions.

The wider cultural perspectives that surround the application of learning technologies, in my humble opinion, are a lot harder to quantify.  It's hard to put a value on the use of a technology to share information (and learning experiences) with your co-workers.

Related resources

A quick search takes me to the wikipedia definition of ROI and I'm immediately overwhelmed by detail that leaves my head spinning.

Further probing reveals a blog entitled ROI and Metrics in eLearning by Tony Karrer who kindly provides a wealth of links (some of which were updated in April 2008).  I have also uncovered a report entitled What Return on Investment does e-Learning Provide? (dated July 2005) (pdf) prepared by SkillSoft found on a site called e-Learning Centre.

Summary

The issue of e-learning and learning technology return on investment appears, at a cursory glance, to be one that can be quite difficult to understand thoroughly.  Attaching numbers to the costs and benefits of any learning technology is difficult to do well or precisely.  This can be partly attributed to the nature of software: often, so much of the costs (whether in terms of administration or maintenance) or benefits (the ability to find things out quicker and collaborate with new people) can be hidden amongst detail that needs to be made clearly explicit to be successfully understood.

When it comes to designing, building and deploying learning technology systems, the idea of return on investment is undoubtedly a useful concept, but those investing in systems should consider issues beyond the immediately discoverable costs and benefits since there are likely to be others lurking in the background just waiting to be discovered.

Acknowledgements: Image licensed under creative commons, liberated via Flickr from Mr Squee.  Piano busker in front of building site represents two types of investments: a long term investment (as represented by the hidden building site), and a short term investment (using the decrepit piano to busk).  The unheard music represents hidden complexities.

Share post

Inclusive Digital Economy Network event

Visible to anyone in the world
Edited by Christopher Douce, Monday, 9 Mar 2009, 09:29

I recently attended an event that was hosted by the Inclusive Digital Economy Network.  The network, led by the University of Dundee, comprises a variety of groups who wish to collectively ensure that people are able to take advantage of digital technologies.

The event was led by Prof Alan Newell from Dundee.  Alan gracefully introduced a number of keynote speakers: the vice-chancellor of City University, the dean of Arts and Social Sciences, and representatives from the government and the funding body, the EPSRC.

Drama

One really interesting part of the day was the use of 'theatre' to clearly illustrate the difficulties that some people can have when using information technology.  I had heard about the use of drama when I had spoken to people from Dundee before, but this was the first time I was able to witness it.  In fact, I soon found out that I was going to witness a film premiere!

After the final credits had appeared, I was surprised to discover that two of the actors who played central roles in the film were in the audience.  The film was not the end of the 'theatre' event, it was the beginning.  The actors carried out an improvisation (using questions from the audience) that was based upon the roles we had been introduced to through the film.

The notion of drama and computing initially seemed to me to be a challenging combination, but any scepticism that I had very quickly dissipated once the connections between the two areas became plainly apparent.  Drama and theatre rely on characters.  Computer systems and technologies are ultimately used by people.  The frustrations that people encounter when they are using computer systems manifest themselves in personal (and collective) dramas that might be as small as uttering the occasional expletive when your machine doesn't do what it is supposed to do, to calling up a call centre to harass an equally confused call centre operative.

The lessons of the 'computing' or 'user' theatre were clear to see: the users should be placed centre stage when we think about the design of information systems.  They may understand things in ways that designers of systems may not have imagined.  A design metaphor that might make complete sense to an architect may seem completely nonsensical to an end user who has a totally different outlook and background.  Interaction design techniques, such as creating end user personas, are powerful tools that can expose differences and help to create more usable systems.

Debates

I remember a couple of important (and interesting) themes from the day.  One theme (that was apparent to me) was the occasional debate about the necessity to ensure that users are involved with the design of systems from the outset to ensure that any resulting products and systems are inclusive (user led design).  This connected to a call to 'keep the geeks from designing things'.  In my view, users must be involved with the creation of interactive systems, but the 'geeks' must be included too.  The reason for this is that the geeks may imagine functionality that the users might not be aware exists.  This argument underlines the interdisciplinary nature of interaction design (wikipedia).

Much of the focus of the day was about how technology can support elderly people; how to create technologies and pedagogies that can promote digital inclusion.  Towards the end of the day there was a panel discussion from representatives from Help the Aged, a UK government organisation called the Technology Strategy Board, the BBC, OFCOM and the University of York.

Another theme that I remember relates to the cost of both computing and assistive technologies.  There was some discussion about the possibility of integrating internet access within set top boxes (and a couple of comments relating to the Digital Britain report that was recently published by the UK government).  There was also some discussion about the importance of universal design (wikipedia) and tensions with personalised design (which connects to some of the themes underpinning the EU4ALL project).

Another recollection from the event was that some presenters stated that although there is much excellent work happening within the academic community (and within other organisations) some of the lessons learnt from research are often not taken forward into policy or practice.  This said, it may be necessary to take the recommendations from a number of different research projects to obtain a rich and complete understanding of a field before fully understanding how policy might be positively influenced.  The challenge is not only combining and understanding the results from different projects, but communicating the results.

Summary

Projects such as the Inclusive Digital Economy Network, from my outsider's perspective, attempt to bridge the gaps between different stakeholders and facilitate a free exchange of ideas and experiences that may point towards areas of investigation that can allow us to learn more about how digital technologies can make a difference to us all.

Acknowledgements: many thanks are extended to the organisers of the event – an interesting day!

Share post

Source code accessibility through audio streams

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:12

One of my colleagues volunteers for the Open University audio recording project.  The audio recording project takes course material produced by course teams and makes audio (spoken) equivalents for people with visual impairments.  Another project that is currently underway is the digital audio project which aims to potentially take advantage of advances in technology, mobile devices and international standards.

Some weeks ago, my colleague tweeted along the lines of 'it must be difficult for people with visual disabilities to learn how computer programs are written and structured' (I am heavily paraphrasing, of course!)  As soon as I read this tweet I began to think about two questions.  The first question was: how do I go about learning how a fragment of source code works? and secondly, what might be the best way to convert a function or a 'slice' of programming code into an audio representation that helps people to understand what it does and how it is structured?

Learning from source code

How do I learn how a fragment of source code works?  More often than not I view code through an integrated development environment, using it to navigate through the function (or functions) that I have to learn.  If I am faced with some code that is really puzzling I might reach for some search tools to uncover the connections between different parts of the system.

If the part of the code that I am looking at is quite small and extremely puzzling, I might go as far as grabbing a pen and paper and beginning to sketch out some notes, taking down some of the expressions that appear to be troubling and maybe splitting these apart into their constituent components.  I might even try to run the various code fragments by hand.  If I get really confused I might use the 'immediate' window of my development environment to ask my computer to give me some hints about the code I am currently examining.

When trying to understand some new source code my general approach is to try to have a 'conversation' with it, asking it questions and looking at it from a number of different perspectives.  In the psychology of programming literature some researchers have written about developers using 'top down' and 'bottom up' strategies.  On one hand you might begin with a high-level hypothesis about what something does; on the other, individual sections of code might help you to build up the 'bigger picture' or the intentions behind a software system.

In essence, I think understanding software is a really hard task.  It is harder and more challenging than many people seem to imagine.  Not only do you have to understand the language that is used to describe a world, but you also have to understand the language of the world that is described.  The world of the machine and the world of the problem are intrinsically and intimately connected through what can sometimes seem an abstract collection of words and symbols.  Your task, as a developer, is to make sense of two hidden worlds.

I digress slightly... If learning about computer programming code is a hard task, then it is likely to be harder still for people with visual impairments.  I cannot imagine how difficult it must be to be presented with a small computer program or a function that has been read out to you.  Much of the 'secondary notation', such as tabbing and white space, can be easily lost if there are no mechanisms to enable it to be presented through another modality.  There is also the danger that your working memory may become quickly overwhelmed with names of identifiers and unfamiliar sounding functions.

Assistive technology for everyone

The tasks of learning the fundamentals of programming and learning about a program are different, yet related.  I have heard it said that people with disabilities are given real help if technologies are created that are useful for a wide audience.  A great example of this is optical character recognition.  Whilst OCR technology can save a great deal of typing, it has also created tools that enable people with low vision to scan and read their post.

Bearing the notion of 'a widely applicable technology' in mind, could it be possible to create a system that creates an interactive audio description that could potentially help with the teaching of some of the concepts of computer programming for all learners?

Whenever I read code I immediately begin to translate the code into my own 'internal' notation (using different types of memory, both internal and external - such as scraps of paper!) to iteratively internalise and make sense of what I am being presented with.  Perhaps equivalents of programming code could be created in a form that could be navigated.  Code is not something that you read in a linear fashion - code is something you work with.

If an interesting and useful (and interactive) audio equivalent of programming code could be created, then these alternative forms might be useful to all students, not only to learners who necessarily require auditory equivalents.

Development directions

There are a number of ideas that could help us to create what might amount to 'interactive audio descriptions of programming code'.  The first is plan or schema theory (wikipedia) – the notion that your understanding of something is drawn from previous experience.  Some theorists from the psychology of programming have extended and drawn upon these ideas, positing that certain key lines of code, known as beacons, anchor comprehension.

Another is Green's Cognitive Dimensions framework (wikipedia).  Another area to consider looking at is the interesting sub-field of Computer Science Education research.  There must be other tools, frameworks and ideas that can be drawn upon.

Have you got a sec?

Another approach that I sometimes take when trying to understand something is to ask other more experienced people for help.  I might ask the question, 'what does this section represent?' or, 'what does this section do?'  The answers from colleagues can be instrumental in helping me to understand the purpose behind fragments of programming code.

Considering browsing

I can almost imagine an audio code browser that has functionality allowing you to change between different levels of abstraction.  At one level, you may be able to navigate through sets of different functions and hear descriptions of what they are intended to do and what they expect to receive by way of parameters (which could be provided through comments).  On another level there may be summaries of groups of instructions, like loops, with descriptions that might sound like, 'a foreach loop that contains four other statements and a call to two functions'.  Finally, you may be able to tab into a group of statements to learn about what variables are manipulated, and how.
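To make the idea a little more concrete, here is a minimal sketch of how summaries at the 'group of instructions' level might be generated.  It uses Python and its ast module rather than anything Moodle- or screen-reader-specific, and the phrasing and the describe_function name are my own invention:

```python
import ast

def describe_function(source: str) -> list[str]:
    """Produce spoken-style summaries of the functions and loops in a code fragment."""
    tree = ast.parse(source)
    summaries = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            params = [a.arg for a in node.args.args]
            summaries.append(
                f"function {node.name} taking {len(params)} parameter(s): "
                + ", ".join(params))
        elif isinstance(node, (ast.For, ast.While)):
            kind = "for" if isinstance(node, ast.For) else "while"
            # count the function calls anywhere inside the loop
            calls = sum(isinstance(n, ast.Call) for n in ast.walk(node))
            summaries.append(
                f"a {kind} loop containing {len(node.body)} statement(s) "
                f"and {calls} call(s)")
    return summaries

sample = """
def total(items):
    result = 0
    for item in items:
        result += price(item)
    return result
"""
for line in describe_function(sample):
    print(line)
```

A real tool would, of course, need to be navigable rather than linear, but even this much suggests that the raw material for such descriptions is cheaply available from a parser.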

Of course this is all very technical stuff, and it could be stuff that has already been explored before.  If you know of similar (or related) work, please feel free to drop me a line!

Acknowledgement: random image of code by elliotcable, licensed under creative commons, discovered using Flickr.


Exploring Moodle forums

Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:08

Following on from the previous post, this post describes my adventures into the Moodle forums source code.

Forums, I understand, can be activities (a Moodle term) that can be presented within individual weeks or topics. I also know that forums can be presented through blocks (which can be presented on the left or right hand side of course areas).

To begin, and remembering the success that I had when trying to understand how blocks work, I start by looking at what the database can tell me and quickly discover quite a substantial number of tables.  These are named: forum (obviously), forum_discussions, forum_posts, forum_queue, forum_ratings (ratings is not something that I have used within the version of Moodle that I am familiar with), forum_read, forum_descriptions, forum_subscriptions and forum_track_prefs.

First steps

Knowing what some of the data tables are called, I put aside my desire to excitedly eyeball source code and sensibly try to find some documentation.

I begin by having a look at the database schema introduction page (Moodledocs), but find nothing that immediately helps.  I then discover an end user doc page that describes the forum module (and the different types of forum that are on offer in Moodle).  I then uncover a whole forum documentation category (Moodledocs) and I'm immediately assaulted by my own lack of understanding of the capabilities system (which I'll hopefully blog about at some point in the future – one page that I'll take note of here is the forum permissions page).

From the forums category page I click on the various 'forum view pages', which hints that there are some strong connections with user settings.

Up to this point, what have I learnt?

I have learnt that Moodle permits only certain users to carry out certain actions to Moodle forums.  I have also learnt that Moodle forums have different types.  These, I am led to believe (according to this documentation page) are: standard, single discussion, each person posts one discussion, and question and answer.  I'm impressed: I wasn't expecting so much functionality!

So, can we discover any parallels with the database structures?

The forum table contains fields named: course, type, name, description, followed by a whole bunch of other fields I don't really understand.  The course field associates a forum with a course (I'm assuming that somewhere in the database there will be some data that connects the forum to a particular part or section of a course), and the type field (which is, interestingly, an enumerated type) can hold data values that roughly represent the forum types that were mentioned earlier.

A brief look at the code

I remember that the documentation that I uncovered told me that forums are implemented as a module. In the 'mod' directory I notice a file called view.php.  Other interesting files are named: post.php, lib.php, search.php and discuss.php.  View.php seems to be one big script with a big case statement in the middle.  Post.php looks similar, but has a beguiling sister called post_form which happens to be a class.  Lib, I discover, is a file of mystery that contains functions and fragments of SQL and HTML.  Half of the search file seems to retrieve input parameters, and discuss is commented as, 'displays a post, and all the posts below it'.

Creating test data

To learn more about the data structures I decide to create some test data by creating a forum and making a couple of posts.  I open up an imaginatively titled course called 'test' and add an equally imaginatively titled forum called 'test forum'.  When creating the forum I'm asked to specify a forum type (the options are: single simple discussion, Q and A forum, standard forum for general use).  I choose the standard forum and choose the default values for aggregate type and time period for blocking.  The aggregate type appears to be related to functionality that allows students to grade or rate posts.

When the forum is live, I then make a forum post to my test forum that has the title 'test post'.

Reviewing the database

The action of creating a new forum appears to have created a record in the forum table which is associated to a particular course, using the course id.  The act of adding a post to the test forum has added data to forum_discussions, where the name field corresponds to the title of my thread: 'test post'.  A link is made with the forum table through a foreign key, and a primary key keeps track of all the discussions held by Moodle.

The forum_posts table also contains data.  This table stores the text that is associated with a particular post.  There is a link to the discussion table through a discussion id number.  Other tables that I looked at included forum_queue (not quite sure what this is all about yet), forum_ratings (which probably stores stuff depending on your forum settings), and forum_read, which simply stores an association between user id, forum id, discussion id and post id.

One interesting thing about forums is that they can have a recursive structure (you can send a reply to a reply to a reply and so on).  To gain more insight into how this works, I send a reply to myself which has the imaginative content, 'this is a test post 2'.

Unexpectedly, no changes are made to the forum_discussions table, but a new entry is added to the forum_posts table.  To indicate hierarchy a 'parent' field is populated (where the parent relates to an earlier entry within the forum_posts table).  I'm assuming that the sequence of posts is represented by the 'created' field which stores a numerical representation of the time.
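The parent-field mechanism can be sketched with a few lines of Python and an in-memory SQLite table.  Note that the columns here only loosely follow Moodle's real forum_posts schema, and thread_lines is a hypothetical helper of my own, not a Moodle function:

```python
import sqlite3

# Simplified model: each reply records its parent post, so the thread
# hierarchy is recursive and lives entirely in forum_posts.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE forum_posts (
    id INTEGER PRIMARY KEY,
    discussion INTEGER,
    parent INTEGER,      -- 0 for the post that opens the discussion
    created INTEGER,     -- timestamp, gives the posting order
    message TEXT)""")
posts = [(1, 1, 0, 1000, "test post"),
         (2, 1, 1, 1100, "this is a test post 2"),
         (3, 1, 2, 1200, "a reply to the reply")]
conn.executemany("INSERT INTO forum_posts VALUES (?,?,?,?,?)", posts)

def thread_lines(parent=0, depth=0):
    """Walk the parent links recursively to render a discussion thread."""
    lines = []
    rows = conn.execute(
        "SELECT id, message FROM forum_posts "
        "WHERE parent = ? ORDER BY created", (parent,)).fetchall()
    for post_id, message in rows:
        lines.append("  " * depth + message)
        lines.extend(thread_lines(post_id, depth + 1))
    return lines

print("\n".join(thread_lines()))
```

This is presumably roughly what discuss.php has to do when it 'displays a post, and all the posts below it'.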

Tracing the execution flow

These experiments have given me three questions to explore:

1. What happens within the world of Moodle code when the user creates a new forum?
2. What happens when a user adds a new discussion to a forum?
3. What happens when a user posts a reply?

Creating a new forum

Creating a new forum means adding an activity.  To learn about what code is called when a forum is added, I click on 'add forum' and capture the URL.  I then give my debugger the same parameters that are called (id, section, sesskey and add) and then begin to step through the course/mod.php script.  The id number seems to relate to the id of the course, and the add parameter seems to specify the type of the activity or resource that is to be added.

I quickly discover a redirect to a script called modedit.php, where the parameters add=forum, type= (empty), course=4, section=1, return=0.  To further understand what is going on, I stop my debugger and start modedit.php with these parameters.

There is a call to the database to check the validity of the course parameter, fetching of a course instance, something about capabilities, and fetching of an object that corresponds to a course section (a call to get_course_section in course/lib code).  Data items are added to a $form variable (which my debugger tells me is a global). There is then the instantiation of a class called mod_forum_mod_form (which is defined within mod/forum/mod_form.php). The definition class within mod_forum_mod_form defines how the forum add or modification form will be set out. There is then a connection between the data held within $form and the form class that stores information about what will be presented to the user.

After the forum editing interface is displayed, the action of clicking the 'save and return to course' button (for example) causes a postback to the same script, modedit.php.  Further probing around reveals a call to forum_add_instance within forum/lib.php (different activities will have different versions of this function) and forum_update_instance.  At the end of the button clicking operation there is then a redirect to a script that shows any changes that have been made.

The code to add a forum to course will be similar (in operation) to the code used to add other activities.  What is interesting is that I have uncovered the classes and script files that relate to the user interface forms that are presented to the user.

Adding a new discussion

A new discussion can be added by clicking on the 'Add a new discussion topic' button once you are within a forum.  The action of clicking on this button is connected to the forum/post.php script.  The main parameter associated with this action is the forum number (forum=7, for example).

It's important to note the use of the class mod_forum_post_form contained within post_form.php, which represents the structure of the form into which the user enters discussion information.

The code checks the forum id and then finds out which course it relates to.  It then creates the form class (followed by some further magic code that I quickly stepped through).

The action of clicking on the 'post to forum' button appears to send a post back (along with all of the contents of the form) to post.php (the same script used to create the form).  When this occurs, a message is displayed and then a redirect occurs to the forum view summary.  But where in the code is the database updated?  One way to find out is to search for the redirect.  Whilst browsing through the code I stumble across a comment that says 'adding a new discussion'.  The database appears to be updated through a call to forum_add_discussion.

Posting a reply to a discussion

The post.php script is also used to save replies to discussions (as well as adding new discussions) to the database.  When a user clicks on a discussion (from a list of discussions created by discuss.php), links to send replies are represented by calls to post.php with a reply parameter (along with a post number, i.e. post.php?reply=4).  The action of clicking on this link presents the previous message, along with the form where the user can enter a response.

To learn more about how this code works, I browse through the forums lib file and uncover a function called forum_add_new_post.  I then search for this in post.php and discover a portion of code that handles the postback from the HTML form.  I don't explore any further having learnt (quite roughly) where various pieces of code magic seem to lie.

Summary

The post.php script does loads of stuff.  It weighs in at around seven hundred lines in length and contains some huge conditional statements.

Not only does post appear to manage the adding of new discussions to a forum but it also appears to manage the adding, editing and deletion of forum messages.  To learn about how this script is structured I haven't been able to look at function definitions (because it doesn't contain any) but instead I have had to read comments.  Comments, it has been said, can lie, whereas code always tells the truth.  More functions would have helped me to more quickly learn the structure of the post.php script.

The creation of the user interfaces is partially delegated to the mod and post form classes.  Database updates are performed through the forum/lib.php file.  I like some of the function abstractions that are beginning to emerge, but any programming file that contains both HTML and SQL indicates there is more work to be done.  The reason for this aesthetic (and personal) opinion is simple: keeping these two types of code separate has the potential to help developers become quickly familiar with where certain types of software actions are performed.  This, in turn, has the potential to save developer time.

One of the central areas of functionality that forum developers need to understand is how Moodle works and uses forms.  This remains an area of mystery to me, and one that I hope to continue to learn about.  Another area that I might explore is how PHP has been used to implement different forum systems so I can begin to get a sense of how PHP is written by different groups of developers.

Acknowledgements: Photograph licensed under creative commons by ciaron, liberated from Flickr.


Forums 2.0

Edited by Christopher Douce, Tuesday, 20 May 2014, 09:52

I like forums, I use them a lot.  I can barely remember when I didn’t know what one was.  I think my first exposure to forums might have been through a dial-up bulletin board system (used in the dark ages before the internet, of course).  This was followed by a brief flirtation with usenet news groups.

When trying to solve some programming problems, I more often than not would search for a couple of keywords and then stumble across a multitude of different forums where tips, tricks and techniques might be debated and explored.  A couple of years ago I was then introduced to the world of FirstClass forums (wikipedia) and then, more recently, to Moodle forums.  Discussions with colleagues have since led me towards the notion of e-tivities.

I have a confession to make: I use my email account for a whole manner of different things.  One of the things that I incidentally use my email account for is sending and receiving email!  I occasionally use email as a glorified ‘todo’ list (albeit one that has around a thousand items!)  If something comes in that is interesting and needs attention, I might click on an ‘urgent’ tick box so that I remember to look at the message again at a totally unspecified time in the future.  If it is something that must be bounded by time, I might drag the item into my calendar and ask my e-mail client to remind me about it at a specified time in the future (I usually ponder over this for around half a minute before choosing one of two options: remind me in a week’s time, or remind me in a fortnight).

I have created a number of folders within my email client where I can store interesting stuff (which I very often subsequently totally forget about).  Sometimes, when working on a task, I might draft out some notes using my email editor and then store them in a vaguely titled folder.

The ‘saving of draft’ email doesn’t only become something that is useful to have when the door knocks or the telephone rings – email, to me, has gradually become an idea and file storage (and categorisation) tool that is an integral part of how I work and communicate.  I think I have heard it said that e-mail is the internet’s killer application (wikipedia).  For me, it is a combined word processor, associative filing cabinet, ideas processor and general communications utility.

Returning to the topic of forums… Forums are great, but they are very often nothing like email.  I can’t often click and drag forum messages from one location into a folder or to a different part of the screen.  I can’t add my own comments to other people’s posts that only I can see (using my mail client I can save copies of email that other people send me).  On some forum systems I can’t sort the messages using different criteria, or even search for keywords or phrases that I know were used at some point.

My forum related gripes continue: I cannot delete (or at least hide) the forum messages that I don’t want to see any more.  On occasions I want to change the ‘read status’ from ‘read’ to ‘unread’ if I think that a particular subject that is being discussed might be useful to remember when I later turn to an assessment that I have to submit.  I might also like to take fragments of different threads and group them together in a ‘quotation set’, building a mini forum-centric e-portfolio of interesting ideas (this said, I can always copy and paste to email!)  If a forum were like a piece of paper where you could draw things at any point, I might want to put some threads on the left of the page (those points that I was interested in) and others on the right of the page (or vice versa).

I might want to organise the threads spatially, so that the really interesting points might be at the top, or the not so interesting points at the bottom – you might call this ‘reader generated threading!’  When one of my colleagues makes a post, there might be an icon change that indicates that a contribution has been made against a particular point.

I might also be able to save a thread (or posting) layout, depending on the assignment or topic that I am currently researching.  It might be possible to create a ‘thread timeline’ (I have heard rumours that Plurk might do something like this), where you see your own structured representation of one or more forums change over time.  Of course, you might even be able to share your own customised forumscape with other forum users.

An on-line forum is undoubtedly a space where learning can occur.  When we think about how we might further develop the notion of a forum we soon uncover the dimension of control.

Currently, the layout and format of a forum (and what you can ultimately do with it) is constrained by the design of the forum software and a combination of settings assigned by an administrator.  Allowing forum users to create their own customised view of a forum communication space may give learners tools to make sense of different threads of communication.  Technology can then be used to enable an end user to formulate a display that most effectively connects new and emerging discussions with existing knowledge.

This display (or forumscape) might also be considered as a mask.  Since many different discussions can occur on a single forum at the same time choosing the right mask may help salient information become visible.

The FirstClass system, with its multiple discussion areas and the ability to allow the end user to change the locations of forum icons on a ‘First Class’ desktop begins to step toward some of these ideas.

Essentially, I would like discussion forums to become more like my email client: I would like them to do different things for me.  I would like forum software to not only allow users to share messages.  I would like forum software to become richer and permit the information they display to the users be more malleable (and manageable).  I know this would certainly be something that would help me to learn!

Acknowledgements: Picture from Flickr taken by stuckincustoms, licensed under creative commons.

Permalink 1 comment (latest comment by Sam Marshall, Thursday, 5 Feb 2009, 12:30)

How Moodle block editing works: database (part 2)

Edited by Christopher Douce, Wednesday, 21 Jul 2010, 18:05

This is a second blog entry about how Moodle manages its blocks (which can be found either at a site level or at a course level).  In my previous post I wrote about the path of execution I discovered within the main Moodle index.php file.  I discovered that the version of Moodle that I was using presented blocks using tables, and that blocks made use of some interesting object-oriented features of PHP to create the HTML code that is eventually presented to the end user.

This post has two objectives.  The first is to present something about the database structures that are used to store information about which blocks are stored where, and secondly to explore what happens when an administrator clicks on the various block editing functions.  The intention behind this post is to understand Moodle in greater detail to uncover a little more of how it has been designed.

Blocks revisited

Blocks, as mentioned earlier, are pieces of functionality that can sit on the left hand or right hand borders of courses (or the main Moodle site page).  Blocks can present a whole range of functions ranging from news items through to RSS feeds.

Blocks can be moved around within a course page with relative ease by using the Moodle edit button.  Once you click on ‘edit’ (providing it is there and you have the appropriate level of permissions), you can begin to add, remove and move blocks around using a couple of icons that are presented.  Clicking on the left icon moves the block to the left hand margin, clicking the down arrow icon changes its vertical position and so on.

One of my objectives with this post is to understand what happens when these various buttons are clicked on.  What I am hoping to see are clearly defined functions which will be called something like moveBlockUp, moveBlockDown or deleteBlock.

Perhaps with future versions it might be possible to have a direct manipulation interface (wikipedia) where, rather than having buttons to press, users will be able to drag blocks around to rapidly customise course displays.  Proposing ideas and problems to be solved is a whole lot easier than going ahead and solving them.  Also, to happily prove there’s no such thing as an original thought, I have recently uncovered a Moodle documentation page.  It seems that this idea has been floating around since 2006.

Before I delve into trying to uncover how each of the Moodle block editing buttons work, it is worthwhile spending some time to look at how Moodle remembers what block is placed where.  This requires looking at the database.

Remembering block location

I open up my database manipulation tool (SqlYog) and begin to browse through the database tables that are used with Moodle.  I quickly spot a bunch of tables that contain the name block.  One that seems to be particularly relevant is a table called block_instance.

The action of creating a course (and adding blocks to it) seems to create a whole bunch of records in block_instance.  Block_instance appears to be the table that Moodle uses to remember which block should be displayed, and where.

The below graphic is an excerpt from the block_instance data table:

The field weight seems to relate to the vertical order of blocks on the screen (I initially wondered whether it related to, in some way, some kind of graphical shading, thinking of the way that HTML uses the term weight).  Removing a block from a course seems to change the data within this table.
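As a guess at how the weight field might drive the 'move up' button, here is a hypothetical sketch in Python: moving a block up would swap its weight with the neighbouring block above it within the same column.  (Moodle's real blocks_move_block is PHP and doubtless does more than this.)

```python
def move_block_up(blocks, block_id):
    """blocks: list of dicts with 'id' and 'weight', modelling one
    page column of block_instance records.  Swap weights with the
    block above, then return the column in display order."""
    blocks.sort(key=lambda b: b["weight"])
    for i, b in enumerate(blocks):
        if b["id"] == block_id and i > 0:
            above = blocks[i - 1]
            b["weight"], above["weight"] = above["weight"], b["weight"]
            break
    return sorted(blocks, key=lambda b: b["weight"])

column = [{"id": 10, "weight": 0}, {"id": 11, "weight": 1}]
print([b["id"] for b in move_block_up(column, 11)])  # block 11 is now on top
```

If this guess is right, the button clicks would amount to little more than weight updates in block_instance followed by a page redraw.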

The blockid seems to link each entry within block_instance to data items held within the block table:

The names held within the name field (such as course_summary) are connected to the programming code that relates to a particular block.  The cron (and the lastcron) relate to regular processes that Moodle must execute.  With the default installation of Moodle everything is visible, and at the time of writing I have no idea what multiple means.

Returning to block_instance, does the pageid field relate to the id used in the course?  Looking at the course table seems to add weight to this hypothesis.

I continue my search for truth by rummaging around in the Moodle documentation, discovering a link to the database schema and uncover some Block documentation that I haven’t seen before (familiarity with material is a function of time!)  This provides a description of the block development system as described by the original developer.

Knowing that these two tables are used to store block location, my question from this point onwards is: how do these tables get updated?

To answer this question I applied something that I have called ‘the law of random code searching’: if you don’t know what to look for and you don’t know how things work, carry out a random code search to see what the codebase tells you.  Using my development environment I search to find out where the block_instance datatable is updated.

Calls to the database seem to be spread out over a number of files: blocks, lib, accesslib, blocklib, moodlelib, and chat/lib (amongst others).  This seems to indicate that there is quite a lot of coupling between the different sections of code (which is probably a bad thing when it comes to understanding the code and carrying out maintenance).

Software comprehension is sometimes an inductive process.  Occasionally you just need to read through a code file to see if it can yield any clues about its design, its structure and what it does.  I decided to try this approach for each of the files my search results window pointed to:

Accesslib
Appears to handle access control (or permission management) for parts of Moodle.  The comments at the top of the file mention the notion of a ‘context’ (which is a badly overloaded word).  The comments provide me no clue as to the context in which context is used.  The only real definition that I can uncover is the database description documentation which states, ‘a context is a scope in Moodle, for example the whole system, a course, a particular activity’.  In AccessLib there are some hardcoded definitions for different contexts, i.e. CONTEXT_SYSTEM, CONTEXT_USER, CONTEXT_COURSECAT and so on.

The link to the blocks_instance database lies within a huge function called create_context which updates a database table of the same name.  I’ve uncovered a forum explanation that sheds a little more light onto the matter, but to be honest, the purpose of these functions is going to take some time to uncover.  There is a clue that the records held within the context table might be cached for performance reasons.  Moving on…

Moodlelib

Block_instance is mentioned in a function named remove_course_contents which apparently ‘clears out a course completely, deleting all content but don’t delete the course itself’.  When this function is called, modules and blocks are removed from the course.  Moodlelib is described as ‘main library file of miscellaneous general-purpose Moodle functions’ (??), but there is a reference towards another library called weblib which is described as ‘functions that provide web output’.

Blocks
A comment at the top of the blocks.php file states that it ‘allows the admin to configure blocks (hide/show, delete and configure)’.  There is some code that retrieves instances of a block and then deletes the whole block (but in what ‘context’ this is done, at the moment it’s not clear).

Blocklib
The file contains the lion’s share of references to the block_instance database.  It is said to include ‘all the necessary stuff to use blocks in course pages’ (whatever that means!)  At the top there are some constants for actions corresponding to moving a block around a course page.  Database calls can be found within blocks_delete_instance, blocks_have_content, blocks_print_group and so on.  The blocks_move_block function seems to adjust the contents of the database to take account of movement.  There also appears to be some OO type magic going on that I’m not quite sure about.  Perhaps the term ‘instance’ is being used in too many different ways.  I would agree with the coder: blocklib does all kinds of ‘stuff’.

Lib files
Reference to block_instance can be found in lib files for three different blocks: chat, lesson and quiz.  The functions that contain the call to the database relate to the removing of an ‘instance’ of these blocks.  As a result, records from the block_instance table are removed when the functions are called.

So, what have I learnt by reading all this?  I’ve seen how the database stores things, that there is a slippery notion of a course context (and mysterious paths), and I know the names of some files that do the block editing work, but I’m not quite sure how they do it.  There is quite a lot of complexity that has not yet been uncovered and understood.

Digressions

I have a cursory glance through the lib folder to see what else I can discover and find an interestingly named script file entitled womenslib.php.  Curious, I open it and see a redirect to a Wikipedia page.  The Moodle developers obviously have a sense of humour, but unfortunately mine had failed!  This minor diversion was unwelcome (humour failure exception), costing me both time and ‘head’ space!

Bizarrely, I also uncover a seemingly random list of words (wordlist.txt) that begins: ‘ape, baby, camel, car, cat, class, dog, eat …’.  Wondering whether one of the developers had attended the famous Dali school of software engineering, I searched for a file reference to this mysterious ‘wordlist’.

It appeared that our mysterious list of words was referenced in the lib\setup.php file, where a path to our wordlist was stored in what I assumed to be a Moodle configuration variable.  How might this file be used?  It appears it is used within a function called generate_password.

Thankfully the developers have been kind enough to say where they derived some of their inspiration from.   The presence of the wordlist is explained by the need to construct a function to create pronounceable automatically generated passwords (but perhaps only in English?)
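The idea can be sketched very simply (in Python rather than Moodle's PHP, and with made-up names — this is not Moodle's actual implementation): join a few words from the list so the result is pronounceable, then append some digits.

```python
import random

# Illustrative sketch of the idea behind generate_password, not Moodle's
# actual code: glue together words from a wordlist so the result is
# pronounceable, then append two digits for a little extra entropy.
WORDLIST = ["ape", "baby", "camel", "car", "cat", "class", "dog", "eat"]

def pronounceable_password(wordlist, words=2, rng=random):
    picked = [rng.choice(wordlist) for _ in range(words)]
    return "".join(picked) + str(rng.randint(10, 99))

print(pronounceable_password(WORDLIST))  # e.g. "catdog42"
```

As the post notes, a word-based scheme like this is only ‘pronounceable’ for speakers of the language the wordlist happens to be in.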

This was all one huge digression.  I pulled myself together just enough to begin to uncover what happens when a user clicks on the block move up, move down, or delete buttons when a course is running in edit mode.

Button click action

Returning to the task in hand, I add two blocks (both in the right hand column, one situated on top of the other) to my local Moodle site with a view to understanding the function code that contributes to the moveBlockUp and deleteBlock functionality.

I take a look at the links that correspond to the move up and delete icons.  I notice that the action of clicking sends a bunch of parameters to the main Moodle index.php.  The parameters are sent via GET (which means they are sent as part of the hypertext link).  They are: instanceid (which comes straight out of the block_instance table), sesskey (which reminds me, I really must try to understand how Moodle handles sessions (wikipedia) at some point), and a blockaction parameter (which is either moveup or delete in the case of this scenario).
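To make that concrete, here is a sketch (in Python, with invented values) of the kind of link the edit icons generate, and of how the three parameters travel in the query string:

```python
from urllib.parse import urlencode, parse_qs

# Invented values: instanceid comes from the block_instance table,
# sesskey from the user's session, and blockaction names the operation.
params = {"instanceid": 42, "sesskey": "abc123", "blockaction": "moveup"}
link = "index.php?" + urlencode(params)
print(link)  # index.php?instanceid=42&sesskey=abc123&blockaction=moveup

# On the server side, index.php reads the same values back out of the
# query string (PHP would use $_GET; parse_qs plays that role here).
query = parse_qs(link.split("?", 1)[1])
print(query["blockaction"])  # ['moveup']
```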

The question here is: what happens within index.php?  Luckily, I have a debugger that will be able to tell me (or, at least, help me!)

I log in as an administrator through my debugger.  When I have established a session, I add some breakpoints to my index.php code and launch it using the parameters for ‘move block upwards’.

Index.php begins to execute, and a call to page_create_object is made.  It looks like a new object is created.  An initialisation function within the page_base class is called (contained within pagelib).  A blocks_setup function is called and block positions from the block_instance table are retrieved.  After some further tracking I end up at a function called blocks_execute_url_action.  The instanceid is retrieved and a call is made to blocks_execute_action, where the block action (moveup or delete) is passed in as a parameter along with the block instance record that has just been retrieved from the database.

In blocks_execute_action a 'mother of all switch statements' makes a decision about what should be done next.  After some checks, two update commands are issued to the database through the update_record function, updating weight values (to change the order of the respective blocks).  With all the database changes complete, a page redirect occurs to index.php.  Now that the database has the correct representation of where each block should be situated, index.php can go ahead and display them.
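In effect (sketched here in Python rather than Moodle's PHP, with an invented data shape), the moveup branch swaps the weight of the clicked block with the block above it, so that the redirected page renders them in the new order:

```python
# Each dict stands in for a block_instance record in one column;
# 'weight' controls the top-to-bottom display order.
def move_block_up(column, instance_id):
    column.sort(key=lambda block: block["weight"])
    for i, block in enumerate(column):
        if block["id"] == instance_id and i > 0:
            above = column[i - 1]
            # The two update_record-style writes: swap the weight values.
            block["weight"], above["weight"] = above["weight"], block["weight"]
            break

column = [{"id": 7, "weight": 0}, {"id": 9, "weight": 1}]
move_block_up(column, 9)
print(sorted(column, key=lambda b: b["weight"]))
# [{'id': 9, 'weight': 0}, {'id': 7, 'weight': 1}]
```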

Is the same mechanism used for course pages?

A very cursory understanding tells me that the course/view.php script has quite a lot to do with the presentation of courses, but at this point an understanding of it is proving to be elusive.  Let’s see what I can find.

Initially it does seem that the index.php script controls the display of a Moodle site and that the course/view.php script controls the course display.  Moving the mouse over the ‘move block up’ icons reveals a hyperlink to the view.php script with GET parameters of: id (which corresponds to the course number held within the course data table), instanceid (which corresponds to a record within the block_instance table), and sesskey and blockaction parameters (as with index.php).

To get a rough understanding of how things work, I do something similar to before: open up a session through my debugger and launch view.php with this bunch of parameters.  The view.php source is striking.  It doesn’t seem to be very long, nor does it produce any HTML, so it looks like there’s something subtle going on.

In view.php, there are some parameter safety checks, followed by some context_instance magic and checking of the session key, followed by calls to the familiar page_create_object (mentioned in the earlier section).  Blocks_setup is then called, followed by blocks_get_by_page_pinned and blocks_get_by_page, which ask the database which blocks are associated with this particular page (which is a course page).

Like earlier, there is a call to blocks_execute_url_action, which updates the database to carry out the action that the administrator clicked on.  At the end of the database update there is a redirect.  Instead of going to index.php, the redirect is to view.php along with a single parameter which corresponds to the course id.

This raises the question: what happens after the view.php redirect?

Redirect to view.php

view.php makes a call to the database to get the data that corresponds to the course id number it has been given.  There is then a check to make sure that the user who is requesting the page is logged into Moodle, and eventually our old friends page_create_object and blocks_setup are called, but this time, since no buttons have been clicked on, we don’t redirect to another page.

Towards the end of view.php we can begin to see some magic that begins to produce the HTML that will be presented to the user.  There is a call to print_header.  There is then a script include (using the PHP keyword ‘require’) which creates the bulk of the page that is presented to the user, building the HTML to present the individual blocks.  When running within my debugger, the script course/format/weeks/format.php was included.  The script that is chosen depends on the format of the course that has been chosen.  When complete, view.php adds the footer and the script ends.
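The choice of included script can be pictured as a simple dispatch on the course's format field.  A sketch (the path pattern matches what I observed; ‘weeks’ is the only value I actually saw in the debugger, so any other format name here is an assumption):

```python
# Sketch of the include choice in view.php: the course's format field
# selects which format script builds the page body.  Only 'weeks' was
# observed in my debugger; other values are illustrative assumptions.
def format_script(course_format):
    return "course/format/{}/format.php".format(course_format)

print(format_script("weeks"))  # course/format/weeks/format.php
```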

Summary

So, what have I learnt from all this messing about?

It seems that (broadly speaking) the code used to move blocks around on the main Moodle site is also used to move blocks around on a course page, but perhaps this isn’t too surprising (but it is reassuring).  I still have no idea what ‘pinned blocks’ means or what the corresponding data table is for but I’m sure I’ll figure it out in time!

Another thing that I have learnt is that the course view and the main index.php pages are built in different ways.  As a result, if I ever need to change the underlying design or format of a course, I now know where to look (not that I ever think this is something that I’ll need to do!)

I have seen a couple of references to AJAX (MoodleDocs) but I have to confess that I am not much wiser about what AJAX style functionality is currently implemented within the version of Moodle I have been playing with.  Perhaps this is one of those other issues that will become clearer with time (and experience).

One thing, however, does strike me: the database and the user interface components are very closely tied together (closely coupled), which may, in some cases, make change difficult.  One of the things that I have on my perpetual ‘todo’ list is to have a long hard look at the Fluid Project, but other activities must currently take precedence.

This pretty much concludes my adventure into the world of Moodle blocks. There’s a whole load of Moodle related stuff that I hope to look at (and hopefully describe) at some point in the future: groups, roles, contexts, and forums.  Wish me luck!

Acknowledgements: Image from lifeontheedge, licensed under Creative Commons.


How Moodle block editing works : displaying a block (part 1)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:58

One of the great things about Moodle (other than the number of databases it can use!) is the way that courses can be easily created and edited.  One of its best features is the edit button that can be found at the top of many pages.  Administrators and course managers can push this button and quickly add and remove functionality to redesign a course or a site within seconds.

This blog post is the first in a series of two (but might even extend to three) that aims to answer the question: how does the Moodle block editing magic work?  To answer this question I found that it was useful to split this big question into a number of smaller questions: how are blocks presented to the user? how are block layouts stored in the Moodle database? and what happens when the user clicks on the edit button and makes changes to the layout of a site or a course?

There are two reasons for wanting to answer these questions.  The first is that knowing something about this key part of Moodle might help me to understand more about its architecture, which might help me in the future if I have to make any changes as a part of the EU4ALL project.  The second is pure curiosity, particularly regarding the database tables and structures - I would like to know how they work.

There are two broad approaches that I could take to answer these questions: look at things from the top down, or from the bottom up.  I could either look at how the user interfaces are created, or I could have a look at the database to see if I can find data tables that might be used to store data that is used when the Moodle user interface is created.  In the end I used a combination of top down and bottom up approaches to understand a bit of what is going on.

This post will present what I have learnt about how Moodle presents blocks.  The second post will be about what I have found out about the database and how it works (in relation to Moodle blocks) and what happens when you click on the various block editing icons.

There will be loads of detail which will remain unsaid and I’ll be skipping over loads of code subtleties that I haven’t yet fully understood!  I’ll also be opinionated, so advance apologies to any Moodle developers who I might inadvertently offend.  I hope my opinions are received in the positive spirit in which they are intended.

Introducing blocks

Blocks are bits of functionality which sit on either side of a Moodle site or course.  They can do loads of stuff: provide information to students about their assignment dates, or provide access to discussion forums.  When first looking at the Moodle world, I had to pause a moment to distinguish between blocks, resources and activities.  Blocks, it might be argued, are pieces of functionality that can support your learning, whilst activities and resources may be a central part of your learning (but don’t quote me on that!)

Not long after starting to look at the blocks code, I discovered a developer page on the subject.  This was useful.  I soon found out that there are apparently plans to improve the block system for the next version of Moodle.  The developers have created an interestingly phrased bug to help guide the development of the next release.  This said, all the investigations reported here relate to version 1.9+, so things may very well have moved on.

Looking at Index

Blocks can be used in at least two different ways: on the main Moodle site area (which is seen when you enter a URL which corresponds to a Moodle installation) and within individual courses.  I immediately suspect that there is some code that is common between both of them.  To make things easy for myself, I’ve decided (after a number of experiments) to look at how blocks are created for a Moodle site.

To start to figure out how things work the first place that I look at is the index.php file.  (I must confess that I actually started to try to figure out what happened when you click on the editing button, but this proved to be too tough, so I backtracked…)

So, what does the index.php file do?  I soon discover a variable called $PAGE and ask the innocuous question of ‘why are some Moodle variables in UPPERCASE and others in lowercase?’  I discover an answer in the Moodle coding guidelines: anything that is in uppercase appears to be a global variable.  I try to find a page that describes the purpose of the different global variables, but I fail, instead uncovering a reference to session variables, leaving me wondering what the $PAGE class is all about.

Pressing on I see that there are some functions that seem to calculate the width of the left and the right hand areas where the blocks are displayed.  There is also some code that seems to generate some standard HTML for a header (showing the Moodle instance title and login info).

The index page then changes from PHP to HTML and I’m presented with a table.  This surprises me a little.  Tables shouldn’t really be used for formatting and instead should only be used to present data.  It seems that the table is used to format the different portions of the screen, dividing it into a left hand bunch of columns, a centre part where stuff is displayed, and a right hand column.

It appears that the code that co-ordinates the printing of the left and right blocks is very similar, the only difference being the parameters that indicate whether things appear on the left or on the right.

The index file itself doesn’t seem to display very much, so obviously the code that creates the HTML for the different blocks is to be found in other parts of the Moodle codebase.

Seeding program behaviour

To begin to explore how different blocks are created I decide to create some test data.  I add a single block to the front page of Moodle and position it at the top on the right hand side.

Knowing that I have one block that will be displayed, I can then trace through the code when the ‘create right hand side’ code is executed, using my NuSphere debugger to see what is called and when.

One thing that I’m rather surprised about is how much I use the different views that my debugger offers.  It really helps me to begin to learn about the structure of the codebase and the interdependencies between the different files and functions.

Trying to understand the classes

It soon becomes apparent that the developers are making use of some object-oriented programming features.  In my opinion this is exactly the right thing to do.  I hold the view that if you define the problem in the right way then its solution (in terms of writing the code that connects the different definitions together) can be easy, providing that you write things well (this said, I do come from a culture of Java and C#, and was brought up, initially, on a diet of Pascal).

After some probing around there seem to be two libraries that are immediately important to know about: weblib and blocklib.  The comment at the top of weblib describes it as a ‘library of all general-purpose Moodle PHP functions and constants that produce HTML output’.  Blocklib is described as ‘all the necessary stuff to use blocks in course pages’.

In index, there is a call to a function called blocks_setup (blocks, I discover, can be pinned true, pinned both, or pinned false – block pinning is associated with lessons, something that I haven’t studied).  This function appears to call another function named blocks_get_by_page (passing it the $PAGE global).  This function returns a data structure that contains two arrays: one called l and the other called r.  I’m assuming here that the array data has been pulled from the database.

The next function that follows is called blocks_have_content.  This function does quite a bit.  It takes the earlier data structure and translates each block number (which indicates which block is to be displayed on the page) into a block name through a database call.  The code then uses this name to instantiate an object whose class is similar to the block name (it does this by prepending ‘block_’ to the start).  There is something to be cautious about here: there is a dependency between the contents of the database (which are added to when the Moodle database is installed) and the name of the class.  If either one of these were to change, the blocks would not display properly.  The class that corresponds to the news block is named ‘block_news_items’.  This class is derived from (or inherits from) another class called block_base that is defined within the file moodleblock.class.php.  A similar pattern is followed with other blocks.

Is_empty()

Following the program flow, there is a call to a function called is_empty() within blocklib.php.  This code strikes me as confusing since is_empty should only be doing one thing.  Is_empty appears to have a ‘side effect’ of storing the HTML for a block (which comes from a call to get_content) in a variable called ‘content’.  Functions should only do what they say they do.  Anything else risks increasing the burden of program comprehension and maintenance.
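The complaint can be sketched like this (in Python, with invented class names — not Moodle's code): a predicate whose name promises only a check, but which also builds and stores the block's HTML as a side effect, versus a version that does one thing:

```python
# Invented classes to illustrate the point about side effects.
class SideEffectingBlock:
    def is_empty(self):
        # The name promises a check, but this also builds and caches
        # the block's HTML -- the hidden side effect complained about.
        self.content = "<ul><li>news item</li></ul>"
        return not self.content

class HonestBlock:
    def get_content(self):
        return "<ul><li>news item</li></ul>"

    def is_empty(self):
        # A check and nothing more.
        return not self.get_content()
```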
The Moodle codebase contains several versions of get_content, one for each of the different blocks that can be displayed.  The version that is called depends on which object Moodle is currently working through.  Since there is only one block, the get_content function within block_news_items is called.  It returns some HTML that describes how the block will be presented.  This HTML is stored in the structure which originally described which block goes where.  If you look through the pageblocks variable, the HTML can be found by going to either the left or right array, looking in the ‘obj’ field, then going to ‘content’.  In ‘content’ you will find a further field called ‘text’ that contains the HTML that is to be displayed.

When all the HTML has been safely stored away in memory it is almost ready to be printed (or presented to a web client).  Calls to print_container_start() and print_container_end() delineate a call to blocks_print_group.  In this function there is a database call to check whether the block is visible, and then a call to _print_block() is made.  This is a member function of a class, as indicated by the leading underscore.  The _print_block() function can be found within the moodleblock.class file.  This function (if you are still following either me or the code!) makes a call to the print_side_block function (one of those general-purpose PHP functions) contained within weblib.php.

Summary and towards part 2

I guess my main summary is that to create something that is simple and easy to use can require quite a lot of complicated code.  My original objective was to try to understand the mechanisms underpinning the editing and customising of courses (particularly blocks), but I have not really looked at the differences between how blocks are presented within the course areas and how blocks are presented on the main site.  Learning about how things work has been an interesting exercise.
One point that I should add is that from an accessibility perspective, the use of tables for layout purposes should ideally be avoided.  What is great is that there is some object-oriented code beginning to appear within the Moodle codebase.  What is confusing (to me, at least) is the way that some data structures can be so readily changed (or added to) by PHP.  I hold the opinion that stronger data types can really help developers to understand the code that they are faced with, since they constrain the actions that can be carried out on those types.  I also hold the view that stronger data typing can really help your development, since you give your development tools more of an opportunity to help you (by presenting you with autocomplete or intellisense options), but these opinions probably reflect my earlier programming background and experience.

On the subject of data types, the next post in this series will be about how the Moodle database stores information about the blocks that are seen on the screen.  Hopefully this might fill the gaps of this post where the word ‘database’ is mentioned.

Acknowledgement: Picture by zoologist, from Flickr.  Licensed under Creative Commons.

Learning Technologies 2009

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:49

Yesterday I went to the Learning Technologies exhibition held at Kensington Olympia, London.  This is the third time I have been to this event.  The first time I went (back in 2004) was because I also attended a related exhibition called BETT which is hosted a couple of weeks earlier.  The two shows have different audiences: BETT is more focussed towards the schools and government funded education sector, whereas the Learning Technologies exhibition focuses more on education (or training) software, services and systems for private sector companies (but there is much cross over, of course).
Every year there seems to be a conference that is linked to the exhibition, but I have so far never been able to attend.

Last year

Last year I came away from the exhibition having learnt a few new things.  I learnt that there was a range of products called competency management systems which enable corporations to learn about what their employees know (and how this maps to individual training courses).  I also learnt about the release of new mobile learning systems.  The prevailing theme of last year’s exhibition seemed to be the concept of Rapid E-learning (more of this later).

My objective for this visit was to determine whether there were any new themes (or innovations) in learning technologies emerging from the commercial sectors.  I also had one eye on the subject of accessibility and the extent to which Moodle was beginning to feature in the commercial e-learning sphere.

Themes

Whilst walking around the exhibition I asked a number of exhibitors whether they thought there were any differences between this year’s exhibition and the previous year’s.  Two main themes seemed to dominate.  The first is the application of web 2.0 ideas to learning systems.  The second is the idea of informal learning.  Both of these themes were, perhaps unsurprisingly, reflected in articles in the free magazine that came with admission.  I also picked up on a number of other themes too.  These are listed below.

Web 2.0

The notion of web 2.0 (or the 'participatory web') seemed to feature quite heavily.  Given the amount of discussion this label has generated, this perhaps isn’t surprising.  It was interesting to see that an article written by the current Open University vice-chancellor was given a mention in the exhibition and conference magazine.  One comment that I heard from the exhibitors is that there is a wider acceptance of the use of blogs and wikis.  One vendor who I spoke to was called Infinity Learning.
Infinity were presenting their 'learning portal' product, which provides some functionality to allow learners to rate and review courses.  It was interesting since it featured a recommendation system akin to the one Amazon uses when it offers you products that other people have bought.  I presume this will expose the learning pathways that other employees or learners have followed, allowing water cooler discussions about which learning activities were helpful to become more explicit.

Informal learning

I have to confess, I do struggle with understanding the concept of informal learning, but the exhibition magazine points me in the direction of a related blog post.  There are a couple of links within that post that might be useful.  One vendor connected e-learning and informal learning by describing an approach where large quantities of digital resources are placed on-line, allowing employees to gain access to useful information as and when required, allowing gaps in knowledge about procedure or practice to be filled.  Informal learning, in this sense, can be connected to some of the other themes that could be found within the exhibition, specifically ‘bite sized’ or on-demand learning (which may or may not incorporate product simulations).

Gaming

There seemed to be a bit of a buzz about gaming, but I didn’t get a sense that this was one of the big topics of the show.  When speaking to one exhibitor, gaming was mentioned in the same sentence as virtual worlds.

Rapid e-learning

The idea of rapid e-learning initially puzzled me when I first came across it last year.  I soon realised that rapid e-learning is facilitated by tools that allow e-learning designers to create their own in-house courses without having to go to professional e-learning content development companies (of which there are many).  Last year, the word at the exhibition was that rapid e-learning tools were causing a decline in the price of bespoke e-learning contracts.
Every exhibitor that had a rapid e-learning tool seemed to have their own learning management system of some kind.  When it comes to industry standards (in the e-learning world), the one that is most often mentioned is SCORM (wikipedia).

Bite sized e-learning

Bite sized e-learning seems to relate primarily to e-learning objects that are quite small.  You might use informal learning and bite sized learning in the same sentence.  These might be small 'mini courses' that give you instruction about how to carry out a particular task or operation within your institution.  This is also related to the next theme: simulations.  (As an aside, I'm assuming that a bite sized piece of e-learning doesn't last more than ten or twenty minutes, but this wasn't a question that I really asked.)

Simulations

A number of vendors were selling tools that enable you to build simulations of any IT system that your organisation might have deployed.  Simulations can be used either to train up new employees, or to offer 'bite sized' reminder courses that can help to guide employees through the features of a large system that might not be used very often.  The presence of these products did make me wonder how the provision of simulation recording (and development) systems might stack up against quick and easy to use open source tools such as Wink (but this exposes a dimension of simulation systems that has illustration at one end and involvement at the other).

Competency Management

I love this term!  It has such a positive feel to it!  Like last year, there were some vendors selling systems that attempt to bridge the gap between human-resources systems and training delivery systems.  I know very little about human resource management systems, but I can see that the link to LMS systems that deliver different kinds of learning might be useful.
When asking about the different personnel management systems on the market, Oracle seemed to be the one mentioned most frequently, having acquired Peoplesoft (wikipedia).

Content Development

I stumbled across the term 'workflow management' a couple of times.  I can see the purpose of using an e-learning material workflow management system: a company needs to draw upon the skills and abilities of different people within an organisation, some of whom might be external contractors.  I find the area of workflow management systems interesting since they can really take advantage of the fact that IT systems are exceptionally good at remembering who did what and when.

Moodle

Moodle cropped up a couple of times.  Kineo, a company based in Brighton in the UK, was offering a cut-price hosted solution for a period of twelve months.  As a part of the package they appeared to be offering customising (or branding) of the Moodle instance to match the identity of your institution, and some training.  Sadly, all the guys at Kineo were way too busy to have a chat with me!  The second big Moodle related find was a product called Moomis, marketed by Aardpress.  Moomis is apparently a Moodle 'plug-in' that can add Continuing Professional Development (CPD) and competency management functionality (my favourite term) to make Moodle more flavoursome for the more commercially inclined.

Accessibility

Since e-learning materials appear to be often created using rapid e-learning tools, the accessibility of the resulting material is likely to be partially dependent upon the structure of the digital resources that are generated.  I didn't have much of a chance to quiz vendors about this issue, but well known UK companies such as Epic and Brightwave are known to appreciate the importance of accessibility.
On another note, I was interested to discover the presence of Texthelp, a company who produce a tool called Read&Write Gold (they also produce the BrowseAloud system, which can be used in conjunction with the main Open University website).  They kindly gave me a quick demo and said that they had just released a new version which incorporates new synthetic voices and updated dictionaries.  I also discovered the presence of the UK Council for Access and Equality, a not for profit organisation.

The downturn

The Learning Technologies exhibition seemed to be as busy as it was last year – it was certainly buzzing with visitors.  I asked a couple of people for their opinions on the current concerns about 'the downturn' and received a mixed set of responses.  Some companies, it was reasoned, were choosing to bring their training spend 'in-house', choosing to use rapid e-learning tools (but this was in line with some of the trends I felt were at the exhibition last year).  Other companies seemed to state that they had been affected, whereas others had a deliberate strategy of going after public sector projects.  One of the presentations that I briefly attended contained the argument that organisations should make use of learning technologies to ensure that employees are able to perform as efficiently as possible.  On-demand 'bite sized' e-learning will certainly help when it comes to carrying out complex infrequent tasks.

And finally

I also discovered the presence of a project called Next Generation Learning, a campaign sponsored by Becta.  As well as noticing the presence of organisations like the British Computer Society, I also noticed an organisation called the e-Learning Network (which appears to be a partner of the Association for Learning Technology), and was duly informed that associate membership was free.  Might be worth a look.

Summary

I quite like the Learning Technologies exhibition (I might even be able to attend the conference one day).
It's a good way to find out (very roughly and quickly) what's happening in the wider e-learning industry.  It's interesting to see that vendors offer a portfolio of different services which often includes content creation, tool development, managed learning environment provision and system hosting.  The concept of 'web 2.0' (whatever that means) seems to be a salient theme this year.  It was interesting to see the substantial use of the term informal learning.  It'll be interesting to see how the exhibition looks next year.

Acknowledgements: thanks to all those exhibitors who I spoke to!

Personalising museum experience

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:48

Last year was a fun year.  At one point I found I had a number of hours to kill before I caught an onward travel connection.  Since I was travelling through a city, I decided to kill some time by visiting some museums.  I have to confess I really like museums.  My favourite type is science and engineering museums.  I really like looking at machines, mechanisms and drawings, learning about the people and situations that shaped them.  I also like visiting art museums, but I will be the first to confess that I do find some of the exhibits that they can contain a little difficult to understand.

Starting my exploration

I stepped into the Thyssen-Bornemisza museum (wikipedia) with mild trepidation, not really knowing what I was letting myself in for.  After the entrance area I discovered a desk that was renting audio guides.  Since I felt that I might be able to gain something from the use of an audio guide (and since I was travelling alone, it could offer me some company), I decided to rent one for a couple of hours.  With my guide in hand I started to wander around the gallery.  The paintings appeared to be set out in a very particular and deliberate way.
The gallery designer was obviously trying to tell me something about the history of art (of which I know next to nothing). The paintings gradually changed from impressionism, to modernism, through to paintings that I could only describe as thoroughly abstract (some of which I thoroughly liked!)

Extending my guide

I remember stopping at a couple of paintings in the impressionist section. The disembodied voice of my guide was telling me to pay attention to the foreground and the background: particular details were considered to be important. I was given some background information about where the painter was working and who he was working with. On a couple of occasions I felt that I had been told a huge amount of detail, but I felt that none of it was sticking. I didn't have a mental framework around which to store these new facts that I was being presented with. Art history students, on the other hand, might have less trouble.

What I did discover is that some subjects interested me significantly more than others. I wanted to know which artists were influenced by others. I wanted to hear a timeline of how they were connected. I didn't just want my guide to tell me about what I was looking at; I wanted my audio guide to be a guide, to be more like a person who would perhaps direct me to things that I might be interested in looking at or learning about. I wanted my audio guide to branch off on an interesting anecdote about the connections between two different artists, or about the trials and tribulations of their daily lives. I felt that I needed this functionality not only to uncover more about what I was seeing, but also to help me to find a way to structure the information that I was hearing.

Alternative information

Perhaps my mobile device could present a list of topics or themes that related to a particular painting.
It might display the name of the artist, some information about the scene that was being depicted, and perhaps some keywords that correspond to the type under which it could be broadly categorised. Choosing these entries might direct you to related audio files or perhaps other paintings. A visitor might be presented with words like, 'you might want to look at this painting by this artist', followed by some instructions about where to find the painting in the gallery (and its unique name or number). If this alternative sounded interesting (but it wasn't your main interest) you might be able to store this potentially interesting diversion in a 'trail store', a form of bookmark for audio guides.

Personalised guides

Of course, it would be much better if you had your own personal human guide, but there is always the fear of sounding like an idiot if you ask questions like, 'so, erm, what is impressionism exactly?', especially if you are amongst a large group of people!

There are other things you could do too. Different visitors will take different routes through a gallery or museum. You might be able to follow the routes (or footsteps) that other visitors have taken. Strangers could name and store their own routes and 'interest maps'. You could break off a route half way through a pre-existing 'discovery path' and form your own. This could become, in essence, a form of social software for gallery spaces. A static guide might be able to present user-generated pathways through gallery-generated content.

Personal devices

One of the things I had to do when I explored my gallery was exchange my driving licence for a piece of clumsy, uncomfortable mobile technology. It was only later that it struck me that I had a relatively high-tech piece of mobile technology in my pocket: a mobile phone.
To be fair, I do hold a bit of fondness for my simple retro Nokia device, but I could imagine a situation where audio guides are not delivered by custom pieces of hardware, but instead streamed directly to your own hand-held personal device. Payment for a 'guide' service could be made directly through the phone. Different galleries or museums may begin to host their own systems, where physical 'guide access posters' give visitors instructions about how to access a parallel world of exploration and learning. Rather than using something that is unfamiliar, you might be able to use your own headphones, and perhaps use your device to take away souvenirs (or information artefacts) that relate to particular exhibits. Museums are, after all, so packed with information that it is difficult to 'take everything in'. Your own device may be used to augment your experience, and remind you of what you found to be particularly interesting.

Pervasive guides

If each user has their own device, it is possible that this device could store a representation of their own interests or learning preferences. Before stepping over the threshold of a museum, you might have already told your device that you are interested in looking at a particular period of painting. A museum website might be able to offer you some advice about what kinds of preferences you might choose before your visit.

With the guide that I used, I moved between the individual exhibits by entering exhibit numbers into a keypad. Might there be a better, less visible way to tell the guide device what exhibits are of interest? In museums like the Victoria and Albert and the Natural History Museum, it takes many visits to explore the galleries and exhibits. Ideally a human guide would remember what you might have seen before and what interests you have. Perhaps a digital personalised guide may be able to store information about your previous visits, helping you to remember what you previously studied.
A digital system might also have the power to describe what has changed in terms of exhibits if some time has elapsed between your different visits. A gallery may be able to advertise its own exhibits.

Challenges

These thoughts spring from an idealised vision of what a perfect audio (or mobile) guide through a museum or gallery might look like. Ideally it should run on your own device, and ideally it should enable you to learn and allow you to take snippets or fragments of your experience away with you. In some senses, it might be possible to construct a museum exhibit e-portfolio (wikipedia), to store digital mementoes of your real-world experiences.

There are many unspoken challenges in realising a pervasive, personalised mobile audio guide. We need to understand how to best create material that works for different groups of learners. In turn, we need to understand how to best create user models (wikipedia) of visitors. Perhaps one of the biggest challenges may lie with the creation of a standards-based interoperable infrastructure that might enable public exhibition spaces to make materials and services available to personal hand-held devices.

Acknowledgement: image from Flickr by jonmcalister, licensed under creative commons.

Share post

Database abstraction layers and Moodle

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 17:47

One of the great things about Moodle is that it can be used with a number of different database systems. It can use popular open source databases such as MySQL or Postgres, or commercial offerings from Oracle or Microsoft. The great thing about offering this level of flexibility is that it can make the adoption of Moodle into an existing IT infrastructure a whole lot easier. If you have an IT department which is Microsoft-centric, then adopting Moodle, or slotting it into an existing IT infrastructure, might not cause too much upset.
Similarly, if your IT department uses Linux and has a dedicated database server that runs Postgres, offering a choice of back-end technologies can make things easier for system administrators.

This post is all about exploring how Moodle works with so many different database systems. The key term that I am starting with is 'database abstraction layer'. Wikipedia defines a database abstraction layer as 'an application programming interface which unifies the communication between a computer application and databases'. In some cases, a database abstraction layer can also help to maintain good application performance by caching important data, avoiding the need to repeatedly request data from a database engine.

Here are my questions: how does a Moodle developer save stuff to and get stuff from a database? Does Moodle have a database abstraction layer? If it does, how might it work? Finally, are there other database abstraction layers or mechanisms out there that could be used? Let's begin with the first question.

Getting stuff in and out

What instructions or mechanisms do developers use to get data into and out of Moodle, or a database that Moodle is using? My first port of call is the Moodle documentation. After a couple of clicks I find something called the Moodle Database Abstraction Layer. This looks interesting but way too complicated (and initially confusing) for me to understand in one go. What I'm interested in is an example.

I turn to the Moodle codebase and, using my development environment, I perform a text search (or grep) with the word SELECT, which I know to be a frequently used part of the SQL database language which underpins most relational database systems, and browse through the results. I quickly uncover a function called get_record_sql which seems to be the way to send SQL language commands to a database. Another search reveals that the function is defined within a file called dmllib.php.
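To make the idea of such a function concrete, here is a rough sketch of what a get_record_sql-style helper might look like. Moodle's real function is PHP, so this is a language-agnostic illustration written in Python over SQLite; the behaviour shown (return the first matching row, or nothing) is my assumption about the general pattern, not Moodle's actual implementation.

```python
import sqlite3

def get_record_sql(conn, sql, params=()):
    """Return the first matching row as a dict, or None.

    A hypothetical analogue of Moodle's get_record_sql: the caller
    supplies SQL, and the layer handles cursors and row conversion.
    """
    conn.row_factory = sqlite3.Row
    row = conn.execute(sql, params).fetchone()
    return dict(row) if row is not None else None

# Tiny demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO course (fullname) VALUES ('Accessibility 101')")
record = get_record_sql(conn, "SELECT * FROM course WHERE id = ?", (1,))
print(record["fullname"])  # Accessibility 101
```

The useful property is that calling code never touches cursors or result objects directly, which is one small part of what an abstraction layer buys you.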
This library is said to contain all of the Data Manipulation Language functions used to interact with the DB. Comments within the file state that the functions are 'generic' and work with a number of different databases. A link to a documentation page is also provided, but it seems to describe functions that relate to the development version of Moodle, not the version that I am using (version 1.9).

It seems that functions named get_record_sql, get_record_select and update_record (amongst others) are all used to write to and read from a database that is used with Moodle. To write new Moodle modules requires a developer to know a vocabulary of abstraction functions.

The second question can be answered relatively easily: Moodle does seem to have a database abstraction layer. Judging from the documentation it seems to have two different types of abstraction layer: one for the usage of a database, another for the creation of database structures. I'll try to write something about this second type in another post.

How does it work?

How does the Moodle abstraction layer work? How does it act as an intermediary between the Moodle application and the chosen database engine? There seems to be a magic global variable called $db, and the abstraction layer code seems to be replete with comments about something called ADOdb. Is ADOdb the magic that speaks to the different databases?

Another search for the phrase '$db =' yields a set of interesting results, including files contained within a folder called adodb (lib/adodb).  This seems to be a database access library for PHP.  I uncover a link to the ADOdb sourceforge project, from where the code originated, and I'm rudely confronted with some sample code.

At this point, it seems that Moodle uses two different layers to 'get' and 'set' data.  It begins with the Moodle-world functions (the database manipulation language functions).  Calls are then passed to ADOdb, where they are magically ushered towards the database.
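The two-layer arrangement can be sketched as follows (again in Python rather than PHP, with invented names): the application-facing functions contain no engine-specific code at all, and hand everything to a driver object, which is the only part that would differ from database to database. This loosely mirrors the way the DML functions delegate to ADOdb, though it is a simplification rather than a description of either library.

```python
import sqlite3

class SQLiteDriver:
    """Driver layer: the only code that knows which engine is in use.

    ADOdb plays roughly this role in Moodle; a hypothetical
    MySQLDriver or PostgresDriver could be swapped in here without
    touching the functions below.
    """
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

# Application-facing layer: engine-neutral, like Moodle's DML functions.
def count_records(driver, table):
    # Table names cannot be bound as query parameters, so this layer
    # is also a natural place to validate identifiers.
    return driver.query(f"SELECT COUNT(*) FROM {table}")[0][0]

driver = SQLiteDriver()
driver.query("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
driver.query("INSERT INTO user (name) VALUES ('ada'), ('brian')")
print(count_records(driver, "user"))  # 2
```

The point of the split is that the upper layer can stay identical across installations while only the driver changes.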

Other questions come to mind, such as: why did the Moodle developers choose ADOdb?  This question does not have an answer that can be easily uncovered.

Other abstraction layers

A quick glance at two of my PHP books points towards different database (or data) abstraction layers.  My copy of Programming PHP, for example, emphasises the use of a library called PEAR DB (named after the PHP Extension and Application Repository).  Clicking this previous link tells me that the PEAR DB library has since been replaced by something called MDB2.  My PHP Cookbook, on the other hand, emphasises the use of PDO, which is a part of PHP 5 (a version of the PHP engine that the Moodle community has only relatively recently adopted).

So, why did the Moodle developers choose ADOdb when there are all these other mechanisms on offer?  I haven't managed to uncover a forum discussion that explains the precise motivation for the choice.  Moodle release notes go back to May 2005, but the earliest forum discussion I can find that relates to ADOdb dates back to 2002.  Perhaps the choice could be put down as a happy accident of history, and one that has facilitated amazing database interoperability.

One thing is clear: PDO is the (relatively) new kid on the 'database abstraction' block, and other software developers are asking the interesting (and difficult to answer) question of 'ADOdb or PDO: which is better?'  In trying to answer this question myself, I uncovered a SlideShare presentation and a blog post that tries to compare the two technologies by using benchmarks to see which is faster.  PDO, it seems, is a central part of PHP 5 and has been written in 'native code', which might explain why it is reported as being faster.

The debates about which database interface technology is better are interesting but don't directly arrive at a clear conclusion.  Different technologies may do similar things in slightly different ways, and sometimes a choice of one or the other may boil down to what the programmers have used in the past.  Unpicking the subtle advantages and disadvantages of each approach needs lots of time and determination.  And when you have an answer, affecting a change may be difficult.

Future developments?

I recently uncovered a really interesting Moodle forum discussion on the topic of database abstraction (amongst other things).  Subjects included differences between various database systems, the possibility of using stored procedures, the difficulty of mapping object-oriented data structures to relational database engines and so on.  All great fun for computer scientists and application developers!

One thing that bugs me about the Moodle database abstraction layer is that it is very shallow.  It requires module developers to know a lot about things that ideally they shouldn't need to know about.  To add courses and modules, you have to know a little about the structure of the Moodle database and how to work with it.  There is very little code that separates the world of SQL statements (passed on to databases using DML and ADOdb) from the interfaces that are presented to users.
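The difference between a shallow and a deeper layer can be illustrated with a small sketch (Python again, all names invented for illustration). In the shallow style, module code writes SQL itself and so must know the schema; in a repository-style layer, module code asks for domain objects and never sees SQL at all. This is not how Moodle is actually structured; it is one possible shape that a thicker abstraction could take.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE course (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO course (fullname) VALUES ('Databases')")

# Shallow style: the module author writes SQL and must know the schema.
row = conn.execute("SELECT fullname FROM course WHERE id = ?", (1,)).fetchone()

# Repository style: SQL and schema knowledge live behind one interface,
# so module code only deals in domain-level operations.
class CourseRepository:
    def __init__(self, conn):
        self.conn = conn

    def find(self, course_id):
        r = self.conn.execute(
            "SELECT id, fullname FROM course WHERE id = ?", (course_id,)
        ).fetchone()
        return dict(r) if r else None

course = CourseRepository(conn).find(1)
print(course["fullname"])  # Databases
```

With the second style, a schema change touches one class rather than every module that happens to query the table.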

It could be argued that adding additional layers of abstraction to more firmly manage data flow between Moodle application code and the database would place additional demands on the Moodle programmers.  In turn, this could make it harder for occasional contributors, particularly those working within academic institutions, to make contributions to the code base.  I strongly disagree with this argument.  Creating a more sophisticated (or layered) database abstraction approach may open up the possibility of making more effective use of the functions of different database engines, and make the Moodle code base easier to understand (if the abstractions are designed correctly).

One way to consider how the abstraction layer might be improved is to look at how other open source projects solve the same problem.  I was recently told about the Drupal database abstraction layer.  One useful activity might be to investigate its design and learn about what decisions have helped to guide its development.

Summary

Databases can be a tough subject.  Creating an application that can work with different database engines effectively and efficiently is a big software engineering challenge.  Meeting this challenge, on the other hand, can make things a lot easier for those people who are responsible for the management and operation of IT services.  Providing application choice can increase the opportunities for an application to be used.

What is certain is that the database abstraction mechanisms that are currently used in Moodle will change as Moodle evolves and database engines are updated.  At the time of writing work is underway to further develop the Moodle database abstraction layer.  I look forward to seeing how it changes.

Image acknowledgement: pinksherbert, from Flickr.

Share post

Big wins in accessibility?

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 13:33

In 2004 a report was published by the Disability Rights Commission (now known as the Equality and Human Rights Commission) that explored the state of website accessibility.  The DRC report, which is also summarised by the on-law website, analysed one thousand different web sites and evaluated their accessibility against the WCAG 1.0 guidelines.  It was concluded that 81% of the sites that were surveyed failed to reach the lowest level of accessibility (level A).

This is an alarmingly high figure, and it causes me to ask a closely related question: what does not being able to access websites mean?  One answer is that it can mean some people being unable to access goods, services and information.  It may also mean not being able to use tools that can be used to communicate with others.

Another question (and perhaps this is not a 'million dollar' question, but a 'multi-million dollar' question) is: what could we do to reduce this figure?  The DRC report presents a set of very sensible recommendations for different stakeholders: support service providers, assistive technology providers, operating system developers, website developers and owners, and developers of checking tools.

An alternative vision?

I think there is another approach that we could use.  The world-wide-web would not be what it is today without open source software (OSS).  You could even consider OSS to be the web’s backbone.  OSS provides the programming languages used to create open source operating systems (Linux).  These operating systems can play host to open source web servers (Apache), which in turn can offer functionality through open-source software development frameworks built using open-source programming languages.

Some open source developments are more popular than others.  There may be a whole range of reasons that might contribute to success or popularity.  Usually it amounts to a vigorous development community and the fact that a product happens to solve a precise problem very well.

The 81% figure mentioned earlier relates only to web sites.  Many open source software developments are created especially to make it easier for other developers to build and manage different types of end-user facing web-based applications.

If we take the argument that there are open source software packages that are used to power web sites, and acknowledge the fact that some open source applications are likely to be more popular than others, we could argue that by improving the accessibility of certain web frameworks we might be able to reduce that 81% figure.

Of course, there is a difference between making changes to a software framework to make it more accessible to users, and making the materials that are presented using a framework more accessible.  Rather than tackling these two issues together, let's just think about choosing software frameworks.

Choosing frameworks to explore

I use the web for loads of things.  I use it to both write and consume blogs.  I also use the web to buy stuff (especially around Christmas time!)  Very occasionally I might poke my head into on-line discussion forums, especially those that discuss programming or software development related topics.  I also browse to news portals (such as the BBC), and find myself on various information exchanges.

In essence, I use the web for a whole range of different stuff.  If I take each of my personal 'web use cases', I can probably find an open source application that supports each of these tasks.  Let’s begin with the most obvious.  Let’s begin with blogs.

Blogging tools

Here, I have two questions: how accessible are blogging tools (to both read and write entries), and what blogging tools are out there?

I don’t know the answer to the first question, but I suspect that their accessibility could be improved.  On some sites you are presented with a whole range of different adverts and links.  Headings and tagging may be mysterious.  The blog editing tools may present users with a range of confusing icons and popups.  This is a topic ripe for investigation.

But what tools are out there?  A quick exploration of Wikipedia takes you to an article called Weblog software.  Immediately we are overwhelmed with a list of free and open source software.  But which are the most popular?  A quick poke around reveals two popular contenders for accessibility evaluation: Movable Type and WordPress.

A related question is: how many blogs do these systems collectively represent?  WordPress, for example, claims to be used with 'hundreds of thousands of sites' (and seen by tens of millions of people every day), and reported 3.8 million downloads in 2007.  These are impressive figures.

Content management systems

Blogs are often referred to in the same sentence as a broader category of web software known as content management systems (or CMS for short).  As always, a quick probe around in Wikipedia reveals an interesting page entitled List of Content Management Systems. It appears there are loads of them!

CMS systems are used for different things.  You might use a CMS to create a way to more easily manage a static website that represents the 'store front' of a company or organisation (or 'brochureware' sites, as I believe they might be known).  Used in this way, a CMS can make the task of making updates a lot easier: you might not need a web designer to modify HTML code or add new files.  Some CMS systems contain integrated blog tools.  As well as representing a store front, there might be a 'product' or 'service blog' to provide information to customers about new developments.

You might also use a CMS as an information portal.  A charity might use a CMS to provide fact sheets or articles on a particular subject.  A CMS may also provide additional functionality such as discussion forums, allowing users to share points of view on particular subjects.

A simple question is: which are the most popular open source content management systems?  This simple question is not easy to answer.  It strikes me that you have to be closely involved with the world of content management systems to begin to answer this question effectively.  This said, a couple of systems jump out at me, all of which seem to have funny names.  Three systems that I have directly heard of are Joomla!, Mambo and Drupal.  Other interesting systems include TangoCMS and PHPNuke.

Unfortunately it is difficult to get a clear and unambiguous picture of how many web sites are created by these systems.  You cannot always tell, by looking at the code of a website, which content management system it has been created with.  This said, some research has been performed to explore other measures of popularity, such as downloads and search engine ranking values.  (Waterandstone Open Source CMS market share report - 5 MB PDF)

What is certain is that exploring the accessibility status of one content management system may have a positive impact on a wider set of websites.

Shopping

E-commerce isn’t the preserve of on-line megastores like Amazon.  Small specialist shops selling anything from diet pet food through to hi-fi speaker cables have the potential to become global 'clicks-and-mortar' retailers.

Some content management systems can be extended by installing additional 'blocks' to  add e-commerce functionality.  There is also a category of software that could be loosely described as shopping cart software (there is also a Wikipedia shopping software comparison page for the curious).  Further probing uncovers a category entitled Free electronic commerce software.

Following the links to the osCommerce website reveals some interesting claims.  It is stated that over fourteen thousand shops using this one platform have been voluntarily added to a directory of on-line businesses.

I also clicked on another shopping site provider: CubeCart. Although not an open source platform, CubeCart claims that it is used by a 'million store owners around the world'.  It is interesting to note that accessibility is not one of its selling points.

Community sites or forums

Content management systems have begun to step on the toes of what might be considered to be an older category of web software: community or on-line discussion forums.  As ever, Wikipedia is useful, offering a comparison page. Whatever your interest, there will be a forum on the web in which you can share opinions and experience with others.  Forums should be accessible too.

Summary

Creating a web site, or a web based application is hard work (in my opinion).  There is so much to think about: information architecture, graphical design, HTML coding, databases, CSS files.  To help you, there are loads of software development frameworks that can help out.  Many of these frameworks are open source, which means you can modify software so it can match your precise needs.

Another great thing about open source software is that if you find a framework that does not generate HTML code that is as accessible as it could be, any improvements that you make have the potential to benefit a wider community of both developers and end users.

What is not clear, however, is the precise extent of the accessibility of some of the software frameworks that have been presented here.  Whilst it is true that accessibility is about more than changing or correcting programming code, exploring some of these projects in depth may be one way to improve accessibility and the on-line experience for the benefit of all web users.

Acknowledgements

Posting image from chough, from Flickr, licensed under creative commons.

Share post

Reflections on learning object granularity

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 1 Sep 2020, 07:57

I first discovered the notion of learning object granularity when I was tasked with creating my first learning object.  I was using an authoring tool that allowed you to describe (or tag) absolutely anything.  This was a revelation!  My tool allowed me to assign descriptions to individual photographs and sets of navigable pages that could contain any type of digital media you could imagine.  You could also assign descriptions to an entire learning object.  Not only was I struggling with how to use the fields (title, description, keywords) that I had to complete, it was also difficult to know where I should stop!

Terms of reference

There are a significant number of terms here that beg further explanation.  The idea of a learning object (wikipedia) is one that is slippery: it varies depending upon who you speak to.  I see a learning object as one or more digital resources that have the potential to provide useful information to, or serve a useful function for, a consumer.  Consumers, I argue, are both the end-users (learners), and those who might use learning objects within a course of study.

An alternative definition might be: a set of learning resources that can be used together to help a learner achieve a defined set of learning objectives.  I think I prefer this second definition.  It feels a little more precise, but there are few words that allude to how large a learning object might be.

Benefits of learning objects

One of the often-cited benefits of learning objects is that they have the potential to be reused.  A digital resource taken from one learning situation could be reused (or repurposed) in another situation.  The benefits could include an increase in the quality of the resulting material and possible savings in terms of time and money.

Learning objects are sometimes held within mysterious instruments called repositories.  If existing materials are taken and modified, they could later be returned to a repository and placed back into circulation so that other people can use and modify them, thus creating a virtuous cycle.  One problem with placing material in a repository is that if your repository contains tens of thousands of individual objects, finding what you want (to solve your particular teaching need) can become difficult (as well as tedious).

Metadata (wikipedia) has the ability to 'augment’ textual searching, potentially increasing the quality of search results.  Metadata also has the ability to offer you additional information or guidance about what an object might contain and how it might have been used, allowing you to make judgements regarding its applicability in your own teaching context.

There is a paradox: the more granular (or mutable) a learning object is, the more easily it can be reused, but the less useful it is likely to be.  The larger a learning object is, the more useful it is to an individual user (since it may attempt to satisfy a set of learning objectives), and the less likely it could be transferred or 'repurposed' to different learning and teaching contexts or situations.  Furthermore, the smaller the learning object, the more moral fibre one needs to successfully create correct (and relevant) metadata.

Repurposing

'Repurposing' is a funny word. I understand it to mean that you take something that already exists and modify it so it becomes useful for your own situation. I think repurposing is intrinsically difficult.  I don't think it's hard in the sense that it's difficult to change or manipulate different types of digital resources (providing you already have skills to use the tools to effect a change).  I think it's hard because of the inherent dependencies that exist within an object.  You have to remember to take care of all those little details.

I consider repurposing akin to writing an essay. To write a really good essay you have to first understand the material, secondly understand the question that you are writing about, and then finally, understand who you are writing it for.  If you write an essay that consists of paragraphs which have been composed in such a way that they could be used in other essays, I sense you will end up with an essay that is somewhat unsatisfactory (and rather frustrating to read).

There is something else that can make learning object repurposing difficult.  Learning objects are often built with authoring tools.  Some tools begin with a source document and then spit out a learning object at the other end.  The resulting object may (or may not) contain the source from which it was created.  This is considered to be 'destructive' (or one way) 'authoring', where the resulting material is difficult to modify.

Even if we accept that reuse is difficult, there are other reasons why it is not readily performed.  One reason is that there is no real sense of prestige in using other people's materials (but you might get some credit if you find something that is particularly spectacular!).  Essentially, employers don't pay people to repurpose learning materials; they pay people to convey useful and often difficult ideas to learners in a way that is understandable.  There is no reward structure or incentive to reuse existing material, or to build material that can be easily reused.  Repurposing takes ingenuity and determination, but within the end result much of this may be hidden.

There is a final reason why people may like to create and use their own learning resources rather than reuse the work of others.  The very act of creating a resource allows one to acquire an intimate understanding of the very subject that one intends to teach.  Creating digital resources is a creative act.  Learning object construction can be a form of constructivism, used to prepare for teaching.

The terms 'aggregate' (or 'composite') and 'atomic' objects are sometimes used when talking about learning objects.  An atomic object, quite simply, is one that cannot be decomposed.  An atomic object might well be an image or a sound file, whereas an aggregate object might be a content package or a SCORM object.

In my opinion, many aggregate objects should be considered and treated as atomic objects since it could be far too difficult, complex and expensive to treat them in any other way.  I hold this view since learning objects are ultimately difficult to reuse and repurpose for the reasons presented earlier, but this should not detract from the creation and use of repositories.  Repositories are useful, especially if their use is supported by organisational structures and champions.

I hold the view that metadata should match the size of the resource that it describes.  There should be metadata that describes an object in terms of overall learning objectives.  Lower-level metadata can be used to add additional information to an atomic object (such as an image file) that cannot be directly gained from examining its properties or structure (such as using an algorithm to determine its type).

In essence, tagging operations for aggregate and atomic object types must be simple, economic and pragmatic.  If you need to do some tagging to add additional information to a resource (a pragmatic decision), the tagging operation should be simple, and in turn, should be cost effective.

The purpose of high level tagging, the description of a high-level aggregate object, should be obvious.  Consider a book.  A book has metadata that describes it so it can be found within a library with relative ease (of course, things get more complicated when we consider more complex artefacts, such as journals!).

Low-level (or lower level) metadata may correspond to descriptions of individual pages or images (I should stress at this point, my experience in this area comes from the use of software tools, rather than any substantial period of formal education!).  Why would one want to 'tag' these smaller items (especially if it costs time and money)?  One reason is to provide additional functionality.

Metadata helps you to do stuff, just in the same way that storing a book title and list of authors helps you to find a book within a library.  Within the EU4ALL project, metadata has the potential to allow you to say that one page (which may contain an audio file) is conceptually equivalent to another page (which contains a textual equivalent).

By describing the equivalence relationships between different resources, the user's experience can be optimised to their preferences.  There is also the notion of adaptability, for example, whether a resource can be dynamically changed so it can be efficiently consumed using the device from which it is accessed (this might be a mobile device, a PC, or a PC that is using assistive technology).
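
The idea can be sketched with a fragment of Python.  Everything here (the resource names, the metadata fields, the selection rule) is invented for illustration, rather than taken from the actual EU4ALL metadata model:

```python
# Hypothetical sketch: choosing between conceptually equivalent resources
# based on a user's modality preference. The metadata model here is
# deliberately minimal and is not EU4ALL's real one.

resources = {
    "unit1-audio": {"concept": "unit1", "modality": "audio"},
    "unit1-text":  {"concept": "unit1", "modality": "text"},
    "unit2-text":  {"concept": "unit2", "modality": "text"},
}

def pick_resource(concept, preferred_modality):
    """Return the equivalent resource matching the preference, if any;
    otherwise fall back to any resource covering the same concept."""
    candidates = [rid for rid, meta in resources.items()
                  if meta["concept"] == concept]
    for rid in candidates:
        if resources[rid]["modality"] == preferred_modality:
            return rid
    return candidates[0] if candidates else None

print(pick_resource("unit1", "text"))   # a text alternative exists
print(pick_resource("unit2", "audio"))  # falls back to the text version
```

The point of the equivalence metadata is precisely that this selection can happen automatically, without the learner having to hunt for the alternative themselves.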

Moving forwards

One of the biggest challenges within EU4ALL is to ensure that the user's interface to an adaptable learning technology system is coherent, consistent and understandable.  By addressing accessibility concerns, all users could potentially benefit.  Learners could potentially be presented with an interface or a sign that indicates that different alternatives are available at certain points during a learning activity, should they be found to exist.  Presenting alternatives in a way that does not cause disruption to learning, yet remains flexible by permitting users to change their preferences, is a difficult task.

Creating metadata is something that is difficult and tiresome (not to mention expensive).  As a result, mistakes can be easily introduced.  Some researchers (slideshare) have been attempting to explore whether it is possible to automatically generate metadata using information about the context in which resources are deployed.  In fact, it appears to be the subject of a recent research tender.  But ultimately, humans will be the final consumers of metadata, even though metadata languages are intended to be read by machines.

Summary

The notion of a learning object is something that is difficult to define.  Speak to ten different people and you are likely to get ten different answers.  I hold the view that the most useful learning object is an aggregate (or composite) learning object.

Just as the idea of a learning object can be defined in different ways, the notion of granularity can also have different definitions.  The IEEE LOM standard offers four different levels of 'aggregation', ranging from level 1, which refers to 'raw media data' (or media objects), through individual lessons (level 2) and sets of lessons (i.e. courses, level 3), to a set of courses which could lead to a formal qualification or certificate (level 4).  I hold the opinion that metadata should match the size of a 'learning object'.  Otherwise, you might end up in a situation where you have to tag everything 'in case it might be used'.  This is likely to be expensive.

High level metadata (in my opinion) is great for storing larger objects within repositories, whereas low-level metadata can be used to describe the adaptability and similarity properties of smaller resources which opens up the possibility of delivering learning resources that match individual user needs and preferences.

Acknowledgements

Posting image: from cocoi_m, from Flickr.  Thanks go to an anonymous reviewer whose comments have been very instructive, and to all those on the EU4ALL project.  The opinions that are presented here are my own rather than those of the project (or the Open University).


Using OpenLearn resources with Moodle

Edited by Christopher Douce, Monday, 29 Apr 2019, 13:38

One of the things that we need to do in the EU4ALL project is to create a prototype. To show the operation of a prototype, we need to show how content can be personalised. To show content personalisation happening we need some content. Luckily, the OpenLearn project is at hand to provide some Open Educational Resources (wikipedia) that we may be able to use.

The OpenLearn project provides learning materials in a number of formats. These formats range from native OU XML files, raw HTML files and IMS content packages through to RSS feeds and Moodle backup formats. This post is all about finding the most effective way to transfer OpenLearn content to Moodle (and uncovering the best approach to use for on-going development work).

Using a Moodle Course Backup

The sample content that I'm going to use is a learning package about the Forth Road bridge (openlearn). This learning package (for want of a better term) is interesting since it contains a couple of different resources, including a video, a transcript in the form of a PDF file and some HTML pages.

In terms of loading the package into Moodle, I thought the easiest route would be to import the backup file type. The course backup facility allows Moodle users to make copies of entire courses (and their setup) for safe keeping. If inadvertent changes are made, a user then has the possibility of restoring (Moodle documentation) a course to their Moodle installation.

Others at the OU have blogged about similar issues, providing a more comprehensive description about how to setup an EEE netbook to allow users to view the OpenLearn material whilst on the move. This post takes (more or less) a similar approach, but focuses more on the different OpenLearn filetypes.

After downloading an OpenLearn Moodle backup course, I logged into Moodle as an administrator then clicked around on the 'course' menu options to see what I could find. It wasn't immediately clear what to do, so I went to the documentation for help. I found quite a few things.

I discovered that you needed to use the Course administration block, but this could only be accessed from within a course. It was apparent that to import a course, I needed to also create one.

After creating an empty course (using all the default settings), the course administration block duly appeared. From faint memories of having played with this part of Moodle a couple of years ago, I remembered that restore was a two-step process: first you had to upload the backup file, then you had to click on a restore link to start the restore process.

After trying to upload my backup package I was presented with a message that read 'a required parameter (id) was missing' ('what on earth does this mean?' I wondered). I then noticed that the size of my OpenLearn zip file (because it contained a video) was bigger than the maximum supported upload file size in Moodle. Obviously I needed to change a setting somewhere.

The first place that I looked was the Moodle system configuration file called config.php, but this didn't tell me much. I then delved into the area of my computer that contained the PHP installation and found a file called php.ini.

After a quick search, I discovered two places which might explain the maximum file size that Moodle had told me about. I subsequently made a change to the upload_max_filesize variable, setting it to 32MB, restarted my web server and then refreshed my browser. As if by magic, the maximum file size that Moodle allows had changed.
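
For reference, the relevant php.ini directives look something like this (the values are examples; post_max_size can also cap uploads, and Moodle additionally has its own site-level upload limit settings on top of these):

```ini
; php.ini - two directives that commonly limit upload sizes
upload_max_filesize = 32M   ; maximum size of a single uploaded file
post_max_size = 32M         ; maximum size of an entire POST request
```

After editing php.ini, the web server needs to be restarted for the change to take effect.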

When trying the upload again, everything seemed to work okay (but I should say that the error message that I was presented with does need some attention).

When the upload from local file store to a Moodle folder was completed, I could see an adjacent 'Restore' button which I clicked. I was then presented with a question: 'Later in this process you will have a choice of adding this backup to an existing course or creating a completely new course – do you want to continue?' In my situation I initially wanted to do the latter operation, but I was forced to do the former. I clicked yes to continue.

I was then presented with a list of actions that had been carried out: creating temporary structure, deleting old data etc., with no button or option to click on afterwards when it appeared that everything had finished. Obviously things were not working as they should be. I carried out a further web search for answers.

I discovered the following from the Moodle backup and restore FAQ: 'Attempting to restore a course to an older version of Moodle than the one the course was backed up on can result in the restore process failing to complete'.

So, what version am I using, and what version was the OpenLearn backup provided in? To find the version of your Moodle installation, you have to go to the site administration menu (when logged in as an administrator), and click on Environment. I soon discovered that I was using version 1.9+. I extracted the contents of the OpenLearn Moodle backup file and discovered that it might be version 1.9, according to the first set of lines in an XML file that I found. It seemed I might be in a spot of trouble.

Getting Moodle Restore working

All was not lost, however. After some random searches I found a forum discussion. Fred had a suggestion: change a more recent programming file to an older version (which can be downloaded from the Moodle code repository). I changed the name of my 'restorelib.php' to 'backup restorelib.php' and downloaded the version he suggested.

After replacing the file and restarting the restore process, magic began to happen and messages were displayed on the screen. I was then presented with a course restore screen, where a drop down box had the options: restore to a new course (what I wanted to do initially), existing course deleting it first, or existing course adding data to it. I chose 'existing course deleting it first', carelessly ignored everything else (which had been automatically ticked), and faithfully clicked on continue. I was then presented with a list of courses to overwrite (which surprised me, since I thought I was automatically going to overwrite the course from which I had clicked the 'restore' option). Ignoring the warning 'this process can take a long time', I clicked on 'restore this course now!'

It didn't take a long time, and a minute or so later, I could happily browse through (and edit) my newly imported OpenLearn courses. Fred saved the day!

But what of the other OpenLearn file options? I'll steer clear of the 'Unit Content XML', the 'OU XML Package' and IMS Common Cartridge for now and instead focus on some of the others.

IMS Content Package

IMS publishes specifications that aim to make learning technology systems interoperate with each other. One of the specifications that they have published is the content package (CP). A CP is essentially a bunch of files contained within a zip file. In the zip file there is something called a manifest file. This manifest file is, more or less, a table of contents, which is read by a VLE/LMS like Moodle.
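
The manifest is an XML file named imsmanifest.xml at the root of the zip. Its skeleton looks roughly like this (the identifiers, titles and filenames below are invented for illustration, and the namespace and metadata declarations that a real manifest carries are omitted):

```xml
<!-- Skeleton of an imsmanifest.xml 'table of contents' -->
<manifest identifier="MANIFEST-1">
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Forth Road Bridge</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Introduction</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent" href="section1.html">
      <file href="section1.html"/>
    </resource>
  </resources>
</manifest>
```

Broadly, the VLE reads the organization element to build its navigation tree and the resources element to find the actual files.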

In Moodle, a CP can be a resource (interactive components are called activities). I create a new course, set it to have a topics format, and choose to upload my CP to the first topic. When this is done, I browse to the newly added resource and Moodle tells me that it is about to deploy the CP (meaning, uncompress its contents and read the table of contents file). When complete, I can navigate through the different pages of my material.

One of the differences between this format and the Moodle format is that the content is a lot more difficult to change. You have to use special tools, such as Reload to edit the manifest file, and HTML editors (and other similar tools) to change the contents of individual pages. Also, there is no direct way to include VLE supported interactive tools such as Wikis, blogs or on-line discussion forums in the middle of the material other than using the navigation mechanisms that the VLE provides (this will hopefully become a bit clearer later on).

SCORM

SCORM is an industry standard for the sharing of e-learning materials. SCORM makes use of IMS content packaging and defines an interface between the learning material and the VLE that is used to present the material.

This interface allows the VLE to record information such as whether the user has viewed all the pages of a SCORM resource, store interaction state to the VLE (such as answers for formative questions) and retrieve information from the VLE, such as the name of the current user (to allow partial customisation of a learning experience).

IMS content packages created by OpenLearn can also be viewed using the Moodle SCORM player (but I don't know if there are any problems doing this!).

SCORM originated from a US government initiative called Advanced Distributed Learning (wikipedia). As a result, it reflects its training origins. Like IMS CP, it does not directly support the inclusion of interactive activities that are provided by a VLE (other than the activities that are contained within the boundaries of a content package).

In Moodle, there are two ways to present SCORM resources. The first, as presented above, is to add it as an 'activity'. The other way is to create a course that has a SCORM format. Rather than having individual weeks or topics, a single SCORM occupies centre stage. Surrounding the centre, it is possible to create Moodle supported activities, such as forums. Here I have created a Moodle wiki, allowing consumers of the OpenLearn course to share links about bridge engineering (!), for example.

The way that Moodle presents IMS packages and SCORM objects (or SCOs – sharable content objects) is similar, but subtly different, making me wonder about the underlying source code. When I have time I'll explore the code development history to see whether they are related in any way.

Plain Zip

One of the simplest formats that OpenLearn supports is called plain zip.

Unzipping a 'plain zip' file reveals all the resources for a course (images, video and transcripts), along with two types of HTML file: an index file (which is similar to the Moodle course summary screen that was presented earlier), and a set of content pages. The content pages themselves have their own navigation links, i.e. page 1 is connected to page 2 and so on. SCORM, on the other hand, provides its own mechanism to navigate between resource pages, generated from the information contained within a manifest file.

Two other things are provided in the plain zip package: a creative commons deed (describing licensing terms), and a formatting stylesheet.  If you want, you can change the font and the colours of the content pages by changing the stylesheet.  The action of double-clicking on any of the HTML files within this package displays the material directly in a browser.

So, how can a plain zip OpenLearn package be used in Moodle? Is it possible?

The answer is that it is possible, and it's quite easy, but the end result is obviously not as 'integrated' as the other approaches. First of all, I create a new course. I give my course an obvious name and set it to use the 'topics' format. Then I transfer my OpenLearn zip package to Moodle. To do this, I click on the Files menu (from administration block whilst logged in as an administrator), and upload the zip file to the course (each course has its own file area). When the file has been uploaded, I unzip the zip file. After pressing the course edit button, I can now add a link.

From the resource menu I click on 'link to a file or website'. Here I select 4ROAD_1_section0.html. This is the first content file in a sequence of four. It is the file that presents the learning objectives to a learner.

I turn editing off to see the effect of what I have done. Clicking on the new link takes you to the first page in the OpenLearn content, providing further navigation links that allows you to access all the other resources.

One thing that should be noted is that the resource has not been uploaded into a directory on the web server that anyone can access. Only people who have legitimate access rights can gain access to these files.

These approaches rely on content being downloaded from the OpenLearn site to Moodle. Are there any other ways to tell your students about the OpenLearn content through Moodle?

The final way that I will describe is through RSS (wikipedia). RSS is most commonly associated with blog syndication, and can be described as an XML data structure that contains links to interesting material. OpenLearn also provides RSS feeds for individual courses. If you take a copy of an RSS feed link, you can use it within other tools. One of those tools is Moodle.
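
The essence of an RSS feed can be sketched with a few lines of Python; the feed XML below is a made-up stand-in for a real OpenLearn feed, parsed here with the standard library:

```python
# Illustrative sketch: reading the titles and links from an RSS 2.0 feed.
# The feed content is invented; a real tool would fetch it over HTTP.
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>Example OpenLearn unit</title>
  <item><title>Section 1</title><link>http://example.org/s1</link></item>
  <item><title>Section 2</title><link>http://example.org/s2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
items = [(i.findtext("title"), i.findtext("link"))
         for i in root.iter("item")]
print(items)
```

A consumer such as Moodle's RSS block does essentially this: it fetches the XML, extracts the item titles and links, and renders them as a list of hyperlinks.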

Moodle can make use of activities, resources and blocks. Blocks are the pieces of functionality that can surround courses. Blocks can be added, deleted and moved around. One of the blocks that Moodle provides is an RSS block.

Using the course I created earlier, I added a new block and pasted in the RSS feed link that I gathered from the OpenLearn course, then ticked a tickbox and confirmed something. As if by magic, my new block was populated by the contents of the OpenLearn course I had just told it about.

Clicking on one of these links takes you directly to the OpenLearn site, where you can access the material directly. The advantage of this approach is that you don't have to do very much, and the material is always up to date.

There is an outstanding question that this section of the blog raises: could it be possible to create a Moodle activity (or resource?) called an 'RSS feed' that could be placed within the main body of a course? This way, educators would be able to quickly and efficiently group together different OpenLearn (or other forms of OER) resources. Furthermore, this would make it possible to group different 'blog reading or reviewing' activities together which may culminate in a forum discussion or even an on-line audio conference at a pre-arranged time. But here, I'm starting to digress...

Further information

After having completed (more or less) the first section in this post, I discovered an OpenLearn course entitled Re-using, Remixing and Creating Content. This provides further information about the different file types and how they can be manipulated.

Conclusions

There are a number of different ways to use OpenLearn content in Moodle. Each of them differ in terms of how much you have to do and how the end result appears. Taking a personal perspective, which one might be the best approach to use within my project?

What I want is flexibility: the ability to change a course and add an additional category of resource to the middle of it, should this be required. Since I'm going to be using Moodle as my main research tool, it makes sense to make use of the Moodle course format. I can then make use of the Moodle tools (should this be necessary) and move resources and sections around with relative ease.


Understanding Moodle localisation

Edited by Christopher Douce, Wednesday, 21 Jul 2010, 13:19

Another Moodle activity that I've been tasked with is: 'ensure that different users are presented with user interfaces that match their language choices'.

I understand that software localisation (or internationalisation) is an industry in its own right, replete with its own tools and practices. When you scratch the surface of the subject you're immediately presented with different character sets, fonts and issues of text direction (whether text flows from left to right or vice versa).

My question is: how is Moodle localised into different languages, and does it use any approaches that could be considered to be common between other systems?

This post will only scratch the surface of what is an interesting (and often rather challenging) subject. For example, what is the Moodle approach to dealing with plurals? There's also the issue of how internet browsers send their localised settings to web servers and application engines... Before I've even started with this post, I'm heading off topic!

Let's begin by considering three different perspectives: the student's perspective, the maintainer's perspective and the developer's perspective.

Student's perspective

A student shouldn't really need to concern themselves with their locale settings, since the institution in which they are enrolled is likely to use a sensible default setting. But if students wish to change the LMS interface language (and provided their particular Moodle installation permits the changing of user preferences), a student could click on the name hyperlink that they see after logging on, click on the 'Edit Profile' tab and search for the 'preferred language' drop down box.

In my test installation, I initially had only one language installed: English (en). In essence, my students are not presented with a choice. I might, at some point during my project, need to offer 'student users' a choice of four different languages: German, Italian, Greek and Spanish. Obviously something needs to be done, leading us swiftly to the next perspective.

Maintainer's perspective

I log out from my test student account and log back in as an administrator and discover something called a 'Language' menu, under which I discover a veritable treasure trove of options.

The first is entitled 'Language Settings'. This allows an administrator to choose the default language for a whole installation and also to do other things such as limit the choice of languages that users can choose.

The second menu option is entitled 'Language Editing'. It appears that this option allows you to edit the words and phrases (or strings) that appear on the screen of your interface. The link between a 'bit on a screen' and a language specific description is achieved by an identifier, or a 'placeholder' that indicates that 'this piece of text should go here'.

What is interesting is that individual strings are held within Moodle programming files. This makes me wonder whether the action of editing the strings causes some internal programming code to change. This process is mysterious, but interesting.

As a useful aside (which relates to an earlier project related post), I click on 'resource.php' to see what identifiers (and text translations) I can find. I see loads of resource types, including names for resource types that are numbered. Clearly, when adding new functionality, a developer needs to understand how software localisation occurs.

Continuing my user perspective exploration (after being a little confused as to what 'new file created' meant after choosing to view the 'resource.php' translation page), I click on the 'Language Packs' option. Here I am presented with a screen that tells me what language packs I have installed. By default, I only have a single language pack: English (EN). Underneath, I see a huge list of other language packs, along with a corresponding 'download' link. Apparently, because of a problem connecting to the main Moodle site (presumably because one of my development machines is kindly shielded from the world's different nasties), things won't install automatically, and I have to save (unzipped) language packs to a directory called 'moodledata/lang'.

After unzipping the language packs, I hit my browser 'refresh' button. As if by magic, Moodle notices the presence of the new packs and presents you with a neat summary of what you have installed.

Developer's perspective

So, how does this magic work, and what does a developer have to know about localisation in Moodle?

One place to start is by exploring the anatomy of a downloaded language pack, asking the questions: 'what does it contain, and how is it structured?' Out of the four packs that I have downloaded, the German pack looks by far the most interesting in terms of its file size. So, what does it contain?

The immediate answer is simply: files and directories. In the German pack I see three folders: doc, help and fonts. The doc and fonts folders do not contain very much, mostly readme files, whereas the help folder in turn contains a whole load of subfolders. These subfolders contain what appear to be files containing fragments of HTML that are read by PHP code and presented to the user. At this point I can only assume that Moodle reads different help files (and presents different content to the user) depending upon the language that a user has selected.

At the root of a resource pack I see loads of PHP files. Some of these have similar file names, i.e. some begin with quiz, and presumably correspond to the quiz functionality, and others begin with repository, enrol and so on (my programmer sense is twitching, wondering whether this is the most efficient way to do things!)

A sample of a couple of these PHP files shows that they are simply definitions of localised strings, stored in an associative array indexed by name. Translated into 'human speak', there's a fixed 'programming world' name which is linked to a 'language world' equivalent. You might ask why 'language localisation' is done this way. The answer is: to avoid having to make many different versions of the same Moodle programming code, which would be more than a nightmare to maintain and keep track of.
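
The shape of such a file can be sketched as follows; this is a data fragment in the style of a Moodle 1.9 language file, but the particular identifiers and German translations here are invented for illustration rather than copied from the real pack:

```php
<?php
// A Moodle-style language file: assignments into an associative
// array called $string, mapping fixed identifiers to translations.
// These example entries are illustrative, not from the real pack.
$string['course'] = 'Kurs';
$string['topicoutline'] = 'Themenübersicht';
```

Because each language pack is just a parallel set of these files, the same programming code can serve every language by looking strings up through their identifiers.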

A number of questions crawl out of the woodwork. The main one being, 'how are the contents of these resource packs used when Moodle is running?', but there is the earlier question of 'what happens when you make a change to a translation?' that needs to be answered. Both are related.

Moodle has two areas where localisation resources are stored. The first can be described as a 'master' area. This is held within the 'programming code' area of Moodle, within a directory unsurprisingly named 'lang'. This contains files of identifiers and strings for the default language, which is English. The second area is a directory, also called 'lang', which can be found within the Moodledata directory area. Moodledata is a file area that can be modified by the PHP software engine (PHP being the language that Moodle itself is written in). Moodledata can store course materials and other data that is easier to keep in a 'file storage' area than in the main Moodle database.

As mentioned earlier, language packs are stored to the Moodledata area. If a user chooses to edit a set of localised strings, a new version of the edited 'string set' is written as a new file to a directory that ends with '_local'. In essence, three different language resources can exist: the 'master' language held within the programming area, the installed 'language pack', and any changes made to the edited language pack.

During earlier development work, I created a new resource category called an 'adaptable resource'. After installing the German resource pack, using the 'master language pack', Moodle can tell you whether there are some translations that are missing.

After making the changes, the newly translated words are written to a file. This file takes the form of a set of identifier definitions which are then read by the Moodle PHP engine. Effectively, Moodle writes its own programming script.

Using this framework, developers shouldn't have to worry too much about how to 'localise' parts of their systems, but before stating that I understand how 'localisation' works, there's one more question to ask.

How does Moodle choose which string to use?

When viewing a course you might see the 'topic outline' headline. How does Moodle make a choice about which language pack to use? I begin my search by looking through the code that appears to present the course page, 'course/view.php'. There isn't anything in there that can directly help me, so I look further, stumbling upon a file within a 'topics' sub-directory called 'format.php'.

In the format file I discover a function called get_string, which references an identifier called 'topicoutline'. This is consistent with the documentation that I uncovered earlier. The get_string function is the magic function that makes the choice about where your labels come from.

Get_string is contained within a file called 'moodlelib.php' which is, perhaps unsurprisingly, contained within a directory called 'lib'. Moodlelib is a huge file, weighing in at about eight thousand lines. It is described (in the comments) as a file that contains 'miscellaneous general-purpose Moodle functions'.

Get_string is a big function. One of the first things it does is figure out which language is currently set by looking at different variables. It then creates a list of places where localised strings can be found. The list begins with the location where language packs are installed, followed by areas within the Moodle codebase that are installed by default. It then checks to see whether any 'local' (or edited) versions of the strings have been created (as a result of a user editing the language packs). When the function knows which file the strings are held in, Moodle reads (includes) the file, caches the contents of the 'string file' in a static variable (so Moodle doesn't have to read the file every time it needs to fetch a string) and returns the matching localised string.
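
The lookup order described above might be sketched like this in Python; the file locations and strings are invented, and this is only a simplification of what get_string actually does:

```python
# Python sketch of the lookup order: earlier entries win, so '_local'
# edits override the installed pack, which overrides the default
# English strings shipped with the code. All data here is invented.
string_files = [
    ("moodledata/lang/de_utf8_local", {"topicoutline": "Themen (edited)"}),
    ("moodledata/lang/de_utf8", {"topicoutline": "Themenübersicht"}),
    ("moodle/lang/en_utf8", {"topicoutline": "Topic outline",
                             "course": "Course"}),
]

_cache = {}

def get_string(identifier):
    """Return the first matching localised string, caching the result."""
    if identifier in _cache:
        return _cache[identifier]
    for _location, strings in string_files:
        if identifier in strings:
            _cache[identifier] = strings[identifier]
            return strings[identifier]
    return f"[[{identifier}]]"  # marker for a string that was never found
```

The cache plays the role of the static variable mentioned above: once a string file has been consulted, subsequent lookups avoid re-reading it.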

In the middle of this function there is extra magic to present sensible error messages if no strings are found, and other code to help with backwards compatibility with earlier versions of Moodle. It also seems to check for something called 'parent languages', but I've steered clear of this part of the code.

Testing language installation

Has all my messing around with languages worked? Can I now assign different users different languages? (Also, can users choose their own language preferences?) There is only one way to find out. Acting as an administrator I created a new user and set the user's default language to Italian. I logged out and logged in using the new user account.

It seems to work!

The one thing that I have not really explored is whether Moodle will automatically detect the language a user has configured on their internet browser. A little poking around indicates that Moodle can indeed be clever and change its language dynamically by using the hidden 'language' information that is sent to a web server whenever an HTTP request is made.
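
The mechanism can be sketched roughly as follows; this is an illustrative Python simplification of how an Accept-Language HTTP header might be matched against installed languages, not Moodle's actual code:

```python
# Rough sketch: a browser sends an Accept-Language header with each
# request, and the server picks the best supported match. Real parsing
# is more careful about region subtags and malformed values.

def pick_language(accept_language, supported):
    ranked = []
    for part in accept_language.split(","):
        piece = part.strip().split(";")
        tag = piece[0].split("-")[0].lower()  # 'it-IT' -> 'it'
        q = 1.0  # default quality when no ';q=' is given
        if len(piece) > 1 and piece[1].strip().startswith("q="):
            q = float(piece[1].strip()[2:])
        ranked.append((q, tag))
    for _q, tag in sorted(ranked, reverse=True):
        if tag in supported:
            return tag
    return "en"  # fall back to the installation default

print(pick_language("it-IT,it;q=0.9,en;q=0.8", {"en", "de", "it"}))
```

If none of the requested languages is installed, the site-wide default language takes over, which matches the behaviour of the administrator setting described below.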

The 'dynamic language adaptation' functionality is turned on by default, and a switch to turn it on and off can be found within the 'language settings' menu that the administrator can use.
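The 'hidden information' in question is the Accept-Language header that a browser sends with each request. A rough sketch of what detection of this kind might look like follows; the function name and the English fallback are my own invention, not Moodle's actual code.

```php
<?php
// A rough sketch of browser-language detection: walk the browser's
// Accept-Language header and pick the first language for which the
// site has a pack installed. Names here are invented for illustration.
function detect_language(array $installed, $header) {
    foreach (explode(',', $header) as $part) {
        $pieces = explode(';', $part);        // drop any ;q=0.8 weighting
        $lang = strtolower(trim($pieces[0]));
        if (in_array($lang, $installed)) {
            return $lang;
        }
    }
    return 'en';                              // fall back to a site default
}
```

In a real deployment the header would arrive via $_SERVER['HTTP_ACCEPT_LANGUAGE'], and a fuller implementation would also match regional variants such as 'it-IT' against a plain 'it' pack.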

The fact that Moodle can dynamically change in response to browser (and potentially operating system) settings is interesting. One of the things that the EU4ALL project is exploring is whether it might be possible to tell web-based systems whether certain categories of assistive technology are being used. This may open up the possibility of user interfaces that are more directly customised to users' individual needs and preferences.

Other 'languages'

I've described (rather roughly) how Moodle takes care of software localisation, but how is it handled in other programming languages? I've used Java and the .NET framework in the past, and each provides its own way to facilitate localisation.

Java makes use of something called a resource bundle (Sun Microsystems). .NET, on the other hand, uses something called resource files (Code Project). One question remains: is there a generally recommended approach for PHP, the language on which Moodle is based? As with so many things in software, there is more than one way to get the same result.

The author of the PHP Cookbook describes another way to think about localisation. This approach differs in the sense that it focuses more on demonstrating localisation by using object-orientation (an approach that Moodle has historically tried to steer away from, but this seems to be changing), and doesn't really address how a user might be able to edit or change their own strings should they not like what they see.

Conclusions

Software localisation, like accessibility, is a subject that software developers and web designers need to be aware of. This rather long post has outlined how software localisation is broadly achieved in Moodle. Much, however, remains unsaid. Issues such as plurals, right-to-left scripts and multi-byte character sets have been carefully sidestepped.

What is clear is that Moodle appears to have a solid infrastructure for localisation which seems to work, and provides site maintainers with the ability to add different languages without too many headaches. Also, whilst browsing the documentation site I stumbled across a page that hints at potential future localisation developments.

Although I have mentioned one other way to approach localisation within PHP it might be useful at some point to explore how comparable learning management systems tackle the same problem, perhaps also looking at how localisation is handled in other large projects.

Localisation will always be something that developers will need to address. Whenever new functionality is introduced, developers will obviously make provision to ensure that whatever is developed is understandable to others.


User generated mobile learning designs

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 13:17

Would I be considered to be weird if I said that I quite like exams? I admit, I do quite like the challenge, but more specifically, I also like the feeling of opening a paper and knowing (roughly) how to answer the questions I find, and making a choice about which questions I'm going to answer, and which ones I'm going to ignore (if I have a choice, of course). I also like receiving the result and relaxing when a course finishes!

A big question to ask in relation to taking exams is, 'how do you successfully transfer all the knowledge and understanding from your course into your head in such a form that you can answer potentially challenging (and interesting!) questions?' We all have our own unique set of strategies. I'll share some of mine.

Repurposing material

When I'm taking a course, one of the things that I do is make voluminous notes. I am a sucker for writing things down. I buy a couple of dividers and split an A4 (or a lever arch) file into sections corresponding to the blocks. I usually have a couple of extra dividers free for 'other stuff'.

I admit that I sometimes go a bit far, especially when I insist on choosing a single brand of pen for the whole set of notes that I make during a course. I make headings in a consistent style and sometimes experiment with underlining colours!

Although this might seem to be a bit unusual (in terms of my studying rituals), the activity of taking notes is central to my studying strategy. Once I have my notes and the exam date is looming, I sometimes re-write my notes. I take my sides of A4 and 'summarise them down' to a single side of A4, trusting that the stuff that is not on the page is faithfully held within my head.

In the e-learning world, the term 'repurposing' crops up from time to time. It means to take existing materials that have been designed for one purpose and to change them in some way so they can be used for something else. One of the difficulties of e-learning content repositories is that it is difficult to repurpose or reuse existing learning materials, perhaps because of the granularity of the material, or perhaps because some material is too closely connected to a particular learning situation (or context). But I digress…

When working towards an exam, I actively 'repurpose' the contents of the course that I am studying. I take the course and transfer themes and ideas from the text books or the course materials and transfer them into my A4 file.

Learning pathways

The A4 file represents my own unique adventure or path through a set of learning resources, replete with questions to self, quotations and green underlining. My repurposing activity, as an active learner, is a construction activity. In essence, I have designed my own learning resources, or have designed my own learning.

When I was a student on The Challenge of the Social Sciences, I have to confess I was not looking forward to the exam. What helped me was not only the excellent resources that the course team provided, but also the mind maps, sets of notes and other forms of crib sheets that my fellow students had selflessly posted to our on-line discussion forum. They were a great help, not only in seeing that others were revising as hard as I was, but also in bringing together different parts of the course in ways that I had previously missed. Guys, I owe you one!

I often travel on a train. When studying, I try to read when I am travelling, which I find difficult. One of the reasons, other than that I cannot easily take notes because the train is bumping around (!), is that I'm often sitting next to someone who insists on talking loudly on their mobile phone the moment I wish to settle down to learn something about the history of empiricism. Not to mention the lack of 'elbow room' needed to work through one's course notes.

I much prefer listening to podcasts. Listening is another one of my learning preferences. If only I could easily convert my notes into audio form, I might be able to make better use of the 'dead time' I spend on a train.

One thing I could try to do (but I shall never dare!) is to make a podcast of my own notes. This does sound a bit extreme since I am led to believe that making a podcast takes up lots of time, not to mention equipment.

You need to learn how to use your sound recording software, you might even start with a script, then there is a period of editing (podediting?) to edit out the false starts, door bell ringing, the dog or telephone…

This makes me wonder: is there a way to repurpose textual notes, interesting quotations, chapter headings and thematic points in such a way that you can create an interactive audio file that contains pathways that you could navigate through whilst you travel?

iLearningNotes

Not so long ago I learnt about the Daisy talking book project and was struck by the quality of the speech synthesisers that could be used (some of the same synthesisers are also used by the current generation of screen readers).

Imagine a tool, not unlike Compendium, where you could build audio mind maps. Underneath headings you could add notes and quotations. You could establish conceptual links between different titles, chapters and ideas. The graphical structures that you create could then be converted into speech using a high quality speech synthesiser.

Another possibility could be that you might be able to use excerpts from other podcasts (such as the Media:Titanium.ogg example on Wikipedia). Of course, there may be nothing stopping you making your own recordings, perhaps combining your material with words from other sources (providing you adhere to licence conditions, of course).

When you have finished editing you could transfer your edited interactive 'audio map' (which may even have corresponding iconic pictures!) to a magic mobile device not unlike an iPod. You could use the magic wheel control to move through the chapters, sections and notes that you have 'built'. You may also be able to control the rate of playback, allowing you to skip over sections with which you are more familiar.

When you have created your audio notes, in true Web 2.0 fashion you could share your own personal course specific pathways with others. You might be even able to repurpose or modify pathways created by other people so they closely match your own individual learning needs. Furthermore, these resulting navigable audio equivalents may have the potential to be useful for people with disabilities.

Back to learning design

There are some resonances between these ideas and the area of learning design tools and systems.

I first came across the concept of learning design when looking through the IMS specifications. I soon learnt that IMS LD was an XML language that could be used to construct descriptions of learning activities that could be executed using a player. I later came across a system called LAMS, and most recently was told about something called the e-lesson mark-up language, ELML.

Learning design, as an idea, can take many forms. The different systems vary in terms of dynamic adaptability, ease of authoring and who the language or system is intended for. Another perspective is presented by CloudWorks, from what I understand.

My designs

When I study, I design my own learning with help from the materials that I am provided with. This may occur when I travel on a train, carry out internet searches, or read some notes whilst drinking a cup of tea at home.

My own personal pathway through a set of resources may be very different to the pathway that other learners may choose. Learning about the differences, potentially through mobile devices, may help me (and fellow learners) to see new sets of connections that were not immediately apparent.

In doing so, we have the potential to create devices and tools that make better use of our 'dead time'.

Image modified from wikipedia


Exploring how to call SOAP webservices using PHP (and Moodle)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 13:00

This post describes my second bash at understanding PHP and SOAP web services, work carried out off and on over the last couple of weeks. The first time I connected PHP to an externally hosted web service was via a script that I wrote outside Moodle. Now my challenge is slightly different: to try to get Moodle calling external web services.

Just to make sure I understand everything, I'm going to present some background acronyms, try and remember what pages I looked at before, then step towards uncovering parts of Moodle that are in some way connected to the magic of web services.

Background information

I'm required to interface to web services that use the SOAP protocol (wikipedia). SOAP is, I am led to believe, an abbreviation for Simple Object Access Protocol. In a nutshell, SOAP allows you to send a message from one computer to another, telling it to do stuff, or asking it a question. In return, you're likely to get a response back that either tells you what you wanted or indicates why your request has failed. SOAP is one of many different techniques that you can use to pass messages from one computer to another over the internet.

Another technique, which is simpler (and faster) but has some limitations that SOAP gets round, is REST (wikipedia). More information on this 'architectural style' can be found quite easily by doing a quick internet search. My focus is, however, SOAP.

So, assuming that one computer exposes (or makes available) a web service to another computer, how do other computers know how to call a service? In other words, what parameters or data does a particular service expect? The answer is that the designers of a SOAP service use a language that describes the format of the messages that the SOAP server (or service) will accept. This language is called WSDL, or Web Services Description Language (wikipedia).

Each SOAP server (or service) has a web address. If you need to find out what data a SOAP service requires, you can usually ask it by adding ?wsdl after the service name. This description, which is presented in a computer readable structure, can sometimes help you to build a SOAP call – a request from your computer to another.

Very often (in my limited experience of this area), the production and use of this intermediate language is carried out using layers of software tools and libraries. At one end, you will describe the parameters that you will process, and some magic programming will take your description (which you give in the language of your choice) and convert it into a difficult to read (for humans!) WSDL equivalent. But all this is a huge simplification, of course! And much can (and will) go wrong on the journey to get SOAP web services working.

A web service can be a building block of a Service Oriented Architecture (again, wikipedia), or SOA. In the middle, between different web services you can use the mysterious idea of middleware to connect different pieces of software together to manage the operation of a larger system, but this is a whole level of complexity which I'm very happy to avoid at this point!

Stuff I looked at earlier

The first place that I looked was in a book! Specifically, the PHP Cookbook.

Chapters 14 (consuming web services) and 15 (building web services) looked to be of interest, specifically the sections entitled 'calling a SOAP method with/out WSDL'. Turning to this section I was immediately presented with a number of possibilities for making SOAP calls, since there are several different implementations depending upon the version of PHP that you're using.

Moodle, as far as I understand, can work with version 4.3 of PHP, but moves are afoot to move entirely towards version 5. My reference suggested it's perhaps best to use the bundled SOAP extension as opposed to the other libraries (PEAR::SOAP or NuSOAP), since it is faster, more compatible with the standards and automatically bundled, and exceptions (special case errors) that occur within SOAP are fed into corresponding PHP exception constructs to make programs (theoretically!) easier to read.

Consuming services

On my first attempt to call a web service, I ran into trouble straight after starting! All my code was failing for a mysterious reason and my debugger wasn't giving me anything useful. After doing some searching and finding some on-line documentation I gave the PEAR library a try, but ended up just as confused. I ended up asking one of my illustrious colleagues for help, who suggested that I should add an additional parameter to my original attempts using the PHP extension to take account of the local network setup.

Calling seemed to be quite easy. I could create something called a SOAP client, tell it which address I want to call, give it some options and make a call by sending my client a message which has the same name as the web service I want to call, optionally loaded up with all my parameters. To see more of what came back, I put some of the client variables into temporary variables so I could more easily watch what was coming back in my debugger.
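The calling pattern just described can be sketched using PHP 5's built-in SoapClient. The service address and the operation name ('getCourseTitle') below are invented for illustration; a real call would use the WSDL address of the actual service.

```php
<?php
// A sketch of the calling pattern described above, using PHP 5's
// built-in SoapClient. The address and operation name are invented.
$options = array(
    'trace'      => 1,   // keep the raw request/response for debugging
    'exceptions' => 1,   // turn SOAP faults into PHP exceptions
);
if (class_exists('SoapClient')) {
    try {
        $client = new SoapClient('http://example.org/service.php?wsdl', $options);
        $result = $client->getCourseTitle(array('courseid' => 42));
        // Temporary variables make the traffic easy to watch in a debugger.
        $lastRequest  = $client->__getLastRequest();
        $lastResponse = $client->__getLastResponse();
    } catch (SoapFault $fault) {
        echo 'Call failed: ' . $fault->getMessage();
    }
}
```

The 'trace' option is what makes the temporary-variable trick work: without it, __getLastRequest() and __getLastResponse() return nothing useful.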

Producing services

Now that I (more or less) knew how to call web services using PHP, it struck me that it might be useful to see how it might be possible to present web services using PHP. This was found in the next chapter of the book.

To maintain consistency, I asked the question: how might I create some WSDL that describes a service? Unfortunately, there is not an easy answer to this one. Although the integral SOAP libraries don't directly offer support for doing this, there are some known techniques and utilities that can help.

One of the big differences between PHP and the WSDL language is that PHP is happy to just go ahead and do things with data without having to know exactly what form (or type) the data takes. You only get into trouble when you ask PHP to carry out operations on a data item that doesn't make sense.

WSDL, on the other hand, describes everything, giving both the name of a data item and its type. Because of this, you can't directly take a PHP data structure and use it to create WSDL. To get round this difference one approach is to provide this additional information in the form of a comment. Although comments are intended to help programmers, they can also be read by other computer programs. By presenting data type information in the form of a comment, an intermediate program can create WSDL structures without too much trouble, saving developer time and heartache. This approach is used by both the NuSoap library and code that works with PHP 5. But I digress...
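The comment-based approach described above might look something like the fragment below: the docblock supplies the parameter and return types that PHP itself does not declare, so a WSDL-generating tool can read them. The tag names follow the common phpDocumentor convention; the function and its stub body are invented for illustration.

```php
<?php
// A sketch of the comment-based typing approach described above.
// The function is invented; only the docblock convention is real.

/**
 * Return the localised title of a course.
 *
 * @param int    $courseid  numeric id of the course
 * @param string $lang      two-letter language code
 * @return string the course title
 */
function get_course_title($courseid, $lang) {
    return "Course $courseid ($lang)";   // stub body, for illustration only
}
```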

Moodle web services code

There appear to be some plans to expose some of the Moodle functionality via a series of web services, enabling Moodle to be connected to and used with a range of external applications. There is also a history connecting Moodle with external assessment systems using web services.

A grep through the Moodle codebase (for 1.9) reveals a library called (perhaps unsurprisingly) soaplib. There appears to be some programming logic which makes a decision about which SOAP interface library to use, depending upon the version of PHP: use the native version if PHP 5 is used, otherwise NuSoap.
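The decision logic just described boils down to a version check of roughly this shape; the function name and return values here are my own, not Moodle's actual code.

```php
<?php
// A sketch of the library-selection logic described above: prefer the
// native SOAP extension on PHP 5, fall back to NuSOAP otherwise.
// The function name and return values are invented for illustration.
function choose_soap_library($phpversion) {
    return version_compare($phpversion, '5.0.0', '>=') ? 'native' : 'nusoap';
}
```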

I'm guessing that the need to use the NuSoap library will gradually disappear at some point, but a guess is totally different from finding out whether this is really going to happen.

One way to find out what is going on and what lies in store is to explore the on-line discussion forums, where it's possible to quickly find a forum dedicated to discussing Moodle web services. It appears there are two interesting developments: something called the Moodle Network (which, at first glance, allows you to share resources between different instances of Moodle), and a non-core Moodle code contribution called the OKTech Web Services. After a little poking around it's possible to find some documentation that describes this development in a little more detail.

I also discovered a documentation page entitled Web services API, but it relates to XML-RPC (wikipedia) rather than SOAP. My head is beginning to hurt!

Returning to the Moodle core SOAP library, I ask the question: what uses soaplib? One way to find out is to search for calls to functions that are contained within this library. I have to confess, I didn't find anything. What I did find, however, is a discussion.

It turns out it was added as a result of work carried out at the University of York in the UK for a project called Serving Maths that created something called the Remote Question Protocol (RQP). The initial post mentions concerns about not being able to make use of some of the additional parameters that the PHP 5 library provides. This is a concern that I share.

Next steps

I've more or less finished my whistlestop tour of Moodle components and code that relate to web services type stuff. I'm sure there is more lurking out there that I haven't discovered yet. But what of a conclusion?

Since I'm not planning on using Moodle to expose any web services I can thankfully sidestep some of the more difficult discussions I've uncovered.

Also, since there isn't much in the way of existing SOAP utility code that I can build upon, and I know more or less how to call web services using the magic functions that are provided in PHP 5, I'm going to try to add some lines of code more or less directly to Moodle. But before I do this, like every good developer, I'll test things out using a test harness to explore how my target services behave.

Image: modified from wikipedia


Understanding Moodle accessibility

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 2 Dec 2008, 17:13

To really understand why things are the way they are today necessitates understanding what has happened in the past. This blog post is an attempt to build up an understanding of the current state of Moodle accessibility by looking into what has happened during parts of its development. My methodology is simple: begin with internet (and Moodle forum) searches, ask a few people to see what they know, and see where it takes me!

Initial results

A quick search using Cuil took me to some Moodle developer pages and the Moodle Accessibility Specification, which has the headline 'the document specifies improvements to the accessibility of the Moodle course management system for version 1.7'. This is useful. Both the page itself and the release number can point me towards some dates. Version 1.7 of Moodle dates from November 2006 (until March 2007, when version 1.8 was released).

Digging a little further in the Moodle documentation, I discover the release notes for version 1.7. This provides a huge amount of information. Release notes are very often overwhelming for the casual reader. So, to help my search, I search this page for the term 'accessibility'.

Under the Moodle 1.8 release notes, the words 'the Moodle interface is now compliant with XHTML Strict 1.0 and major accessibility standards' catch my eye. This is interesting, but what does this mean? Thankfully, there is a link. I'll try to uncover the significance of XHTML Strict later. Let's continue with the search for discussions relating to 'major accessibility standards'.

Moodle Accessibility Page

The link found on the 1.8 release notes takes me to the Moodle Accessibility page. The page provides several groups of other links: starting points, standards, legislation, tools and resources. A couple of things jump out at me: a link to the development tracker that relates to Accessibility Compliance in Moodle 1.8, a link to Italian Accessibility Legislation Compliance, and a link to an accessibility forum (guest login required).

It looks like I might be finding some very useful stuff! So much stuff, I need to focus down on what is often very important to me: source code. Code cannot lie, but on its own, it cannot always tell you its history... Thankfully, there are other ways to understand how (and why) things have changed.

Looking at the detail

To enhance the accessibility of Moodle, the developers have created tasks within a combined bug tracker and change management system. This is something that is common to loads of other software developments. Change management systems help developers to keep track of what has changed, when and by whom. If bugs are accidentally introduced as a result of changes, keeping records can help us to understand why. A side effect of a good tracker is that it can also tell you what changes are incorporated into individual releases.

Let’s have a look at a tracker entry to see what one of them says: Indicate type of resource in the name of the resource. This is an interesting one. For screen reader users, having advance warning about the file type is useful, particularly if a link is to a resource that is handled by an application outside of the browser, such as a PDF file, for example.

It's also interesting to see that the tracker can contain debates about the development of the software and, sometimes, its requirements. Clicking on the 'change history' may sometimes present you with a file that summarises the modifications that a Moodle developer has made to several files to make an accessibility enhancement.

As well as the Accessibility Specification, one of the developers has created a useful page entitled Accessibility Notes (found within the developer area). This includes an executive summary of some of the guidelines, a roadmap for further accessibility developments, pointers towards future areas of development and a link to some accessibility 'patterns' which have been derived from the Web Content Accessibility Guidelines (WCAG).

Relationship to WCAG?

You often hear WCAG mentioned in relation to different levels of conformance, specifically A, AA and AAA. Whilst searching for the terms Moodle and WCAG, I found myself back at the forum that I mentioned earlier, which is described as a forum to discuss 'planned conformance to standards/laws such as the Web Content Accessibility Guidelines (WCAG), Special Educational Needs and Disability Act (SENDA), Section 508 (USA)'.

It should be said that there is no formal way to 'conform' to the WCAG guidelines. Whilst some of the guidelines can be assessed by machine (by the use of a computer program), some sections require real people to determine whether or not a web page is accessible (according to the guidelines). It should also be noted that even if something is deemed accessible by one measure, it may not be accessible to some users.

The issue of compliance is also complicated by the fact that Moodle (along with many other learning management systems) can make use of different blocks, modules or components in a range of different ways. The way that an application is used and configured can significantly influence its accessibility.

Although there is no definitive statement of how Moodle adheres to the different WCAG 1.0 levels, I have discovered a forum posting that relates to a question about the American Section 508 procurement legislation. But will there ever be a statement about WCAG? I decided to dig further by speaking to one of the contributors to the Moodle Accessibility Specification.

Whilst WCAG is great for content, it doesn’t work so well with interactive systems. The Moodle accessibility specification has been created by distilling accessibility principles and ideas from a number of different sources, WCAG as well as an organisation called IMS (see also the IMS Guidelines for Developing Accessible Learning Applications).

Future work?

It was recently announced that the latest version of the WCAG guidelines (version 2.0) will soon be released. One interesting piece of work would be to carry out an assessment of a 'vanilla' (or out of the virtual box) installation of Moodle against these new guidelines.

Strict!

Earlier on I mentioned that I might explore what is meant by the mysterious words XHTML Strict. Whilst browsing the Moodle accessibility pages, I discovered the Moodle tracker task that asked the developers to move to web pages that are 'marked up' in this way.

One part of this tracker jumps out at me, specifically: 'avoid using, within the markup language in which the page is coded, elements and attributes to define the page's presentation characteristics'. In essence, use semantic tagging on web pages as opposed to tagging that merely changes the visual characteristics of the display. Rather than using bold tags to indicate a heading, a developer should instead use heading tags. This way, the tags that 'add meaning' to a document can help users of assistive technology navigate through a page more easily.

A further comment on the subject of semantic tagging is that if a developer needs to add visual formatting to a page, cascading style sheets (CSS) should be used. CSS can be used to separate the structure of the content from how it appears on the user's screen. A great illustration of what CSS is and what it is capable of can be found at the CSS Zen Garden.
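The distinction can be illustrated with a tiny (invented) markup fragment, contrasting the presentational and semantic approaches:

```html
<!-- Presentational markup: looks like a heading but carries no meaning,
     so a screen reader cannot offer it as a navigation point -->
<p><b>Accessibility</b></p>

<!-- Semantic markup: assistive technology can jump between headings,
     and the presentation lives in CSS instead of the markup -->
<style>
  h2 { font-weight: bold; font-size: 1.2em; }
</style>
<h2>Accessibility</h2>
```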

There is another line within the tracker entry that was interesting: 'for all new sites, use at least version 4.01 of HTML, or preferably version 1.0 of XHTML'. What does this mean? Is there a difference between the two, and why is one preferred? Let's have a look at what they are on Wikipedia, which contains a paragraph that explains how XHTML relates to HTML.

It seems there are few differences between the two, except that the HTML pages become well-formed XML documents. Not only can the resulting pages then be manipulated by programs that can manipulate XML (and more easily checked for different types of conformance – page checking is mentioned in the tracker comments page), but insisting that they are 'well formed' may also prevent 'ill-formed' pages from confusing assistive technologies, such as screen readers.

The tracker provides more information about how XHTML relates to accessibility. WCAG states that content authors (and you could argue that a page generated by Moodle is content) should 'create documents that validate to published grammars' (checkpoint 3.2). Other useful WCAG checkpoints include guidance not to use deprecated (now obsolete or old) features, and to select W3C technologies when they are available, using the latest versions. In essence, take advantage of new technologies when they become available for use.

Summary

It seems that accessibility, as a subject, has been discussed on the Moodle forums since November 2005. Since this date, a lot of work has been carried out to improve the accessibility of Moodle, some by the Open University. Evidence of this work can be found documented within the Moodle project without too much difficulty. I hope this post has helped to show where (and how) to find information about Moodle accessibility.

Although it can be argued that no platform is totally accessible, strides have been made to make Moodle more suitable for users of assistive technology. Anyone who uses Moodle has to be aware that the accessibility of such a system does not only depend upon the programming code alone, but also how it is used, and what materials it presents to learners.

Acknowledgements are extended to those who I spoke to during preparation of this post. You know who you are!


Discovering Moodle profile fields

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 Jul 2010, 12:58

One way to improve the e-learning user experience is to attempt to present material that matches a learner's precise needs and preferences. In terms of accessibility, it would be nonsensical to provide a screen reader user with a digital video resource if that resource contained images or illustrations which did not have accompanying auditory explanations.

The previous post explored how it might be possible to add a new category of resource to Moodle, an 'adaptable resource'. This post will try to explore a related piece of the puzzle by examining how it might be possible to tell Moodle about your own e-learning content preferences.

Some background

A learner might use a range of different e-learning systems at different schools or universities during a learning career. One problem might be the need to continually tell different systems what your content preferences are. Let's face it: this is trouble! Not only would this take time, but it would also no doubt introduce errors. One solution might be to allow a user to store their preferences on a server somewhere. This server could then share user preferences with different systems, subject to some form of magic authentication.

A learning management system could be used to allow a learner (or someone acting on behalf of a learner) to edit and change their centrally managed preferences. The question is: how could we demonstrate this idea using Moodle?

Let's begin from where I left off in the previous post: user profile developer pages.

Returning to code

What this page says is that it's possible for a user of Moodle to store extra 'stuff' about a group of users in the database. This sounds great! But how does it work? In true developer fashion, ignoring all the user documentation, I delved into the source code and then browsed the database structures. I found quite a few tables that relate to the user. There were tables relating to fields, data and categories, and a hint of previous accessibility development work, as evidenced by the presence of a 'screenreader' field (but more of this later).

It soon became clear that there was quite a lot of existing functionality that might be 'leveraged' (horrid word!) to facilitate (another one) the entering of user preferences. I liked what I saw: code where the functions were not too long (no bigger than a screenful) and had the odd set of comments (you can read that in two different ways). Looking at the code, whilst useful, is never enough. It was time to have a look at what the user sees.

Within a couple of minutes, I found it was possible to enable both the user and the administrator to enter extra data against a particular user profile. Using the Moodle tools, I created a really quick pull-down menu to represent a learner specifying their preferences.
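To fix the structure in my own head, here is a toy model of how the field, data and category tables appear to fit together. It is deliberately simplified: the real Moodle tables (which I believe are named along the lines of `user_info_field` and `user_info_data`) have many more columns, and the column names below are partly guessed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_info_field (
    id INTEGER PRIMARY KEY, shortname TEXT, name TEXT,
    datatype TEXT, param1 TEXT);       -- param1: menu options, one per line
CREATE TABLE user_info_data (
    id INTEGER PRIMARY KEY, userid INTEGER, fieldid INTEGER, data TEXT);
""")

# An administrator defines a menu-type field; the options live in param1.
conn.execute("INSERT INTO user_info_field VALUES "
             "(1, 'contentpref', 'Content preference', 'menu', "
             "'visual' || char(10) || 'audio' || char(10) || 'textual')")

# A learner (userid 42) picks one of the options.
conn.execute("INSERT INTO user_info_data VALUES (1, 42, 1, 'audio')")

# Reading a learner's preference back means joining data to its field.
row = conn.execute("""
    SELECT f.name, d.data
    FROM user_info_data d JOIN user_info_field f ON f.id = d.fieldid
    WHERE d.userid = 42
""").fetchone()
print(row)  # ('Content preference', 'audio')
```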

I should note that a single menu represents only the tip of the iceberg when it comes to entering user preferences! My titles are undoubtedly badly chosen. Also, there are existing metadata standards, such as AccMD (powerpoint), which can be used to describe user preferences, but I certainly won't go into this thorny area here...

Along the way I stumbled across some documentation pages that describe the Moodle user profile.

Joining the dots (or nodes)

Okay, so this part of Moodle might serve as a simple user interface that allows a user to specify their content preferences, but how (and where?) might I store other information, like 'special-magic-numbers' or identifiers that allow the VLE to understand that other systems are referring to the same user? (I hope this sentence makes sense!)

It seems that there are ways to store additional stuff in a Moodle profile too: fields that can be accessed and used by an administrator, but cannot be seen or edited by learners.
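A hedged sketch of the idea: each custom field carries a visibility setting, and the profile page only shows a field to learners when it is marked visible, while administrators see everything. The constants and field names below are illustrative, not Moodle's actual API.

```python
VISIBLE_NONE, VISIBLE_ALL = 0, 2  # invented constants for the sketch

fields = [
    {"shortname": "contentpref", "visible": VISIBLE_ALL},
    {"shortname": "externalid", "visible": VISIBLE_NONE},  # admin-only identifier
]

def fields_for(is_admin):
    """Return the field names a given viewer is allowed to see."""
    return [f["shortname"] for f in fields
            if is_admin or f["visible"] != VISIBLE_NONE]

print(fields_for(is_admin=False))  # ['contentpref']
print(fields_for(is_admin=True))   # ['contentpref', 'externalid']
```

An admin-only field like the hypothetical `externalid` above is exactly where a 'special-magic-number' linking the learner to an external preferences server might live.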

But... why?

As ever, one simple question has created a whole raft of others:

1. Where did this feature come from?
2. How is the data represented in the database? (looking at things through a developer's eyes again!)
3. What part of the code should I modify so I can connect the Moodle user interface to some kind of magic 'preferences server'?
4. What does this mysterious 'screenreader' option do?

I'll leave some of them for another day, but I shall 'touch upon' answers to the first and the fourth.

Apparently the capability to add profile fields (and categorise them) was added in version 1.8 of Moodle (which also incorporated a number of accessibility enhancements). I've tried to find the forum discussions that culminated in the addition of this feature, but I've had no joy. What I have learnt is that there is talk of making an even more customisable version of the user interface, but I digress.

Wishing to know more, I turned my attention to the code to see what it could tell me. References to the screen reader profile tag are found scattered throughout the codebase. It appears to change how certain parts of the HTML are presented to browsers. The tag is found in the chat module, within the file type resource code, the question engine code (where additional feedback is presented to screen reader users), and in some of the currently mysterious theme code. I sense this question is a bit harder to answer than I had initially anticipated!
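The pattern these scattered references seem to follow can be sketched like this. This is not actual Moodle code; the function, the profile dictionary and the markup are all invented to illustrate the idea of one module emitting slightly different HTML depending on the user's screenreader setting.

```python
def render_question_feedback(feedback, user):
    """Return feedback HTML, adapted to the user's screenreader flag."""
    if user.get("screenreader"):
        # An extra textual cue, roughly in the spirit of what the
        # question engine appears to do for screen reader users.
        return f'<span class="accesshide">Feedback:</span> {feedback}'
    return feedback

sighted = render_question_feedback("Correct!", {"screenreader": 0})
reader = render_question_feedback("Correct!", {"screenreader": 1})
print(sighted)  # Correct!
print(reader)   # the version with the extra, visually hidden, cue
```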

Onwards and upwards (or should it be downwards?)

Now that I know roughly how to make a custom user profile interface, my next task is to identify where in the code I should add some magic to make Moodle speak to different servers. Wish me luck!

Share post

Working with new Moodle resource types

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 2 Dec 2008, 17:14

As a part of the EU4ALL project, I have been trying to figure out how to add a new resource type. The idea is to add a resource known as an 'adaptable resource', whereby different media types are presented to the user depending on their accessibility preferences. How and where to assign or change these preferences is a question that has yet to be resolved. This post is intended as a bunch of 'notes to self' about what I have found during the last couple of days exploring and poking around the Moodle code base.

To explore the code, I've been using a couple of tools: SQLyog, which was recommended to me by an illustrious IET developer (to let me explore an instance of a Moodle MySQL database running on my home machine), and NuSphere, a PHP IDE. I did try the Zend IDE a year or so back, but abandoned it since I became rather confused!

So, how is it possible to add a new resource to Moodle? Initially, I decided to look at an existing resource, beginning with the simplest one that I could find: a simple text resource. By browsing the code base I seemed to find the rough area where the 'resource code' lives. I also browsed around the developer documentation pages and unearthed a resource class template. Great!

In development, one question instantiates a tree of others. The most fundamental question is: how does this code work? I need to answer this big one to make a change. This is too big, so I split it into two other questions: (1) how can I modify the form that allows you to enter the parameters that describe an adaptable resource (currently, from what I understand, this is to be a simple numerical value), and (2) how can I take the values held within a form and save them to the MySQL database? This requires an understanding of further magic code. As a note to myself, let's have a look at each of these in turn.

Entering stuff

Looking at the text resource code, there seemed to be a bit of object-oriented polymorphism going on. The name of the directory where the resource code lives is important too! There is a magic function called display() which appears to make further magic calls to create some data entry fields - but these calls are quite a long way away from the pure HTML that is presented in the browser window.

Here is another question: how do the magic functions in display() get turned into HTML? The answer apparently lies with the application of a forms library called PEAR (specifically, its HTML_QuickForm package, which Moodle wraps in its own forms library). If I can figure out how to add functions in a way that works with this library, I can ask the user for whatever I want.

The form uses some object-oriented principles. Individual controls are added to 'the form', and then a function is executed that 'prints out', or renders, each of the controls, without you having to go anywhere near producing your own HTML.
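A stripped-down sketch of that pattern, in pseudo-Python rather than PHP: controls accumulate in a form object, and one render call turns them all into HTML. The real library's API is of course far richer, and the element kinds and names here are invented.

```python
class Form:
    """Minimal 'add elements, then render' form builder."""

    def __init__(self):
        self.elements = []

    def add_element(self, kind, name, label):
        self.elements.append((kind, name, label))

    def render(self):
        # The caller never writes raw HTML; the form does it for them.
        return "\n".join(
            f'<label>{label} <input type="{kind}" name="{name}"></label>'
            for kind, name, label in self.elements)

form = Form()
form.add_element("text", "name", "Resource name")
form.add_element("number", "adaptableid", "Adaptable resource id")
html = form.render()
print(html)
```

The payoff of this design is that a subclass only has to add its own extra elements; the shared machinery renders the whole thing consistently.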

Another interesting observation is that the display function I have uncovered only relates to a small part of a bigger form. This is due to the subclassing and polymorphism that is being used, but this is a distraction... now that I have a little understanding of what is happening (thanks to the NuSphere debugger!), I'll park this question for the time being. There are other mysterious areas to explore!

Storing stuff

When a Moodle user edits a course resource, there are three buttons that appear at the bottom of the screen: 'save and return', 'save and display' and 'cancel'. Looking at these buttons from an HCI perspective I think, 'buttons doing two different things?? Surely this is a bad idea!' But I digress.

My question is: what happens when the tutor (or administrator) clicks on either of the save buttons - where does the data go? Or, more precisely, how does the data get saved?

Moodle seems to have a thin database layer: a set of functions that allows you to send SQL statements and receive data in response. Since the contents of the resource form are held in what can only be described as a 'big variable' (PHP has a funny approach to object-oriented programming if you've used other languages), the Moodle developers have figured out a way to transfer the contents of a form to the database by matching on-screen fields to database fields.

This seems to work well, but there is a downside: the database update code that Moodle generates appears to be rather big, and an implicit dependency is created between the form and the database structure. Other systems that I've looked at make use of stored procedures, which on one hand has the potential to boost performance and security, but on the other restricts the database platforms an application can be used with.
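A toy version of the field-matching trick makes the trade-off concrete. This is purely illustrative (a real Moodle install does this inside its own database functions, not like this): any submitted field whose name is also a table column becomes part of the UPDATE, and everything else is silently ignored.

```python
def build_update(table, columns, form_data, record_id):
    """Build an UPDATE statement from form fields that match columns."""
    matched = {k: v for k, v in form_data.items() if k in columns}
    assignments = ", ".join(f"{k} = ?" for k in matched)
    sql = f"UPDATE {table} SET {assignments} WHERE id = ?"
    return sql, list(matched.values()) + [record_id]

columns = {"name", "summary", "type"}  # pretend table structure
form = {"name": "Week 1 notes", "summary": "Intro", "sesskey": "abc123"}

sql, params = build_update("resource", columns, form, 7)
print(sql)  # 'sesskey' is not a column, so it is dropped
```

The implicit dependency is easy to see here: rename a database column and the matching form field silently stops being saved, with no error anywhere.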

Moving forwards

Now that I know (roughly) how to add extra bits to a new resource type, the next thing to do is figure out how to write the functions that I need. After that, I'll have to hook up my edits to the database and figure out how best to display the data that I've produced. I already have some idea of how to do this, since I have created a paper prototype.

But before going down that road, I think I'll continue my survey of the Moodle codebase by exploring which sections relate to adding and manipulating user parameters and settings. I think I'll start by looking at the user profile developer pages.

Looking towards the longer term, I will also have to connect Moodle to a number of different web services. Wish me luck!


Learning from TV

Visible to anyone in the world
Edited by Christopher Douce, Saturday, 1 Nov 2008, 09:05

I have to admit I get more information, and dare I say it, learning from the television than I should. I'm a bit of a sucker for factual documentaries and the odd bit of reality television, and for the last two episodes I have been totally entranced by Can't Read, Can't Write, which has recently appeared on Channel 4.

The programme traces the learning journeys of a number of adults who are learning to read for the first time. My initial reaction to hearing about this programme was one of astonishment: words, to me, are like air. They are something I barely notice because they surround me. As the programme started, I wondered what would unfold before my eyes, and the people I was presented with astonished me with their determination, intelligence and love of language.

Phil Beadle's performance was also astonishing. I've done nothing more than read about learning styles and the skepticism that surrounds them, but he was using them in anger. Jumping out from the screen was the realization that reading (and writing) is an activity that is ultimately synesthetic. To write, you have to integrate the shape of the words with the feeling of the pen. Writing this down now, it seems so obvious. Beadle mentioned something interesting: all his learners had different needs and requirements, and no single teaching approach would work for everyone at the same time.

I connected this need for personalization of education with a project I'm working on that is trying to figure out how to present learning materials suited to the needs and preferences of individual learners. A talented teacher will have the skill (and the reflective ability) to uncover what works for which student. Getting this information into a magical software program that provides learners with what they need to learn is a really tough problem to solve.

I've been idly wondering for a while about how much can be done to support the learning of phonics (and writing) using touch screen laptops. I remember from a keynote that learning technologists should also be thinking about what can be cost effective from a learning and teaching perspective. I simplify this terribly: teacher time is expensive, but tools that can support learning have the potential to be cheap. The challenge is tuning devices and technologies in a way that is efficient for the educator and effective for the learner.
Permalink 1 comment (latest comment by Ruth Jenner, Tuesday, 17 Mar 2015, 20:55)

