Christopher Douce

SD4ST: Drama for Staff Tutors

Edited by Christopher Douce, Thursday, 6 Mar 2014, 16:36

Whenever I visit a new place for the first time I'm always a little anxious about whether I'll be able to find it okay.  My first visit to the OU Cardiff centre was on 1 November.  I shouldn't have worried; the moment I left the train station and turned right I could see the Open University logo.  I even managed to find my destination without having to resort to Google Maps.

My reason for visiting Cardiff was to attend what would be my second SD4ST event (staff development for staff tutors).  I blogged about my first SD4ST event a number of months ago, which took me all the way to Gateshead, where the focus of the two days was research (and how to fit it into the role of a staff tutor).  This event was all about motivation and inspiration.

I was particularly attracted to this event since I remember reading that it would involve some theatre.  A number of years ago (either three or four - I forget!) I attended an associate lecturer development event that was held by the South East region.  I remember being ushered into a very large room where the chairs were arranged in a circle.  I have memories of being taken out of my comfort zone by being presented with drama workshop exercises which included words like zip and zap.  The overwhelming feeling I left the East Grinstead workshop with was one of being a little puzzled about how on earth I might be able to use some of the stuff I had just witnessed to add a little more 'zip' to my own tutorials.

The Cardiff SD4ST event was partly organised by the same tutor who ran that associate lecturer staff development event, and I was looking forward to it.  Just like my last SD4ST post, this one is intended to act as a simple record of what happened during the day, but it might be of interest to my fellow Open University staff tutors.

Good tutorials

The first session of the day was very traditional but it set the scene very well for the rest of the two days.  We were asked to split into equal groups and to consider what factors make up a good tutorial, writing our views on a big piece of paper.

Our group came up with a set of words and phrases which I managed to quickly scribble down, both on the flipchart page and in my notebook.  These were: student centred, friendliness, knowledge of module, good structure, enthusiasm, flexibility, ability to connect things together, personal approach, supportive atmosphere and clarity of expression.

After the groups returned, there was a quick plenary discussion and each sheet was blu-tacked to the wall to act as physical reminders of our discussion.

Forum theatre

After a short break we were led into a room that had a configuration that I remembered from my AL development event several years ago: the chairs were arranged in a circle!  We were told that three actors would act out a scene from the start of a telephone tutorial that went badly wrong.  I have to admit, it was very bad... Not the acting, I mean; that was very good!  What we were shown certainly didn't create a very good impression and if I had been a student I would have been suitably bewildered.

When the scene had come to an end it was replayed, and this time we were invited to stop the action and take over the tutor's role to offer a correction.  An illustrious education staff tutor from the London region took up this challenge!  Other staff tutors were then encouraged to jump into the tutoring seat to lead the tutorial to a successful conclusion whilst at the same time reflecting carefully on what was happening.

Towards the end of the first session we were given a further question (or challenge) to think about, which was: how to make a good tutorial better and generate a 'palpable buzz' (a phrase that generated quite a bit of debate towards the end of the two days).

Elluminate

The next session of the day was all about Elluminate.  It very soon became apparent that there was a big difference in how this synchronous tool is used by different module teams.  For some modules its use was compulsory, but for others it was not.  After a bit of discussion we were then treated to recordings of two different Elluminate sessions.

The first recording was from a sports and fitness module, and the second was from a languages module.  My own reflection on this was that there were very big differences in how the different sessions were run, and some of these differences came from differences in the subject matter.

One of the most powerful elements of Elluminate is its whiteboard.  It enables you to create very visual activities and to share concepts that would otherwise have taken a thousand words to explain.  The language activity that we were shown was truly multi-modal: learners could listen to other students speaking the words that were on the whiteboard and could connect different words and phrases up using lines.

Other Elluminate tools, such as the polling function, can be used to quickly gather opinions and relate them to materials presented on a whiteboard slide.  One of the challenges lies with making activities interactive, especially when the connection between the Elluminate moderator and the participants is distant and the emotional bandwidth afforded by tools such as Elluminate is lower.  I remember some discussions about barriers to participation and the use of emoticons to assess 'happiness' (or should I say, whether participants are fully engaged).

One of the biggest 'take home' points of the day lies with how Elluminate might be used in a team teaching scenario.  One of the things I've heard about Elluminate is that it is very hard work to keep track of everything that is going on: there's voice, text chat and (potentially) stuff being drawn on the whiteboard.  If there are two moderators, one can be taking care of the text chat (or some of the other tools), whilst the other can be responding to the audio channel.

I remember from my own Elluminate training in the South East region that Elluminate moderators are more producers than tutors.  Moderators are producers in the sense that they produce a session by choosing an appropriate mix of the different tools that Elluminate offers.  The notion of a producer remains firmly stuck in my mind.  For me, it's an analogy that makes sense.

Creativity in face to face tutorials

On the second day David Heley gave a similar version of the workshop he prepared for regional associate lecturer development events.  I'm not going to describe it in a lot of detail since I won't be able to do it justice.

A couple of things stood out for me.  The first was how the physical space of the room was used.  Space can be used to represent different opinions and present different characteristics.  The idea of a 'spectrum line' can be used to enable participants to think about where they stand on a particular opinion, with the two sides of the room representing opposing views.  We were then asked to imagine a map of the world on the floor of the room and to stand at various locations.  It was very thought provoking: kinaesthetic learning is both fun and engaging (in my opinion, but perhaps that might be a reflection of my own learning style).

One thing that stood out for me was the idea of using 'broken powerpoint', i.e. you ask participants what is on a series of imaginary powerpoint slides as opposed to simply presenting your own powerpoint.  This seemed to work really well and I've been wondering how I might be able to use it in my own interaction design tutorials.  Another related thought that came to my mind was to have my tutor group create their own powerpoint, which might be helpful both for revision purposes and for those who may not be able to attend a particular session - I've not tried it out yet, but the 'broken powerpoint' activity has certainly got me thinking!

David made the point that the aim of his workshop isn't to encourage participants to use everything but instead to consider how to use parts of it, or even to use some of the ideas it contains as sources (or vectors) of inspiration.  That was exactly how I used it when I attended a couple of years ago.  As a result of attending David's session I gradually managed to incorporate a small amount of role play.  Doing this wasn't easy and certainly took me outside of my comfort zone, but I think that was a good thing.

Forum theatre

After some lunch and a preparatory discussion we returned to our drama room and were then presented with another semi-improvised vignette which seemed to be about poetry.  There was some discussion about the kind of feedback that might have been offered, after which the episode was then replayed.

Towards the end of the day we were paired off and given a role play challenge which related to face to face tutorials. I won't say too much about this other than that it was quite good fun: I certainly learnt a lot from that exercise. It was really interesting to see so many different topics of debate emerge from a series of short scenarios.

Summary

A couple of years ago I attended an accessibility and human computer interaction event (please bear with me on this: there is a connection!)  The aim of the event was to introduce a science council project to 'the public'.  I mostly expected to get more of an understanding of different technologies and how they might be applied, but I was surprised to see how drama was used to teach students to understand the perspective of users of interactive devices (such as phones and computers).  It was a really interesting approach.

During the day, we were given a premiere of a short film (just in case you might be interested, the video that is mentioned in this earlier post can be viewed through a YouTube link).  At the end of the film we were able to ask the actors (who remained 'in character') some questions about their experience of using technology.  All these goings-on reminded me of some aspects of our SD4ST event.

For me, there were a couple of things I got out of the event.  The first was the principle that there are so many different ways of doing things.   I sometimes get into a habit of using technology to help to do stuff.  Whilst tools such as powerpoint and the digital resources that you can create using them can be useful in terms of sharing information with others (through digital spaces such as the VLE), a face to face tutorial offers a richer way to explore and engage with module material.

The other point was that the use of drama emphasised the importance of considering and carefully thinking about different perspectives.  I like the idea that theatre encourages reflection on practice, and at the same time can permit the exploration of different topics, themes and subjects.

I mentioned technology, and this is the third 'take away' point: the use and mastery of synchronous tools such as Elluminate (and how to connect their use to module materials in an effective and engaging way) will undoubtedly continue to be a subject for further discussion and exploration.

Congrats to the organisers, Janet Hanna, Annette Duensing, Martin Rhys, David Heley and the three forum theatre actors. All in all, a fun (and useful) event!

Christopher Douce

Higher Education Academy BotShop Workshop

Edited by Christopher Douce, Monday, 3 Mar 2014, 18:46

I think I must have been about 10 years of age when I first heard about the magical device that was the Logo turtle (and the mysterious notion of turtle geometry).  Fast forward a couple of decades: about ten years ago I managed to find a copy of Papert's book Mindstorms in a second hand bookshop in Brighton (a part of me thinks that it must have been owned by someone who studied cognitive science at the nearby University of Sussex).  I had these two thoughts in mind whilst I was travelling to the University of Derby on 28 November to attend the HEA BotShop.

This is a quick summary of my own views of the day along with my own take on some of the different themes that emerged.  In fact, there was quite a bit of commonality between this event and the HEA Open University event that was held a week earlier, but more of that later.  In case you're interested, Derby produced a press release for this event which highlights some of the areas of interest, which include neural networks, embedded systems and artificial intelligence.  We were told that this was pounced upon by the local press, proof that mention of robots has a constant and wide appeal.

First Sessions

The day was introduced by Clive Rosen and Richard Hill, subject head for Computing and Mathematics at the University of Derby, who jointly welcomed everyone to the workshop.  The first presentation of the day was by Scott Turner from the University of Northampton who gave a presentation entitled Neurones and Robots.  Scott uses the Lego Mindstorms hardware to introduce some of the fundamental concepts of neural networks, such as how learning takes place through changes to the numerical values (the weights) used within a network.  Some of the applications of these robots include following lines, getting robots to follow each other and getting robots to avoid obstacles; seemingly simple actions which enable underlying principles to be both studied and demonstrated.
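Scott's talk didn't include code, but to make the idea of 'learning by changing numerical values' a little more concrete, here is a minimal sketch (my own illustration, not material from the presentation) of a single artificial neurone learning a line-following style decision with the classic perceptron update rule; the sensor readings and learning rate are invented for the example.

import random

# Invented training data: (left_sensor, right_sensor) -> 1 = steer left, 0 = steer right
examples = [((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.2, 0.9), 0), ((0.1, 0.7), 0)]
weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
learning_rate = 0.1

def predict(sensors):
    activation = sum(w * x for w, x in zip(weights, sensors)) + bias
    return 1 if activation > 0 else 0

for epoch in range(25):
    for sensors, target in examples:
        error = target - predict(sensors)
        # 'Learning' is nothing more than nudging the numerical weights so that
        # the neurone's output moves towards the desired behaviour.
        weights = [w + learning_rate * error * x for w, x in zip(weights, sensors)]
        bias += learning_rate * error

print("trained weights:", weights, "bias:", bias)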

One of the questions to Scott was, 'what does using a robot add?'  The answer was simple: it can add enjoyment and engagement.  Although similar principles could be studied through the application of other tools, such as Excel spreadsheets, simple robots add a degree of immediacy and physicality that other approaches cannot.  These discussions made me consider whether there is a pedagogic dimension (in terms of educational tools) that has emulation on one end and physical devices on the other, and the extent to which we may wish to consider a balance between the two.

Scott's presentation was followed by Martin Colley from the University of Essex whose presentation was entitled Embedded Systems Teaching.  Having been a postgraduate student at Essex I found Martin's presentation especially interesting.  Although I didn't have direct exposure to the robotics labs during my time at Essex I remember that some of my peers made good use of the research and teaching facilities that the University provided.

Martin gave a quick history of robotics and embedded systems at Essex where he talked about the design and evolution of different devices which culminated in the development of a kit.  The design of the kits had changed along with the development of processors.  Early kits (and robots) made use of the Motorola 68k processor (famed for its use in the original Apple Mac) whereas the current generation of kits makes use of ARM processors which are, of course, commonplace in mobile phones and other embedded devices.

One aspect of Martin's kits that I really liked was the range of different input and output devices.  You could write code to drive a 160x128 colour display, you could hook up a board that had light and ultrasound sensors, or you could even connect a memory card reader and a digital to analogue converter to enable students to write their own MP3 player.  Martin also touched upon how the hardware might be used to explore some of the challenges of programming, which include the use of C and C++, how to work with a real-time clock, the use of interrupts and direct memory access buffers.  Debugging was also touched upon, which, in my opinion, is a really important topic when 'students get down to the muddy layer between the hardware and software'.  Plus, all these interesting peripherals are so much more fun than simply having an embedded system turn an LED on or off.

All in all, a really interesting presentation which gave way to a discussion about the broader challenge of teaching programming.  One comment was that it isn't programming per se that is the main problem; instead, it is developing the skills of algorithmic thinking, or knowing how to solve problems, that represents the biggest challenge.

Using bots to teach programming

The third presentation of the day was by Mark Anderson and Collette Gavan from Edge Hill University who described how the broad idea of robotics has been used to teach fundamentals of programming and how some students have learnt to build their own devices.  Mark and Collette had a surprise in store.  Rather than having to do most of the talking themselves they brought along a number of students who shared with us their own experiences.  This was a great approach; personal accounts of experience and challenges are always useful to hear.

Mark and Collette's slot covered a broad range of technology in what was a short period of time.  They began by describing how they made use of Lego Mindstorms kits (using the NXT brick) with Java in the first year of study.  During the second year students move onto Arduino kits, where students negotiate the terms of their own projects.  I seem to remember hearing that students are at liberty to create their own input controllers, which might be used in combination with an embedded arcade game, for instance.  There was reference to a device called the Gameduino which allows the Arduino controller to be connected up to a video display.  Not only can I see this as being fun, I can also see this as being pretty challenging too!

Towards the end of the session there was a question and answer session where another introductory programming tool called Alice was mentioned.  There were two overriding themes that came from this session.  The first was that learning to program (whether it is an embedded device or a computer) isn't easy.  The second is that it's possible to make that learning fun whilst developing essential non-technical skills such as team working.

Micromouse

One of the really interesting things about robotics is that a broad array of disciplines can come into play.  One of the most important of these is engineering, particularly electronic and mechanical engineering.  I guess there's a 'hard side' and a 'soft side' to robotics.  By 'hard side' I mean using the concept of robots to teach about the processes inherent in their design, construction and operation.  The 'soft side', on the other hand, is where robots can be used to teach problem solving skills and introduce the fundamentals of what is meant by programming.  Tony Wilcox from Birmingham City University, who is certainly on the 'hard side' of this continuum, gave a fabulous presentation which certainly gave us software engineers (and I'm talking about myself here!) a lot to think about.

A micromouse is a small buggy (or autonomous robot) that can explore a controlled maze (in terms of its dimensions) and figure out how to get to its centre.  It was interesting to hear that there is such a thing as a micromouse competition (I had heard of robot football, but not mice competitions before!)  Different teams produce different mice, which then compete in a maze finding challenge.  Tony uses his 'mice' to expose a number of different subjects to students.

Thinking about the problem for a moment, we begin to see a number of different problems that need to be addressed, such as: how do we control the wheels and detect how far the mouse has travelled?  What sensors might be used to allow the mouse to discover the junctions in a maze?  What approaches might we use to physically design elements of the mouse?  How might we devise a maze solving algorithm?  Tony mentioned a number of subject areas that can help to solve these problems: closed loop process control (for the control of the wheels) and mechanical engineering, 3D computer aided design, power electronics, and PIC programming (which is done using C).  I'm sure there are others!
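Tony didn't take us through a specific algorithm, but a common micromouse approach is a flood fill: label every maze cell with its distance from the goal using a breadth-first search, then have the mouse always step to a neighbouring cell with a smaller label.  The sketch below is my own illustration using a tiny invented maze, not code from the presentation.

from collections import deque

# Invented 4x4 maze: '.' is open floor, '#' is a wall cell; goal chosen arbitrarily.
maze = ["....",
        ".##.",
        ".#..",
        "...."]
goal = (2, 3)
rows, cols = len(maze), len(maze[0])
distance = {goal: 0}
queue = deque([goal])

# Breadth-first 'flood' outwards from the goal, labelling every reachable cell.
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == '.' and (nr, nc) not in distance:
            distance[(nr, nc)] = distance[(r, c)] + 1
            queue.append((nr, nc))

for r in range(rows):
    print(' '.join(str(distance.get((r, c), '#')).rjust(2) for c in range(cols)))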

It was interesting to hear Tony say something about programming libraries.  Students are introduced to libraries in the second year of his teaching.  Whilst libraries can help you to do cool stuff, you need to properly understand the fundamentals (such as bit shuffling and manipulation) to use them properly.  To best teach the fundamentals, you ideally need to do cool stuff!
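Tony's students work in C on PIC hardware; purely as an illustration of the kind of 'bit shuffling' fundamentals he was referring to (and not code from his course), the fragment below shows bits in an imaginary 8-bit control register being set, cleared and tested.  The register layout is invented.

MOTOR_ENABLE = 1 << 0   # bit 0 of an imaginary control register
LED_ON       = 1 << 3   # bit 3
SENSOR_READY = 1 << 5   # bit 5, a read-only status flag in this example

register = 0b00100000   # pretend value read back from the hardware

register |= MOTOR_ENABLE | LED_ON   # set bits without disturbing the others
register &= ~LED_ON & 0xFF          # clear a single bit, staying within 8 bits

if register & SENSOR_READY:         # test a status bit
    print("sensor ready, register is now {:08b}".format(register))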

One thing that I took away with me was that robot control and maze solving software can exist within 32K.  I was reminded that even though there is a lot of commonality between programming on embedded devices and programming applications for a PC they can be almost different disciplines.

Critters

The micromouse robots operate within a controlled physical environment.  Another way to make a controlled environment is to build one in software.  The advantage of using software is that you can control everything.  You can, of course, define your own laws of physics and control time in any way that you want.  In a way, this makes things both easier and more difficult for us software engineers: we've got the flexibility, but we've also got to figure out the boundaries of the environment in which we work.

Dave Voorhis from the University of Derby talked about his work on 'emergent critters'.  By critters, Dave means simple 'virtual animals' which hunt around in an environment for food, bumping into others, whilst trying to get to an exit.  There are some parallels with the earlier talks on robotics, since each critter has its own set of inputs and outputs.  Students can write their own critters which can interact with others.  There is a parallel to core wars, a historic idea where programmers write programs which fight against each other.
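Dave didn't show us the student-facing interface, so what follows is a purely hypothetical sketch of what a critter with a small set of inputs (what it senses each tick) and outputs (the action it returns) might look like; all of the names are invented.

import random

class GreedyCritter:
    # A hypothetical critter: eats when food is adjacent, otherwise heads for the exit.
    def __init__(self):
        self.energy = 10

    def act(self, senses):
        # 'senses' is the critter's only view of the world: a dictionary of
        # inputs that the environment would supply on every simulation tick.
        if senses.get("food_adjacent"):
            return "eat"
        if senses.get("exit_direction") and self.energy > 5:
            return "move " + senses["exit_direction"]
        return "move " + random.choice(["north", "south", "east", "west"])

critter = GreedyCritter()
print(critter.act({"food_adjacent": False, "exit_direction": "north"}))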

Dave said that some students have written critters which 'hack' the environment, causing the critter world to exhibit unusual behaviour.  Just as with the real world, working against some of the existing laws (and I'm talking those that are socially constructed rather than the other immutable variety) runs the risk of causing unintended consequences!

Final presentations

There were two presentations in the final session of the day.  The first was by Steve Joiner from Coventry University about how robotics can be used to help teach mathematics at key stages 3 and 4.  Steve is a maths graduate and is part of the Centre of Excellence in Mathematics and Statistics Support (SIGMA), run in collaboration with Loughborough University.  Steve showed us a number of videos where students had built robots out of the Lego Mindstorms NXT kit.  Projects included navigation through sensors, launching projectiles (albeit small ones!) and predicting their behaviour, and applications of graph theory.  Steve also demonstrated a self-balancing Lego robot (basically, a bit like a mini-Segway) that made use of ultrasonic sensors.  Steve made the good point that through the use and application of kits and projects, students can begin to see why particular subjects within mathematics are useful.

The final presentation was an impromptu one by Jon Rosewell, from the C&S department at the Open University.  Jon has been heavily involved with a short module called T184 Robotics and the meaning of life, which has just come to the end of its life.  Like so many of the earlier presentations, this module makes use of Lego Mindstorms to introduce the concept of programming to students.  Instead of using the native Lego software, the Open University developed its own programming environment (for the earlier generation of Lego Mindstorms bricks) which is arguably easier to work with.  Students are not given a robotics kit, but may use their own if they have one available.  If they don't have access to a kit, students can make use of a robotics simulator, which is part of the same software package.

Towards the end of the day there was some talk about a new Open University undergraduate module called TU100 My Digital Life.  This module makes use of a 'home experiment kit' which contains an Arduino processor (Arduino having been mentioned by Mark Anderson earlier in the day).  Whilst the TU100 senseboard, as it is called, cannot directly become a robot, it does contain some of the necessary features that enable students to understand some of the important ideas underpinning robotics, such as different types of input sensors, and output peripherals such as motors.

Plenary

At the end of the day there was sufficient time to have an open discussion about some of the themes that had emerged.  Clive Rosen kicked off the discussions by saying that robots can help teaching become engaging.  I completely agree!  One of the difficulties of teaching subjects such as mathematics and computing is that it can often take a lot of work to get satisfying and tangible results.  Using robots, in their various forms, allows learners to more readily become engaged with the subject that they are exploring.  In my own eyes there are two key things that need to be addressed in parallel: firstly, what subjects the application of robots can help to explore, and secondly, how to best make use of them to deliver the most effective pedagogic experience.

These ruminations connect to a plenary discussion which related to the teaching of computing and ICT at school.  There was a consensus that computing and ICT education needs to go much further than simply teaching students how to make use of Microsoft Office, for instance.  We were all directed to a project called Computing at School, and a relatively new education (and hardware) project called Raspberry Pi was mentioned.  I'm personally looking forward to seeing how this project develops (and hopefully to being able to mess around with one of these devices!)

There was some debate about individual institutions 'doing their own thing' in terms of building their own teaching hardware, raising the question of whether it might be possible to collaborate further (and the extent to which higher education hardware might potentially be useful in the school environment).  It was concluded that it isn't just a matter of technology; it may be more a matter of education policy.

In the same vein, it was hypothesised that the embedded processors within students' mobile telephones might (potentially) be used to explore robotics, perhaps by plugging a phone into an 'extension kit'.  Another point was that little was said about fixed or industrial robots, which is almost a discipline in its own right, having its own tools, languages and technologies.  This is a further example of how robotics can connect to other subjects such as manufacturing.

Thinking back to the event, there are a number of other themes that come to mind.  Some of these include the role of simulation (and how this relates to physical hardware), the extent to which we either buy or build our hardware (which might depend on our pedagogic objectives), and the types of projects that we choose.

Through the use of robots students may more directly appreciate how software can change the world.  Robotics is a subject that is thoroughly interdisciplinary.  I've heard artificial intelligence (AI) described as applied philosophy.  Not only can robotics be used to help to study AI, it also has the potential to expose learners to a range of different disciplines, such as mathematics, electronics, physics, engineering and the fundamentals of computing.

Learning how to program a robot is not just about programming.  Instead, it is more about developing general problem solving skills, developing creativity and becoming more aware of how to conduct effective research.

Christopher Douce

Distance Learning for Computing and ICT Workshop

Edited by Christopher Douce, Monday, 3 Mar 2014, 18:47

A Higher Education Academy sponsored distance learning workshop for computing and ICT was held at the Open University on Thursday 20 October 2011.  The workshop addressed a number of different themes.  These included internationalisation and the delivery of modules to different countries, professionalisation and industry, models of distance learning, the use of technology and its accessibility.

The day was divided up into a number of different sessions, and I'll do my best to summarise them.  I feel that blogging this event is going to be a little bit different from the previous times I have blogged HEA workshops since this time I was less of an observer and more of a participant.  This said, I'll do my best!

Introduction and keynote

The event was introduced by Professor Hugh Robinson, head of the department of Computing at the Open University.  Hugh briefly spoke about the history of the university and mentioned that 'Open' means that students who enrol on courses do not necessarily have to have any prior qualifications.  This connected to one of the university's themes: to be open in terms of people, places and ideas.  Distance education enables education to be open in all these respects, but it is apparent that, due to changes in the higher education sector, all institutions will face challenges in the future.

Hugh's opening presentation gave way to Mike Richards' keynote presentation about a new computing module entitled TU100, My Digital Life.  Mike described some of the main topic areas of this new module, which will form a common entry point to a number of degrees.  This module addresses themes that are rather different to those that used to be on the computing curriculum, mostly due to the changes in technology and in what is meant by a 'computer'.

Mike mentioned important subjects such as privacy and security, the notion of ubiquitous computing and what is meant by 'free', connecting to the subject of open source software systems.  Mike went on to say that the TU100 module contains some hardware that might once have been known as a 'home experiment kit'.

In the case of TU100 this is in the form of a programmable microcontroller board which can be configured in a way to work with different types of measurements and share the results with other people over the internet.  Furthermore, the microcontroller (and connected software) can be developed using a visual programming language called Sense, which is a version of Scratch, a popular introductory programming environment developed by MIT.

Mike's presentation emphasised that distance education need not begin and end with a virtual learning environment.  A distance education module can contain a rich set of resources, such as video materials and physical equipment, that can be used to facilitate both understanding and debate.  Mike also emphasised that many of the issues connected to the increasingly broad discipline of computing (broad because of its impact on so many other areas of human activity) give rise to debates that do not have right or wrong answers.

One thing is certain: technology has changed so many different aspects of our lives and will continue to do so in ways that we may not be able to anticipate.  It's my understanding that one of the aims of TU100 is to highlight and uncover different debates and help students to navigate them.  What was very clear is that computing education is so much more than just technology and getting it to do cool stuff.  It's essential to understand and to consider how technology affects so many different aspects of our lives.

Morning session

The first presentation in the morning session was by Quan Dang from London Metropolitan University.  Quan's presentation was entitled, 'blending virtual support into traditional module delivery to enhance student learning'.  Quan emphasised how synchronous tools, such as on-line text chat, could be used to create virtual 'drop in' sessions outside of core teaching hours to enable students to gain support with subjects such as computer programming.  Quan's presentation was very thought provoking since it made me ask myself the question, 'what different tools and practices might we potentially adopt (at a distance) to help students get to grips with difficult issues such as debugging?'  Debugging is something (in my humble opinion) that you can best learn by seeing how different people make use of the tools that are available within development environments.  Getting a feeling for the different strategies that can be applied is something that can only be gained through experience, and technology certainly has the potential to facilitate this.

The following presentation, by Amanda Banks from the University of Manchester, was entitled 'advanced professional education in computer science'.  Amanda spoke at some length about how a tool such as MediaWiki could be used to enable students to create useful materials that could be shared with others.  This presentation was also thought provoking: wikis can certainly be used within on-line modules to enable students to generate materials for their own study, but Amanda's presentation made me consider the possibility that wiki-hosted material could be carried over between different module presentations as a way to facilitate debates about different ideas.

The final presentation was by Philip Scown, from Manchester Metropolitan University Business School.  Philip's thought provoking presentation was entitled, 'the unseen university: full-flexible degrees enabled by technology'. Philip argued that technology can potentially allow different models of studying and learning, such as modules which don't have start dates, for instance.  I can't do justice to Philip's talk within this space, so I do encourage you to have a look on the HEA website where I understand that his presentation slides are hosted.

First afternoon session

The afternoon session was started by Mark Ratcliffe, discipline lead for computing at the Higher Education Academy.  Mark outlined the role of the HEA and then went on to describe funding opportunities and the role of HEA academic associates.  Mark then directed us to the HEA website for more information.

Distance education is one of those terms that can mean different things to different people, and this difference was, in part, highlighted by Mariana Lilley's presentation, the first of the afternoon, which had the title 'online, tutored e-learning and blended: three modalities for the delivery of distance learning programmes in computer science'.  Mariana's presentation also represented a form of case study of a programme that is presented internationally by the University of Hertfordshire.  It was interesting to hear about the application of different tools, such as Elluminate (now Blackboard Collaborate), QuestionMark Perception and VitalSource Bookshelf.  This suggested to me that distance learning is now facilitated by a mix of different tools and made me question whether we have (collectively) identified the best (or most effective) mix.  Institutions necessarily have to explore technology in combination with pedagogic practice, and sharing case studies is certainly one way to understand something about what is successful.

Mariana's presentation was nicely complemented by one from Paul Sant (prepared in collaboration with his colleague Malcolm Sant) of the University of Bedfordshire.  Paul's presentation was entitled, 'distance learning in higher education - an international case study'.  Paul identified a number of challenges which included, 'how can we ensure that distance students remain engaged?', 'how can we offer support in a way that meets their schedule and requirements?', and 'how can we ensure that the work performed by students meets their potential?'  Paul mentioned tools such as the Blackboard VLE and synchronous tools by Horizon Wimba.  Paul's presentation also helped to expose the subject of partnerships with international institutions.

Second afternoon session

The final session of the day was broadly intended to focus upon the needs of the student from two different perspectives.  Steve Green from the Accessibility Research Centre, Teesside University, kicked off this session by describing 'studying accessibility and adaptive technologies using blended learning and widgets'.  Accessibility is an important subject since it enables students to make use of learning resources irrespective of how or where they may be studying (both in terms of their physical and technical environment), but it also widens the ways in which resources may be consumed, taking into account learners with additional requirements.  Steve described how students create accessible widgets and how those widgets are evaluated.

Steve's talk reminded me of a question that I was asked not so long ago: given that distance education is now an international endeavour and the development of accessible provision is supported by equality legislation, where do the boundaries lie in terms of offering support to students?  The answer may depend on how partnerships are developed and function.

The final presentation of the day, entitled 'finding a foundation for flexibility: learner centred design', was by Andrew Pyper from the University of Hertfordshire.  The underlying theme was that institutions need to understand the needs of their learners to best support them.  Approaches such as learner centred design, well known to the interaction design and human-computer interaction communities, have the potential to create rich pictures which can then guide the development of both learning experiences and technology alike.

Plenary

Towards the end of the day there was a bit of time to hold an open discussion about some of the different themes that the presentations had exposed.  Many thanks to Amanda, Philip and Andrew for taking part.  Some of the themes that came to my mind were the issues of tools and technology, internationalisation, industry and employability, and student skills.  Points included that we need to be careful about the assumptions we make regarding the technology that students might have.  Another important point is that one way to differentiate between institutions might be in terms of the technologies that they use (and also how they use them).

We were also reminded about something called the Stanford Machine Learning course, which provoked some debate about 'free' (which relates back to Mike Richards' earlier TU100 presentation), and we were all directed towards the QAA Distance Learning precepts (many thanks to Richard Howley for bringing this to our attention).

Summary

All in all, it was a fun day!  There were loads of questions asked following each of the sessions and much opportunity for talk and debate in between.  I have to confess I was very relieved when the tea, coffees and sandwiches arrived on time, so thanks are extended to the Open University catering group.

It's tough, for me, to say what the highlight of the day was due to the number of very interesting, thought provoking presentations.  I certainly feel that there is always an opportunity to learn lessons from each other; it is clearly apparent that there are many different ways to approach distance education.  Whilst there are many differences between institutions, similar issues are often grappled with, such as how to best make use of technology and ensure that students are offered the best possible level of support.

Christopher Douce

PPIG 2011

Edited by Christopher Douce, Friday, 1 Dec 2017, 14:21

I recently had the pleasure of attending the PPIG 2011 workshop between 6 and 8 September.  As I might have mentioned in an earlier blog, PPIG is an abbreviation for the Psychology of Programming Interest Group.  There used to be an American equivalent called ESP (Empirical Studies of Programmers), but this community seems to have disappeared.  PPIG, on the other hand, is going strong.

This year it was held in the University of York computer science department.  The department had moved since I was last there, forcing me to circumnavigate the campus and arrive at a time when almost all the tea and sandwiches had disappeared.  Thomas Green gave an opening address, and then it was swiftly on to the first presentation.

Mathematics and Visual Impairments

Alistair Edwards gave a talk entitled 'new approaches for mathematics in blind students'.  Mathematics, of course, relies on visual notations.  These notations, it was argued, are an integral part of working with maths.

Alistair holds the view (or, should I say, I understand that he holds the view) that there is more to it than just using an appropriate notation, or having that notation converted into another form.  We externalise parts of our working memory by using pen and paper.  Also, the idea of cancelling (or balancing) an equation by crossing off similar terms from both sides can be viewed as a visual manipulation.
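To give an entirely ordinary example of the kind of visual manipulation being described (my own, not one from the talk): faced with 2x + 3 = x + 7, a sighted student will typically 'cross off' an x from both sides to leave x + 3 = 7, and then cancel the 3 against the 7 to leave x = 4.  Each step is as much a spatial action performed on the page as it is the application of a formal rule, which is precisely what is lost when the notation is pushed through a linear or audio interface.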

Alistair mentioned a couple of projects, one of which was called Lambda, an abbreviation for Linear Access to Mathematics for Braille Device and Audio-synthesis.  Here, the challenges began to become clear: I didn't know this, but different countries have different braille notations for mathematics.  There is, of course, the issue that placing an interface between the user and a notation immediately introduces cognitive barriers, which has the potential to make things more difficult to understand.  Users, it was argued, need more direct forms of interaction when working with mathematics.

All in all, a very thought provoking introduction, and it made me wonder how Green's cognitive dimensions of notations framework might be used to analyse interactions with notational systems (such as mathematics or programming languages) by users who may have different modality preferences (i.e. auditory over visual).

New Tech

The first main session of the workshop was called New Tech.  I'll do a very quick run through of the papers from each session.  Jon Rimmer presented the shyness project, and introduced the beguiling ambient computing device called the 'subtle stone', whereby class participants can communicate their emotional state to the lecturer by a click of a juggling ball.  Jon deftly directed us to a series of interesting papers about shyness.

The following paper was closely connected.  It was entitled 'self-reporting emotional experiences in a computing lab', by Judith Good (et al).  We programmers go through a whole spectrum of different emotions during a programming session, from delight through to wishing to jump on our laptop (although I don't recommend this).  This is an interesting direction: emotion connects to motivation, and traditionally the PPIG community has focussed more upon the cognitive.

Chris Roast, from Sheffield Hallam University, took us on a tour of a tool that helped to create different internationalised film posters; software localisation through abstraction.

One of the papers that I most enjoyed was by Chris Martin from the University of Dundee, who took dancing robots on a road trip to a number of different schools to further explore whether a robot dance workshop can inspire interest in a technical subject such as computer programming.  Or, does using robots 'sugar coat' a difficult subject, or is there more to it than this?  By robots, imagine small buggies.

In fact, Chris's robots use the same Arduino microcontroller that is central to the TU100 Senseboard.  By dance, this means an activity that has a low threshold to success, is creative, transcends culture, and where performance can be valued over competition (I made notes during this bit of Chris's talk).

The two other papers in this section described how to make music with a dry marker pen, a whiteboard and a mobile phone by defining your own musical notation (which was pretty cool), and considered the challenges that users of mobile spreadsheets face.

Human and algorithmic complexity

The next presentation was a 'work in progress' paper by yours truly.  I talked about a project (that has been making very slow progress, due to a myriad of different reasons) to explore whether it is possible to link measurements of program complexity to physiological measurements that are known to indicate human cognitive load. 

The workshop participants gave me a whole number of things to think about, including the names of a number of researchers, as well as a clear message of: 'stop procrastinating, just get on with it'.  I'm certainly going to take that last piece of advice.

The following presentation was about the challenge of working with test data, a subject that can easily cause terror to many a software developer.

Language Formalities

Wednesday yielded a change of plan.  Thomas Green ran a session entitled, 'how to design a notation'.  A programming language is, of course, a form of notation.  Thomas posed interesting questions, like, 'how might we invent a different type of musical notation?', which led onto other questions such as a notation's level of abstraction (what each element can represent), what you might wish to do with a notation, its overall purpose, and how you might offload work onto external representations.

A theme underpinning all this debate was one that is familiar to many human-computer interaction and interaction design researchers: the idea of trading one thing off against another.

Giora Alexandron then presented his paper entitled 'programming with the user in mind' which led us towards considering something called live sequence charts, which is an extension of UML sequence diagrams.  We were introduced to the Play Engine and the PlayGo IDE.

This was followed by a presentation by Ahmad Taherkhani who spoke about 'automatic algorithm recognition based on programming schemas'.  I really liked this paper, since it was a new angle on some very early psychology of programming themes.  I particularly like how Ahmad was attempting to use existing theories to engineer a solution that may not only have practical use (in terms of providing tools to help educators to understand the programming code that students write), but also the act of programming a solution has the potential to allow us to learn more about the theories that are being applied.

The final paper in this session was a work in progress paper by Khuong A Nguyen who used a human-computer interaction technique called the cognitive walkthrough to learn more about the NXT-G visual programming language that can be used with the Lego Mindstorms hardware.

HCI for the future

Following a tour of a HCI and accessibility lab (which resembles a small apartment), a short panel discussion took place, with Nicholas Merriam and Luke Church taking centre stage.  Nicholas's perspective was especially welcome, since he spoke about the challenge of working with low level embedded software and the role that timing visualisation tools may play when attempting to solve programming problems that seem particularly intractable.  This linked back well to the earlier discussions of notation.

Learners and language design

This session contained two papers.  The first, by Ahmed Alardawi from Sheffield Hallam, aimed to explore the effect that object-oriented programming language class structure has on the comprehension of computer programs.  What was great about this research was that it clearly made use of existing work, and partially replicated earlier studies, thus adding to the evidence.  Babak Khazaei, also from Sheffield Hallam, presented an empirical study on the influence of OCL, an abbreviation for Object Constraint Language (wikipedia).

Motivation and affect

This part of the workshop contained a single presentation, by Rein Sach, from the Department of Computing at the Open University. 

Rein's paper was entitled, 'what makes software engineers go that extra mile?'  Rein asked software engineers what it was about their work that they enjoyed.  This gave way to an interesting discussion about the perception of the nature of programming work.  Even though developers might have to battle with misbehaving operating systems and maintain servers from time to time, perhaps these activities need to be considered as work rather than nuisances that get in the way of the real task, which is doing the intrinsically rewarding and creative work of creating code.

Invited paper

Thursday kicked off with a presentation by Gerrit van der Veer, from the Open University in the Netherlands and the Free University, Amsterdam.  Gerrit's presentation was about different aspects of design, and how technology might be used to gather debate surrounding the artefacts that are created during the design process, through a system called CAM, meaning Co-operative Artefact Memory.  A barcode sticker could be attached to different artefacts, such as a sketch or a physical prototype.  Design groups could then use these stickers, with mobile phones, to access a shared Twitter stream that relates to each object, allowing views and ideas to be shared.

Two thoughts came to mind during Gerrit's presentation.  Firstly, I wondered about the extent of the similarities between design practice (in different disciplines) and what occurs within software development practices that use agile methods, such as eXtreme Programming.  Secondly, CAM reminded me of a tool called OpenDesignStudio used for an Open University design course called U101, Design Thinking.

Another part of Gerrit's presentation was all about service design (i.e. design which yields a product that is not a tangible item).  Gerrit pointed us to a number of resources, such as the Service Design Tools site, and mentioned the importance of culture, by referring to the work of Hofstede (which is studied in the M364 Interaction Design Open University course). 

Cognitive considerations

The final session of the workshop contained two presentations that related to the cognitive dimensions of notations framework, with papers by Anna Bobkowska, Maria Kutar and John Muirhead.  Anna introduced an approach for processing multimedia streams through the use of a visual language.  Maria and John's presentation explored how a cognitive dimensions questionnaire might be used by non-experts.

Miguel Monteiro went on to speak about the cognitive foundations of modularity.  Miguel referred to a programming paradigm called Aspect-oriented programming (wikipedia), a subject that I have heard of many times, but one that I have not explored in a great deal of depth.  Learning more about AOP is certainly something to do at some point.

Qualitative and Quantitative

The final presentation of the workshop was by Gordon Fletcher from the University of Salford.  Gordon's presentation was entitled 'methods and practicalities of qualitative research', but it was so much more than this.  Gordon spoke about data collection in different communities, and mentioned the concept of biographical research (which made me wonder if anyone has thought about applying this technique, perhaps with regard to exploring motivation or software-related careers).

I came away with a number of messages, namely, it can be relatively easy to gather qualitative data, but figuring out what to do with it is a whole other issue.  Also, both quantitative and qualitative research can be both systematic and rigorous; these different approaches to research have a lot in common.  An interesting quote was, 'the method has to fit the researcher even more than it has to fit the research'.

Gordon's presentation gave way to a memorable debate on the use of terms.  Undoubtedly the use of language will remain a perpetual challenge when carrying out multidisciplinary research.

Themes

A number of diverse themes were evident within the PPIG '11 workshop, representing its broad membership.  There was a strong theme of computing education and pedagogy.  Programming and educational motivation was also apparent, mixed in with the use and design of visual programming languages.  This connected to the important theme of cognitive dimensions, notational systems and notational design. 

Two interesting inclusions were links to the broader subject of design, and accessibility.  Human-computer interaction and interaction design remain important themes too.  A final theme (perhaps one that isn't as strong as in previous years) is the application of ethnographic methods to further understand the activity of programming.  It was great to hear from a broad spread of presenters who are exploring many different research areas.

Christopher Douce

Psychology of Programming

Edited by Christopher Douce, Friday, 10 Aug 2018, 14:39

Ever since July 2001 I have edited (off and on) the Psychology of Programming Interest Group newsletter.  The group, known as PPIG has been in existence since 1987.  Its purpose is to provide an interdisciplinary academic forum for researchers who are interested in the intersection between computer programming and psychology.

PPIG can be described as a pretty broad church.  On one hand there are those who aim to explore program comprehension and the relationship between notation systems and programming languages, whereas other researchers have been performing ethnographic studies and considering the different types of research methods that could be used.

Some of the questions that the PPIG community have been exploring resonated strongly with my doctoral research which was all about understanding how computer programmers go about maintaining computer software. 

I will probably always remember the moment when I started to be interested in the link between computer programming and psychology, particularly cognitive psychology.  I studied computer science as an undergraduate.  Our lecturers asked us to do a time limited summative programming assignment.  What I mean by this is that myself and my cohort were all corralled into a tired computer lab, given a sheet of program requirements, a Pascal compiler, and were told to get on with it (and, no, we couldn't talk to each other).

When we had finished our programs, we had to print them out using a dot matrix printer (which was, of course, situated within its own sound proof room), and give the fruits of our labour to our instructor who exuded a unique mixture of boredom and mild bewilderment at the stress that he had caused.

What struck me was that some students had finished and had left the laboratory to go to the union bar within twenty minutes, whereas others were pulling out their hair four hours later and still didn't have a working program.  This made me ask the questions, 'why was there such a difference between the different programmers?', and 'what exactly do we do when we write computer software?'

I seem to remember that this was in our first year of our degree.  Our computing lecturers had another challenge in store for those of us who made it to our second year: a software maintenance project.

The software maintenance project comprised one third role play and two thirds utter confusion.  Our team of four students was presented with a printout of around forty thousand lines of wickedly obscure FORTRAN code and given another challenging requirements brief.  We were then introduced to a fabulous little utility called GREP, and again told to get on with it.

This project made me ask further questions of, 'how on earth do we understand a lot of unfamiliar code quickly?', and 'what is the best way to make effective decisions?'  These and other questions stuck with me, and eventually I discovered PPIG.

So, during my week on study leave I compiled the latest edition of the PPIG Newsletter.  The next annual workshop is to take place at the University of York, and I hope to attend.  If I have the time, I'll try to write a short blog post about it and the themes that emerge.

Work-in-Progress Paper

When I was done with the newsletter I turned my attention to a research idea I have been trying to work on for well over the past year with an esteemed collaborator from Royal Holloway, University of London.

As well as studying a number of different programming languages during my undergraduate years I was also introduced to the all-encompassing subject of software engineering. In engineering there is a simple idea that if you can measure something, that something can be controlled. One of the difficulties of managing software projects is that software is intrinsically intangible: it isn't something you can physically touch or see. It's impossible to ascertain, at a glance, how your developers are getting along or whether they are experiencing difficulties. To get round the problem researchers have proposed software complexity metrics.

Having a complexity metric can be useful (to some extent).  If you apply a complexity metric to a program, the bigger the number, the more trouble a developer might be faced with (and more money spent).  Researchers have proposed a number of different metrics which measure different aspects of a program. One counts the number of linguistic parts of a program, another metric counts the number of unique paths of execution that a program might have.
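The post doesn't name these metrics, but the descriptions match Halstead's counts of a program's lexical elements and McCabe's cyclomatic complexity.  As a reminder of the latter (my gloss, not part of the paper being described), cyclomatic complexity is computed from a program's control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components (typically one per routine); for a single structured routine this works out as one more than the number of decision points.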

Another type of metric, the spatial complexity metric, has sprung from an understanding that programmers use different types of short term memory during program comprehension and development.  The idea behind this kind of metric, which was first published in a PPIG workshop back in 1999, was to propose a measure that is inspired by the psychology of the programmer.
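Again purely as a sketch of my own (loosely in the spirit of the spatial idea, not a faithful implementation of the published metric), one very crude spatial measure is the distance, in lines of code, between where a function is defined and where it is called: the further apart definition and use are, the more a reader may have to hold in short term memory while tracing the code.

```python
import ast

def spatial_distances(source):
    """Line distance between each call and the local definition it refers to."""
    tree = ast.parse(source)
    definitions = {node.name: node.lineno
                   for node in ast.walk(tree)
                   if isinstance(node, ast.FunctionDef)}
    return [abs(node.lineno - definitions[node.func.id])
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in definitions]

example = """
def tax(amount):
    return amount * 0.2

def total(amount):
    return amount + tax(amount)
"""

print(spatial_distances(example))  # [4]: tax is defined on line 2 and used on line 6
```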

The work-in-progress paper describes a number of experiments that aim to explore whether there might be a correlation between different software complexity metrics and empirical measurements of cognitive load taken from an eye-tracking program comprehension study.  The research question is: are program complexity metrics psychologically valid?

Of course, writing up a research idea is a lot easier than carrying it out!  This said, I do hope to share some of the research materials that may be used within the studies through this blog when they are available.

Christopher Douce

TLAD (Teaching, Learning and Assessment of Databases)

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 15 Nov 2011, 11:36


I always enjoy visiting Manchester.  Having spent many years there (both as an undergraduate and a postgraduate) visiting Manchester almost feels as if I'm coming home.  Plus, travelling directly to the computer science building (as a computer scientist) feels as if I'm returning to a 'spiritual home'!

The reason for my most recent visit was to attend the 9th TLAD workshop, which was all about the teaching, learning and assessment of databases.  Below is a summary of the event.  I hope it is useful for someone.

Paper Session 1 - Data Mining

The first paper that was presented during the day was entitled 'Teaching Oracle Data Miner using Virtual Machines', by Qicheng Yu and Preeti Patel, both from London Metropolitan University, and expertly presented by Preeti.

Preeti made the point that employers are demanding up to date practical and technical skills.  Employability was, of course, discussed at an earlier HEA event entitled Enhancing the Employability of Computing Students.  To address the important issue of technical skills, educators are necessarily faced with the challenge of how to enable learners to make use of industrial tools and products.  One solution (in the area of database and data mining education) is to make use of virtual machine technology, such as Microsoft Virtual PC, Oracle VirtualBox and VMware Player.

Some of the challenges that have to be addressed are technical, i.e. how to share drives through a host computer; others are financial or legal, such as how to make sure that any solutions are correctly licensed.  Preeti pointed us to a product known as WEKA, which I had never heard of, but it seemed that many of the audience had!

The second paper of the morning was by Hongbo Du from the University of Buckingham.  His paper, entitled 'Data mining project: a critical element in teaching learning and assessment of a data mining module', won the 'best paper' prize of the workshop.  Hongbo mentioned what seemed to be an important standard in the area, namely CRISP-DM (wikipedia), which enables students to gain an understanding of the data mining process.  Hongbo also presented a general framework for assessment (within his paper) which drew upon this methodology before presenting three different case studies.

The key points that I took away from his presentation were: group work is important (but very often students don't want to do this since they might want to own their own scores completely), and the domain of application is very important too, representing a substantial area of complexity that students necessarily need to grapple with.

The final presentation of the first session, entitled, 'Making data warehousing accessible' was by Tony Valsamidis from the University of Greenwich.  Tony began by presenting some of the problems, such as the availability of large data sets (an issue that I shall return to later), unwieldy query languages and unfamiliar domain and data models. 

Tony spoke of a number of different technologies and techniques, some I had used in anger (such as Visual Basic for Applications), others I had heard of but had forgotten the meaning of (such as OLAP).  An important issue that was raised was that of data sanitization: when you move data between different systems, things might not be directly compatible, so database practitioners might have to design some magic data transformations.

Paper Session 2 - Database and the Cloud

The fourth presentation was rather different.  Mark Dorling from Langley Grammar School described his teaching practice and association with a project called Digital School House.

Mark trained as a primary school teacher but he is now working within the secondary sector.  He described how he helps students to understand the key concepts of data, information and logical operators, sometimes applying kinaesthetic learning techniques (i.e. movement).  Mark also described the use of something called independent learning videos (allowing students to remember how to use elements of applications).  This reminded me of the industrial term 'on-demand e-learning', where learners can call up screen casts or bite-sized presentations about how to carry out or complete different tasks.

Mark also showed how secondary school pupils could make use of cloud applications, such as Google Spreadsheet, to enable an entire class to enter data (and for the data to magically appear on an interactive whiteboard).  Mark's presentation (and paper) got me thinking about how I might potentially adopt some of the pedagogic techniques that he described into my own teaching practice.

Mark's description of how he makes use of Google Spreadsheet led us directly to Clare Stanier's presentation, which was entitled 'Teaching Database for the Cloud'.  One of the things that I really liked about Clare's presentation was that she addressed a question I was already mulling over, namely, 'how is it possible to define cloud databases?'  She gave an answer that related to some NIST definitions (pdf).

A cloud database can be considered in terms of Infrastructure as a Service, Platform as a Service and Software as a Service (such as Google Spreadsheet).  Depending on their task, developers will interface with different products (and different levels of abstraction).  Clare pointed us to interesting sounding products such as Microsoft Azure and Oracle on Demand, neither of which I had ever heard of before. 

This is obviously a rapidly changing field!  If you require any further references to either cloud related papers (or products), Clare might be able to share a useful URL.

Paper Session 3 - Embedding Technology

Jackie Campbell from the University of Leeds began the final session by presenting a paper entitled, 'Inquiry based learning database oriented applications for computing and computing forensic students'.  Two activities or tools were described.  The first was an SQL Quiz application, where students were challenged to compose correct SQL statements.  The second was more of a software maintenance task, where students were asked to investigate and carry out a number of fixes to an existing application developed using something called Oracle Apex.

I personally consider maintenance activities to be really valuable for a number of reasons.  Firstly, maintenance is a substantial on-going challenge.  Database designs are likely to be particularly affected if software applications or businesses merge.  Businesses, of course, continually change and evolve (as must the software systems that they support).  Understanding how (and where) to change or correct database queries may require students to navigate their way through unfamiliar systems.  This, in itself, is likely to be an intellectually challenging task.

The presentation by Craig McCreath (and supported by his supervisor Petra Leimich) reminded me of an earlier presentation about the JISC WILD project that was held at the HEA mobile event a number of weeks ago.  The underlying ideas were very similar: using mobile devices to elicit responses from students to (attempt) to assess their understanding of materials.  Craig did a fabulous job at presenting, and I had a sense that the application he had developed for his final year project was easy to use.

The final presentation was 'Using Video to Provide Richer Feedback to Database Assessment' by Howard Gould, Leeds Metropolitan University.  Howard addressed some important questions: what kind of feedback would most effectively benefit our students?  How do we make the best use of our time to provide this feedback?  And how might we practically provide video feedback?

Howard's paper was, in essence, a practice paper; I think this type of paper can be really valuable.  One of the challenges that lecturers face is to offer effective and useful feedback on entity relationship diagrams, showing students alternative database designs.  I sense that providing feedback on any kind of non-written notation is something that is intrinsically very difficult (and can include notational systems such as mathematics and music).  Howard solves the problem by recording screen captures after having spent some time initially looking at a student submission.  Practical challenges include file sizes (the videos themselves can be very big), and on-screen flicker (since an inexpensive digital camera is used as opposed to an expensive licence for Camtasia).

Discussion

After all the paper sessions a discussion was initiated by Alistair Monger (Southampton Solent), David Nelson (University of Sunderland) and Charles Boisvert (Sheffield Hallam).

Alistair pointed us towards a number of useful assessment resources, beginning by mentioning that the QAA code of practice requires formative assessment.  He also mentioned REAP, an abbreviation for Re-engineering Assessment Practices in Higher Education, followed by TESTA, Transforming the Experience of Students through Assessment (JISC).  Alistair mentioned the importance of having an audit trail of feedback, and mentioned a system called GradeMark.

David raised the perennial issue of collusion and plagiarism and made the point that assessment should always be at such a level that it is difficult for students to quickly 'find answers'.  One solution might be to have time constrained assessments, perhaps in a computer lab (something that I remember from my days as a computing undergraduate), and the production of a portfolio which shows evidence of understanding.

Charles pointed us towards the idea of Nifty assessments.  He mentioned that markers mark in different ways and that there will always be variability.  Another point was how to help those students who always seem to struggle with the subject.

The ensuing discussion took us into issues such as the importance of good feedback (and explaining the importance of why certain subjects are assessed), differences between both individual students and cohorts, and a reference to something called the Database Commons Initiative.

Summary

Although I'm not directly involved with the teaching of databases and database technologies, I found this to be a very interesting event for a number of different reasons.  The first is the variety of database-related topics that are now taught; things are rather different from my undergraduate days, when database education began (and ended) with SQL syntax and learning about different types of joins.  The domain is now a lot richer than it ever was.

There are now different levels of 'cloud' databases, small embedded databases, huge data warehouses, object-oriented databases and XML databases.  Other themes that might perhaps have been discussed are the connections between database teaching and software design, alerting students to issues such as designing database abstraction layers and stored procedures (but perhaps these issues have been explored in earlier workshops).

The second reason also relates to this issue of richness.  Software engineers and system designers are now faced with a myriad of different choices in terms of architectures (how to set up large systems) as well as products, both commercial and open source.  Understanding what different products do and how they might support business objectives are issues that software professionals always need to bear in mind.  These 'industrially connected' points relate to the issue of certification.  The teaching and learning of databases is, perhaps, now an endeavour that is shared between academia and industry.  Perhaps the role of higher education teaching and learning in this area is to provide a useful context to help students to get to grips with the more practical dimension of professional certifications.

Another thought that came to mind was that there was a degree of useful crossover with other HEA events, particularly the joint employability and computing forensics event that I attended.  During the forensics part of that day I remember a fair amount of discussion about the sharing of 'data sets' which could be used to enable students to hone their computing forensics skills.  It struck me that database technology educators are faced with a similar challenge.

A final comment: I personally consider the subject of databases to be a pretty fundamental area of computing education.  I have to mention, of course, the Open University's own database course, M359 Relational databases: theory and practice.

Database education is an area that is necessarily rich: students will be exposed to different languages, problem domains and wider system architectures and designs as soon as they set to work on 'real world' applications.  Databases represent a domain of software technology that ultimately helps 'to get jobs done'.  Where would we be without them?

Christopher Douce

Scholarship for Staff Tutors

Visible to anyone in the world
Edited by Christopher Douce, Thursday, 6 Mar 2014, 16:37

I haven't really blogged about 'internal events' before.  I think this is my first one.  Although I've written this post mainly for an internal audience, it might be useful for a wider audience too, although I'm not yet sure whether I'll click on the 'make available to the world' box in our VLE blogging tool.

About a week or so ago I was lucky enough to attend what is called a Staff Tutor Staff Development event that was all about scholarship, and how we (as staff tutors) might fit it into our day job.

The SD4ST scholarship event was hosted by the Open University office in Gateshead, a part of the country I had never explicitly visited before (other than passing through either in the car or on the train).  The Gateshead office is fabulous (as is the architecture in Newcastle).  The office presented us with a glorious view of the millennium bridge and the imposing Baltic contemporary art gallery.  I'm digressing before I've even begun, so, without further ado, on to describing the event.

Introducing scholarship

The first day kicked off (in the early afternoon) by asking the question of, 'what exactly counts as scholarship?'  An underlying theme was how to contribute to research that might be used as a part of the university's REF submission (which is, of course, used to assess how well universities compare to each other in terms of their research output).

A number of different types of scholarship were defined, drawing on a paper that had recently been circulated (or published) through senate.  The paper also included explicit examples, but I won't go through them here.  Here's my attempt at summarising the different types:

  • Institutional scholarship (about and for the institution)
  • Scholarship of teaching (the investigation of one's own, or teaching by others)
  • Scholarship for teaching, or outputs that contribute to teaching materials in their different forms
  • Research that relates to and can inform professional practice (in whatever form this might take)
  • Discipline based scholarship, investigative research which can be understood in terms of adding to the knowledge of a particular subject or area.

The output of scholarship may be presented within journal or conference papers, chapters in books, or reports.  Blogs can be considered as scholarship too, but this is a rather difficult category, since it has to be a 'rated blog'.  In essence, an output should be something that can be judged as excellent by a peer, capable of use by others, and have impact.

I thought about which of these categories I could most readily contribute to.  I came up with a couple of answers.  The first was that I might be able to carry out some discipline based scholarship, perhaps building on some of the accessibility or computing research I have previously been involved with.  Another idea might be to do some research that could inform the different course teams I'm involved with.  An example of this might have been an earlier blog post on mobile technologies that fed into course team discussions.  Also, given my current duties of supporting course presentations, I could see how I might be able to (potentially) contribute to the scholarship of teaching in a number of different ways.

How to find the time

Although I'm relatively new to the role of a staff tutor (or regional academic), I am beginning to feel that we have to be not only good at juggling different things, but also be able to put on a good balancing act too! 

The reason for this is that our time is split down the middle.  On one hand we have regional responsibilities (helping our tutors to do their job as effectively and as efficiently as possible, and doing a lot of other mysterious stuff, like marketing), which account for fifty percent of our time.  The other fifty percent of our time is spent on 'faculty' work.  This means that we are able to contribute to course teams, offering useful academic input and ensuring that our associate lecturers are fully taken into consideration during the course design phases.  We can also use this fifty percent slice to carry out scholarship in its different forms.

Given the different pulls from course teams and regional responsibility there is a significant question which needs to be asked, namely: 'how is it possible to do scholarship when we've got all this other stuff to do?'  The second section in the day aimed to answer this exact question through presentations by two staff tutors who seem to be successfully balancing all their different responsibilities.

The first presentation was by Dave McGarvie, Science Staff Tutor in Scotland.  Dave gave a cracking presentation about his research, which was all about volcanoes (I'm sure he will be able to provide you with a better description!)  What struck me about Dave's presentation was that he also came across as a bit of a 'dab hand' at media stuff, being called upon as an 'expert' to talk about Icelandic volcanic eruptions.  Dave talked about how he used his study leave (he uses all of it), and said that it is possible to ask for research leave too (which was something that I hadn't heard about).

The second presentation was by Gareth Williams, Maths and Stats Staff Tutor (or MCT), in Manchester.  Gareth told us about how he managed to carve out (and protect) a 'research day' which he used to speak to (and work with) other academics in his subject area.

I noted down a really important part of Gareth's presentation which summarised the reasons for doing research: that it is something that we're passionate about, that it's fun, it can help us to maintain knowledge, it can be exciting, it can help with networking (and recruitment of good ALs), and help to introduce and advertise the work of the university to a wider audience.

One fundamental point was echoed by both presenters: research takes a lot of time, and can (and probably will) eat into our personal time.  Gareth offered some practical advice, urging us to be realistic, develop multiple strategies (or routes to publication), prioritise workload carefully and, importantly, to have fun.

The final talk of the day was by Ian Cook, who spoke about the university's eSTeEM initiative, which replaces earlier funding mechanisms.  eSTeEM lets individuals or groups of researchers bid for funding for projects that may be able to benefit the university or help to further understand and promote teaching and learning.

Designing a scholarship project

The next part of the day (and a part of the following day) was spent 'in the deep end'.  We were put into groups and asked to work towards creating a cross-faculty scholarship project which could help to improve our collective understanding of Open University teaching and learning (perhaps through the use of technology).  Following the group discussions, we then had to devise an eight-minute presentation to everyone in the room to try to 'win' a pot of imaginary funding.  Here's a rough list of the titles of the various projects that were proposed:

  • Group 1: Can on-line forums enhance students' learning?
  • Group 2: What constitutes useful monitoring for associate lecturers?
  • Group 3: Investigate if text messaging can improve TMA (assignment) submission and retention
  • Group 4: Why do students attend (or not attend) synchronous on-line tuition?
  • Group 5: A system for the sharing of media resources between tutors

I have to confess I was involved in group five.

This activity reminded me that different people can understand research (and, subsequently, scholarship) in different ways.  In my 'home discipline' of computer science, research can be considered in terms of 'building stuff'.  This 'stuff' might be a new software system, tool or environment.  The 'stuff' might even be a demonstration of how different technologies may be connected together in new or novel ways.  I also must confess that my discipline background emerged through our brainstorming activities.

In the end, there were two winners, and it was interesting to learn that one of the winning project ideas (the use of text messaging) was the subject of an existing project.  It just goes to show the old adage that good ideas can emerge independently from different sources!

I enjoyed this activity.  I remember a lot of discussion about dissemination and how to evaluate whether a project had succeeded.  Referring back to the earlier notions of scholarship and Gareth's multiple routes to publication, dissemination can, of course, have a range of different forms, from internal presentations, workshops, focus groups, through to formal internal reports and REFable publications, such as conference and journal papers.

Final presentations

The event was rounded off by two presentations.  Celia Popovic gave a presentation about SEDA, the 'Staff and Educational Development Association', a non-profit organisation which aims to facilitate networking and the sharing of resources.  Celia began by asking the question, 'what do you need [to enable you to do your scholarship and research stuff]?' and talked us through a set of different resources and the benefits of being a SEDA fellow.  The resources included books, a magazine, and a number of scholarly journals.

The final presentation, entitled 'Getting Started, Overcoming Obstacles', was by Karen Littleton.  Karen is currently the director of CREET, a cross-faculty research grouping which comprises Education and the Institute of Educational Technology (and some others too, I am sure!)

A couple of things jumped out at me, namely her advice to 'be pragmatic'.  I am personally guilty of 'thinking big' in terms of research ideas.  I once had an idea to perform some kind of comparison of different virtual learning environments, but it is something that I have never managed to get around to doing, perhaps because my heart sinks when I see all the obstacles that lie ahead of me.

Karen advises us to consider working on a series of smaller projects which have the potential to contribute towards a main goal.  She also mentions the important issue of time and the need to ring fence and guard it carefully, a point that was echoed throughout the two days of the event.

Summary

I'm only just starting to appreciate the different demands on my work time.  I have been wondering, for quite a while now, how to do 'research' within my role as a staff tutor.  What this event told me was that it is possible, but you need to do it with a high level of determination to succeed.

It strikes me that the best way to do research is to make sure that your research activities are aligned, as closely as possible, to some of the other duties of the role.  Of course, it might be possible to do other research, but if your 'job role dots' are not connected together, seeking permission and making cases to go ahead and do your own scholarship is likely to be so much harder.

A feeling that I have always had is that through research there are likely to be opportunities.  An example of this can be finding stuff out that can inform course production, or, connecting to Gareth's example, making contacts that may help with the recruitment of associate lecturers.  I've also come to the conclusion that networking is important too.  Networking might be in the form of internal contacts within the university, or external contacts within either other higher education institutions or in industry.

A really important point that jumped out at me is that you really do need to be passionate about the stuff that you're finding out about.  The word 'fun' was mentioned a number of times too.

As a result of the event I've been thinking about my own scholarly aspirations.  Before changing roles I had some quite firm ideas about what I wanted to do, but this has changed.  As mentioned before, I think it's a good idea to try to align different pieces of my role together (to align the fifty percent of regional work with the fifty percent of 'other stuff').  I hope I'm making some progress in figuring out how to make the best contribution to both courses and research.  I hope to continue to blog about some of the stuff that I'm getting up to whilst on this journey.

I'm also hoping there is a follow up session next year which might ask the question of, 'how is your scholarship coming along, and what practical things could be done to help you do more of it?'

All in all, a really enjoyable event.  Many thanks to the organisers!  For those who can access internal OU sites (and might be staff tutors), some of the presentations have been uploaded to the VLE STLG workspace.

Christopher Douce

eTeaching and Learning workshop

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 1 Feb 2015, 13:37

I attended a HEA eTeaching and Learning workshop at the University of Greenwich on 1st June 2011.  It is always a pleasure visiting the Greenwich University campus; it is probably (in my humble opinion) the most dramatic of all university campuses in London - certainly the only one that is situated within a World Heritage site.

My challenge was to find the King William building (if I remember correctly), which turned out to be a Wren designed neo-classical building that sat adjacent to one of the main roads.  Looking towards the river, all visitors were treated to a spectacular view of the Canary Wharf district.  Visitors were also treated to notes emanating from a nearby music school. 

I first went to the eTeaching and Learning workshop back in 2008, where I presented some preliminary work about an accessibility project I was working on.  This time I was attending as an interested observer.  It was a packed day, comprising two keynotes and eight presentations.

Opening Keynote

The opening keynote was given by Deryn Graham (University of Greenwich).  Deryn's main focus was the evaluation of e-delivery (e-delivery was a term that I had not heard of before, so I listened very intently).  The context for her presentation was a postgraduate course on academic practice (which reminded me of a two-year Open University course that seems to have a similar objective).  Some of the students took the course through a blended learning approach, whereas others studied entirely from a distance.

The most significant question that sprung to my mind was: how should one conduct such an evaluation?  What should we measure, and what might constitute success (or difference)?  Deryn mentioned a number of useful points, such as Salmon's e-moderating model (and the difficulty that the first stages may present to learners), and also considered wider economic and political factors.  Deryn presented her own framework which could be used to consider the effectiveness of e-delivery (or e-learning).

This first presentation inspired a range of different questions from the participants and made me wonder how Laurillard's conversational framework (see earlier blog post) might be applied to the same challenge of evaluation.  By way of a keynote, Deryn's presentation certainly hit the spot.

General Issues

The first main presentation was by Simon Walker, from the University of Greenwich.  The title of his paper was, 'impact of metacognitive awareness on learning in technology enhanced learning environments'.

I really liked the idea of metacognition (wikipedia) and I can directly relate it back to some computer programming research I used to study.  I can remember asking myself different questions whilst writing computer software, from 'I need to find information about these particular aspects...' through to, 'hmm... this isn't working at all, I need to do something totally different for a while'.  The research within cognitive psychology is pretty rich, and it was great to hear that Simon was aware of the work by Flavell, who defines metacognition as, simply, 'knowledge and cognition about cognitive phenomena'.

Simon spoke about some research that he and his colleagues carried out using LAMS (learning activity management system), which is a well known learning design tool and accompanying runtime environment.  An exploratory experiment was described: one group was given 'computer selected' tools to use (through LAMS), whereas the other group was permitted a free choice.  Following the presentation of the experiment, the notion of learning styles (and whether or not they exist, and how they might relate to tool choice - such as blogs, wikis or forums) was discussed in some detail.

Andrew Pyper from the University of Hertfordshire gave a rather different presentation.  Andrew teaches human-computer interaction, and briefly showed us a software tool that could be used to support the activity of computer interface evaluation through the application of heuristic evaluations.

The bit of Andrew's talk that jumped out at me was the idea that instruction of one cohort might help to create materials that are used by another.  I seem to have made a note that student-generated learning materials might be understood in terms of the teaching intent (or the subject), the context (or situation) in which the materials are generated, their completeness (which might relate to how useful the materials are), and their durability (whether or not they age over time).

The final talk of the general section returned to the issue of evaluation (and connects to other issues of design and delivery).  Peiyuan Pan, from London Metropolitan University, drew extensively on the work of others, notably Kolb, Bloom, and Fry (who wrote a book entitled 'A Handbook for Teaching and Learning in Higher Education' - one that I am certainly going to look up).  I remember a quote (or a note) that is (roughly) along the lines of, '[the] environment determines what activities and interactions take place', which seems to have echoes of the conversational framework that I mentioned earlier.

Peiyuan describes a systematic process to course and module planning.  His presentation is available on line and can be found by visiting his presentation website.  There was certainly lots of food for thought here.  Papers that consider either theory or process always have a potential to impact practice.

Technical Issues

The second main section comprised three papers.  The first was by Mike Brayshaw and Neil Gordon from the University of Hull, who were presenting a paper entitled, 'in place of virtual strife - issues in teaching using collaborative technologies'.  We all know that on-line forums are spaces where confusion can reign and emotions can heighten.  There are also perpetual challenges, such as non-participation within on-line activities.  To counter confusion it is necessary to have audit trails and supporting evidence.

During this presentation a couple of different technologies were mentioned (and demoed).  It was really interesting to see an application of Microsoft SharePoint.  I had heard that it can be used in an educational context, but this was the first time I had witnessed a demonstration of a system that could permit groups of users to access different shared areas.  It was also interesting to hear that a system called WebPA was being used in Hull.  WebPA is a peer assessment system which originates from Loughborough University.

I had first heard about WebPA at an ALT conference a couple of years ago.  I consider peer assessment as a particularly useful approach since not only might it help to facilitate metacognition (linking back to the earlier presentation), but it may also help to develop professional practice.  Peer assessment is something that happens regularly (and rigorously) within software engineering communities.

The second paper entitled 'Increased question sharing between e-Learning systems' was presented by Bernadette-Marie Byrne on behalf of her student Ralph Attard.  I really liked this presentation since it took me back to my days as a software developer where I was first exposed to the world of IMS e-learning specifications.

Many VLE systems have tools that enable them to deliver multiple choice questions to students (and there are even projects that try to accept free text).  If institutions have a VLE that doesn't offer this functionality there are a number of commercial organisations that are more than willing to offer tools that will plug this gap.  One of the most successful organisations in this field is QuestionMark.

The problem is simple: a set of multiple choice questions written for one system cannot easily be transferred to another.  The solution is rather more difficult: each system defines a question (and question type) and correct answer (or answers) in slightly different ways.  Developers for one tool may use horizontal sliders to choose numbers (whereas others might not support this type of question).  Other tools might enable question designers to code extensive feedback for use in formative tests (I'm going beyond what was covered in the presentation, but you get my point!)

Ralph's project was to take QuestionMark questions (in their own flavour of XML) at one end and output IMS QTI at the other.  The demo looked great, but due to the nature of the problem, not all question types could be converted. Bernadette pointed us to another project that predates Ralph's work, namely the JISC MCQFM (multiple-choice questions, five methods) project, which uses a somewhat different technical approach to solve a similar problem.  Whereas MCQFM is a web-service that uses the nightmare of XSLT (wikipedia) transforms, I believe that Ralph's software parses whole documents into an intermediate structure from where new XML structures can be created.
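To make the 'parse into an intermediate structure, then emit new XML' approach concrete, here is a small Python sketch of my own.  Every element and attribute name below is invented for illustration: this is neither the real QuestionMark format nor IMS QTI, and it is certainly not Ralph's actual code; it only shows the general two-step shape of such a converter.

```python
import xml.etree.ElementTree as ET

# A made-up source format standing in for 'one vendor's flavour of XML'.
SOURCE = """
<quiz>
  <mcq id="q1" title="Normalisation">
    <stem>Which normal form removes partial dependencies?</stem>
    <option correct="false">1NF</option>
    <option correct="true">2NF</option>
    <option correct="false">3NF</option>
  </mcq>
</quiz>
"""

def parse_to_intermediate(xml_text):
    """Parse the source format into plain Python dictionaries (the intermediate structure)."""
    questions = []
    for mcq in ET.fromstring(xml_text).findall("mcq"):
        questions.append({
            "id": mcq.get("id"),
            "title": mcq.get("title"),
            "stem": mcq.findtext("stem"),
            "options": [(opt.text, opt.get("correct") == "true")
                        for opt in mcq.findall("option")],
        })
    return questions

def emit_target(questions):
    """Emit a second, equally made-up target format from the intermediate structure."""
    root = ET.Element("assessment")
    for q in questions:
        item = ET.SubElement(root, "item", identifier=q["id"], title=q["title"])
        ET.SubElement(item, "prompt").text = q["stem"]
        for text, correct in q["options"]:
            choice = ET.SubElement(item, "choice", correct=str(correct).lower())
            choice.text = text
    return ET.tostring(root, encoding="unicode")

print(emit_target(parse_to_intermediate(SOURCE)))
```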

As a developer (some years ago now), one of the issues that I came up against was that different organisations used different IMS specifications in different ways.  I'm sure things have improved a lot now, but whilst standardisation has likely to have facilitated the development of new products, real interoperability was always a problem (in the world of computerised multiple-choice questions).

The final 'technical' presentation was by John Hamer, from the University of Glasgow.  John returned to the notion of peer assessment by presenting a system called Aropa and discussing 'educational philosophy and case studies' (more information about this tool can be found by visiting the project page).  Aropa is designed to support peer review in large classes.  Two case studies were briefly described: one about professional skills, and the other about web development.

One thing is certain: writing a review (or conducting an assessment of student work) is  most certainly a cognitively demanding task.  It both necessitates and encourages a deep level of reflection.  I noted down a number of concerns about peer assessment that were mentioned: fairness, consistency, competence (of assessors), bias, imbalance and practical concerns such as time.  A further challenge in the future might be to characterise which learning designs (or activities) might make best use of peer assessment.

Pedagogical Issues

The subjects of collusion and plagiarism are familiar tropes to most higher education lecturers.  A paper by Ken Fisher and Dafna Hardbattle (both from London Metropolitan University) asks whether students might benefit if they work through a learning object which explains to learners what is and what is not collusion.  The presentation began with a description of a questionnaire study that attempted to uncover what academics understand collusion to be.

Ken's presentation inspired a lot of debate.  One of the challenges that we must face is the difference between assessment and learning.  Learning can occur through collaboration with others.  In some cases it should be encouraged, whereas in other situations it should not be condoned.  Students and lecturers alike have a tricky path to negotiate.

Some technical bits and pieces: the learning object was created using a tool called Glomaker (generative learning object maker), which I had never heard of before.  This tool reminded me of Xerte, which hails from the University of Nottingham.  On the subject of code plagiarism, there is also a very interesting project called JPlag (demo report, found on HEA plagiarism pages).  The JPlag on-line service now supports more languages than its original Java.

The final paper presentation of the day was by Ed de Quincy and Avril Hocking, both from the University of Greenwich.  Their paper explored how students might make use of the social bookmarking tool Delicious.  Here's a really short summary of Delicious: it allows you to record your web favourites to the web using a set of keywords that you choose, enabling you to easily find them again if you use different computers (it also allows you to share stuff with users who have similar interests).

One way it can be used in higher education is in conjunction with course codes (which are often unique, or can be, if a code is combined with another tag).  After introducing the tool to users, the researchers were interested in finding out about common patterns of use, which tags were used, and whether learners found it a useful tool.

I have to say that I found this presentation especially interesting since I've used Delicious when tutoring on a course entitled accessible online learning: supporting disabled students which has a course code of H810, which has been used as a Delicious tag.  Clicking on the previous link brings up some resources that relate to some of the subjects that feature within the course.

I agree with Ed's point that a crowdsourced set of links makes a really good learning resource.  His research indicates that 70% of students viewed resources tagged by other students.  More statistics are contained within his paper.

My own confession is that I am an infrequent user of Delicious, mainly due to being forced down one browser route as opposed to another at various times, but when I have used it, I've found browser plug-ins to be really useful.  My only concern about using Delicious tags is that the validity of links can age very quickly, and it's up to a student to determine the quality of the resource that is linked to (but a metric saying 'n people have also tagged this page' is likely to be a useful indicator).

Closing Keynote

Malcolm Ryan from the University of Greenwich School of Education presented the final keynote entitled, 'Listening and responding to learners' experiences of technology enhanced learning'.  Malcolm asked a number of searching questions, including, 'do you believe that technology enhances or transforms practice?' and 'do you know what their experience is?'  Malcolm went on to mention something called the SEEL Project (student experience of e-learning laboratory) that was funded by the HEA.

The mention of this project (which I had not heard of before) reminded me of something called the LEX report (which Malcolm later went on to mention).  LEX is an abbreviation of: learner experience of e-learning.  Two other research projects were mentioned.  One was the JISC great expectations report, another was a HEFCE funded Student Perspectives on Technology report.  I have made a note of the finding that perhaps students may not want everything to be electronic (and there may be split views about mobile).  A final project that was mentioned was the SLIDA project which describes how UK FE and HE institutions are supporting effective learners in a digital age.

Towards the end of Malcolm's presentation I remember a number of key terms, and how these relate to individual projects.  Firstly, there is hearing, which relates to how technology should be used (the LEX report).  Listening relates to SEEL.  Responding connects to the great expectations report, and finally engaging, which relates to a QAA report entitled 'Rethinking the values of higher education - students as change agents?' (pdf).

Malcolm's presentation has directly pointed me towards a number of reports that perhaps I need to spend a bit of time studying whilst at the same time emphasising just how much research has already been done by different institutions.

Workshop Themes

At the end of these event blogs I always try to write something about what I think the different themes are (of course, my themes are likely to be different to those of other delegates!)

The first one that jumped out at me was the theme of theory and models, namely different approaches and ways to understand the e-learning landscape.

The second one was the familiar area of user generated content.  This theme featured within this workshop through creation of bookmarks and course materials.

Peer assessment was also an important theme (perhaps one of increasing importance?)  There is, however, a strong tension between peer assessment and plagiarism, but particularly the notion of collusion (and how to avoid it).

Keeping (loosely) with the subject of assessment, the final theme has to be evaluation, i.e. how can we best determine whether what we have designed (or the service that we are providing) is useful for our learners.

Conclusion

As mentioned earlier, this is the second e-learning workshop I have been to.  I enjoyed it!  It was great to hear so many presentations.  In my own eyes, e-learning is now firmly established.  I've heard it said that the pedagogy has still got to catch up with the technology (how to do the best with all of the things that are possible).

Meetings such as these enable practitioners to more directly understand the challenges that different people and institutions face.  Many thanks to Deryn Graham from the University of Greenwich and Karen Frazer from HEA.

Christopher Douce

Using and teaching mobile technologies for ICT and computer science

Visible to anyone in the world
Edited by Christopher Douce, Monday, 3 Mar 2014, 18:48

I attended an event entitled Mobile Technologies - The Challenge of Learner Devices Delivering Computer Science, held at Birmingham City University last week and organised by the Information and Computer Sciences (ICS) Higher Education Academy (HEA) subject centre.

This blog post aims to present a summary of proceedings as well as my own reflections on the day. If any of the delegates or presenters read this (and have any comments), then please feel free to post a reply to add to or correct anything that I've written. I hope these notes might be useful to someone.

Keynote

The day was kicked off by John Traxler from the University of Wolverhampton. Just as any good keynote should, John asked a number of searching questions. The ones that jumped out at me were whether information technology (or computers) had accelerated the industrialisation of education, and whether mobile technologies may contribute to this.

John wondered about the changing nature of technology ownership. On one hand universities maintain rooms filled with computers that students can use, but on the other hand students increasingly have their own devices, such as laptops or mobile phones. 

John also pointed us towards an article in the Guardian, published in July 2010 about teenagers and technology which has a rather challenging subtitle. Mobility and connectedness, it is argued, has now become a part of our identity.

One thing John said jumped out at me: 'requiring students to use a VLE is like asking them to wear a school uniform'. This analogy points towards a lot of issues that can be unpacked. Certainly, a VLE has the potential to present institutional branding, and a uniform suggests that things might be done in a particular way. But a VLE also has the potential to be an invaluable source of information, ensuring that we know what we need to know to navigate around an institution.

For those of us who had to wear school uniforms, very many of us customised them as much as we possibly could without getting told off for breaking the rules. Within their constraints, it would be possible to express individuality whilst conforming (to get an education). The notion of customisation and services also has a connection with the idea of a Personal Learning Environment (PLE) (wikipedia), which, in reality, might exist somewhere in between the world of the mobile, a personal laptop and the services that an institution provides.

Session I

The first session was opened by Kathy Maitland from Birmingham City University. Kathy talked about how she used cloud computing to enable students using different hardware to access different software services. She spoke about the challenge of using different hardware (and operating system) platforms to access services and the technical challenges of ensuring correct configuration.

John Busch from Queen's University Belfast made a presentation about how to record lectures using a mobile phone. It was great to see a (relatively) low tech approach being used to make educational materials available for students. All John needed to share his lectures with a wider audience was a mid-range mobile phone, a tiny tripod, a desk to perch the mobile phone on, and (presumably) a lot of hard won experience.

John gave the audience a lot of tips about how to make the best use of technology, along with a result from a survey where he asked students how they made use of the recordings he made of his computer gaming lectures. 

A part of his talk was necessarily technical, where he spoke about different data encoding standards and which standard was supported by which mobile (or desktop) platform. One of the members of the audience pointed us to Encoding.com, a website that enables transcoding of digital media. The presentation gave way to interesting discussions about privacy. One of the things that I really liked about John's presentation was that it addressed 'mobile' from different perspectives at the same time: using mobile technology to produce content that may, in turn, be consumed by other mobile devices.

Laura Crane, from Lancaster University, then gave an interesting presentation about using location, context and preference in VLE information delivery. Laura's main research question appeared to be, 'which is (potentially) more useful - information that is presented at a particular location, or information that is presented at a particular time?'

This reminded me of some research that I had heard of a couple of years ago called context modelling. Laura mentioned a subject or area that was new to me, namely, Situation Theory.  Laura's talk was very well received and it inspired a lot of debate. Topics discussed included the nature of mobility research and the importance of personal or learner attributes on learning (such as learning styles). Discussions edged towards the very active area of recommender research (recommender system, Wikipedia), and out to wider questions of combining location, recommender and affective interfaces (interfaces or systems that could give recommendations or make suggestions depending on emotion). A great talk!

Darren Mundy and Keith Dykes gave a presentation about the WILD Project funded by JISC. WILD is an abbreviation for Wireless Interactive Lecture Demonstrator. The idea behind the project is one that is simple and compelling: how to make use of personal technology to enable students to make a contribution to lectures. By contribution, I mean allowing students to add comments and text to a shared PowerPoint presentation.

A lecturer prepares a PowerPoint presentation and, providing there is appropriate internet connectivity, includes a link to a WILD webpage, to which the students can send messages. This might be used to facilitate debate about a particular subject, but it also enables those learners who are more reluctant to contribute to 'speak up' by 'texting out'. We were also directed towards the project source code.
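Out of curiosity (and purely as a toy sketch of my own, not the WILD project's actual implementation), the general shape of such a system can be captured in a few lines: a tiny web service that accepts text contributions from students' devices and hands back the collected list, which a presentation tool could then poll and display.  The port number and the JSON format are arbitrary choices for the example.

```python
import json
from http.server import HTTPServer, BaseHTTPRequestHandler

messages = []  # in-memory store of contributions sent from students' devices

class ContributionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A student device posts a short piece of text, which we simply store.
        length = int(self.headers.get("Content-Length", 0))
        messages.append(self.rfile.read(length).decode("utf-8"))
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # A presentation tool polls this endpoint and displays whatever has arrived.
        payload = json.dumps(messages).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("", 8000), ContributionHandler).serve_forever()
```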

During the talk, I was introduced to a word that I had never heard of before: prosumerism (but apparently Wikipedia had!). At the end of the talk, during the Q&A session, one delegate pointed us towards the SAP Twitter PowerPoint plug in, which might be able to achieve similar things.

This last presentation of the morning really got me thinking about my own educational practice, and perhaps this is one of the really powerful aspects of using and working with learning technology: it can have the potential to encourage reflection about what is and what is not possible, both inside and outside the classroom. I tutor on an undergraduate interaction design course with the Open University, where I facilitate a number of face to face sessions.

For various reasons my tutorials are not as well attended as they could be. Students may have difficulty travelling to a tutorial session, they may have family responsibilities, or even have jobs at the weekend. This is a shame, since I sense that some students would really benefit from these face to face sessions. The WILD presentation made me wonder whether those students who attend a face to face tutorial might be able to collectively author a summary PowerPoint that could then be shared with the group of students who were unable to attend. Interactivity, of course, has the potential to foster inclusivity and ownership. Simply put, the more a student does within a lecture (or puts into it) the more they may get out of it.

Session II

After lunch, the second session proved to be slightly more technical. The first half was merely a warm up!

The second session kicked off with a demonstration by Doug Belshaw. Doug works for JISCInfoNet. This part of JISC aims to provide information and products known as InfoKits which can be used by senior management to understand and appreciate a range of different education and technology issues. We were directed towards examples, such as effective practice in a digital age, and effective assessment in a digital age. A new kit, entitled JISC mobile and wireless technologies review, is currently in preparation.

Doug asked the audience to share information about any case studies. A number of projects were mentioned, along with a set of links. During the discussion part of the demo we were directed towards m.sunderland.ac.uk, and this made me wonder whether the 'm.' is a convention that I'm not aware of (and perhaps ought to be!) Something called iWebKit was also mentioned. Other projects included MyMobileBristol.com, in collaboration with Bristol University and Bristol City Council. For more information visit the m.bristol.ac.uk site.

There was also a mention of a service provided by Oxford University, m.ox.ac.uk (the project also has an accompanying press release).  This service appears to have been developed in association with something called the Molly Project, which seems to be a mobile application development framework. There was a lot to take in!

Gordon Eccleston from Robert Gordon University in Aberdeen gave a fabulous presentation about his work teaching programming for the iPhone. Having remained steadfastly in the desktop world myself, and admitting to being a laggard on the mobile technology front, I found that Gordon answered many questions that I have always had about how one might potentially begin to write an iPhone application. Gordon introduced us to the iPhone software development kit, which I understand was free to universities. The software used to create apps is called Xcode.  Having predominantly worked within a PC software development environment for more years than I would care to admit, I found a quick poke around the Apple Tools website rather exciting; a whole new world of languages, terms and technologies.

Gordon had a number of views about the future of app development. He thought that XHTML 5, CSS 3 and accompanying technologies would have an increasingly important role to play. On a related note, the cross-platform mobile framework PhoneGap, which makes use of some of these same technologies, was mentioned during the following presentation. (Digging further into the web, there's a Wikipedia page called Multiple phone web based application framework, which might prove to be interesting.)  There was also some debate about which mobile platform might dominate (and whether dominance may depend on how many Apple stores there are within a particular city or country!)

Gordon also briefly talked about some of the student projects he has been involved with. A notable example was an iPhone app for medical students to learn ophthalmology terms and concepts. There were some really good ideas here: creating applications that bring direct benefit to learners, while the students building them learn how mobile applications are developed.

Karsten Lundqvist from the University of Reading offered some technology balance to the day by presenting his work teaching the development of Android applications. Karsten began his presentation by considering the different platforms: iPhone, RIM, and Android, but the choice of platform was ultimately decided by the availability of existing hardware, namely PCs running Windows or Linux. In place of using Xcode, Java with Eclipse was used. I seem to remember that students may have had some experience using C/C++ before attending the classes, but I can't quite remember.

The question and answer session was really interesting. One delegate asked Karsten whether he had heard of something called the Google Android App Inventor, another mobile software development platform. It was also interesting to hear about the different demo apps. Karsten showed us a picture of a phone in a mini-segway cradle, demonstrating the concept of real-time control, there was also a reference to an app that may help people with language difficulties, and Karsten pointed us to his own website where he has been developing a game template by means of a blog tutorial.

Towards the end of Karsten's session, I recall an echo from the earlier HEA employability event which explored computing forensics. One of the ideas coming from this event was that perhaps it might be a good idea for institutions to share forensic data sets. An idea posed within this event was that perhaps institutions might be able to share application ideas or templates, perhaps for different platforms. Some ideas might include fitness utilities, 'finding your way around' apps (very useful: I still remember my days being a confused fresher during my undergrad days!), simple game templates, and flash card apps to help students to learn a number of different concepts.

Plenary

The plenary discussion was quite wide ranging, and is quite difficult to boil down to a couple of paragraphs. My own attempt at making sense of the day was to understand the key topics in terms of 'paired terms', which might be either subject dimensions or tensions (depending on how you look at it).

VLEs and apps: different software with different purposes, which connect to the idea of information and content. Information might be where to go to find a lecture theatre, or the location of a bank, whereas content is a representation of the course materials themselves.

Ownership and provision: invariably students will have their own technology, but to what extent should an organisation provide technology to facilitate learning? Provision has been historically thought of in terms of rooms filled with computers, and necessarily conservative institutional IT provision (to make sure that everything keeps working). Entwined with these issues is the notion of legacy information and the need for institutions (and learners) to keep up with technology.

Development and usage: where does the information or content come from? To what extent might consumers of mobile information potentially participate in the development of their own content? Might this also create potential dangers for institutions and individuals? This is related to another tension of control, namely, institutional versus individual control of information, content or technology.

Guidance and figuring things out: when it comes to learning, there is always a balance to be struck between providing guidance and leaving learners to find the information they need for themselves. On one hand, there may be certain apps that facilitate learning in their own right, apps that provide information, and apps that present content held within a VLE. One idea might be that we need a taxonomy of uses, both for the institution and for the individual.

Industry and academia: a two way relationship. We must provide education (about mobile) that industry needs and make use of innovations coming from industry, but we also have a role to innovate ourselves and potentially feed back into industry. (I seem to recall quite a few delegates mentioning something called mCampus, but I haven't been able to uncover any information about it!)

Other discussion points that were raised included the observation that location-based information provision is new, and that the need to interact with people is one of the things driving the development of technology. A broader question, posed by John Traxler, was, 'does mobile have the potential to transform teaching and learning?' Learners, of course, differ very widely in terms of their experience of and attitude towards interactive products.

Points such as accessibility, whether it be the availability of technology or the ability to perceive information through assistive technologies, are also substantial issues. The wider organisational and political environment is also a significant factor when it comes to the development of mobile applications, and their subsequent consumption.

Footnote

All in all, a very enjoyable day! As I travelled into Birmingham from London on the train on the morning of the event, my eye caught what used to be the site of an old industrial centre. I had no idea what it had been; I could see the foundations of what might have been a big factory or a depot. I was quite surprised to discover that the Millennium Point building also overlooked the same area.

Walking to the train station for my return journey to London, I thought, 'wouldn't it be great if there was an app that could use your location to get articles and pictures about what used to be here before; perhaps there could be a timeline control which users could change to go back in time to see what was there perhaps twenty, thirty or even one hundred years before'. I imagined a personal time machine in the palm of your hand. I then recalled a mash-up between Google Maps and Wikipedia, and had soon uncovered something called Wikimapia.

Like so many of these passing ideas, there's no such thing as an original thought. What really matters is how such technology ideas are realised, and the ultimate benefit they may have for different sets of end users.

Christopher Douce

Enhancing Employability of Computing Students

Visible to anyone in the world
Edited by Christopher Douce, Monday, 3 Mar 2014, 18:49

I was recently able to attend the first Higher Education Academy (HEA) event that explicitly aimed to discuss how universities might enhance the employability of computing students.  The intention of this blog is to present a brief summary of the event (HEA website) and to highlight some of the themes (and issues) that I took away from it.

The day was held at the University of Derby enterprise centre and was organised on behalf of the HEA Information and Computer Sciences subject group.  I had only ever been to one HEA event before, so I wasn't quite sure what to expect.  This said, the title of the workshop (or mini-conference) really interested me, especially after having returned to the higher education sector from industry.

The day was divided into two sets of paper presentations punctuated by two keynote speeches.  The afternoon paper session was separated into two streams: a placements workshop and a computing forensics stream.  Feeling that the placements workshop wasn't really appropriate, I decided to sit in on the computing forensics stream.

Opening Keynote

The opening address was given by Debbie Law, an account management director at Hewlett Packard.  As well as outlining the HP recruitment process (which sounds pretty tough!) Debbie mentioned that, through various acquisitions, there was a gradual movement beyond technology (such as PCs and servers) through to the application of services.  Businesses, it was argued, don't particularly care for IT, but they do care for what IT gives them.

So, what makes an employable graduate?  They should be able to do a lot!  They should be able to learn and to apply knowledge (completing a degree should go some way to demonstrating this).  Candidates should demonstrate their willingness to consider (and understand) customer requirements.  They should also demonstrate problem solving and analytical skills and be able to show a good awareness of the organisations in which they work.  They should be performance driven, show good attention to detail (a necessity if you have ever written a computer program!), be able to lead a team and be committed to continuous improvement and developing personal effectiveness. Phew!

I learnt something during this session (something that perhaps I should have already known about).  I was introduced to something called ITIL (Information Technology Infrastructure Library) (wikipedia).  ITIL was later spoken about in the same sentences as PRINCE (something I had heard about after taking M865, the Open University Project Management course).

First paper session

There were a few changes to the published programme.  The first paper was by McCrae and McKinnon: Preparing students for employment through embedding work-related learning.  It was at this point that the notion of employability was defined as: A set of attributes, skills and knowledge that all labour market participants should possess to ensure they have the capability of being effective in the workplace - to the benefit of themselves, their employer and the wider economy.  A useful reference is the Confederation of British Industry's Fit for the Future: preparing graduates for the world of work report (CBI, 2009).

The presentation went on to explore how employability skills (such as team working, business skills and communication skills) may be embedded within the curriculum using an approach called Work Related Learning (WRL).  The underpinning ideas relate to linking theory and practice, using relevant learning outcomes, widening horizons, carrying out active learning and taking account of cultural diversity.  A mixed methodology was used to determine the effectiveness of embedding WRL within a course.

The second paper was by Jing and Chalk and was entitled: An initiative for developing student employability through student enterprise workshops.  The paper outlined one approach to bridging the gap between university education and industry through a series of seminars, given over a twelve week period by people who currently work within industry.  A problem was described of lower employment rates amongst computing graduates (despite alleged skills shortages), low enrolment on work placement years (sandwich years), and a lack of employability awareness (which also includes job application and interview skills).

The third presentation was by our very own Kevin Streater and Simon Rae from the Open University Business School.  Their paper was entitled 'Developing professionalism in New IT Graduates? Who Needs It?'  Their paper addressed the notion of what it may mean to be an IT professional, encouraging us to look at the British Computer Society Chartered IT Professional status (CITP) (in addition to ITIL and PRINCE), and something called the Professional Maturity Model (which I had never heard of before).

Something else that I had never heard of before is the Skills Framework for the Information Age (SFIA).  By using this framework it was possible to uncover whether new subjects or modules may contribute to enhancing the degrees of undergraduates who may be studying to work within a particular profession.  Two Open University courses were mentioned: T122 Career Development and Employability, and T227 Change, Strategy and Projects at Work.

This final presentation of the morning was interesting since it asked us to question the notion of professionalism, and presented the viewpoint that the IT profession has a long way to go before it could be considered akin to some of the other more established professions (such as law, engineering and accountancy).

During the morning presentations I also remember a reference to E-Skills, which is the Sector Skills Council for Business and Information Technology, a government organisation that aims to help to ensure that the UK has the IT skills it needs.

Computing and Forensics Stream

This stream especially piqued my interest since I had once studied a postgraduate computing forensics course, M886, through the Open University a couple of years ago.

The first paper was entitled Teaching Legal and Courtroom Issues in Digital Forensics by Anderson, Esen and Conniss.  As with so many different subjects, both academic and professional skills need to be applied and considered.  Academic education considers the communication of theories and the dissemination of knowledge, and learning how to think about problems in a critical way by analysing and evaluating different types and sources of information.

The second paper was about Syllabus Development with an emphasis on practical aspects of digital investigation, by Sukhvinder Hara, who drew upon her extensive experience of working as a forensic investigator.

The third paper was about how a virtualised forensics lab might be established through the application of cloud computing.  I found this presentation interesting for two reasons.  The first was the interesting application of virtualisation, and the second was its resonance with how parts of the T216 Cisco networking course are taught, where students are able to gain access to physical hardware located within a laboratory just by 'logging on' from their personal computer or laptop.

The final paper of the day was an enthusiastic presentation by David Chadwick who shared with us his approach of using problem-based learning and how it could be applied to computing forensics.

This final session of the day brought two questions to my mind.  The first related to the relationship between teaching the principles of computing forensics and the challenge of providing graduates who know the tools that are used within industry.  The second related to the general question of, 'so, how many computing forensics jobs are there?'

It struck me that a number of the forensics courses around the UK demonstrate the use of similar technologies.  I've heard two products mentioned on a number of occasions: EnCase (Wikipedia) and FTK (Wikipedia), both of which are featured within the Open University M889 course.  If industry requires trained users of these tools, is it the remit of universities to offer explicit 'training' in commercial products such as EnCase?  Interestingly, the University of Greenwich, like the Open University (in the T216 course), enables students to study for industrial certification whilst at the same time acquiring credit points that can count towards a degree.

So, are there enough forensics jobs for forensics graduates?  You might ask a very similar question which also begs an answer: are there enough psychology jobs for the number of psychology graduates?  I've heard it said that studying psychology introduces students to the notion of evidence, different research methodologies and research designs.  It is a demanding subject that requires you to write in a very clear way.  Studying psychology teaches and develops advanced numeracy and literacy as much as it introduces scientific method and the often confusing and complex nature of academic debate.

Returning to computing forensics, I sensed that there might not be as many jobs in the field as there are graduates, but it very much depends what kind of job you might be thinking of.  Those graduates who took digital forensics courses might find themselves working as IT managers, network infrastructure designers or software developers as opposed to purely within law enforcement.  Understanding the notion of digital evidence and knowing how to capture it is an incredibly important skill irrespective of whether or not a student becomes a fully fledged digital investigator.

Concluding Discussions

One of the best parts of the day was the discussion section.  A number of tensions became apparent.  One of the tensions relates to what a university should be and the role it should play within wider society.  Another tension is the differences that exist between the notions of training and education (and the role that universities play to support these two different aims).

Each organisation and area of industry will have a unique set of training and educational requirements.  There are, of course, more organisations than there are universities.  A particular industry may have a very specific training problem that necessitates the development of educational materials that is particular to its own context.  Universities, it can be argued, can only go so far in meeting very particular needs.

A related question, is of course, the difference between training and education.  When I worked in industry there were some problems that could be only solved by gaining task specific skills.  Within the field of software development this may be learning how to use a certain compiler or software tool set.  Learning a very particular skill (whilst building upon existing knowledge) can be viewed as training.  An engineer can either sit with a user manual and a set of notes and figure things out over a period of a month or two, or alternatively go on an accelerated training course and learn about what to do in a matter of days.

Education, of course, goes much deeper.  Education is not just about knowing how to use a particular set of tools; it's about knowing how to think about your tools (and their limits) and understanding how they may fit within the 'big scheme of things'.  Education is also about learning a vocabulary that enables you to begin to understand how to communicate with others who work within your discipline (so you can talk about your tools).

Within the ICT sector the pace of change continues to astonish me.  There was a time when universities in conjunction with research organisations led the development of computing and computer science.  Meanwhile, industry has voraciously adopted ICT in such a way that it pretty much pervades all our lives.

So, where does this leave degree level education when 'general' industry may be asking for effective IT professionals?  It would be naive to believe that the university sector can fully satisfy the needs of industry since the nature of industry is so diverse.  Instead, we may need to consider how to offer education and learning (which the university sector is good at) in a way that leads towards the efficient consumption of training (which satisfies the needs of industry).  This argument implies that the university sector is for the 'common good' as opposed to being a mechanism that allows individuals to gain specialist topic specific knowledge that can immediately lead to a lucrative career.  Becoming an ICT professional requires an ability to continually learn due to perpetual innovation.  A university level education can provide a fabulous basis for an introduction to this rapidly changing world.

Christopher Douce

OU Disabled Student Services Conference

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 22 Jan 2019, 09:40

I've had a fun couple of days.  I recently attended the Open University's 2010 Disabled Student Services conference.  Okay, I admit, I probably gate crashed the event since I'm not a member of the DSS group, but it was certainly a very worthwhile thing to do.  On more than one occasion colleagues said to me, 'it's great to have someone like you here; we certainly need more faculty at these events'.

The overall objectives of the conference were to develop greater awareness of issues affecting the sector, to gain information about developments within the University, gain a greater understanding of the needs of specific disabilities, work towards more standardised delivery of services and, of course, to find out what each other does.

For me, this conference was all about learning about the different roles that people have, and what information needed to be shared to ensure that all our learners get the best possible service.

Tuesday 9th November

A conference is not really a conference without a keynote.  The first day kicked off with a keynote by Will Swann who is responsible for the development, promotion and evaluation of services to support the teaching of students.

The part of Will's presentation that jumped out at me was a concise presentation of the potential ramifications of changes to Higher Education funding.  One thing was clear: things are going to change, but we're not quite sure exactly how they will change.  The changes may affect those students who wish to study for personal development as opposed to choosing to study for purely economic and career development reasons.  Underneath is an interesting philosophical debate about what higher education is good for.  Essentially, Will asked us to consider the challenge of how to maintain effective provision of services in a world where change is a certainty.

First workshop

Following the keynote, we were led toward the first set of workshops.  I attended a workshop that perhaps had the longest title: Exceptional examination arrangements and special circumstances, policies and procedures.  The event was facilitated by Ilse Berry and Peter Taylor.  Peter is the chair of the subcommittee which makes decisions about very many things exam related, such as whether individual students may be able to defer exams (due to changes in personal circumstances), or whether alternative examination arrangements could be organised.

I got a lot out of this first workshop: I gained more of an understanding of the procedures and policies, and the effect that the Disability Discrimination Act (now the Equality Act) has on these policies.  There was some debate about whether everyone knows everything they should know to best advise our students.  There was some discussion about Associate Lecturers, and I feel that I need to ask whether it might be possible to offer some internal staff development training to those who most closely work with students.

I also learnt quite a bit about the range of different examination arrangements that can be put in place.  I never knew that an exam could be taken over an extended period of time, for example.  It was all very thought provoking and showed me how much we try to collectively help.

Student session

I was unable to attend the afternoon event due to a meeting with a colleague in another department, but I was able to return to the conference in time to hear one of the student sessions.  Alex Wise, a student with dyslexia gave a very clear description of some of the challenges that he has faced during his educational career.  He also described some of the strategies and adaptations he both uses and has discovered.

Alex's presentation underlined a number of different points for me.  Firstly, the complexity and uniqueness of conditions such as dyslexia (I briefly studied the very broad subject of language processing when I was a postgrad student, but I was sorely missing a 'personal' perspective).  Secondly, the fact that effective strategies may only be discovered through a combination of hard won experience and trial and error.  A final point is that strategies and tools need not necessarily be high tech.

Wednesday 10th November

The second day (much like the first) was a delight.  In true academic style, I duly forgot which workshop I had signed up for, and was directed towards a session entitled, 'Sensory impairment: science course for screen readers, and D/deaf students and Openings courses - access for all?', presented by Jeff Bashton and Julie Morrison.  I was later to discover that it was two workshops for the price of one.  I had certainly chosen wisely.

Jeff works as a visual impairment advisor for the Open University.  He introduced the science project he is working on (which is a work in progress), and then he had a treat in store for all delegates.  One by one, we all donned blindfolds that Julie had given us, and we began to study two tactile diagrams (using only our touch).

I found both tactile diagrams unfathomable (which is, pretty much, an understatement!)  I could do nothing more than explore the boundaries of each diagram and get a rough understanding about its size and shape (and how the different elements were related spatially).  I couldn't make a jump from lines and bumps through to understanding a picture as a whole.  This, of course, was one of the points.  The tactile diagrams that I was presented with proved to be totally confusing without accompanying auditory descriptions.

Julie 'talked us through' each image (using our fingers!)  When I removed my blindfold, I was surprised by what I saw - it bore hardly any relationship to what I thought I was 'seeing'.

During the second part of the workshop Julie spoke about her work with British Sign Language, where she presented a small number of case studies to highlight the challenges that BSL users might face when trying to study.  To BSL users, English is, of course, a second language.  Julie's overview of the history of deaf education (and the role that Alexander Graham Bell played) was illuminating.  Thanks Julie!

This second workshop ended with a demonstration of how tough it can be to understand digital materials.  Taking a particularly accessible course as an example, Julie showed us a video without sound (it was an interview which had no subtitles or signing).  We then had a look at the transcript of the video.  The transcript contained all the peculiarities of expression that you find whenever you write down spoken language.  It was briefly considered that perhaps different learners may benefit from different versions of the same materials, which is one of the ideas embedded within the EU4ALL project I worked on for a couple of years.

A fabulous afternoon...

I struggled to give a name to the section of the conference where the delightful and charming Francesca Martinez came to talk to us for an hour or so.  It was only after just under a week of wondering that I came up with this final subheading.

I'm not joking; Francesca had us all rolling in the aisles of the conference hall with laughter with a delicious mixture of political and observational stories.  There was, however, a serious tone that resonated clearly with the objectives of the conference: everyone is connected by a common thread of humanity regardless of who we are and what personal circumstances we face.

Francesca is the best kind of comedian; one who makes us think about ourselves and the absurdities that we face.  I, for one, ended the day thinking to myself, 'I need to seize the day more'.  And seizing the day can, of course, mean making time to find out about new things (and having fun too, of course!)  Linking this to studying, it is more than possible to find an abundance of fun in learning, and to remain optimistic about the way in which fun in the present may give rise to a fabulous future.

Themes

So, what were the overriding themes that I took away from the conference?  The first one was communication: we all need to talk to each other because internal policies (as well as external legislation) are subject to perpetual change and evolution.  Talk is an eternal necessity (which is what I continue to try to tell my colleagues when I sneak off to the cafe area...)

The second theme is that of information: advisors as well as students need information to make effective decisions about whether or not to take a course of study.  Accessibility, it was stated, wasn't just a matter of making sure that materials are available in different formats.  It also relates to whether or not materials are study-able, and this goes back to whether, for example, individual learning activities are themselves accessible.

The final theme relates to challenges that are inherent within the changing political and economic climate.  Whilst education is priceless, it always has a financial cost.  Different ways to pay for education have the potential to affect the decision making of those who may wish to study for a wide range of different reasons (and not just to 'get a better job').

Consider, for example, a hypothetical potential student (who is incidentally fabulous) who might just 'try out' a module to find out if he or she likes it, and who then goes on to discover they are more than capable of degree level study.  A stumbling block to access is, of course, always going to be cost.  As mentioned in the conference keynote, there will be a need for creative solutions to ensure that all students continue to be presented with equal opportunities to study.

The DSS conference has shown, to me, how much work goes behind the scenes (and how much still needs to be done) to ensure equal opportunity to study remains a reality for all.

Christopher Douce

1st International Aegis Conference

Visible to anyone in the world

 Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards

7-8 October 2010

It seems like a lot of time has passed between this blog post and my previous one. I begin this entry with an explicit statement: less time will pass between this one and the next!

This post is all about an accessibility conference that I recently attended in Seville, Spain, on behalf of the EU4ALL project, in which the Open University has played an important part. Before saying something about the themes of the Aegis conference and summarising some of the notes that I made during some of the presentations, I guess I ought to say something about the project (from an outsider's perspective).

Aegis is an EU funded project that begins with a silent O (the O stands for Open). It then continues to use the first letters of the words Accessibility Everywhere: Groundwork, Infrastructure, Standards. My understanding is that it aims to learn more about the design, development and implementation of assistive and accessible technologies by not only carrying out what could be termed basic research, but also through the development and testing of new software.

Without further ado, here is a rough summary of my conference notes, complete with accompanying links.  I hope it is useful to someone!

Opening

After Evangelos Bekiaris presented the four cornerstones of the project (make things open, make things programmatically accessible, make sample applications and make things personalisable), Miguel Gonzalez Sancho outlined different EU research objectives and initiatives. It was stated that 'there must be research in the area of ICT and accessibility, and this will continue'.

Pointers towards future research included a FP7 call that related to ICT for aging and well being. Other subjects mentioned included the areas of 'tools and infrastructures for mainstream accessibility', 'intelligent and social computing for social interaction' (which would be interdisciplinary, perhaps drawing upon the social sciences) and 'brain-neuronal computer interfaces' (BNCI), as well as plans to develop collaborations with other parts of the world.

It was useful not only to get an overview of the domains that the funders are likely to be interested in, but also to be given a wealth of information rich links that researchers could explore later. The following links stood out for me: the EC ICT Research in FP7 site and the e-Inclusion activities page.

The Aegis Concept

Peter Korn from Oracle presented a very brief history of accessibility, drawing on the notion of 'building in accessibility' into the built environment. He presented a total of six steps, which I hope I have noted down correctly.

The first is to define what 'accessible' is. This may involve the taking of measurements, such as the width of doors, the tones of elevators, or the sounds made at road crossings. The next (second) stage is to create standard building materials. Here you might have a building company creating standard door frames, or even making electronic circuits to produce consistent tones and noises (this is my own paraphrasing!). The third step is to create some tools that tell us how best to combine our pieces together. The tools may take the form of standardised instructions.

The next three items are more about the use of the physical items. The fourth step is that you need to make a choice as to where to place a building. Ideally it should be situated close to public transport and in a convenient place. The fifth step is to go ahead and to 'build' the building. The final step is all about dissemination: the telling of people about what has been created.

Peter drew a parallel between the process of creating physical accessibility and creating accessibility for ICT systems. There ought to be 'stock' components of interface elements (such as the Fluid component set), developers and designers should adhere to good practice guidelines (such as the WCAG guidelines), applications need to be created (which is akin to going ahead and making our building), and then we need to tell others what we have done.

If my memory is serving me well, Peter then went on to talk about the different generations of assistive technologies. More information about the generations can be found by jumping to my earlier blog post. From my own perspective (as a technologist), all this history stuff is really interesting, but there's such a lot of it, especially when technology is moving on so quickly. Our current challenge is to begin to understand mobile devices and to learn how to develop tools and systems that remain optimally functional and accessible.

Other Projects

One of the great things about going to conferences (other than the cakes, of course) is the opportunity to learn about loads of other stuff that you had never heard of before. Blanca Alcanda from Technosite (Fundacion ONCE) spoke briefly about a number of projects, including T-Orienta (slideshare), Gametel (the development of accessible games) and INREDIS (self-adaptive interfaces).

Roundtable Discussion

Karel Van Isacker was our question master. He kicked off with a few killer questions (a number of which he tried to answer himself!) The panel comprised a journalist, industrialists, researchers and user representatives. The notable questions were: 'what are your opinions about the [Aegis] products that are being developed?', 'how are you going to make sure users know about the tools [that are being developed]?', 'what are the current barriers people face?', and 'can you say something about the quality of AT training in Europe?'

In many ways, these questions were addressed by many of the conference presentations as well as by the panel. Challenges relating to the development of assistive technologies include the continual necessity of maintenance and updates, the need for users to be more aware of the different types of technologies that may be available, and the price of technology; one of the main challenges relating to training is the fact of continual technological change.

After a short break the conference then split into two parallel sessions. I tended to opt for sessions that focussed on more general issues rather than those that related to particular technologies (such as mobile devices) or operating systems. This said, there is always a huge amount of crossover between the different talks.

Parallel session 1b (part 1)

It was good to see a clear presentation of a user centred design (UCD) methodology by Evangelos Bekiaris. Evangelos described user research techniques such as interviews, questionnaires and something called contextual enquiry. His talk reminded me of materials that are presented through the Open University course Fundamentals of Interaction Design (a course which I wholeheartedly recommend!)

My colleague Carlos Velasco from FIT, Germany, gave a very concise outline of early web software before introducing us to WCAG (W3C). Carlos then went on to summarise some interesting research from something called the 'technology penetration report', where it was discovered that out of 1.5 million websites, 65% of them use JavaScript (which is known to yield challenges for some assistive technologies). The prevalence of JavaScript relates to the increasing application and development of Rich Internet Applications (or RIAs, such as Google Maps, for instance). The characteristics of RIAs include the presentation of engaging UIs and asynchronous content retrieval (getting many bits of 'stuff' at the same time). All these developments led to the creation of the WAI-ARIA guidelines (Accessible Rich Internet Applications).

Carlos argued that it was once relatively straightforward to test earlier types of web application, since the pages themselves didn't change. You could just send the pages to a 'page analysis server' or system (perhaps like Imergo), which might then present a report, perhaps in a formal language like EARL (W3C). Due to the advent of RIAs, the situation has changed. The accessibility of a system very much depends on the state it is in, and this can change. Testing web accessibility has therefore changed into something more resembling traditional usability testing.

A higher level question might be, 'having an application or product that is accessible is all very well, but do people have access to the assistive technology (AT) that enables web sites to be used?' Other related questions include, 'if people have access to AT, do they use it? If not, why not?' These were the questions that Karel Van Isacker aimed to address.

Karel began by saying that different definitions within Europe lead to different estimates of the number of people with disabilities. He told us that the AT supplier market is rather fragmented: there are many suppliers in different countries, and there are also substantial differences in terms of how purchases of AT equipment can be funded. He went on to suggest that different countries applied different models of disability (medical, social and consumer) to different market segments.

Some of the challenges were clear: people were often unaware of the solutions that best meet their ICT needs, users of ATs are often given only very rudimentary training, many people may have a computer that they have used only once, and there is a high level of users discarding their AT due to low levels of satisfaction.

Parallel session 1b (part 2)

Francesca Cesaroni began the next part of the afternoon by describing a set of projects that related to the broad theme of user requirements. These included the VISIOBOARD project (which related to eye tracking) and the CAALYX project (Complete Ambient Assisted Living Experiment).

Harry Geyskens then went on to consider the following question from the perspective of someone with a visual impairment: 'how can I use a device in a way that is as comfortable and safe as it is for a non-disabled person?' Harry then presented different design for all principles (wikipedia): a product must be equitable in use, be flexible, be simple and intuitive, provide perceptible information, be tolerant of error, and permit usage through low physical effort.

Begona Pino gave an interesting presentation about the use of video game systems and how they could potentially be used for different groups, whilst clearly expressing a call for design simplicity.

The final talk of the day was given by yours truly, where I tried to present a summary of a four year project called EU4ALL in twenty minutes. To summarise, the aim of EU4ALL is to consider how to enhance the provision of accessible systems and services in higher education through the creation of a small number of prototype systems. A copy of my presentation and accompanying paper can be found by visiting the OU knowledge network site (a version will eventually be deposited into the Open Research Online system).

Day 2 keynote

Gregg Vanderheiden kicked off day 2 with a keynote entitled 'Roadmap for building a global public inclusive infrastructure'. Gregg imagined a future where user interfaces adapt to the needs of individual users. Rather than presenting a complicated set of interfaces, a system (a PC or mobile device) may present a more simplified user interface. Gregg pointed us to a project called NPII (National Public Inclusive Infrastructures). It was good to learn that some of the challenges that Gregg mentioned, specifically security and ways to gather preferences, were also lightly echoed in the earlier EU4ALL presentation.

Parallel session 2a: Rich RIA!

RIA is an abbreviation for Rich Internet Application. The canonical example of a RIA is, of course, Google Maps or Gmail. Web application development techniques (such as AJAX, wikipedia) that were pioneered by Google and other organisations have now found their way into a myriad of other web products. From their inception RIAs proved to be troublesome for users of assistive technologies.

Jutta Treviranus gave a talk with an intriguing title: 'changing the world - on a tiny budget'. She began by saying that being online and being connected is no longer an option. Digital exclusion can lead to social exclusion. The best bargain is often, in my experience, one that you can find through a web browser. I made a note of some parts of her talk that jumped out at me, i.e., 'laws work when they are clear, simple, consistent and stable', but, 'laws cannot create a culture of change'. Also, perhaps we need to move from a case where one size fits all (universal design) to one where one size fits one (personalised design, which may be facilitated through technology).

Being an engineer, I was struck by Jutta's quote from computer scientist Alan Kay: 'the best way to predict the future is to invent it'. It's not too difficult to relate this quote back to the Aegis theme of openness and open source software (OSS): freedom of code has the potential to enable freedom of invention.

The first session was concluded by Dominique Hazael-Massieux from the W3C mobile web initiative (W3C). The challenges of accessibility now reach much further than the increasingly quaint desktop PC. They now sit within the hands and pockets of users.

One early approach to dealing with the explosion of new devices was to provide separate websites: one for mobile devices, another for 'traditional' computers. This approach yields the inevitable challenge of maintenance. Dominique told us about HTML 5 (wikipedia) and mentioned that it has the potential to help with site navigation and make it easier for developers (and end users) to work with rich media.

The remainder of the day was mainly focused upon WAI-ARIA. I particularly enjoyed Mike Squillace's presentation, which returned to the challenge of testing rich internet applications. Mike presented some work that attempted to codify the WCAG rules into executable JavaScript which could then be used within a test engine. Jan Richards, from OCAD, Canada, presented the Fluid project.

Parallel session 3b: Standardisation and valorisation

I split my time in the final afternoon between the two parallel sessions, visiting the standardisation session first, then moving onto the coordination strand half way through. There were presentations that described the process of standardisation and its importance in the process of accessibility. During this session Loic Martinez presented his work on the creation of a tool to support the development of accessible software.

Parallel session 3a: Coordinating research

The final session of the conference yielded a mix of presentations, ranging from descriptions of physical centres that people could visit through to another presentation about the EU4ALL project made by my colleague from Madrid. This second EU4ALL presentation outlined a number of proposed prototype accessibility information services. Our two presentations complemented each other very well: my presentation outlined (roughly) an accessibility framework, whereas this second presentation offered an alternative perspective on how the framework might be applied and used within an institution.

Summary

One of the overriding themes was the necessity not only to make assistive technology available to others but also to make sure the right kind of technology was selected, and to ensure that users were given ample opportunity to learn how to use it. If you are given a car and you have never driven before, you shouldn't just get into it and start driving: it takes time to learn the controls, and it takes time to build confidence and to learn about the different places you might want to go to (and besides, it's dangerous!) To risk stretching a metaphor too far, this is a bit like assistive technologies: it takes time to understand what controls you have at your disposal and where you would like to travel to. As Karel pointed out in his talk: far too much technology sits unused in a box.

Another theme of the conference was about the solidity of 'this box'. Rather than having everything in 'a box' or installed on your computer (or mobile device), perhaps another idea might be to use technology 'on demand' from 'the cloud' (aka, the internet). Tools may have the potential to be liberated, but this depends on other types of technology 'groundwork' being available, i.e. good, fast and reliable connectivity.

Finally, another theme (and one that is pretty fundamental) is the issue of usability and simplicity. The ease of use of systems will continue to be a perpetual challenge due to the continual differences between people, tasks and contexts (where the person is and where the task takes place). Whilst universal design offers much possibility in terms of making products for the widest possible audience, there is also much opportunity to continue to explore the notion of interface, product and system personalisation. Through simplicity comes accessibility, and vice versa.

All in all, a very interesting couple of days. I came away feeling that there was a strong and vibrant community committed to creating useful technologies to help people in their daily lives. I also came away feeling that there is so much more to do, and a stronger feeling that whilst technology can certainly help there are many other complementary actions that need to be taken before technology can even begin to play a part.

The latest project newsletter (available at the time of writing) can now be downloaded (pdf).

See also Second International Education for All Conference (blog post), 24 October 2009.

Christopher Douce

Understanding Moodle Themes

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:27

A section of a Moodle screen showing three icons: forums, lessons and resources

This post is about the journey that I followed trying to understand Moodle Themes.  Moodle Themes are pieces of programming magic that change the visual appearance of your Moodle installation.

If you download Moodle and play around with it, you might eventually arrive at a decision that it might be useful within your institution.  You might hold a meeting with senior management where you may say, 'I think it's a good idea if we try to use this thing called Moodle to host some of our courses'.  After answering some difficult questions about maintenance and development costs, your managers might say, 'okay, you've convinced us... let's give it a go, I'll give you a budget'.

Other than figuring out which operating system and database to use and where (and how) your instance of Moodle is to be hosted, one of the first development activities you will have to do is make sure that your Moodle system is 'on brand', i.e. its visual appearance should reflect the institution that you work for.

This is pretty much what I have to do.  I have to try and make my 'vanilla' (unmodified) version of Moodle blend in with a set of existing web pages that have been built as a part of a research project I'm working on.  Other development teams within my university have already done something similar with their production version of Moodle, but I need to tackle this problem myself.

I start with a couple of questions: what makes up a theme and how might you go about changing one or maybe even making a new one?

Resources galore

Before I can answer these questions I needed to find something to read, and it didn't take a lot of browsing to find a number of potentially useful resources.

The first page that I discovered was a link to over one hundred different themes thanks to the Database of Moodle Themes.  Perhaps I shouldn't be so surprised given the number of Moodle installations that are out there in the world.

I soon discovered the Themes documentation pages and a number of other related links, including a set of theme-related FAQs and a dedicated discussion forum.

The Themes documentation link (for a Themes novice) seems to be the most useful.  One of the sections says that themes can be delivered in zip files.  You download them, unzip them and place the contents in the /moodle/theme directory, and then click on some admin tools to activate it.  This sounds almost too easy!

Towards Code

Being someone who likes to view code, I thought it might be useful to look at some of the magic that makes Moodle themes work.  To do this, I chose a random theme from the themes database and unzipped it to my desktop.  To begin to make sense of it properly, I felt that it might be a good idea to compare this random theme against one that already existed.  This made me ask, 'which theme is used by default?'

To answer this question, I logged onto my local instance of Moodle (which was running on my local machine, localhost) as an administrator.  After struggling to remember my username and password, I clicked on the Administration link, followed Appearance, Themes and then on the Theme Selector link (because I couldn't really make sense of the Theme Settings options).

I quite like the Theme Selector page.  It presents all the different themes that have been installed.  The current theme that is selected is highlighted by a black square.  The one that was selected by default (in the case of my installation - I cannot remember whether I changed it) was named standardwhite.

I delve into the Moodle code area, take a copy of standardwhite, place it alongside the one I have randomly downloaded and start poking around.

Looking at code

I noticed similarities and differences.  The similarities are that some of the filenames are the same.  I see two PHP files, styles and config, followed by two HTML files, header and footer.  There seems to be a CSS file (Wikipedia) in both themes (but the downloaded theme contains a few more than the default theme).  I also notice a graphics file called gradient in the default theme (which is a jpg), and a png graphics file in the other one.  A big difference was that the theme I have downloaded contains a directory which seems to contain a bunch of graphic files.

Before deciding I'm terribly confused, I decide to do one more thing: open up both of the PHP files to see what they contain.

In a config script, I see assignments to a variable called $THEME.  Different attributes appear to do different kinds of magic.  Looking in the styles script, a comment tells me that 'there should be no need to modify this file'.  It seems to do something that relates to the presentation of a CSS file.  That is good enough for me!
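
To make this a little more concrete, here is my own rough sketch of the sort of thing a config script seems to contain (it isn't copied from any particular theme, and the attribute names may well differ between Moodle versions, so do check the Theme config file documentation):

    <?php
    // config.php - an illustrative sketch of a Moodle 1.9-era theme config file.
    $THEME = new stdClass();                 // Moodle normally creates $THEME before including the file;
                                             // this line only exists so the sketch runs on its own.
    $THEME->sheets = array('user_styles');   // CSS files that live inside this theme's directory
    $THEME->standardsheets = true;           // also pull in the standard theme's stylesheets
    $THEME->parent = 'standardwhite';        // fall back to another theme for anything not overridden
    $THEME->custompix = true;                // use this theme's own 'pix' directory of icons
    ?>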

I have a quick peek into the header and the footer html files.  It looks like these are templates (of some kind) that are filled out using the contents of some PHP variables.  Obviously the pages that the Moodle code creates have a pretty well defined structure, and presumably this structure is documented somewhere.  This is perhaps something I might need to remember later.
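
Purely as an illustration of what I mean by a template (the variable names below are only indicative, and real header files contain rather more), a header seems to be ordinary HTML with PHP values dropped into it:

    <?php
    // header.html - a simplified sketch of a theme header template (my own illustration).
    // Moodle's page-printing code normally sets variables such as $title, $meta and $heading
    // before including this file; the defaults below only exist so the sketch runs on its own.
    if (!isset($title))   { $title = 'Course title'; }
    if (!isset($meta))    { $meta = ''; }
    if (!isset($heading)) { $heading = 'Course heading'; }
    ?>
    <head>
        <title><?php echo $title; ?></title>
        <?php echo $meta; ?>
    </head>
    <body>
        <div id="header"><h1 class="headermain"><?php echo $heading; ?></h1></div>
        <!-- the matching footer.html closes off the page that is opened here -->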

Return to the documentation

At this point, I think I roughly know what a Theme comprises: some magic scripts which define some variables (and some other stuff), some header and footer scripts which look a bit like templates, a CSS file of some kind, perhaps a graphic (which could be used by the CSS file?) and maybe a bunch of graphics that replace those that are used in Moodle by default.

If this is my current understanding, can I now find the documentation easier to understand?

I soon uncover two further pages: Make your own Theme and Creating a Custom Theme (the first link seems to be easier to understand).  A couple of clicks takes me to a documentation page called Theme config file which goes some way to explaining the variables that I have touched upon above.

The final comment in the Creating a Custom Theme page was instructive.  Other than saying that you can't change everything, if you want to make your site look like an existing site, it might be a good idea to make use of a tool called Firebug which is a plug in for your Firefox browser.

With Firebug, you can browse to a web page of your choice and uncover what CSS definitions have been used to build its visual appearance.  I've used Firebug before, and mentioning that it is a good tool is certainly a good piece of advice.  The Moodle developers have also been kind enough to prepare a CSS FAQ which is certainly worth a look.

Although I could have tried to create a new theme from scratch, I'm in a lucky position since one of my colleagues has already created a customised theme for a custom instance of Moodle.

Towards testing things out

To test things out, I copy my customised theme into my local 'themes' directory and hit refresh on my browser.  I then select my newly installed theme and everything starts to go wrong.  The action of selecting a theme seems to have rendered my local copy of Moodle useless, since only a tiny fraction of an HTML page is created (which I see by viewing the code the browser receives).

A problem seems to have been created since the version of Moodle that I am using and the structure of the theme that I have transferred are not completely compatible with each other.  I need to go back to my default theme! But how do I do this? Where are the theme settings held?

My first guess is in the database.  I open up a front end to the MySQL database that is running on my PC, using a tool called SqlYog.  I eyeball the contents of the database to see if there's anything I can use.  I discover a 'config' table, but this doesn't tell me much.  I did, however, discover that there is a theme setting within individual courses as well as individual users.

I turn my attention towards the code by first looking at the code within the themes directory and I soon find myself fruitlessly searching through different libraries.  Finding a simple answer may necessitate spending quite a bit of time.

To get things working again, you sometimes have to cheat.  I renamed my theme to something totally different and refreshed my Moodle page.  Moodle then had no choice but to return to its default setting (which was, again, 'standardwhite').

Incrementally merging

I have two themes: one theme that I want to use but doesn't work (because it has been modified for a customised version of Moodle), and another theme that does work which I don't want to use.  When I'm faced with this situation, I try to get 'code to speak to me' by incrementally taking the one that works and making it look like the one that doesn't work.  I find I can really understand stuff when things stop working!

I begin by looking at the files and then the contents of the files.  The first thing that strikes me is that the header and footer files are quite different.  There seems to be quite a bit more happening in the customised theme when compared to the standard theme.  A step at a time, I move files across and test, beginning with the favicon, then the config file, and finally the 'pix' image directories.  I discover that both themes require the use of a CSS file that is contained within the standard theme directory.

The effect of moving files around seems to produce, more or less, what I was after.  The interactive 'side blocks' (particularly the show/hide buttons) are not presented as they should be, but further searching reveals a magic variable, allowuserblockhiding, that can be used to control this functionality.
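
For anyone hunting for the same setting, it looks something like the following (I'm writing from memory here: I believe it is a site-wide appearance option rather than something inside an individual theme, and it can also be switched on from the theme settings page in the administration area):

    <?php
    // Illustrative only: switching the side-block hide/show buttons back on.
    // Moodle normally defines $CFG itself; the same option appears under
    // Administration -> Appearance -> Themes -> Theme settings.
    $CFG->allowuserblockhiding = true;
    ?>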

Moodle version 2

A question to complete this post is: what is the situation regarding Moodle version 2.0?  This is a development that I have heard about for some time, but I have not heard any recent announcements regarding its expected release.  After a quick search, I reacquaint myself with something called the Moodle Roadmap.

This appears to state that there will be a beta release of V2 by the end of 2009, followed by some months of testing before a final release.  Judging by the planning document, there appears to be quite a lot more coding to do (nearly four hundred days of development time to go, so we should expect some delays).

I appreciate that giving opinion is certainly a lot easier than giving code, so I hope that Moodlers who read this section will forgive me.  I personally hope that the code for the next version is a lot cleaner.  Since the developers are forced to move to PHP version 5, I hope they will choose to adopt its object-oriented features which can help developers to form clearer (less leaky) abstractions.

In a perfect world, developers should be able to look at a screenful of code and be able to describe, more or less, what that section of code does without having to look at other code (providing, of course, they more or less have an understanding of what the product does).  From what I have seen so far in version 1.9, there is a long way to go, but I'm certainly looking forward to learning how things have moved on in version 2.0.
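
To illustrate the kind of thing I mean, here is a toy example of my own (it has nothing to do with the actual Moodle code): a small PHP 5 class that keeps its data private, so that a single screenful tells you everything you need to know about how it behaves:

    <?php
    // A toy illustration of a 'less leaky' abstraction using PHP 5's object-oriented features.
    class ThemeChooser
    {
        private $themes = array();   // the theme list is hidden from the rest of the code
        private $current;

        public function register($name)
        {
            $this->themes[] = $name;
        }

        public function select($name)
        {
            if (in_array($name, $this->themes)) {
                $this->current = $name;
            }
        }

        public function current()
        {
            return $this->current;
        }
    }

    // Usage: all interaction happens through the three public methods above.
    $chooser = new ThemeChooser();
    $chooser->register('standardwhite');
    $chooser->select('standardwhite');
    echo $chooser->current();   // prints: standardwhite
    ?>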

Wrap-up

It's great that the developers of Moodle have designed it in such a way that it is 'themeable' (if there is such a word).  In some respects, I was surprised to discover things were not as difficult as I had expected.  Whilst, in some ways, going directly to the code and looking at what is there may be a daunting challenge, it is one that I certainly recommend doing.

There's a whole lot more to the issue of Moodle themes.  I haven't touched upon the structure of Moodle pages and how they relate to elements in stylesheets, for example.  I'll leave this challenge for another day!

Christopher Douce

Considering Middleware and Service Oriented Architecture

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:25


I wrote the following notes some time ago as a way to share information about the subject of middleware and service-oriented architecture.  I think I began by asking the questions 'what is middleware?' and 'what can it do for us?', explicitly in the context of building information systems that can help to deliver useful services to support learning.

I should add a disclaimer: some of the stuff that is presented here is quite technical and seems quite a long way away from my earlier posts that relate to accessibility, but there are connections in terms of understanding how to build information systems that can help an organisation to manage the delivery of accessibility services (such as the loan of assistive technology).

Beginning my search

I began by exploring a number of definitions.  I first attacked the notion of workflow (Wikipedia).  What does workflow mean?  Is it one of those terms that can have different meanings to different people?  I rather like the Wikipedia definition, which goes:

  • A workflow is a reliably repeatable pattern of activity enabled by a systematic organization of resources, defined roles and mass, energy and information flows, into a work process that can be documented and learned. Workflows are always designed to achieve processing intents of some sort, such as physical transformation, service provision, or information processing.

I then asked myself, 'how does the idea of workflow relate to the notion of middleware?' (I had heard they were connected, but wasn't quite sure how).  Again, the Wikipedia definition of middleware proved to be useful:

  • Middleware is the software that sits 'in the middle' between applications ... stretched across multiple systems or applications. ... The software consists of a set of enabling services that allow multiple processes running on one or more machines to interact across a network. This technology evolved to provide for interoperability in support of the move to client/server architecture. It is used most often to support complex, distributed applications. ... Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

So, these two ideas are connected.  Carrying out workflow may involve making use of a number of different services, which might be called through some sort of middleware...
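
To make the idea a little more concrete, here is a minimal sketch of a workflow, written in Java purely for illustration.  All of the names (DocumentService, TranslationService and so on) are invented; in a real deployment each call would normally travel through middleware (for example, as a SOAP or web service request) rather than to a local object, but the shape of the 'reliably repeatable pattern of activity' is the same.

    // A workflow that calls two services in sequence.  The service names are
    // hypothetical; in practice each call would go through middleware rather
    // than to a local implementation.
    interface DocumentService {
        String fetchDocument(String documentId);
    }

    interface TranslationService {
        String translate(String text, String targetLanguage);
    }

    class TranslationWorkflow {
        private final DocumentService documents;
        private final TranslationService translator;

        TranslationWorkflow(DocumentService documents, TranslationService translator) {
            this.documents = documents;
            this.translator = translator;
        }

        // One repeatable unit of work: fetch a document, then have it translated.
        String run(String documentId, String targetLanguage) {
            String original = documents.fetchDocument(documentId);
            return translator.translate(original, targetLanguage);
        }
    }

    public class WorkflowSketch {
        public static void main(String[] args) {
            // Stub implementations stand in for remote services.
            DocumentService documents = id -> "Contents of document " + id;
            TranslationService translator = (text, language) -> "[" + language + "] " + text;

            TranslationWorkflow workflow = new TranslationWorkflow(documents, translator);
            System.out.println(workflow.run("report-42", "pl"));
        }
    }

The interesting question, which the rest of these notes circle around, is what happens when some of the steps in a process like this have to be carried out by a person rather than a machine.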

Further links

A little more digging pointed me in a number of other directions.  Clever people have proposed something called BPEL, an abbreviation for Business Process Execution Language.  Wikipedia is again useful:

  • WS-BPEL (or BPEL for short) is a language for specifying business process behavior based on Web Services. Processes in WS-BPEL export and import functionality by using Web Service interfaces exclusively.

On this page, there is a link to a blog post which is a very good primer and introduction.  It is a lot clearer than the Wikipedia page.

I found the following text to be useful:

  • In BPEL, a business process is a large-grained stateful service, which executes steps to complete a business goal. That goal can be the completion of a business transaction, or fulfilling the job of a service. The steps in the BPEL process execute activities (represented by BPEL language elements) to accomplish work. Those activities are centered on invoking partner services to perform tasks (their job) and return results back to the process.

Interestingly, it also contained the following:

  • As for limitations, BPEL does not account for humans in a process, so BPEL doesn't provide workflow-there are no concepts for roles, tasks and inboxes.

We are almost at the point where the same terms may be used to mean different things.  Perhaps there is a difference between what workflow is and what business processes are?  Michelson (the blog author) seems to equate workflow with 'things that people do'.  The point is that a wide definition of workflow can include things that BPEL does not.

At this point, I was wondering, 'if I have a process (say, a task that I have to complete), where half of the task has to be completed by a machine and the other half has to be completed by a person, then what technologies should I use?'.  All is not lost.  The blog mentions there is something called  BPEL4People (Wikipedia), and contains a link to an IBM whitepaper.

I've extracted some fragments that caught my eye:

  • The BPEL specification focuses on business processes ... But the spectrum of activities that make up general purpose business processes is much broader. People often participate in the execution of business processes ...

Following this, I stumbled across the following scenario:

  • Consider a service that takes place out-of-sight of the initiating process. In many circumstances, it may be immaterial as to whether the service is performed with or without user interaction, for example, a document translation service.

This made me wonder about my own involvement in the EU4ALL project, which is exploring processes that enable lecturers to order alternative formats, such as tactile maps or other kinds of materials.

Application Servers

BPEL is represented using something called XML (Wikipedia), which is, of course (more or less) a text file that has lots of structure (created by the enthusiastic use of angled brackets).

BPEL is not the only way to represent or describe business processes (or workflow).  Another approach might be to use something called State Chart XML (SCXML), for instance.  There are probably loads of other data structures or standards you might use.

At this point, you might be asking, "okay, so there are these magic XML data structures that allow you to describe entire processes, but how do you make this stuff real so people can use it?".  The answer is to use something called an Application Server (Wikipedia).

Here, I am again lazy and quote from Wikipedia:

  • Application server products typically bundle middleware to enable applications to intercommunicate with dependent applications, like web servers, database management systems ...

Although an application server may be able to run middleware (and potentially sequence the order in which activities are carried out), we need to add interfaces so people can interact with it.

Always being the pragmatist, I asked myself another question, 'all this sounds like good fun, but where can I find one of these application servers that does all this magic stuff to manage our workflow and processes?'  I don't have a precise answer to this question, but I did find something called Apache ODE.

To quote from the project website,

  • Apache ODE (Orchestration Director Engine) executes business processes written following the WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long and short living process executions to orchestrate all the services that are part of your application.

Another distinction (besides long and short running processes) is between processes that require human intervention (or actions) and those that can run on their own, such as executing a database query or sending messages to another part of a large organisation to request the availability of resources.

All this sounds great!  All I have to do now is to find some time to study this stuff further.

Other approaches

Whilst reading all this stuff, the purpose of other products that had never made sense to me started to become clear.  A couple of years ago, I had heard something called Biztalk mentioned, but never properly understood what it was.  Again, Wikipedia is useful, describing Biztalk (Wikipedia) as

  • a business process management (BPM) server. Through the use of "adapters" which are tailored to communicate with different software systems used in a large enterprise, it enables companies to automate and integrate business processes.

I've not looked into this very deeply, but it also seems that the House of Microsoft might have concocted something of their own called the Windows Workflow Foundation (Wikipedia) which I understand also connects to the topic of BPEL.

Of course, there's a whole other set of terms and ideas that I haven't even looked at.  These include technologies and ideas such as an enterprise service bus (ESB), message queues, message-oriented middleware (MOM), the list goes on and on...

A summary (of sorts)

The issue of service-oriented architecture design goes a lot deeper than simply creating a set of solitary web services running on different systems.  Designers need to consider how to ensure that messages are received successfully, how to address redundancy and how to measure or ensure performance.  The ultimate choice of architectural components and elements depends very much on your requirements, the boundaries of your organisation, your needs for communication and who you need to communicate with.

What I found surprising was the number of technologies that could potentially be used within the project that I'm currently working on.  The ultimate choice of technologies is likely to boil down to the key issues of 'what do we know about right now?' and 'what is the best thing we can do?'.

Footnote

I was going to add a footnote to one of the earlier sections, but because my notes have turned into a blog post, I've decided to put it here.

I like this stuff because it reminds me of two areas that always fight with each other: software maintenance and business process re-engineering.  Business practices can change more quickly than software systems.  The need for process flexibility (and abstraction) is one motivation that has driven the development of things like SOA and BPEL.

This stuff is also interesting because workflow is where the world of 'work' and the world of software nearly combine.  There is another dimension: would you like a computer telling you what to do?  Also, no matter how much we try to be comprehensive in our understanding of a particular institution there will always be exceptions.  Any resulting architecture should ideally try to accommodate change efficiently.

Middleware (in some senses) can have a role to play in terms of gathering information about the performance of services, i.e. how long it takes to carry out certain actions in response to a certain kind of request, and it has the potential to manage the delivery of interventions (such as issue escalation to supervisors) should service quality be at risk.

Acknowledgements

Blog image is a modified version of the one found on the Wikipedia SOA page.  I also cheekily consulted an O'Reilly book when I was preparing an earlier version of these notes, but I've long since returned it to the library (and I can't remember its title).

Christopher Douce

Second International Education for All Conference

Visible to anyone in the world
Edited by Christopher Douce, Thursday, 21 Nov 2019, 11:24

This blog post represents a review of the second International Education for All conference that I was lucky enough to attend in September 09.  I originally intended to post a review earlier, but mitigating circumstances (which I hope will become clear at the end) prevented this.

Like the first conference, held in '07, this conference was also hosted at the University of Warsaw.  The '07 conference represented a finale of an EU project, a collaboration between universities and other organisations from Germany, Poland, Estonia, Croatia and others (please forgive my poor memory!), but this one was slightly different.

Below I attempt to present a brief summary of some of the sessions that I attended.  There were three parallel sessions, so I had to choose carefully.  This is then followed by what I took away in terms of the themes of the conference.

Opening Keynote

The opening keynote was by Dianne Ryndak from the University of Florida.  Dianne explored the topic of inclusion, particularly the differences between generalised and specialised education.  Dianne explained how personalised support and learning activities could be provided as a part of general learning activities.

She went on to present a powerful description of two students: one who was educated within a segregated school, the other who was educated, with the provision of additional support, through a mainstream (or general education) school.  I remember her saying that 'education is a service that goes to the student, not a place where the student goes' and that education should be 'only as special as necessary'.

Although Dianne's presentation primarily related to high school education, the themes she highlighted can be directly brought to bear on higher education too.  Technology can be used as a way to help with the inclusion of people with disabilities in mainstream education.  This said, teachers have an even more important role, where they need to be viewed as collaborators as well as educators.

Three dimensional solid science models for tactile teaching materials

I sometimes like to visit museums.  One of the things that frustrates me is the sight of signs that say 'please do not touch!'  This strikes me as particularly unfair when I discover sculptures in art galleries.  Given that sculptors use their haptic sense when creating an object, it seems unfair to deny visitors the possibility of using this same modality.

Yoshinori Teshima, from the Digital Manufacturing Research Centre in Japan, gave an inspiring presentation where he showed a number of different models, ranging from abstract objects (such as polyhedra) through to hugely magnified representations of creatures that can only be seen under a microscope (imagine a microscopic monster the size of your fist!)

Yoshinori briefly spoke about the manufacturing methods, which included stereolithography and 3D printing using either plaster or nylon powder.  His relief-emphasised models of the Earth and Mars were fabulous.

It struck me that his models could be used by all students, regardless of visual abilities.  It also struck me that the ultimate use of such models within the classroom depends ultimately upon the skills and the expertise of the teacher.

Talking Tactile Tablet

I have heard about tactile tablets before.  This presentation demonstrated a product that was also included within the assistive technology exhibition hosted at the conference.

I came away with two points from this presentation.  Firstly, referring to three dimensional objects using two dimensional symbols is a skill that I take for granted.  Secondly, it is now possible for educators to author their own tactile materials.  We were shown how it was possible to create a small family tree.  Audio materials were recorded using Audacity (Wikipedia), which were then associated with positions on a tablet.  Corresponding tactile representations could be produced using embossers.

Opening Linux for the Visually Impaired

This presentation primarily focused upon a screen reader called Sue (Screenreader Usability Extensions) that was developed by the Study Centre for the Visually Impaired Students (SZS) based at the University of Karlsruhe, Germany.  This presentation reminded me a little of a presentation of the Orca screen reader, made at the Aegis project dissemination event that I wrote about a couple of months ago.

One of the interesting things about Sue was that it could be connected to both a refreshable braille display (Wikipedia) (visiting this page was interesting, since it mentioned a new type of refreshable display called a rotating-wheel display) and a screen magnifier at the same time.

Although Sue could be considered to be 'yet another screen reader', having multiple versions of similar products is undeniably, in my view, a good thing.  Competition between individual products, whether in terms of popularity or functionality, can help their development and enhancement.

Distance Education and Training Programme on Accessible Web Design

I was drawn to this presentation due to my involvement in the Open University Accessible online Learning and Fundamentals of Interaction Design courses.  I was not to be disappointed.  There were some strong echoes with these current courses, but I should say the curriculum is perhaps complementary.

The course was developed as a part of a European project called Accweb with the intention of creating distance learning materials that could address a need for a professional qualification or a certificate in accessible web design.  The materials comprised eighteen units which could be delivered through the ATutor VLE, amounting to the equivalent of 60 points of study.

Key elements of the curriculum included:

  • Fundamentals of web accessibility
  • Guidelines and legal requirements
  • Assistive technologies
  • Accessible content creation (which included issues such as methodology, evaluation, rich internet applications and authoring tools)
  • Design and usability (themes from human-computer interaction)
  • Project development

The materials do not seem to be available through this site at the moment, but I hope they will be available in time.

Helping children to play using robots: IROMEC project experience

This presentation, by Francesca Caprino, outlined the IROMEC project, which is an abbreviation of Interactive Robotic Social Mediators as Companions, and a sister project called the adapted robot project.  Francesca began by describing play: what it is, what it can do, and the effect of play deprivation on the development of children.  Robots, it was argued, can help children with physical disabilities and other impairments participate in play activities.

The robots that were described were essentially toy robots that were modified to allow them to be controlled in different ways.  Future research objectives included uncovering new play scenarios, considering how to adapt or modify other robots, assessing the educative and therapeutic outcomes of robot-assisted play interventions, and developing associated guidelines.

Studying Sciences as a Blind Person: Challenges to AT/IT

Joachim Klaus introduced us to the Centre for the Visually Impaired Students, which seemed to have similarities to the Open University Office for Students with Disabilities (more information is available through the services for disabled students portal).

After presenting an overview of the centre, Joachim presented the ICC: International Camp on Computers and Communication.  The ICC 'tries to make young blind and visually impaired students aware what technology can do for them, which computer skills they have to have, where they should put efforts to enhance their technical skills, the level of mobility as well as their social skills'.  Using and learning to use assistive technologies can be difficult.  This international camp, which is held in different countries, can help people become more skilled at using assistive technologies, thus removing substantial barriers to access.

Improving the accessibility of virtual learning environments using the EU4ALL framework

During the conference, I gave a short talk about the EU4ALL project I am working on.  The presentation focussed on an architecture that the project has been creating and how it can improve the accessibility of services that are delivered to students.  The architecture takes into consideration a number of different stakeholders, each of which has a responsibility in helping to deliver accessibility.  My slides are available on-line through the Open University Knowledge Network (presentation information).

At the same time as my presentation, my colleague Elisabeth Unterfrauner from the Centre for Social Innovation, Technology and Knowledge, Vienna, was presenting her research on the socio-economic situation of students with disabilities, also carried out within the EU4ALL project.

Fostering accessibility through Design4All education in mainstream education

Continuing the theme of EU projects, Andrea Petz gave an interesting presentation about universal design, or Design4All.  Andrea has been involved with EDeAN, an abbreviation for the European Design for All e-Accessibility Network.  EDeAN has studied how D4ALL is covered or treated in different universities across Europe and has helped to guide the development of a masters programme.

Andrea began her presentation by saying that D4ALL is not design for the average, but for the widest possible group of users and went on to talk about the principles of universal design (Wikipedia):

  1. Equitable use
  2. Flexibility in use
  3. Accessible information
  4. Tolerance for errors
  5. Simple and intuitive
  6. Low physical effort
  7. Size and space for approach and use

Andrea pointed us towards a conference called the International Conference on Computers Helping People with Special Needs (ICCHP).

Inclusive science strategies

Greg Stefanich added two more to Andrea's list of principles, namely:

  1. Build a community of learners
  2. Create a positive instructional climate, i.e. one that is welcoming.

Greg also connected his talk to the theme of inclusive education by saying 'a person in a regular class room setting will have better relationships with the general public, family and be employed' and emphasised the view that inclusion of all students will have no real consequence on other students.  An important point is to spend time to get to know the needs of individuals so they can be effectively supported.

Foreign language courses for students with hearing impairments

Ewa Domagała-Zyśk talked about her experience of teaching English to hearing impaired students.  Her presentation made me reflect on my own experience of using the virtual world SecondLife (although Ewa's presentation was mostly about why teaching foreign languages is a good thing and what some of the challenges might be).

Quite some time ago, I was adventurous enough to visit a couple of 'Polish speaking' virtual bars where I tried to interact, using text, with some of the 'regulars' and found myself totally lost.  This experience showed me that this was an interesting and predominantly unthreatening way to help to learn (and understand) written language.  It did make me wonder whether virtual worlds (in combination with real-world assistance, of course) might be an interesting way to expose people to new languages.  The issue of whether such environments promote the creation of new dialects is, of course, a whole other issue.

Educational practices towards increasing awareness among academic teachers

Dagmara Nowak-Adamczyk (or one of her colleagues) spoke about the DARE project, a disability awareness project.  The project website states, 'The long-term objective of the project and the DARE Consortium is raising public awareness of disability and the way people with disabilities function in modern (knowledge-based) society'.  The project has produced some training materials which are currently being evaluated.

Access for science students with disabilities in an open distance learning institution in South Africa

I was particularly looking forward to this presentation since its title contained some especially interesting themes.  Eleanore Johannes, from the University of South Africa, spoke about different models of disability, introduced us to the Advocacy and Resource Centre for Students with Disabilities (ARCSWiD) office and spoke of some of the challenges that students face: funding, electricity, inaccessible web pages and training.  She also described a qualitative research project, taking place over a period of three years, that is exploring the experiences of science students with disabilities.

No contradiction : Design4All and assistive technologies

Michaela Freudenfeld introduced the INCOBS web portal that provides information about assistive technologies that are available in Germany.  The INCOBS portal has the objective of carrying out market surveys of products and services, testing assistive devices and workplace technologies, evaluating the accessibility of software applications and offering seminars for facilitators and advisors.

Understanding educational needs of students with psychiatric disabilities

Enid Weiner, from the Counselling and Disability Services, York University, Canada, gave an impressive hour long talk on the theme of mental health difficulties.  The subtitle of her presentation was 'making the invisible more visible'.

Enid spoke about a number of interesting related themes, such as different illnesses, the effect of medication and the fact that some illnesses can have an episodic nature, which may cause the pace of learning to be slower or study to take place over an extended period.  Accommodations are to be made on a case by case basis.

Enid emphasised the importance of a 'community of support' and said how important it is for educational providers not to 'get hung up' on an individual diagnosis and instead focus on individual accommodations i.e. what learners need to study.

Environmental influences on participation with disabilities: a Sri Lankan perspective

This was an interesting presentation since it offered a different perspective.  Samanmali Kularatne outlined the situation in Sri Lanka and then described a study that is aiming to explore the experiences of children with disabilities in mainstream schools.

The study adopts a qualitative approach.  Interviews are carried out in classrooms; these are then transcribed and subjected to thematic analysis.  The participants include children, teachers, parents and non-participative observers.  Themes identified included: attitudes, values and beliefs, support and relationships, products and technology, natural and built environment.

The conclusion was that inclusion is not really a reality and that there is a lack of resources.  The actions resulting from the study include further communication with educational authorities, an awareness programme for parents and launching a project to try to initiate more inclusion.

How AT can help with learning maths for people who are print disabled

The presentation and manipulation of mathematical notation is a perennial problem.  When some mathematical expressions are translated into spoken language, ambiguities can easily arise.  One proposed solution is to make use of the LaTeX (Wikipedia) language.
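
To illustrate the ambiguity, consider a small, made-up example: read aloud, 'a plus b over c' could describe either of the following expressions, which LaTeX forces the author to distinguish:

    % The spoken phrase 'a plus b over c' could mean either of these:
    \[ \frac{a+b}{c} \qquad \text{or} \qquad a + \frac{b}{c} \]
    % Because the grouping is explicit in the markup, a reader (or a screen
    % reader working from the source) can recover the intended structure
    % without guessing.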

The challenge is not just technical.  Acceptance of technology relates to the willingness of learners to accept technological solutions and the effectiveness of teachers in communicating their potential benefits.  It also relies upon educators having both the time and willingness to understand different tools.

Considering different definitions of disability

Paweł Wdówik from the University of Warsaw Office for Persons with Disabilities spoke about the different definitions of disability and the differences that exist between primary and secondary education.  Paweł highlighted that, under the medical model, if you don't have a medical diagnosis, then a disability is not likely to be recognised.  As a result, people who do have disabilities are likely to slip through the system.  Paweł emphasised that the views of the individual should always be fundamental.

Working towards inclusive education in Europe

Amanda Watkins from the European Agency for Development in Special Needs Education spoke about the different agency projects, mentioned the UNESCO Salamanca statement and asked the question, 'how do we change our systems so they are inclusive from the beginning?'  She made the point that inclusion needs to take account of all peoples and groups, and emphasised that inclusion is a process, not a state.

Closing Keynote

The final presentation of the conference was made by Yvonne Bonner, Reggio Emilia, Italy.  Yvonne presented a range of very thought provoking images.  She asked the question, 'why work in this area?'  She answered philosophically (and, of course, I paraphrase) that by working in this area  you are considering (and hopefully challenging others to enhance) the rights of all people.  It was a great way to close the conference.

Conference themes

One of the ways that this conference differed from the previous event was that there were more distinct themes.  Whereas the previous event focused quite a lot on an EU project that had just concluded, this conference, I felt, was slightly more wide ranging (but this could be because I was more attuned to what to look out for).

One of the more prominent themes was the debate surrounding inclusion and exclusion, or more specifically, how to help ensure that people with additional needs could be effectively brought into mainstream education.  This theme was clearly articulated within the opening and closing keynote speeches.  There are differences between countries (and the models of disability that are applied).  Sharing understandings of definitions was certainly a subject that was considered to be important.

Another theme was the need to listen to the individual.  I recall two projects that are aiming to learn more about the experience of individuals.

Quite a few of the presentations related to on-going projects and programmes.  Ideally I would have liked to go away with a fuller set of conference proceedings to help me remember what was said and recall the arguments that were presented.  As the conference series proceeds, this may be something that the organisers consider, hopefully without detracting from an underlying sense that delegates are happy to discuss, share and learn from each other's practice.

Addendum

After the conference ended I had some free time.  It was suggested that a fun thing to do would be to go hiking in the Tatra mountains.  I had been told a lot about a town called Zakopane, how it once exerted a strong draw for artists and Bohemians, and how a special cheese was likely to be sold everywhere in the town.  I was not to be disappointed.  As well as having extraordinary mountains and restaurants, tourists walking down the main street would stumble across cheese purveyors (Wikipedia) every thirty or so metres.

My choice of words is no accident, but instead I was embroiled in one.  Rather than literally stumbling across cheese sellers, I literally stumbled down the side of a mountain and broke my arm (although I should add that the stumble was a relatively modest one of about forty or so centimetres).  Visiting the accident and emergency ward was an experience, where a sharp-witted paramedic, upon hearing that I was from the UK, asked, 'were you walking on the wrong side of the path?'  There were more jokes, but their charm has long since worn away!

The upshot of the accident was that I found myself temporarily disabled, my dominant arm immobilised in plaster.  It all came as a bit of a shock.  Simple tasks suddenly became trials of patience and took considerably longer, if I could figure out how to do them at all.  Shoe laces became a liability and shirt buttons became almost impossible.  I had to go about cancelling events and activities that were scheduled for the time after I was to return to the UK.

Upon my return to the UK, my motivation levels nose dived. I quickly recalled the universal design principles when doors became difficult to open and jars and cans a frustrating challenge. I had also entered the UK health system, but I had no real sense of who to ask for information. Whilst monolithic institutions are there to help, individuals are necessary to provide support and peace of mind.

Despite my frustrations, I still had an overwhelming desire to do stuff.  Although I wasn't able to attend a weekend meeting I was scheduled to attend, one of my colleagues from the University of Leeds helped me to participate.  Using Skype text chat, it was possible to take part in a short group discussion activity which helped to lift my mood.  This showed me what a positive impact technology can have by providing effective alternative ways of communicating.  It also distinctly underlined one of the conference themes: the importance (and power) of inclusion.

I think I'm on the mend now, but I understand it's going to take some time.  All in all, the trip (both to the conference, and over a part of mountain) was an interesting experience.  I've certainly learnt a few things.

Christopher Douce

Aegis Project : Open accessibility everywhere

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:20

Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards

I recently attended a public dissemination event that was held by the AEGIS project, hosted by the European headquarters of the company that developed the Blackberry, Research in Motion.

The Aegis project has a strapline containing three interesting keywords: groundwork, infrastructure and standards.  When I heard about the project from a colleague, I was keen to learn what lay hidden underneath these words and how they connect to the subject of accessibility.

The Aegis project website states that it 'seeks to determine whether 3rd generation access techniques will provide a more accessible, more exploitable and deeply embeddable approach in mainstream ICT (desktop, rich Internet and mobile applications)' and goes on to say that the project will explore these issues through the development of an Open Accessibility Framework (or OAF).  This framework, it is stated, 'provides embedded and built-in accessibility solutions, as well as toolkits for developers, for "engraving" accessibility in existing and emerging mass-market ICT-based products'.  It goes on to state that the users of assistive technologies will be placed at the centre of the project.

The notion of the 'generations' of access techniques is an interesting concept that immediately jumped out at me when reading this description (i.e. what is the third generation and what happened to the other two generations?), but more of this a little later on.

Introductory presentations

The dissemination day began with a couple of contextualising presentations that outlined the importance of accessibility.  A broad outline of the project was given by the project co-ordinator, who emphasised the point that the development of accessibility requires the co-operation of a large number of different stakeholders, ranging from expert users of assistive technology (AT) to tutors and developers.

There was a general call for those who are interested in the project to 'become involved' in some of the activities, particularly with regards to understanding different use cases and requirements.  I'm sure the project co-ordinator will not be offended if I provided a link to the project contacts page.

AT Generations

The next presentation was made by Peter Korn of Sun Microsystems who began by emphasising the point that every hour (or was it every second?) hundreds of new web pages are created (I forget the exact figure he presented, but the number is a big one).  He then went on to outline the three generations of assistive technologies.

The first generation of AT could be represented by the development of equipment such as the Optacon (wikipedia), an abbreviation for Optical to Tactile Converter.  This was the first time I had heard of this device, and it represented the first 'take away' lesson of the day.  The Wikipedia page looks to be a great summary of its development and its history.

One thing the Optacon lacked was an explicit link to a personal computer.  The development of the PC gave rise to a new, second generation of AT that served a wider group of potential users.  This generation saw the emergence of specialist AT software vendors, such as companies who develop products such as screen readers and screen magnifiers.  Since computer operating systems continue to develop and hardware is continually changing (in terms of increases in processing power), this places unique pressures on the users of assistive technology.

For some AT systems to operate successfully, developers have had to apply a number of clever tricks.  Imagine a brand new application package, such as a word processing program, that had been developed for the first generation of PCs, for example.

The developers of such an application would not be able to write code in such a way that allows elements of the display to be presented to users of assistive technology.  One solution for AT vendors was to rely on tricks such as reading 'video memory' to capture the contents of the on-screen display so that it could be presented to users with visual impairments using synthetic speech.

The big problem of this second generation of AT is that when there is a change to the underlying operating system of a computer it is possible that the 'back door' routes that assistive technologies may use to gain access to information may become closed, making AT systems (and the underlying software) rather brittle.  This, of course, leads to a potential increase in development cost and no end of end user frustration.

The second generation of AT is said to have existed between the late 1980s and the early 2000s.  The third generation of AT aims to overcome these challenges, since operating systems and other applications have begun to provide a series of standardised Accessibility Application Programming Interfaces (AAPIs).

This means that different suppliers of assistive technology can write software that uses a consistent interface to find out what information could be presented to an end user.  An assistive technology, such as a screen reader, can ask a word processor (or any other application) questions about what could be presented.  An AAPI could be considered as a way that one system can ask questions about another.
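
As a rough sketch of what 'asking questions' looks like in practice, the fragment below uses Java's own accessibility API (the javax.accessibility package) to interrogate an ordinary Swing button and its container.  This is only a toy example of the general idea, not a description of how any particular screen reader works.

    import javax.accessibility.Accessible;
    import javax.accessibility.AccessibleContext;
    import javax.swing.JButton;
    import javax.swing.JPanel;

    public class AccessibilityQuery {
        public static void main(String[] args) {
            // An ordinary pair of user interface components...
            JPanel panel = new JPanel();
            JButton button = new JButton("Submit assignment");
            panel.add(button);

            // ...and the 'questions' an assistive technology can ask of them
            // through the accessibility API, without knowing anything about
            // how the components draw themselves on screen.
            AccessibleContext context = button.getAccessibleContext();
            System.out.println("Name: " + context.getAccessibleName());
            System.out.println("Role: " + context.getAccessibleRole());

            // A container can also be asked about its children, which is how
            // a screen reader can walk through everything on display.
            AccessibleContext panelContext = panel.getAccessibleContext();
            for (int i = 0; i < panelContext.getAccessibleChildrenCount(); i++) {
                Accessible child = panelContext.getAccessibleChild(i);
                System.out.println("Child " + i + ": "
                        + child.getAccessibleContext().getAccessibleName());
            }
        }
    }

The same pattern, a standard set of questions that any application can be expected to answer, is what the platform-level accessibility APIs described during the day provide to screen readers and magnifiers.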

Other presentations

Whilst an API, in some respects, can represent one type of standard, there are a whole series of other standards, particularly those from the International Organization for Standardization (ISO) (and other standards bodies), that can provide useful guidance and assistance.  A further presentation outlined the complex connections between standards bodies and underlined the connection to the development of systems and products for people with disabilities.

A number of presentations focussed on technology.  One demonstration used a recent release of the OpenSolaris operating system (which makes use of the GNOME desktop system) to demonstrate how the Orca screen reader can be used in conjunction with application software such as OpenOffice.

With all software systems, there is often loads of magic stuff happening behind the scenes.  To illustrate some of this magic (like the AAPI being used to answer questions), a Gnome application called Accerciser was used.  This could be viewed as a software developer utility.  It is intended to help developers to 'check if an application is providing correct information to assistive technologies'.

OpenOffice can be extended (as far as I understand) using the Java programming language.  Java can be considered as a whole software development framework and environment.  It is, in essence, a virtual machine (or computer) running on a physical machine (the one that your operating system runs on).

One of the challenges that the developers of Java had to face was how to make its user interface components accessible to assistive technology.  This is achieved using something called the Java Access Bridge.  This software component, in essence, 'makes it possible for a Windows based Assistive Technology to get at and interact with the Java Accessibility API'.

On the subject of Java, one technology that I had not heard of before is JavaFX.  I understand this to be a Java based language that has echoes of Adobe Flash and Microsoft Silverlight about it, but I haven't had much of a time to study it.  The 'take home' message is that rich internet applications (RIA) need to be accessible too, and having a consistent way to interface with them is in keeping with the third generation approach to assistive technologies.

Another presentation made use of a Blackberry to demonstrate real time texting and show the operation of an embedded screen reader.  A point was made that the Blackberry makes extensive use of Java, which was not something that I was aware of.  There was also a comment about the importance of long battery life, an issue that I have touched upon in an earlier post.  I agree, there is nothing worse than having to search for power sockets, especially when you rely on technology.  This is even more important if your technology is an assistive technology.

Towards the fourth generation

Gregg Vanderheiden gave a very interesting talk where he mentioned different strategies that could be applied to make systems accessible, such as making adaptations to an existing interface, providing a parallel interface (for example, you can carry out the same activities using a keyboard or a mouse), or providing an interface that allows people to 'plug in' or make use of their own assistive technology.  One example of this might be to use a software interface through an API, or to use a hardware interface, such as a keyboard, through the use of a standard interface such as USB.

Gregg's talk made me think about an earlier question that I had asked during the day, namely 'what might constitute the fourth generation of assistive technologies?'  In many respects this is an impossible question to answer since we can only identify generations when they have passed.  The present and especially the future will always remain perpetually (and often uncomfortably) fuzzy.

One thought that I had firmly connects to the area of information pervasiveness and network ubiquity.  Common household equipment such as central heating systems and washing machines often continue to remain resolutely unfathomable to many of us.  I have heard researchers talking about the notion of 'networked homes', where it is possible to control your heating system (or any other device) through your computer.

I remember a comment from a delegate at the Open University ALPE project workshop, who said, 'the best assistive technologies are those that benefit everyone, regardless of disability, such as optical character recognition'.  But what of a home of networked household goods which can potentially offer their own set of wireless accessible interfaces?  What benefit can such products provide for users who do not have the immediate need for an accessible interface?

The answer could lie with an increasing awareness of the subject of energy consumption and management.  Washing machines, cookers and heating systems all consume energy.  Exposing information about the energy consumption of different products could allow households to manage energy expenditure more effectively.  In my view, the need for 'green' systems and devices may facilitate the development and introduction of products that could potentially contain lightweight device-level accessibility APIs.

Further development directions

One of the most interesting themes of the day was the idea of the accessibility API that has made the third generation of assistive technologies what they are today.  A minor comment that featured during the day was the question of whether we might be able to make our software development tools and environments accessible.  Since accessibility and usability are intrinsically connected, this raises a related question: 'are the current generation of accessibility APIs as usable as they could be?'

Put another way, if the accessibility APIs themselves are not as usable as they could be, this might reduce the number of software developers who may make use of them, potentially reducing the accessibility of end user applications (and frustrating the users who wish to make use of assistive technologies).

Taking this point, we might ask, 'how could we test (or study) the accessibility of an API?'  Thankfully, some work has already been carried out in this area and it seems to be a field of research that is becoming increasingly active.  A quick search yields a blog post which contains a whole host of useful resources (I recommend the Google TechTalk that is mentioned).  There is, of course, a presentation on this subject that I gave at an Open University conference about two years ago, entitled Connecting Accessibility APIs.

It strikes me that a useful piece of research to carry out is to explore how to conduct studies to evaluate the usability of the various accessibility APIs and whether they might be able to be improved in some way.  We should do whatever we can to try to smooth the development path for developers.  Useful tools, in the form of APIs, have the potential to facilitate the development of useful and accessible products.

And finally...

Towards the end of the day delegates were told about a site called RaisingTheFloor.net (RTF).  RTF is described as a consortium of organizations, projects and individuals from around the world 'that is focused on ensuring that people experiencing disabilities, literacy problems, or the effects of aging are able to access and use all of the information, resources, services and communities available on or through the Web'.  The RTF site provides a wealth of resources relating to different types of assistive technologies, projects and stakeholders.

We were also told about an initiative that is a part of Aegis, called the Open Accessibility Everywhere Group (OAEG).  I anticipate that more information about OAEG will be available in due course.

I also heard about the BBC MyWebMyWay site.  One of the challenges for all computer users is learning and knowing about the range of different ways in which your system can be configured and used.  Sites like this are always a pleasure to discover.

Summary

It's great to go to project dissemination events.  Not only do you learn about what a project aims to achieve, but the presentations can often inspire new thoughts and point you toward new (and interesting) directions.  As well as learning about the Optacon (which I had never heard of before), I also enjoyed the description of the different generations of assistive technologies.  It was also interesting to witness the various demonstrations and be presented with a teasing display of the complexities that very often lie hidden amidst the operating system of your computer.

The presentations helped me to connect the notions of the accessibility API and pervasive computing.  It also reminded me of some research themes that I still consider to be important, namely, the usability of accessibility APIs.  In my opinion, all these themes represent interesting research directions which have the fundamental potential of enhancing the accessibility and usability of different types of technologies.

I wish the AEGIS project the best of luck and look forward to learning more about their research findings.

Acknowledgements

Thanks are extended to Wendy Porch who took the time to review an earlier draft of this post.

Christopher Douce

Formative e-assessment dissemination day

Visible to anyone in the world
Edited by Christopher Douce, Monday, 19 Nov 2018, 10:40

A couple of weeks ago I was lucky enough to be able to attend a 'formative e-assessment' event that was hosted by the London Knowledge Lab.  The purpose of the event was to disseminate the results of a JISC project that had the same title.

If you're interested, the final project report, Scoping a Vision for Formative e-Assessment is available for download.  The slides for this event are also available, where you can also find Elluminate recordings of the presentations.

This blog post is a collection of randomly assorted comments and reflections based upon the presentations that were made throughout the day.  They are scattered in no particular order.  I offer them with the hope that they might be useful to someone!

Themes

The keynote presentation had the subtitle, 'case stories, design patterns and future scenarios'.  These words resonated strongly with me.  Being a software developer, the notion of a design pattern (wikipedia) is one that was immediately familiar.  When you open the Gang of Four text book (wikipedia) (the book that defines them), you are immediately introduced to the 'architectural roots' of the idea, which were clearly echoed in the first presentation.

The idea of a pattern, especially within software engineering, is one that is powerful since it provides software developers with an immediate vocabulary that allows effective sharing of complex ideas using seemingly simple sounding abstractions.  Since assessment is something that can be described (in some sense) as a process, it was easy to understand the objective of the project and see how the principle of a pattern could be used to share ideas and facilitate communication about practice.

Other terms that jumped out at me were 'case stories' and 'scenarios'.  Without too much imagination it is possible to link these words to the world of human-computer interaction.  In HCI, the path through systems can be described in terms of use cases, and the settings in which products or systems could be used can be explored through the deployment of descriptive scenarios and the sketching of storyboards.

Conversational framework

A highlight of the day, for me, was a description of Laurillard's conversational framework.  I have heard about it before but have, so far, not had much of an opportunity to study it in great detail.  Attending a presentation about it and learning about how it can be applied makes a conceptual framework become alive.  If you have the time, I encourage you to view the presentation that accompanies this event.

I'm not yet familiar enough with the model to summarise it eloquently, but I should state that it allows you to consider the roles of teachers and learners, the environment in which the teacher carries out the teaching, and the space where a learner can carry out their own work.  The model also takes into account the conversations (and learning) that can occur between peers.

Representation of the conversational framework which presents space for teacher, learner and other practice and links between teacher, student and peers, indicating the types of conversations that can facilitate learning

During the presentation, I noted (or paraphrased) the words: 'the more iterations through the conversational model you do, the higher the quality of the learning you will obtain'.  Expanding this slightly, you could perhaps restate this by saying, 'the more opportunities to acquire new ideas, reflect on actions and receive feedback, the more familiar a learner will become with the subject that is the focus of study'.

In some respects, I consider the conversational framework to be a 'meta model' in the sense that it can (from my understanding) take account of different pedagogical approaches, as well as different technologies.

Links to accessibility

Another 'take away' note that I made whilst listening to the presentation was, 'learning theories are not going to change, but how these are used (and applied) will change, particularly with regards to technology'.

It was at this point that I began to consider my own areas of research.  I immediately began to wonder, 'how might this model be used to improve, enhance or understand the provision of accessibility?'  One way to do this is to consider each of the boxes and the arrows that are used to graphically describe the framework.  Many of the arrows (those that are not labelled as reflections) may correspond to communications (or conversations) with or between actors.  These could be viewed as important junctures where the accessibility of the learning tools or environments being applied needs to be considered.

Returning to the issue of technology, peers, for instance, may share ideas by posting comments to discussion forums.  These comments could then be consumed by other learners (through peer learning) and potentially permit a reformulation or strengthening of understandings.

Whilst learning technologies can permit the creation of digital learning spaces, such as those available through the application of virtual learning environments, designers of educational technologies need to take account of the accessibility of such systems to ensure that they are usable for all learners.

One of my colleagues is one step ahead of me.  Cooper writes, in a recent blog post, 'Laurillard uses [her framework] to analyse the use of media in learning. However this can be further extended to analyse the accessibility of all the media used to support these different conversations.'  The model, in essence, can be used to understand not only whether a particular part of a course is accessible (the term 'course' is used loosely here), but also to highlight whether there are some aspects of a course that may need further consideration to ensure that it is as fully inclusive as it could be.

Returning to the theme of 'scenario', one idea might be to use a series of case studies to further consider how the framework might be used to reason about the accessibility status of a course.

Connections

There may be quite a few more connections lurking underneath the terms that were presented to the audience.  One question that I asked myself was, 'how do these formative assessment patterns relate to the idea of learning designs?' (a subject that is the focus of a number of projects, including Cloudworks, enhancements to the Compendium authoring tool, the LAMS learning activity management system and the IMS learning design specification).

A pattern could be considered as something that could be used within a part of a larger learning design.  Another thought is that perhaps individual learning designs could be mapped onto specific elements of the conversational model.  Talking in computing terms, it could represent a specific instantiation (or instance).  Looking at it from another perspective, there is also the possibility that pedagogical patterns (whether e-assessment or otherwise) may provide inspiration to those who are charged with either constructing new or using existing learning designs.

Summary

During the course of the day, the audience were directed, on a number of occasions, to the project Wiki.  One of the outcomes of the project was a literature review, which can be viewed on-line.

I recall quite a bit of debate surrounding the differences between guidelines, rules and patterns.  I also see links to the notion of learning designs too.  My understanding is that, depending on what you are referring to and your personal perspective, it may be difficult to draw clear distinctions between each of these ideas.

Returning to the issue of the conversational model being useful to expose accessibility issues, I'm glad that others before me have seen the same potential connection and I am now wondering whether there are other researchers who may have gone even further in considering the ways that the framework might be applied.

In my eyes, the idea of e-assessment patterns and the notion of learning designs are concepts that can be used to communicate and share distilled best practice.  It will be interesting to continue to observe the debates surrounding these terms to see whether a common vocabulary of useful abstractions will eventually emerge.  If they already exist, please feel free to initiate a conversation.  I'm always happy to learn.

Acknowledgements

Thanks are extended to Diana Laurillard who gave permission to share the presentation slide featured in this post.

Christopher Douce

Green Code

Visible to anyone in the world
Edited by Christopher Douce, Friday, 3 Jan 2020, 18:34

Photograph of a beautiful young fern that is unfolding

It takes far too long for my desktop PC to finish booting up every morning.  From the moment I throw the power switch of my aging XP machine to the on position and click on my user name, I have enough time to walk to the kitchen, brew a cup of tea, do some washing and tidying up and drink half my cup of tea (or coffee), before I can begin to load all the other applications that I need to open before settling down to do some work.

I would say it takes over fifteen minutes from the point of power up to being able to do some 'real stuff'.  All this hanging around inevitably sucks up quite a bit of needless energy.  Even though I do have some additional software services installed, such as a database and a peer-to-peer TV application, I don't think my PC is too underpowered (it's a single core running just over a gigahertz with half a gig of memory).

Being of a particular age, I have fond memories of the time when you turned on a computer, the operating system (albeit a much simpler one) was almost instantly available. Ignoring the need to load software from cassettes or big floppy disks, you could start to issue commands and do useful stuff within seconds of powering up.

This is one of the reasons why I like my EEE netbook (wikipedia): if I have an idea for something to write or want to talk to someone or find something out, then I can turn it on and within a minute or two it is ready for use. (As an aside, I remember reading in Insanely Great by Steven Levy (Amazon) that the issue of boot-up time was an important consideration when designing the original Macintosh).

Green Code

These musings make me wonder about the notion of 'green code': computer software that is designed in such a way that it supports necessary functionality by demanding a minimal amount of processor or memory resources. Needless to say, this is by no means an original idea. It seems that other people are thinking along similar lines.

In a post entitled, Your bad code is killing my planet, Alistair Croll writes, 'Once upon a time, lousy coding didn't matter. Coder Joel and I could write the same app, and while mine might have consumed 50 percent of the machine's CPU whereas his could have consumed a mere 10 percent, this wasn't a big deal. We both paid for our computer, rackspace, bandwidth, and power.'

Croll mentions that software is often designed in terms of multiple levels of abstraction. He states that there can be a lot of 'distance and computing overhead between my code and the electricity of each processor cycle'. He goes on to write, 'Architecture choices, and even programming language, matter'. Software architecture choices do matter and abstractions are important.

Green Maintenance

Making code that is efficient is only part of the story. Abstractions allow us to hide complexity. They help developers to compartmentalise and manage the 'raw thought stuff' which is computer code. Well-designed abstractions can give software developers who are charged with working on and maintaining existing systems a real productivity boost.

Code that is easier to read and work with is likely to be easier to maintain. Maintenance is important since some researchers report that maintenance accounts for up to 70% of the cost of a software project.

In my opinion, clean code equals green code. Green code is code that should be easy to understand, maintain and adapt.

Green Challenges

Croll, however, does have a point. Software engineers need to be aware of the effect that certain architectural choices may have on final system performance.

In times when IT budgets may begin to be challenged (even though IT may be perceived as technology that can help to create business and information process efficiencies), the request for an ever more powerful server may be frowned upon by those who hold the budgetary purse strings. You may be asked to do more with less.

This challenge exposes a fundamental computing dilemma: code that is as efficient as it could be may be difficult to understand and work with. Developers have to consider such challenges and walk a careful path of compromise. Just as there is an eternal trade-off between the speed of a system and how much power it consumes, there are also difficult trade-offs to consider in terms of efficiency and clarity, along with the dimensions of system flexibility and functionality.

One of the reasons why Microsoft Vista is considered unpopular is how resource hungry it is in terms of memory, processor speed and disk drive space. Microsoft, it seems, is certainly aware of this issue (InfoWorld).

Turning off some of the needless eye candy, such as neatly shaded three dimensional buttons, can help you to get more life out of your PC. This is something that Ted Samson mentions, before edging towards discussing the related point of power management.

Ted also mentions one of those well known laws of computing. He writes, 'just because there are machines out there that can support enormous system requirements doesn't mean you have to make your software swell to that footprint'. In other words, 'your processor and disk space needs expand to the size of your machine' (another way of saying 'your project expands to the amount of time you have available'!)

Power Budgets

Whilst I appreciate my EEE PC in terms of its quick boot up time, it does have an uncomfortable side effect: it also acts as a very effective lap warmer. Even more surprisingly, its batteries are entirely depleted within slightly over two hours of usage. This is terrible! A mobile device should not be tethered to a mains power supply. It also makes me wonder about whether its incessant demand for power is going to cut short the life of its batteries (which represent their own environmental challenge).

When working alongside electrical engineers, I would occasionally overhear them discussing power budgets, i.e. how much power would be consumed by components of a larger electrical system. In terms of software, both laptops and desktop PCs offer a range of mysterious software interfaces that provide 'power management' functionality. This is something that I have not substantially explored or studied. For me, this is an area of modern PCs that remains a perpetual mystery. It is certainly something that I need to do something about.

Sometimes, the collaboration between software developers and hardware engineers can yield astonishing results. I again point towards the One Laptop per Child project. I remember reading some on-line discussions that described changes that were made to the Linux operating system kernel to make the OLPC device more power efficient. A quick search quickly throws up an Environmental Impact page.

The OLPC device, whether you agree with the objective of the OLPC project or not, has had a significant impact on the design of laptop systems. A second version of the device raises the possibility of netbooks using the energy efficient ARM processor (wikipedia) - the same processor that is used (as far as I understand) in the iPhone and iPod. I, for one, look forward to using a netbook that doesn't unbearably heat up my lap and allows me to do useful work without needlessly wasting time searching for power sockets.

My desktop computer (which was assembled by my own fair hands) produces a side effect that is undeniably useful during the winter months: it perceptibly heats up my room, almost allowing me to dispense with other forms of heating completely (but I must add that a chunky jumper is often necessary). When I told someone else about this phenomenon, I was asked, 'big computer or small room?' The answer was, inevitably, 'small room' (and small computer).

Google

On a related note, I was recently sent a link to a YouTube video entitled Google container data centre tour. It was astonishing (and very interesting!) It was astonishing due to the sheer scale of the installation that was presented, and interesting in terms of the industrial processes and engineering that were described. It reminded me of a news item that was featured in the media earlier this year that related to the carbon cost of carrying out a Google search.

The sad thing about the Google data centre (and, of course, most power plants) is that most of the heat that is generated is wasted. I recently came across this article, entitled Telehouse to heat homes at Docklands. Apparently there are other schemes to use data centres for different kinds of heating.

Before leaving Google alone, you might have heard of a site called Blackle. Blackle takes the Google homepage and inverts it. The argument is that if everyone uses a black search page, large power savings can be made.

Mark Ontkush describes the story of Black Google and others in a very interesting blog post which also mentions other useful ideas, such as the use of Firefox extensions. Cuil (pronounced 'cool') is another search engine that embodies the same idea.

Carbon Cost of Spam

I recently noticed a news item entitled Spam e-mails killing the environment (ITWorld). Despite the headline having a passing resemblance to headlines that you would find on the Daily Mail, I felt the article was worth a look. It references a more sensibly titled report, The carbon footprint of email spam, published by McAfee.

The report is interesting, pointing towards the fact that we may spend a lot of time both reading and processing junk emails that end up in our inbox. The report has an obvious agenda: to sell spam filters. An effective spam filter, it is argued, can reduce the amount of time that email users spend processing spam, thus helping to save the planet (a bit). Spam can fill up email servers, causing network administrators to use bigger disks. To be effective, email servers need to spend time (and energy) filtering through all the messages that are received. I do sense that more research is required.

Invisible Infrastructures

There is a further connection between the challenge of spam and the invisible infrastructure of the internet. Messages to your PC, laptop or mobile device pass through a range of mysterious switches, routers and servers. At each stage, energy is mysteriously consumed and paid for by an invisible set of financial transactions.

My own PC, I should add, is not as power friendly as it could be. It contains two hard disk drives: a main drive that contains the operating system, and a secondary drive that contains backup files and also a 'swap' area. The main reason for the second drive is to gain a performance boost.

Lower power PCs

After asking the question, 'how might I create an energy efficient PC?', I discovered an interesting article from Ars Technica entitled It's easy being green. It describes each of the components of a PC and considers how much power each can draw. The final page features a potential PC setup in the form of 'an extreme green box'.

It is, however, possible to go further. The Coding Horror blog presents one approach: use kit that was intended for embedded systems - a domain where power consumption is high on the design agenda. An article, entitled Building Tiny, Ultra Low Power PCs is a fun read.

Both articles are certainly worth a view. One other cost that should be considered, however, is the cost of manufacturing (and also recycling) your existing machine. I don't expect to change my PC until the second service pack for Windows 7 is released. It's going to be warming my room for quite some time, but perhaps the carbon consumption stats relating to PC manufacture and disposal are out there somewhere and may help me to make a decision.

Concluding thoughts

Servers undeniably cost a lot of money not only in terms of their initial purchase price, but also in terms of how much energy they consume over their lifetime.

Efficient software has the potential to reduce server count, allowing more to be achieved with less. Developers should aspire to write code that is as efficient as possible, and take careful account of the underlying software infrastructures (and abstractions) that they use. At the heart of every software development lies a range of challenging compromises. It often takes a combination of experience and insight to figure out the best solution, but it is important to take account of change, since the majority of the lifetime of any software system is likely to be spent in the maintenance phase of a project.

The key to computing energy reduction doesn't only rest with computer scientists, hardware designers and software engineers. There are wider social and organisational issues at play, as Samson hints at in an article entitled No good excuses not to power down PCs. The Open University has a two page OU Green computing guide that makes a number of similar points.

One useful idea is to quantify computer power in terms of megahertz per milliwatt (MPMs) instead of millions of instructions per second (MIPS) - I should add that this isn't my idea and I can't remember where it came from! It might be useful to try to establish a new aspirational computing 'law'. Instead of constantly citing Moore's law, which states that the number of transistors should double every two years, perhaps we need to edge towards proposing a law that demands a reduction in power consumption whilst maintaining 'transactional performance'. In this era of multi-core multi-function processors, this is likely to be a tough call, but I think it's worth a try.
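
To make the suggestion a little more concrete, here is a minimal sketch (in Python, with entirely invented figures) of how such a measure might be calculated and compared between two hypothetical machines:

  # A sketch using hypothetical figures: compare two machines by
  # 'megahertz per milliwatt' rather than by raw clock speed alone.
  def mhz_per_milliwatt(clock_mhz, power_watts):
      return clock_mhz / (power_watts * 1000.0)

  desktop = mhz_per_milliwatt(clock_mhz=3000, power_watts=120)  # roughly 0.025
  netbook = mhz_per_milliwatt(clock_mhz=1600, power_watts=15)   # roughly 0.107

  print(desktop, netbook)  # the slower machine wins on this measure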

One other challenge is whether it might be possible to crystallise what is meant by 'green code', and whether we can explore what it means by constructing good or bad examples. The good examples will run on low powered, slower hardware, whereas the bad examples are likely to be sluggish and unresponsive. Polling (constantly checking to see whether something has changed) is obviously bad. Ugly, inelegant, poorly structured and hard to read (and hard to change) code could also be placed in a box named 'bad'.
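
To illustrate the polling point, the following sketch (a simple Python illustration, not a recipe) contrasts a loop that repeatedly checks for a change with a version that blocks until it is notified; the first keeps waking the processor, the second lets it idle:

  import threading
  import time

  data_ready = threading.Event()

  # 'Bad' green code: poll in a loop, waking the processor again and again.
  def wait_by_polling():
      while not data_ready.is_set():
          time.sleep(0.01)  # wakes a hundred times a second, doing nothing useful

  # 'Better' green code: block until notified, consuming almost nothing while waiting.
  def wait_by_blocking():
      data_ready.wait()  # sleeps until another thread calls data_ready.set()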

A final challenge lies with whether it might be possible to explore what might be contained within a sub-discipline of 'green computing'. It would be interesting to see where this might take us.

Acknowledgements: Photograph of a fern, entitled 'Mandelbrot En Vert', has been liberated from Flickr. It is licensed under creative commons by GaijinSeb.

Christopher Douce

Retro learning technology

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:15

A photograph of a speak and spell toy

I was recently told about a conference called Game Based Learning.  Although I wasn't able to attend in person (and besides, my main research interests are perhaps somewhat tangential to the topic), the subject remains of perpetual interest.  One of my colleagues, Liam Green-Hughes, who was lucky enough to be there a couple of weeks ago, has written a comprehensive blog post describing some of the themes and presentations.

The appearance of this conference made me begin to reflect on my own experiences of what 'game based learning' means to me.  It soon struck me that much of my experience is from an earlier generation of learning technologies (and I'm not just talking about books!)  This post represents a personal history (perhaps a 'retro' history) of using a number of different 'learning' games and devices.  I hope, for some, it evokes some fun memories!

Early mobile learning

My first exposure to mobile learning (or learning games) was through a device called Spelling B made by Texas Instruments (now more famous for their DSP processors).

The device presented a truly multi-modal experience.  Not only did it contain an alphabetic keyboard, it was also adorned with bright colours and came complete with a picture book.  Quite a proportion of the picture book was initially unfathomable since you had to translate the pictures into correctly spelt words.

For quite a long time after using the device, I continued to spell 'colour' incorrectly (or correctly, depending upon your point of view), and believed that all people with peaked hats (who were not witches) were Pilgrims (the historical importance of which was totally lost on a naive British child).

If you spelt some words correctly you got a chirpy bleeping (or buzzing) tune.  If you spelt something incorrectly, you were presented with a darker rasping noise.

After around two months of playing, it was game over.  I was able to spell, by rote, the one hundred and twenty seven words (or close to it), none of which probably had more than eight characters.

One Christmas I was lucky enough to graduate to the more elegantly designed and ultimately more mind blowing Speak and Spell (wikipedia).  It was astonishing to learn that something with batteries could speak to you!  Complete with integral handle, rugged membrane board (which you could spill your orange squash onto), an audio connection to prevent parents from going potty (like mine did), and a battery charge length that would put any modern laptop to shame.  In my view, the Speak and Spell is a design triumph and you might be able to see some similarities with the OLPC XO computer (if you squint hard enough).

To this day, I remember (without looking at simulations) its rhythmic incantations of 'that is correct! And now spell...' which punctuated my own personal successes and failures.

Learning technology envy

This era of retro learning technology didn't end with the Speak and Spell.  One Christmas, Santa gave a group of kids down my street a mobile device called the Little Professor, also from Texas Instruments.  Pre-dating the Nintendo DS by decades, this little hand-held beauty presented a true advance in learning technology.  When you got a calculation right, the Little Professor's moustache started to jump around in animated delight.  It also incidentally had one of those new fangled Liquid Crystal Display screens rather than the battery hungry LED readouts (but this was inconsequential in comparison to the hilarious moustache).

Learning technology envy was a real phenomenon.  I remember that a Texas Instruments Speak and Maths device was undoubtedly more exciting (and desirable to play with) than a lowly Little Professor.  Game based learning was a reality, and parent pester power was definitely at work.  For those kids who had parents who were well off, the pinnacle of learning technology envy (when I was growing up) manifested itself in the form of the programmable and mysterious Big Trak.

Big Trak inspired a huge amount of wonderment, especially for those of us who had never been near a logo turtle.  The marketing was good, as the advertisements of the time (video) testify.  It presented kids with the opportunity to consider the challenge of creating stored programs.

Learning with the Atari

A number of my peers were Atari 2600 gamers.  As well as being enthralled at the prospect of blasting aliens, driving racing cars and battling with dragons represented by pixels the size of small coins, I gradually became aware of a range of educational games that some retailers were starting to sell.

A number of educational Sesame Street game cartridges were created, presumably in close collaboration with Atari.  I personally found them rather tedious and somewhat unexciting, but an interesting innovation was the presence of a specially designed 'Kids controller'.  Each game was provided with a colourful overlay which only presented the buttons that should be used.  (As was the case with some Atari games, the box art and instruction leaflets could arguably be more exciting than the game itself).

I have no real idea whether any substantial evaluations were carried out to assess either the user experience of these products, or whether they helped to develop motor control.

Behold!  The personal computer...

My first memory of an educational game that was presented through a personal computer was a business game that ran on the BBC Model B.  The scenario was that you were the owner of a candy floss store (an obvious attraction for kids) and you had to buy raw materials (sugar) and hope that the weather was good.  If it was good, you made money.  If it was bad, you didn't.  I must add I used this game when Thatcherism was at its peak.  This incidental memory connects with wider issues relating to the link between game deployment and wider educational policy, but I digress...

Whilst using the 'candy floss game' I don't have any recollection of having to spend extra money on petrol for the generator, pay the council rent or contend with price rises every year, but I'm sure there was a cost category called 'overheads' that featured somewhere.  I'm also pretty sure you could set your own prices for your products and be introduced to the notion of 'breaking even'.

The BBC Model B figured again in my later education when I discovered a business studies lab that was packed with the things, and an occasional Domesday machine, running on an exotically modified BBC Master 128.  The Domesday project pointed firmly towards the future.  I remember a lunch hour passing in a flash whilst I attempted to navigate through an animated three dimensional exhibition that represented a portal to a range of different encyclopaedic themes.

During business studies classes we didn't use the Beebs (as they were colloquially known) for candy floss stall simulations, but instead we used a program called Edword (educational word processor), a spreadsheet and a simple database program.  When we were not using these applications, we were using the luxury of a disk drive to play Elite (wikipedia).  A magical galaxy drawn in glorious 3D wireframe taught us about commodity trading and law enforcement.

Sir Clive

For a while the Sinclair Spectrum (a firm favourite amongst my peers) was sold with a bonus pack of cassettes.  Two memorable ones included an application called Make-A-Chip which allowed you to draw sets of logic gate circuits.  You couldn't do much more than create a binary adder, but messing around with it for a couple of hours really helped when it came to understanding these operators later on, when working with real programming languages.

I also have recollections of using a simulation program that allowed you to become a fox or a rabbit (or something else) and forage for food.  As the seasons (and years) changed, the availability of food fluctuated due to 'natural' changes in population.  I never did get the hang of that game: it was a combination of being too slow and not having sufficiently engaging feedback for it to be attractive.

Assessing their impact

If I was asked whether I learnt anything by using the beeping and speaking mobile devices that were mentioned earlier, I couldn't give you an honest answer.  I suspect I probably did, and the biggest challenge that researchers face isn't necessarily designing a new technology (which is a challenge that many people grossly underestimate), but understanding the effect that the introduction of a particular technology has, and ultimately, whether it facilitates learning.

There is, of course, also a social side to playing learning games (and I write this without any sense of authority!).  I remember my peers showing me how to use a Little Professor, and looking at an on-screen 'candy floss sales' graph and feeling thoroughly puzzled at what was being presented to me.  A crowd of us used to play that game.  Some players, I seem to remember, were more aggressive traders than others.

Towards the future

History has yielded an interesting turn of events.  I spent many a happy hour messing around on a BBC Model B (albeit mostly playing Elite) and later marvelled at its ultimate successor, the Acorn Archimedes (some of the time playing Zarch, written by the same author as Elite).

I remember my fascination at discovering it was possible to write programs using the BBC Basic programming language (version 5) without using line numbers.  Acorn Computers eventually folded but left a lasting legacy in the form of ARM (Acorn RISC Machine), a company that sells its processor designs, which have found their way into a whole range of different devices: phones, MP3 players and PDAs.

I recently heard that the designers of the One Laptop Per Child project were considering using ARM-based processors in the second generation designs.  The reason for this choice is the need to pay careful attention to power consumption and the fact that the current processor will not be subject to further on-going development by AMD. (One loosely connected cul-de-sac of history lies with the brief emergence of the inspirational and sadly ill-fated Apple eMate, which was also ARM-based).

One dimension of learning games that is outside the immediate requirements of developing skills or knowledge in a particular subject, lies with their potential to engender motivation and enthusiasm.

It's surprising how long electricity powered educational video games and devices have been around for.  It's also surprising that it was possible to do so much with so little (in terms of processing power and memory).  Educational gaming, I argue, should not be considered a revolutionary idea.  I personally look forward to learning about what new evolutionary developments are waiting for us around the corner and what they may be able to tell us about how we learn.

Christopher Douce

Learning technology and return on investment

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:13

Image of a man playing a tired old piano in front of a building site that has warning signs.  One says, 'danger, men working overhead'.

A couple of years ago I attended a conference held by the Association of Learning Technology (ALT).  I remember a riveting keynote that asked the members of the audience to consider not only the design and the structure of new learning technologies, but also the benefits that they can offer institutions, particularly in terms of costs.  I remember being reminded that technology (in terms of educational technology) need not be something that is considered in terms of 'high technology'.  A technology could be a bunch of cards tied to a desk chair!

I came away with a message that it is important to try to assess the effectiveness of systems that we construct, and not to get ahead of ourselves in terms of building systems that might not do the job they set out to do, or in fact may even end up solving the wrong problem.

When attending the learning technologies exhibition a couple of months ago the notions of 'return on investment' and 'efficiency' were never too far away.  E-learning, it is argued, can help companies to develop training capacities quickly and efficiently and allow company employees to prepare for change.

I might have been mistaken but I think I may have heard the occasional presenter mentioning some figures.  The presenters were saying things like, 'if you implement our system in this way, it can help you save a certain number of millions of pounds, dollars or euros per year'.  Such proclamations made me wonder, 'how do you go about measuring the return on investment into e-learning systems?' (or any other learning technology system, for that matter).

I do not profess to be an expert in this issue by any means.  I am aware that many others (who have greater levels of expertise than myself) have both blogged and written about this issue at some length.  I hope that sharing my own meagre notes on the subject might make a small contribution to the debate.

Measuring usefulness

Let's say you build a learning technology or an e-learning tool.  You might create an on-line discussion forum or even a hand held classroom response system.  You might have created it with a particular problem (or pedagogic practice) in mind.  When you have built it, it might be useful to determine whether the learning technology helps learners to learn and acquire new understandings and knowledge.

One way of conducting an evaluation is to ask learners about their experience.  This can allow you to understand how it worked, whether it was liked, what themes were learnt and what elements of a system, product or process might have inspired learners.

Another way to determine whether a technology is effective is to perform a test.  You could take two groups, one that has access to the new technology and one that does not, and see which group performs better.  Of course, such an approach can open up a range of interesting ethical issues that need to be carefully negotiated and conquered.
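
As a rough sketch of how such a comparison might be analysed (the scores below are invented, and this glosses over the ethical and methodological care that would be needed), a standard statistical test could be applied to the two groups:

  # A sketch with invented scores: compare two groups of learners using a t-test.
  from scipy import stats

  with_technology = [72, 65, 80, 77, 69, 74, 81]
  without_technology = [68, 60, 71, 66, 73, 64, 70]

  t_statistic, p_value = stats.ttest_ind(with_technology, without_technology)
  print(t_statistic, p_value)  # a small p-value suggests the difference is unlikely to be chance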

Dimensions of measurement

When it comes to large e-learning systems, the questions that can uncover learners' experience can relate to a 'low level' understanding of how learning technologies are used and applied.  Attempting to measure the success of a learning technology or e-learning system for a whole department or institution could be described as a 'high level' understanding.  It is this 'high level' of understanding that relates to the theme of how much money a system may help to save (or cost) an organisation.

Bearing in mind that organisations are very unlikely to carry out experiments, how is it possible to determine how much return on investment an e-learning system might give you?  This is a really tough question to answer since it depends totally on the objectives of a system.  The approach taken to measure the return on investment of a training system is likely to be different to one that has been used to instil institutional values or create new ways of working (which may or may not yield employee productivity improvements).

When considering the issue of e-learning systems that aim to train (I'm going to try to steer clear of the debates around what constitutes training and what constitutes education), the questions that you might ask include:

  • What are the current skills (or knowledge base) of your personnel?
  • What are the costs inherent in developing a training solution?
  • What are the costs inherent in rolling this out to those who need access?

Another good question to ask is: what would you have to do to find out the same information if you had not invested in the new technologies?  Related questions are: would there be any significant travel costs attached to finding out the information?  And would it be possible to measure the amount of disruption that might take place if you had to ask other people for the information that you require?

These questions relate to actions that can be measured.  If you can put a measure on the costs of finding out key pieces of information before and after the implementation of a system, you might be able to edge towards figuring out the value of the system that you have implemented.  What, for example, is the cost of running the same face to face training course every year as opposed to creating a digital equivalent that is supported by a forum and an on-line moderator?  You need to factor in issues such as how much time it might take for a learner to take the course.  Simply providing an on-line course is not enough.  Its provision needs to be supported and endorsed by the organisation that has decided to sponsor the development of e-learning.
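
To make the comparison slightly more tangible, here is a very simple sketch (all figures invented for illustration) that compares the recurring cost of running a face to face course against a digital equivalent with an up-front development cost:

  # A sketch with invented figures: annual cost of face to face training
  # versus an e-learning equivalent with a one-off development cost.
  def face_to_face_cost(learners, trainer_day_rate, travel_per_learner, days=1):
      return days * trainer_day_rate + learners * travel_per_learner

  def elearning_cost(learners, development_cost, moderation_per_learner, years=1):
      return development_cost + years * learners * moderation_per_learner

  # 200 learners per year, over three years (all numbers hypothetical).
  f2f = 3 * face_to_face_cost(200, trainer_day_rate=800, travel_per_learner=50)
  onl = elearning_cost(200, development_cost=20000, moderation_per_learner=15, years=3)

  print(f2f, onl)  # 32400 versus 29000 - the picture changes with the assumptions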

The above group of points represents a rather simplistic view.  The introduction of a learning technology system may also facilitate the development of new opportunities that were perhaps not previously possible.  'Up skilling' (or whatever it is called) in a limited amount of time may enable employees to respond to a business opportunity that could not have been exploited without the application of e-learning.

Other themes

Learning technologies are not only about the transmission of information (and knowledge) between a training department and their employees.  They can also have a role to play in facilitating the development of a common culture and strengthening bonds between work groups.

Instances of success (or failure) can be shared between fellow employees. Details of new initiatives or projects may be disseminated through a learning system.  The contents of the learning system, as a result, may gradually change as a result of such discussions.

The wider cultural perspectives that surround the application of learning technologies, in my humble opinion, are a lot harder to quantify.  It's hard to put a value on the use of a technology to share information (and learning experiences) with your co-workers.

Related resources

A quick search takes me to the wikipedia definition of ROI and I'm immediately overwhelmed by detail that leaves my head spinning.

Further probing reveals a blog entitled ROI and Metrics in eLearning by Tony Karrer who kindly provides a wealth of links (some of which were updated in April 2008).  I have also uncovered a report entitled What Return on Investment does e-Learning Provide? (dated July 2005) (pdf) prepared by SkillSoft found on a site called e-Learning Centre.

Summary

The issue of return on investment for e-learning and learning technology is one that appears, at a cursory glance, to be quite difficult to understand thoroughly.  Attaching numbers to the costs and benefits of any learning technology is difficult to do well or precisely.  This can be partly attributed to the nature of software: often, so much of the costs (whether it be in terms of administration or maintenance) or benefits (the ability to find things out quicker and collaborate with new people) can be hidden amongst detail that needs to be made clearly explicit to be successfully understood.

When it comes to designing, building and deploying learning technology systems, the idea of return on investment is undoubtedly a useful concept, but those investing in systems should consider issues beyond the immediately discoverable costs and benefits since there are likely to be others lurking in the background just waiting to be discovered.

Acknowledgements: Image licensed under creative commons, liberated via Flickr from Mr Squee.  Piano busker in front of building site represents two types of investments: a long term investment (as represented by the hidden building site), and a short term investment (using the decrepit piano to busk).  The unheard music represents hidden complexities.

Christopher Douce

Inclusive Digital Economy Network event

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 28 June 2023, 10:28

I recently attended an event that was hosted by the Inclusive Digital Economy Network.  The network, led by the University of Dundee, comprises a variety of groups who wish to collectively ensure that people are able to take advantage of digital technologies.

The event was led by Prof Alan Newell from Dundee.  Alan gracefully introduced a number of keynote speakers: the vice-chancellor of City University, the dean of Arts and Social Sciences, and representatives from the government and the funding body, the EPSRC.

Drama

One really interesting part of the day was the use of 'theatre' to clearly illustrate the difficulties that some people can have when using information technology.  I had heard about the use of drama when speaking to people from Dundee before, but this was the first time I was able to witness it.  In fact, I soon found out that I was going to witness a film premiere!

After the final credits had appeared, I was surprised to discover that two of the actors who played central roles in the film were in the audience.  The film was not the end of the ‘theatre’ event, it was the beginning.  The actors carried out an improvisation (using questions from the audience) that was based upon the roles we had been introduced to through the film.

The notion of drama and computing initially seemed to me to be a challenging combination, but any scepticism I had very quickly dissipated once the connections between the two areas became plainly apparent.  Drama and theatre rely on characters.  Computer systems and technologies are ultimately used by people.  The frustrations that people encounter when they are using computer systems manifest themselves in personal (and collective) dramas that might range from uttering the occasional expletive when your machine doesn't do what it is supposed to do, to calling up a call centre to harass an equally confused call centre operative.

The lessons of the 'computing' or 'user' theatre were clear to see: users should be placed centre stage when we think about the design of information systems.  They may understand things in ways that designers of systems may not have imagined.  A design metaphor that might make complete sense to an architect may seem completely nonsensical to an end user who has a totally different outlook and background.  Interaction design techniques such as creating end user personas are powerful tools that can expose these differences and help to create more usable systems.

Debates

I remember a couple of important (and interesting) themes from the day.  One theme (that was apparent to me) was the occasional debate about the necessity to ensure that users are involved with the design of systems from the outset to ensure that any resulting products and systems are inclusive (user led design).  This connected to a call to 'keep the geeks from designing things'.  In my view, users must be involved with the creation of interactive systems, but the 'geeks' must be included too.  The reason for this is that the geeks may imagine functionality that the users might not be aware exists.  This argument underlines the interdisciplinary nature of interaction design (wikipedia).

Much of the focus of the day was about how technology can support elderly people and how to create technologies and pedagogies that can promote digital inclusion.  Towards the end of the day there was a panel discussion with representatives from Help the Aged, a UK government organisation called the Technology Strategy Board, the BBC, OFCOM and the University of York.

Another theme that I remember relates to the cost of both computing and assistive technologies.  There was some discussion about the possibility of integrating internet access within set top boxes (and a couple of comments relating to the Digital Britain report that was recently published by the UK government).  There was also some discussion about the importance of universal design (wikipedia) and its tensions with personalised design (which connects to some of the themes underpinning the EU4ALL project).

Another recollection from the event was that some presenters stated that although there is much excellent work happening within the academic community (and within other organisations) some of the lessons learnt from research are often not taken forward into policy or practice.  This said, it may be necessary to take the recommendations from a number of different research projects to obtain a rich and complete understanding of a field before fully understanding how policy might be positively influenced.  The challenge is not only combining and understanding the results from different projects, but communicating the results.

Summary

Projects such as the Inclusive Digital Economy Network, from my outsider's perspective, attempt to bridge the gaps between different stakeholders and facilitate a free exchange of ideas and experiences that may point towards areas of investigation that can allow us to learn more about how digital technologies can make a difference to us all.

Acknowledgements: many thanks are extended to the organisers of the event – an interesting day!

Christopher Douce

Source code accessibility through audio streams

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 28 June 2023, 10:28

A screenshot of some source code being edited by a software developer

One of my colleagues volunteers for the Open University audio recording project.  The audio recording project takes course material produced by course teams and makes audio (spoken) equivalents for people with visual impairments.  Another project that is currently underway is the digital audio project which aims to potentially take advantage of advances in technology, mobile devices and international standards.

Some weeks ago, my colleague tweeted along the lines of 'it must be difficult for people with visual disabilities to learn how computer programs are written and structured' (I am heavily paraphrasing, of course!)  As soon as I read this tweet I began to think about two questions.  The first question was: how do I go about learning how a fragment of source code works? and secondly, what might be the best way to convert a function or a 'slice' of programming code into an audio representation that helps people to understand what it does and how it is structured?

Learning from source code

How do I learn how a fragment of source code works?  More often than not I view code through an integrated development environment, using it to navigate through the function (or functions) that I have to learn.  If I am faced with some code that is really puzzling I might reach for some search tools to uncover the connections between different parts of the system.

If the part of the code that I am looking at is quite small and extremely puzzling, I might go as far as grabbing a pen and paper and begin to sketch out some notes, taking down some of the expressions that appear to be troubling and maybe splitting these apart into their constituent components.  I might even try to run the various code fragments by hand.  If I get really confused I might use the 'immediate' window of my development environment to ask my computer to give me some hints about the code I am currently examining.

When trying to understand some new source code my general approach is to try to have a 'conversation' with it, asking it questions and looking at it from a number of different perspectives.  In the psychology of programming literature some researchers have written about developers using 'top down' and 'bottom up' strategies.  You might have a high level hypothesis about what something does on one hand, but on the other, sections of code might help you to understand the 'bigger picture' or the intentions behind a software system.

In essence, I think understanding software is a really hard task.  It is harder and more challenging than many people seem to imagine.  Not only do you have to understand the language that is used to describe a world, but you also have to understand the language of the world that is described.  The world of the machine and the world of the problem are intrinsically and intimately connected through what can sometimes seem an abstract collection of words and symbols.  Your task, as a developer, is to make sense of two hidden worlds.

I digress slightly... If learning about computer programming code is a hard task, then it is likely to be harder still for people with visual impairments.  I cannot imagine how difficult it must be to be presented with a small computer program or a function that has been read out to you.  Much of the 'secondary notation', such as tabbing and white space, can be easily lost if there are no mechanisms to enable it to be presented through another modality.  There is also the danger that your working memory may become quickly overwhelmed with the names of identifiers and unfamiliar sounding functions.

Assistive technology for everyone

The tasks of learning the fundamentals of programming and learning about a program are different, yet related.  I have heard it said that people with disabilities are given real help if technologies are created that are useful for a wide audience.  A great example of this is optical character recognition.  Whilst OCR technology can save a great deal of typing, it has also created tools that enable people with low vision to scan and read their post.

Bearing the notion of 'a widely applicable technology' in mind, could it be possible to create a system that creates an interactive audio description that could potentially help with the teaching of some of the concepts of computer programming for all learners?

Whenever I read code I immediately begin to translate it into my own 'internal' notation (using different types of memory, both internal and external - such as scraps of paper!) to iteratively internalise and make sense of what I am being presented with.  Perhaps equivalents of programming code could be created in a form that could be navigated.  Code is not something that you read in a linear fashion - code is something you work with.

If an interesting and useful (and interactive) audio equivalent of programming code could be created, then there might be the potential for these alternative forms to be useful to all students, not only to learners who necessarily require auditory equivalents.

Development directions

There are a number of tools that could help us to create what might amount to 'interactive audio descriptions of programming code'.  The first is the idea of plan or schema theory (wikipedia) – the notion that your understanding of something is drawn from previous experience.  Some theorists from the Psychology of Programming have extended and drawn upon these ideas, positing ideas such as 'beacons': key lines of code that hint at what a program does.

Another is Green's Cognitive Dimensions framework (wikipedia).  Another area to consider looking at is the interesting sub-field of Computer Science Education research.  There must be other tools, frameworks and ideas that can be drawn upon.

Have you got a sec?

Another approach that I sometimes take when trying to understand something is to ask other more experienced people for help.  I might ask the question, 'what does this section represent?' or, 'what does this section do?'  The answers from colleagues can be instrumental in helping me to understand the purpose behind fragments of programming code.

Considering browsing

I can almost imagine what could be an audio code browser that has some functionality that allows you to change between different levels of abstraction.  At one level, you may be able to navigate through sets of different functions and hear descriptions of what they are intended to do and hope to receive by way of parameters (which could be provided through comments).  On another level there may be summaries of groups of instructions, like loops, with descriptions that might sound like, 'a foreach loop that contains four other statements and a call to two functions'.  Finally, you may be able to tab into a group of statements to learn about what variables are manipulated, and how.
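
As a rough sketch of the kind of middle-level summary I have in mind (written in Python purely as an illustration, and making no claims about how a real audio code browser would work), a description of each loop could perhaps be generated by walking the syntax tree of a program:

  # A sketch: produce one-line, spoken-style summaries of the loops in a small
  # Python function by walking its syntax tree with the standard ast module.
  import ast
  import inspect

  def total(values):
      result = 0
      for v in values:
          result = result + v
          print(v)
      return result

  tree = ast.parse(inspect.getsource(total))
  for node in ast.walk(tree):
      if isinstance(node, (ast.For, ast.While)):
          kind = 'for' if isinstance(node, ast.For) else 'while'
          calls = sum(isinstance(n, ast.Call) for n in ast.walk(node))
          print('a', kind, 'loop that contains', len(node.body),
                'statements and', calls, 'call(s) to functions')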

Of course this is all very technical stuff, and it could be stuff that has already been explored before.  If you know of similar (or related) work, please feel free to drop me a line!

Acknowledgement: random image of code by elliotcable, licenced under creative commons, discovered using Flickr.

Christopher Douce

Exploring Moodle forums

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:08

A set of spanners loosely referring to moodle tools and debugging utilities

Following on from the previous post, this post describes my adventures into the Moodle forums source code.

Forums, I understand, can be activities (a Moodle term) that can be presented within individual weeks or topics. I also know that forums can be presented through blocks (which can be presented on the left or right hand side of course areas).

To begin, and remembering the success that I had when trying to understand how blocks work, I start by looking at what the database can tell me and quickly discover quite a substantial number of tables.  These are named: forum (obviously), forum_discussions, forum_posts, forum_queue, forum_ratings (ratings is not something that I have used within the version of Moodle that I am familiar with), forum_read, forum_descriptions, forum_subscriptions and forum_track_prefs.

First steps

Knowing what some of the data tables are called, I put aside my desire to excitedly eyeball source code and sensibly try to find some documentation.

I begin by having a look at the database schema introduction page (Moodledocs), but find nothing that immediately helps.  I then discover an end user doc page that describes the forum module (and the different types of forum that are on offer in Moodle).  I then uncover a whole forum documentation category (Moodledocs) and I'm immediately assaulted by my own lack of understanding of the capabilities system (which I'll hopefully blog about at some point in the future – one page that I'll take note of here is the forum permissions page).

From the forums category page I click on the various 'forum view pages', which hints that there are some strong connections with user settings.

Up to this point, what have I learnt?

I have learnt that Moodle permits only certain users to carry out certain actions in Moodle forums.  I have also learnt that Moodle forums have different types.  These, I am led to believe (according to this documentation page) are: standard, single discussion, each person posts one discussion, and question and answer.  I'm impressed:  I wasn't expecting so much functionality!

So, can we discover any parallels with the database structures?

The forum table contains fields which are named: course, type, name, description, followed by a whole other bunch of fields I don't really understand.  The course field associates a forum with a course (I'm assuming that somewhere in the database there will be some data that connects the forum to a particular part or section of a course) and the type (which is, interestingly, an enumerated type) can hold data values that roughly represent the forum types that were mentioned earlier.

A brief look at the code

I remember that the documentation that I uncovered told me that 'forums' was a module. In the 'mod' directory I notice a file called view.php.  Other interesting files are named: post.php, lib.php, search.php and discuss.php.  View.php seems to be one big script which contains a big case statement in the middle.  Post.php looks similar, but has a beguiling sister called post_form which happens to be a class.  Lib, I discover, is a file of mystery that contains functions and fragments of SQL and HTML.  Half of the search file seems to retrieve input parameters, and discuss is commented as, 'displays a post, and all the posts below it'.

Creating test data

To learn more about the data structures I decide to create some test data by creating a forum and making a couple of posts.  I open up an imaginatively titled course called 'test' and add an equally imaginatively titled forum called 'test forum'.  When creating the forum I'm asked to specify a forum type (the options are: single simple discussion, Q and A forum, standard forum for general use).  I choose the standard forum and choose the default values for aggregate type and time period for blocking.  The aggregate type appears to be related to functionality that allows students to grade or rate posts.

When the forum is live, I then make a forum post to my test forum that has the title 'test post'.

Reviewing the database

The action of creating a new forum appears to have created a record in the forum table which is associated to a particular course, using the course id.  The act of adding a post to the test forum has added data to forum_discussions, where the name field corresponds to the title of my thread: 'test post'.  A link is made with the forum table through a foreign key, and a primary key keeps track of all the discussions held by Moodle.

The forum_posts table also contains data.  This table stores the text that is associated with a particular post.  There is a link to the discussion table through a discussion id number.  Other tables that I looked at included forum_queue (not quite sure what this is all about yet), forum_ratings (which probably stores stuff depending on your forum settings), and forum_read, which simply stores an association between user id, forum id, discussion id and post id.

One interesting thing about forums is that they can have a recursive structure (you can send a reply to a reply to a reply and so on).  To gain more insight into how this works, I send a reply to myself which has the imaginative content, 'this is a test post 2'.

Unexpectedly, no changes are made to the forum_discussions table, but a new entry is added to the forum_posts table.  To indicate hierarchy a 'parent' field is populated (where the parent relates to an earlier entry within the forum_posts table).  I'm assuming that the sequence of posts is represented by the 'created' field which stores a numerical representation of the time.
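
To check my own understanding of the parent field, it helps to sketch how a threaded view might be rebuilt from records like these (a rough Python illustration with made-up rows, not Moodle's own code):

  # A sketch with made-up rows: rebuild a discussion's reply hierarchy from
  # forum_posts-style records using the 'parent' field.
  posts = [
      {'id': 1, 'parent': 0, 'subject': 'test post'},
      {'id': 2, 'parent': 1, 'subject': 'Re: test post'},
      {'id': 3, 'parent': 2, 'subject': 'this is a test post 2'},
  ]

  def print_thread(parent_id=0, depth=0):
      for post in posts:
          if post['parent'] == parent_id:
              print('  ' * depth + post['subject'])
              print_thread(post['id'], depth + 1)

  print_thread()  # prints the posts indented according to their position in the thread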

Tracing the execution flow

These experiments have given me with three questions to explore:

  1. What happens within the world of Moodle code when the user creates a new forum?
  2. What happens when a user adds a new discussion to a forum?
  3. What happens when a user posts a reply?

Creating a new forum

Creating a new forum means adding an activity.  To learn about what code is called when a forum is added, I click on 'add forum' and capture the URL.  I then give my debugger the same parameters that are called (id, section, sesskey and add) and then begin to step through the course/mod.php script.  The id number seems to relate to the id of the course, and the add parameter seems to specify the type of the activity or resource that is to be added.

I quickly discover a redirect to a script called modedit.php, where the parameters add=forum, type= (empty), course=4, section=1, return=0.  To further understand what is going on, I stop my debugger and start modedit.php with these parameters.

There is a call to the database to check the validity of the course parameter, fetching of a course instance, something about the capability, fetching of an object that corresponds to a course section (call to get_course_section in course/lib code).   Data items are added to a $form variable (which my debugger tells me is a global).  There is then the instantiation of a class called mod_forum_mod_form (which is defined within mod/forum/mod_form.php).  The definition class within mod_forum_mod_form defines how the forum add or modification form will be set out.  There is then a connection between the data held within $form and the form class that stores information about what information will be presented to the user.

After the forum editing interface is displayed, clicking the 'save and return to course' button (for example) causes a postback to the same script, modedit.php.  Further probing around reveals a call to forum_add_instance within forum/lib.php (different activities will have different versions of this function) and forum_update_instance.  At the end of the button clicking operation there is then a redirect to a script that shows any changes that have been made.

The code to add a forum to course will be similar (in operation) to the code used to add other activities.  What is interesting is that I have uncovered the classes and script files that relate to the user interface forms that are presented to the user.

Adding a new discussion

A new discussion can be added by clicking on the 'Add a new discussion topic' button once you are within a forum.  The action of clicking on this button is connected to the forum/post.php script.  The main parameter associated with this action is the forum number (forum=7, for example).

It's important to note the use of the class mod_forum_post_form contained within post_form.php, which represents the structure of the form that the user enters discussion information into.

The code checks the forum id and then finds out which course it relates to.  It then creates the form class (followed by some further magic code that I quickly stepped through).

The action of clicking on the 'post to forum' button appears to send a post back (along with all of the contents of the form) to post.php (the same script used to create the form).  When this occurs, a message is displayed and then a redirect occurs to the forum view summary.  But where in the code is the database updated?  One way to find out is to begin with a search for the redirect.  Whilst browsing through the code I stumble across a comment that says 'adding a new discussion'.  The database appears to be updated through a call to forum_add_discussion.

Posting a reply to a discussion

The post.php script is also used to save replies to discussions (as well as adding new discussions) to the database.  When a user clicks on a discussion (from a list of discussions created by discuss.php), the links to send replies are represented by calls to post.php with a reply parameter (along with a post number, i.e. post.php?reply=4).  The action of clicking on this link presents the previous message, along with the form where the user can enter a response.

Screen grab of user sending a reply to a forum discussion

To learn more about how this code works, I browse through the forums lib file and uncover a function called forum_add_new_post.  I then search for this in post.php and discover a portion of code that handles the postback from the HTML form.  I don't explore any further, having learnt (quite roughly) where various pieces of code magic seem to lie.

Summary

The post.php script does loads of stuff.  It weighs in at around seven hundred lines in length and contains some huge conditional statements.

Not only does post.php appear to manage the adding of new discussions to a forum but it also appears to manage the adding, editing and deletion of forum messages.  To learn about how this script is structured I haven't been able to look at function definitions (because it doesn't contain any) but instead I have had to read comments.  Comments, it has been said, can lie, whereas code always tells the truth.  More functions would have helped me to learn the structure of the post.php script more quickly.
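To illustrate what I mean (this is purely my own sketch of a possible restructuring, not anything that exists in the Moodle source), the top level of post.php could be written as a small dispatcher, which would then read like a table of contents:

// Hypothetical restructuring: each branch of the big conditional becomes a named function
if ($forum = optional_param('forum', 0, PARAM_INT)) {
    handle_new_discussion($forum);       // user clicked 'Add a new discussion topic'
} else if ($reply = optional_param('reply', 0, PARAM_INT)) {
    handle_reply($reply);                // user is posting a reply to an existing message
} else if ($edit = optional_param('edit', 0, PARAM_INT)) {
    handle_edit($edit);                  // user is editing one of their own messages
} else if ($delete = optional_param('delete', 0, PARAM_INT)) {
    handle_delete($delete);              // user is deleting a message
}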

The creation of the user interfaces is partially delegated to the mod and post form classes.  Database updates are performed through the forum/lib.php file.  I like some of the function abstractions that are beginning to emerge, but any programming file that contains both HTML and SQL indicates there is more work to be done.  The reason for this aesthetic (and personal) opinion is simple: keeping these two types of code separate has the potential to help developers quickly become familiar with where certain types of software action are performed.  This, in turn, has the potential to save developer time.

One of the central areas of functionality that forum developers need to understand is how Moodle creates and uses forms.  This remains an area of mystery to me, and one that I hope to continue to learn about.  Another area that I might explore is how PHP has been used to implement different forum systems so I can begin to get a sense of how PHP is written by different groups of developers.

Acknowledgements: Photograph licensed under Creative Commons by ciaron, liberated from Flickr.

Christopher Douce

Forums 2.0

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 20 May 2014, 09:52

I like forums; I use them a lot.  I can barely remember when I didn’t know what one was.  I think my first exposure to forums might have been through a dial-up bulletin board system (used in the dark ages before the internet, of course).  This was followed by a brief flirtation with usenet news groups.

When trying to solve some programming problems, I more often than not would search for a couple of keywords and then stumble across a multitude of different forums where tips, tricks and techniques might be debated and explored.  A couple of years ago I was then introduced to the world of FirstClass forums (wikipedia) and then, more recently, to Moodle forums.  Discussions with colleagues have since led me towards the notion of e-tivities.

I have a confession to make: I use my email account for all manner of different things.  One of the things that I incidentally use my email account for is sending and receiving email!  I occasionally use email as a glorified ‘todo’ list (albeit one that has around a thousand items!)  If something comes in that is interesting and needs attention, I might sometimes click on an ‘urgent’ tick box so that I remember to look at the message again at a totally unspecified time in the future.  If it is something that must be bounded by time, I might drag the item into my calendar and ask my e-mail client to remind me about it at a specified time in the future (I usually ponder over this for around half a minute before choosing one of two options: remind me in a week’s time, or remind me in a fortnight).

I have created a number of folders within my email client where I can store interesting stuff (which I very often subsequently totally forget about).  Sometimes, when working on a task, I might draft out some notes using my email editor and then store them in a vaguely titled folder.

The saving of draft emails isn’t only useful when there’s a knock at the door or the telephone rings – email, to me, has gradually become an idea and file storage (and categorisation) tool that is an integral part of how I work and communicate.  I think I have heard it said that e-mail is the internet’s killer application (wikipedia).  For me, it is a combined word processor, associative filing cabinet, ideas processor and general communications utility.

Returning to the topic of forums… Forums are great, but they are very often nothing like email.  I can’t often click and drag forum messages from one location into a folder or to a different part of the screen.  I can’t add my own comments to other people’s posts that only I can see (using my mail client I can save copies of email that other people send me).  On some forum systems I can’t sort the messages using different criteria, or even search for keywords or phrases that I know were used at some point.

My forum-related gripes continue: I cannot delete (or at least hide) the forum messages that I don’t want to see any more.  On occasions I want to change the ‘read status’ from ‘read’ to ‘unread’ if I think that a particular subject that is being discussed might be useful to remember when I later turn to an assessment that I have to submit.  I might also like to take fragments of different threads and group them together in a ‘quotation set’, building a mini forum-centric e-portfolio of interesting ideas (this said, I can always copy and paste to email!)  If a forum were like a piece of paper where you could draw things at any point I might want to put some threads on the left of the page (those points that I was interested in) and others on the right of the page (or vice versa).

I might want to organise the threads spatially, so that the really interesting points might be at the top, or the not so interesting points at the bottom – you might call this ‘reader generated threading!’  When one of my colleagues makes a post, there might be an icon change that indicates that a contribution has been made against a particular point.

I might also be able to save thread (or posting) layout, depending on the assignment or topic that I am currently researching.  It might be possible to create a ‘thread timeline’ (I have heard rumours that Plurk might do something like this), where you see your own structured representation of one or more forums change over time.  Of course, you might even be able to share your own customised forumscape with other forum users.

An on-line forum is undoubtedly a space where learning can occur.  When we think about how we might further develop the notion of a forum we soon uncover the dimension of control.

Currently, the layout and format of a forum (and what you can ultimately do with it) is constrained by the design of the forum software and a combination of settings assigned by an administrator.  Allowing forum users to create their own customised view of a forum communication space may give learners tools to make sense of different threads of communication.  Technology can then be used to enable an end user to formulate a display that most effectively connects new and emerging discussions with existing knowledge.

This display (or forumscape) might also be considered as a mask.  Since many different discussions can occur on a single forum at the same time choosing the right mask may help salient information become visible.

The FirstClass system, with its multiple discussion areas and the ability to allow the end user to change the locations of forum icons on a ‘First Class’ desktop begins to step toward some of these ideas.

Essentially, I would like discussion forums to become more like my email client: I would like them to do different things for me.  I would like forum software to not only allow users to share messages.  I would like forum software to become richer and permit the information they display to the users be more malleable (and manageable).  I know this would certainly be something that would help me to learn!

Acknowledgements: Picture from Flickr taken by stuckincustoms, licensed under Creative Commons.

Christopher Douce

How Moodle block editing works: database (part 2)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:05

Pattern of old computer tapes intended to represent databases

This is a second blog entry about how Moodle manages its blocks (which can be found either at a site level or at a course level).  In my previous post I wrote about the path of execution I discovered within the main Moodle index.php file.  I discovered that the version of Moodle that I was using presented blocks using tables, and that blocks made use of some interesting object-oriented features of PHP to create the HTML code that is eventually presented to the end user.

This post has two objectives.  The first is to present something about the database structures that are used to store information about which blocks are stored where, and secondly to explore what happens when an administrator clicks on the various block editing functions.  The intention behind this post is to understand Moodle in greater detail to uncover a little more of how it has been designed.

Blocks revisited

Screen grab of the latest news block with moving and deletion editing icons

Blocks, as mentioned earlier, are pieces of functionality that can sit on the left hand or right hand borders of courses (or the main Moodle site page).  Blocks can present a whole range of functions ranging from news items through to RSS feeds.

Blocks can be moved around within a course page with relative ease by using the Moodle edit button.  Once you click on ‘edit’ (providing it is there and you have the appropriate level of permissions), you can begin to add, remove and move blocks around using a couple of icons that are presented.  Clicking on the left icon moves the block to the left hand margin, clicking the down arrow icon changes its vertical position and so on.

One of my objectives with this post is to understand what happens when these various buttons are clicked on.  What I am hoping to see are clearly defined functions which will be called something like moveBlockUp, moveBlockDown or deleteBlock.

Perhaps with future versions it might be possible to have a direct manipulation interface (wikipedia) where, rather than having buttons to press, users will be able to drag blocks around to rapidly customise course displays.  Proposing ideas and problems to be solved is a whole lot easier than going ahead and solving them.  Also, to happily prove there’s no such thing as an original thought, I have recently uncovered a Moodle documentation page.  It seems that this idea has been floating around since 2006.

Before I delve into trying to uncover how each of the Moodle block editing buttons work, it is worthwhile spending some time to look at how Moodle remembers what block is placed where.  This requires looking at the database.

Remembering block location

I open up my database manipulation tool (SqlYog) and begin to browse through the database tables that are used with Moodle.  I quickly spot a bunch of tables that contain the name block.  One that seems to be particularly relevant is a table called block_instance.

The action of creating a course (and adding blocks to it) seems to create a whole bunch of records in the block_instance table.  Block_instance appears to be the table that Moodle uses to remember which block should be displayed and where.

The below graphic is an excerpt from the block_instance data table:

Fragment of the block_instance datatable showing a number of different fields

The field weight seems to relate to the vertical order of blocks on the screen (I initially wondered whether it related to, in some way, some kind of graphical shading, thinking of the way that HTML uses the term weight).  Removing a block from a course seems to change the data within this table.

The blockid field seems to link each entry within block_instance to data items held within the blocks table:

Fragment of the blocks table, showing the field headings and the data items

The names held within the name field (such as course_summary) are connected to the programming code that relates to a particular block.  The cron (and the lastcron) relate to regular processes that Moodle must execute.  With the default installation of Moodle everything is visible, and at the time of writing I have no idea what multiple means.

Returning to block_instance, does the pageid field relate to the id used in the course?  Looking at the course table seems to add weight to this hypothesis.
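One way to check this kind of hypothesis is to ask the database directly.  The query below is only illustrative: I am using a plain PDO connection ($pdo) rather than Moodle's own database layer, mdl_ is the default table prefix on my installation, I am guessing that the blocks table is called mdl_block, and the position column is my assumption about how the left/right placement is recorded.  Still, it shows how the two tables appear to fit together:

// List the blocks that belong to page (course) number 7, in display order
$sql = "SELECT bi.id, b.name, bi.position, bi.weight
          FROM mdl_block_instance bi
          JOIN mdl_block b ON b.id = bi.blockid
         WHERE bi.pageid = 7
      ORDER BY bi.position, bi.weight";
foreach ($pdo->query($sql) as $row) {
    echo "{$row['position']} {$row['weight']} {$row['name']}\n";
}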

I continue my search for truth by rummaging around in the Moodle documentation, discovering a link to the database schema and uncover some Block documentation that I haven’t seen before (familiarity with material is a function of time!)  This provides a description of the block development system as described by the original developer.

Knowing that these two tables are used to store block location, my question from this point onwards is: how do these tables get updated?

Database updates

To answer this question I applied something that I have called ‘the law of random code searching’: if you don’t know what to look for and you don’t know how things work, carry out a random code search to see what the codebase tells you.  Using my development environment I search to find out where the block_instance datatable is updated.

Calls to the database seem to be spread out over a number of files: blocks, lib, accesslib, blocklib, moodlelib, and chat/lib (amongst others).  This seems to indicate that there is quite a lot of coupling between the different sections of code (which is probably a bad thing when it comes to understanding the code and carrying out maintenance).

Software comprehension is sometimes an inductive process.  Occasionally you just need to read through a code file to see if it can yield any clues about its design, its structure and what it does.  I decided to try this approach for each of the files my search results window pointed to:

Accesslib
Appears to handle access control (or permission management) for parts of Moodle.  The comments at the top of the file mention the notion of a ‘context’ (which is a badly overloaded word).  The comments provide me no clue as to the context in which context is used.  The only real definition that I can uncover is the database description documentation which states, ‘a context is a scope in Moodle, for example the whole system, a course, a particular activity’.  In AccessLib, there are some hardcoded definitions for different contexts, i.e. CONTEXT_SYSTEM, CONTEXT_USER, CONTEXT_COURSECAT and so on.
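For reference, the hardcoded context levels at the top of accesslib look roughly like the following.  The numeric values shown here are from my own notes and may not match the file exactly, so treat them as illustrative:

define('CONTEXT_SYSTEM',    10);   // the whole Moodle installation
define('CONTEXT_USER',      30);   // a single user
define('CONTEXT_COURSECAT', 40);   // a category of courses
define('CONTEXT_COURSE',    50);   // a single course
define('CONTEXT_MODULE',    70);   // an activity within a course (a forum, for instance)
define('CONTEXT_BLOCK',     80);   // a single block instance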

The link to the block_instance database lies within a huge function called create_context, which updates a database table of the same name.  I’ve uncovered a forum explanation that sheds a little more light onto the matter but, to be honest, the purpose of these functions is going to take some time to uncover.  There is a clue that the records held within the context table might be cached for performance reasons.  Moving on…

Moodlelib

Block_instance is mentioned in a function named remove_course_contents which apparently ‘clears out a course completely, deleting all content but don’t delete the course itself’.  When this function is called, modules and blocks are removed from the course.  Moodlelib is described as the ‘main library file of miscellaneous general-purpose Moodle functions’ (??), but there is a reference to another library called weblib which is described as ‘functions that provide web output’.

Blocks
A comment at the top of the blocks.php file states that it ‘allows the admin to configure blocks (hide/show, delete and configure)’.  There is some code that retrieves instances of a block and then deletes the whole block (but in what ‘context’ this is done is, at the moment, not clear).

Blocklib
The file contains the lion’s share of references to the block_instance database.  It is said to include ‘all the necessary stuff to use blocks in course pages’ (whatever that means!)  At the top there are some constants for actions corresponding to moving a block around a course page.  Database calls can be found within blocks_delete_instance, blocks_have_content, blocks_print_group and so on.  The blocks_move_block function seems to adjust the contents of the database to take account of movement.  There also appears to be some OO type magic going on that I’m not quite sure about.  Perhaps the term ‘instance’ is being used in too many different ways.  I would agree with the coder: blocklib does all kinds of ‘stuff’.

Lib files
Reference to block_instance can be found in lib files for three different blocks: chat, lesson and quiz.  The functions that contain the call to the database relate to the removing of an ‘instance’ of these blocks.  As a result, records from the block_instance table are removed when the functions are called.

So, what have I learnt by reading all this stuff?  I’ve seen how the database stores stuff, that there is a slippery notion of a course context (and mysterious paths), and know the names of some files that do the block editing work, but I’m not quite sure how.  There is quite a lot of complexity that has not yet been uncovered and understood.

Digressions

I have a cursory glance through the lib folder to see what else I can discover and find an interestingly named script file entitled womenslib.php.  Curious, I open it and see a redirect to a wikipedia page.  The Moodle developers obviously have a sense of humour but unfortunately mine had failed!  This minor diversion was unwelcome (humour failure exception), costing me both time and ‘head’ space!

Bizarrely I also uncover a seemingly random list of words (wordlist.txt) that begins: ‘ape, baby, camel, car, cat, class, dog, eat …’ etc.  Wondering whether one of the developers had attended the famous Dali school of software engineering, I searched for a file reference to this mysterious ‘wordlist’.

It appeared that our mysterious list of words was referenced in the lib/setup.php file, where a path to our wordlist was stored in what I assumed to be a Moodle configuration variable.  How might this file be used?  It appears it is used within a function called generate_password.

Thankfully the developers have been kind enough to say where they derived some of their inspiration from.   The presence of the wordlist is explained by the need to construct a function to create pronounceable automatically generated passwords (but perhaps only in English?)
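The idea is easy enough to illustrate.  The toy function below is emphatically not Moodle's generate_password (and a real implementation would need to worry about the quality of its randomness); it just shows how a word list can be turned into something pronounceable:

// Toy illustration only: glue two dictionary words and a couple of digits together
function toy_generate_password($wordlistfile) {
    $words = file($wordlistfile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    return $words[array_rand($words)]
         . $words[array_rand($words)]
         . rand(10, 99);                  // e.g. 'camelclass42'
}

echo toy_generate_password('lib/wordlist.txt');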

This was all one huge digression.  I pulled myself together just enough to begin to uncover what happens when a user clicks on either the block move up, down, or delete buttons when a course is running in edit mode.

Button click action

Returning to the task in hand, I add two blocks (both in the right hand column, one situated on top of the other) to my local Moodle site with a view to understanding the function code that contributes to the moveBlockUp and deleteBlock functionality.


I take a look at the links that correspond to the move up and the delete icons.  I notice that the action of clicking sends a bunch of parameters to the main Moodle index.php.  The parameters are sent via get (which means they are sent as a part of the hypertext link).  They are: instanceid (which comes straight out of the block_instance table), sesskey (which reminds me, I really must try to understand how Moodle handles sessions (wikipedia) at some point), and a blockaction parameter (which is either moveup or delete in the case of this scenario).

The question here is: what happens within index.php?  Luckily, I have a debugger that will be able to tell me (or, at least, help me!)

I log in as an administrator through my debugger.  When I have established a session, I add some breakpoints to index.php and launch it using the parameters for ‘move activity upwards’.

Index.php begins to execute, and a call to page_create_object is made.  It looks like a new object is created.  An initialisation function within the page_base class is called (contained within pagelib).  A blocks_setup function is called and block positions from the block_instance table are retrieved.  After some further tracking I end up at a function called blocks_execute_url_action.  The instanceid is retrieved and a call is made to blocks_execute_action, where the block action (moveup or delete) is passed in as a parameter with the block instance record that has just been retrieved from the database.

In blocks_execute_action a 'mother of all switch statements' makes a decision about what should be done next.  After some checks, two update commands to the database are issued through the update_record function, updating weight values (to change the order of the respective blocks).  With all the database changes complete, a page redirect occurs to index.php.  Now that the database has the correct representation of where each block should be situated, index.php can go ahead and display them.
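Conceptually, the 'moveup' branch simply swaps the weight of the clicked block with the block sitting above it.  The function below is my own reconstruction of that idea (not the actual Moodle code; the get_record call and the field names are assumptions), using the same update_record function that the debugger showed me:

// Simplified reconstruction of what the 'moveup' action appears to do
function sketch_move_block_up($instance) {
    // Find the block instance directly above (one weight value lower) on the same page and side
    $above = get_record('block_instance', 'pageid', $instance->pageid,
                        'position', $instance->position,
                        'weight', $instance->weight - 1);
    if ($above) {
        $above->weight    += 1;
        $instance->weight -= 1;
        update_record('block_instance', $above);      // the two update commands seen in the debugger
        update_record('block_instance', $instance);
    }
    // blocks_execute_action then redirects back to index.php so the new order is displayed
}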

Is the same mechanism used for course pages?

A very cursory understanding tells me that the course/view.php script has quite a lot to do with the presentation of courses, and at this point gathering an understanding of it is proving to be elusive.  Let’s see what I can find.


Initially it does seem that the index.php script controls the display of the main Moodle site and the course/view.php script controls the course display.  Moving the mouse over the ‘move block up’ icons reveals a hyperlink to the view.php script with get parameters of: id (which corresponds to the course number held within the course data table), instanceid (which corresponds to a record within the block_instance table) and sesskey and blockaction parameters (as with index.php).

To get a rough understanding of how things work, I do something similar to before: open up a session through my debugger and launch view.php with this bunch of parameters.  The view.php source is striking: it doesn’t seem to be very long, nor does it produce any HTML directly, so it looks like there’s something subtle going on.

In view.php, there are some parameter safety checks, followed by some context_instance magic, checking of the session key followed by calls to the familiar page_create_object (mentioned in the earlier section).  Blocks_setup is then called, followed by blocks_get_by_page_pinned and blocks_get_by_page which asks the database which blocks are associated to this particular page (which is a course page).

As earlier, there is a call to blocks_execute_url_action, which updates the database to carry out the action that the administrator clicked on.  At the end of the database update there is a redirect.  Instead of going to index.php, the redirect is to view.php along with a single parameter which corresponds to the course id.

This raises the question: what happens after the view.php redirect?

Redirect to view.php

View.php makes a call to the database to get the data that corresponds to the course id number it has been given.  There is then a check to make sure that the user who is requesting the page is logged into Moodle and eventually our old friends page_create_object and blocks_setup are called, but this time, since no editing buttons have been clicked on, there is no redirect to another page.

Towards the end of view.php we can begin to see the code that produces the HTML that will be presented to the user.  There is a call to print_header.  There is then a script include (using the PHP keyword ‘require’) which creates the bulk of the page that is presented to the user, building the HTML to present the individual blocks.  When running within my debugger, the script course/format/weeks/format.php was included.  The script that is chosen depends on the format of the course that has been chosen.  When complete, view.php adds the footer and the script ends.
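Stripped of all of its checks, the tail end of view.php therefore amounts to something like this (a heavily simplified sketch, not the real script):

print_header($course->fullname);

// The bulk of the page (the blocks and the weekly or topic outline) is produced by
// including a format script chosen according to the course's format setting
require($CFG->dirroot . '/course/format/' . $course->format . '/format.php');

print_footer($course);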

Summary

So, what have I learnt from all this messing about?

It seems that (broadly speaking) the code used to move blocks around on the main Moodle site is also used to move blocks around on a course page; perhaps this isn’t too surprising, but it is reassuring.  I still have no idea what ‘pinned blocks’ means or what the corresponding data table is for, but I’m sure I’ll figure it out in time!

Another thing that I have learnt is that the view course and the main index.php pages are built in different ways.  As a result, if I ever need to change the underlying design or format of a course, I now know where to look (not that I ever think this is something that I’ll need to do!)

I have seen a couple of references to AJAX (MoodleDocs) but I have to confess that I am not much wiser about what AJAX style functionality is currently implemented within the version of Moodle I have been playing with.  Perhaps this is one of those other issues that will become clearer with time (and experience).

One thing, however, does strike me: the database and the user interface components are very closely tied together (or closely coupled), which may in some cases make change difficult.  One of the things that I have on my perpetual ‘todo’ list is to have a long hard look at the Fluid Project, but other activities must currently take precedence.

This pretty much concludes my adventure into the world of Moodle blocks. There’s a whole load of Moodle related stuff that I hope to look at (and hopefully describe) at some point in the future: groups, roles, contexts, and forums.  Wish me luck!

Acknowledgements: Image from lifeontheedge, licensed under Creative Commons.
