I attended a HEA eTeaching and Learning workshop at the University of Greenwich on 1st June 2011. It is always a pleasure visiting the Greenwich University campus; it is probably (in my humble opinion) the most dramatic of all university campuses in London - certainly the only one that is situated within a World Heritage site.
My challenge was to find the King William building (if I remember correctly), which turned out to be a Wren-designed neo-classical building adjacent to one of the main roads. Looking towards the river, visitors were treated to a spectacular view of the Canary Wharf district, and to notes emanating from a nearby music school.
I first went to the eTeaching and Learning workshop back in 2008, where I presented some preliminary work about an accessibility project I was working on. This time I was attending as an interested observer. It was a packed day, comprising two keynotes and eight presentations.
Opening Keynote
The opening keynote was given by Deryn Graham (University of Greenwich). Deryn's main focus was the evaluation of e-delivery (a term that I had not heard before, so I listened very intently). The context for her presentation was a postgraduate course on academic practice (which reminded me of a two-year Open University course that seems to have a similar objective). Some of the students took the course through a blended learning approach, whereas others studied entirely at a distance.
The most significant question that sprang to my mind was: how should one conduct such an evaluation? What should we measure, and what might constitute success (or difference)? Deryn mentioned a number of useful points, such as Salmon's e-moderating model (and the difficulty that its first stages may present to learners), and also considered wider economic and political factors. Deryn presented her own framework, which could be used to consider the effectiveness of e-delivery (or e-learning).
This first presentation inspired a range of different questions from the participants and made me wonder how Laurillard's conversational framework (see earlier blog post) might be applied to the same challenge of evaluation. By way of a keynote, Deryn's presentation certainly hit the spot.
General Issues
The first main presentation was by Simon Walker, from the University of Greenwich. The title of his paper was, 'impact of metacognitive awareness on learning in technology enhanced learning environments'.
I really liked the idea of metacognition (wikipedia) and I can directly relate it back to some computer programming research I used to study. I can remember asking myself different questions whilst writing computer software, from 'I need to find information about these particular aspects...' through to, 'hmm... this isn't working at all, I need to do something totally different for a while'. The research within cognitive science is pretty rich, and it was great to hear that Simon was aware of the work by Flavell, who defines metacognition as, simply, 'knowledge and cognition about cognitive phenomena'.
Simon spoke about some research that he and his colleagues carried out using LAMS (learning activity management system), which is a well-known learning design tool and accompanying runtime environment. An exploratory experiment was described: one group were given 'computer selected' tools to use (through LAMS), whereas the other group were permitted a free choice. Following the presentation of the experiment, the notion of learning styles (whether or not they exist, and how they might relate to tool choice - such as blogs, wikis or forums) was discussed in some detail.
Andrew Pyper from the University of Hertfordshire gave a rather different presentation. Andrew teaches human-computer interaction, and briefly showed us a software tool that could be used to support the evaluation of computer interfaces through the application of heuristic evaluations.
The bit of Andrew's talk that jumped out at me was the idea that the instruction of one cohort might help to create materials that are used by another. I seem to have made a note that student-generated learning materials might be understood in terms of the teaching intent (or the subject), the context (or situation) in which the materials are generated, their completeness (which might relate to how useful the materials are), and their durability (whether or not they age over time).
The final talk of the general section returned to the issue of evaluation (and connected to other issues of design and delivery). Peiyuan Pan, from London Metropolitan University, drew extensively on the work of others, notably Kolb, Bloom, and Fry (who wrote a book entitled 'a handbook for teaching and learning in higher education' - one that I am certainly going to look up). I remember a quote (or a note) that runs (roughly) along the lines of, '[the] environment determines what activities and interactions take place', which also seems to echo the conversational framework that I mentioned earlier.
Peiyuan described a systematic process for course and module planning. His presentation is available online and can be found by visiting his presentation website. There was certainly lots of food for thought here: papers that consider either theory or process always have the potential to impact practice.
Technical Issues
The second main section comprised three papers. The first was by Mike Brayshaw and Neil Gordon from the University of Hull, who were presenting a paper entitled, 'in place of virtual strife - issues in teaching using collaborative technologies'. We all know that on-line forums are spaces where confusion can reign and emotions can heighten. There are also perpetual challenges, such as non-participation within on-line activities. To counter confusion it is necessary to have audit trails and supporting evidence.
During this presentation a couple of different technologies were mentioned (and demoed). It was really interesting to see an application of Microsoft SharePoint. I had heard that it can be used in an educational context, but this was the first time I had witnessed a demonstration of a system that could permit groups of users to access different shared areas. It was also interesting to hear that a system called WebPA was being used in Hull. WebPA is a peer assessment system which originates from Loughborough University.
I first heard about WebPA at an ALT conference a couple of years ago. I consider peer assessment to be a particularly useful approach since not only might it help to facilitate metacognition (linking back to the earlier presentation), but it may also help to develop professional practice. Peer assessment is something that happens regularly (and rigorously) within software engineering communities.
The second paper entitled 'Increased question sharing between e-Learning systems' was presented by Bernadette-Marie Byrne on behalf of her student Ralph Attard. I really liked this presentation since it took me back to my days as a software developer where I was first exposed to the world of IMS e-learning specifications.
Many VLE systems have tools that enable them to deliver multiple choice questions to students (and there are even projects that try to accept free text). If institutions have a VLE that doesn't offer this functionality there are a number of commercial organisations that are more than willing to offer tools that will plug this gap. One of the most successful organisations in this field is QuestionMark.
The problem is simple: one set of multiple choice questions cannot easily be transferred from one system to another. The solution is rather more difficult: each system defines a question (and question type) and correct answer (or answers) in slightly different ways. Developers of one tool may use horizontal sliders to choose numbers (whereas others might not support this type of question). Other tools might enable question designers to code extensive feedback for use in formative tests (I'm going beyond what was covered in the presentation, but you get my point!)
Ralph's project was to take QuestionMark questions (in their own flavour of XML) at one end and output IMS QTI at the other. The demo looked great, but due to the nature of the problem, not all question types could be converted. Bernadette pointed us to another project that predates Ralph's work, namely the JISC MCQFM (multiple-choice questions, five methods) project, which uses a somewhat different technical approach to solve a similar problem. Whereas MCQFM is a web service that uses the nightmare of XSLT (wikipedia) transforms, I believe that Ralph's software parses whole documents into an intermediate structure from which new XML structures can be created.
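To make the intermediate-structure idea concrete, here is a minimal sketch of the parse-then-regenerate approach as I understood it. The element names and question model below are invented for illustration; the real QuestionMark and IMS QTI vocabularies are far richer, which is exactly why some question types (sliders, say) have nowhere to go in the target format.

    # A sketch of converting questions between two XML dialects via a neutral
    # intermediate structure. The element names here are invented; they are
    # not the actual QuestionMark or IMS QTI vocabularies.
    import xml.etree.ElementTree as ET

    def parse_source(xml_text):
        """Parse a (hypothetical) source document into neutral question dicts."""
        questions = []
        for q in ET.fromstring(xml_text).findall('question'):
            questions.append({
                'prompt': q.findtext('prompt'),
                'choices': [c.text for c in q.findall('choice')],
                'answer': q.findtext('answer'),
            })
        return questions

    def emit_target(questions):
        """Serialise the neutral structure as a (hypothetical) target dialect."""
        root = ET.Element('assessment')
        for q in questions:
            item = ET.SubElement(root, 'item')
            ET.SubElement(item, 'prompt').text = q['prompt']
            for choice in q['choices']:
                response = ET.SubElement(item, 'response')
                response.text = choice
                if choice == q['answer']:
                    response.set('correct', 'true')
        return ET.tostring(root, encoding='unicode')

    source = ('<quiz><question><prompt>2+2?</prompt>'
              '<choice>3</choice><choice>4</choice>'
              '<answer>4</answer></question></quiz>')
    print(emit_target(parse_source(source)))

The appeal of the intermediate structure (over direct XSLT transforms) is that one parser and one serialiser per dialect gives you every conversion path between them.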
As a developer (some years ago now), one of the issues that I came up against was that different organisations used different IMS specifications in different ways. I'm sure things have improved a lot since, but whilst standardisation is likely to have facilitated the development of new products, real interoperability was always a problem (in the world of computerised multiple-choice questions).
The final 'technical' presentation was by John Hamer, from the University of Glasgow. John returned to the notion of peer assessment by presenting a system called Aropa and discussing 'educational philosophy and case studies' (more information about this tool can be found by visiting the project page). Aropa is designed to support peer review in large classes. Two case studies were briefly described: one about professional skills, and the other about web development.
One thing is certain: writing a review (or conducting an assessment of student work) is most certainly a cognitively demanding task. It both necessitates and encourages a deep level of reflection. I noted down a number of concerns about peer assessment that were mentioned: fairness, consistency, competence (of assessors), bias, imbalance and practical concerns such as time. A further challenge in the future might be to characterise which learning designs (or activities) might make best use of peer assessment.
Pedagogical Issues
The subjects of collusion and plagiarism are familiar tropes to most higher education lecturers. A paper by Ken Fisher and Dafna Hardbattle (both from London Metropolitan University) asked whether students might benefit if they work through a learning object which explains what is and what is not collusion. The presentation began with a description of a questionnaire study that attempted to uncover what academics understand collusion to be.
Ken's presentation inspired a lot of debate. One of the challenges that we must face is the difference between assessment and learning. Learning can occur through collaboration with others. In some cases it should be encouraged, whereas in other situations it should not be condoned. Students and lecturers alike have a tricky path to negotiate.
Some technical bits and pieces: the learning object was created using a tool called Glomaker (generative learning object maker), which I had never heard of before. This tool reminded me of another one, Xerte, which hails from the University of Nottingham. On the subject of code plagiarism, there is also a very interesting project called JPlag (demo report, found on the HEA plagiarism pages). The JPlag on-line service now supports more languages than its original Java.
The final paper presentation of the day was by Ed de Quincy and Avril Hocking, both from the University of Greenwich. Their paper explored how students might make use of the social bookmarking tool Delicious. Here's a really short summary of Delicious: it allows you to save your favourite web links using a set of keywords that you choose, enabling you to easily find them again if you use different computers (it also allows you to share links with users who have similar interests).
One way it can be used in higher education is in conjunction with course codes (which are often unique, or can be made so if a code is combined with another tag). After introducing the tool to users, the researchers were interested in finding out about common patterns of use, which tags were used, and whether learners found it a useful tool.
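The reason a unique course code works so well is that it behaves like a query key over the whole bookmark collection. Here is a minimal sketch of that idea, assuming the bookmarks have already been fetched (for example, from a Delicious export); the records and field names are invented for illustration.

    # Filter a collection of bookmarks down to one course's resources.
    # The bookmark records below are invented for illustration.
    def bookmarks_for_course(bookmarks, course_code, extra_tag=None):
        """Return bookmarks carrying the course code (and, optionally, a second tag)."""
        wanted = {course_code.lower()}
        if extra_tag:
            wanted.add(extra_tag.lower())
        return [b for b in bookmarks
                if wanted <= {t.lower() for t in b['tags']}]

    bookmarks = [
        {'url': 'http://example.org/screen-readers', 'tags': ['H810', 'accessibility']},
        {'url': 'http://example.org/unrelated', 'tags': ['recipes']},
    ]
    print(bookmarks_for_course(bookmarks, 'h810'))  # matches the first record only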
I have to say that I found this presentation especially interesting since I've used Delicious when tutoring on a course entitled accessible online learning: supporting disabled students which has a course code of H810, which has been used as a Delicious tag. Clicking on the previous link brings up some resources that relate to some of the subjects that feature within the course.
I agree with Ed's point that a crowdsourced set of links makes a really good learning resource. His research indicates that 70% of students viewed resources tagged by other students. More statistics are contained within his paper.
My own confession is that I am an infrequent user of Delicious, mainly due to being forced down one browser route as opposed to another at various times, but when I have used it, I've found the browser plug-ins to be really useful. My only concern about using Delicious tags is that links can go stale very quickly, and it's up to the student to judge the quality of the resource that is linked to (although a metric saying 'n people have also tagged this page' is likely to be a useful indicator).
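Link rot is at least easy to test for mechanically. Below is a minimal sketch of a checker for a saved link collection; the URLs are placeholders, and since some servers reject HEAD requests, a real checker would want to fall back to GET.

    # Check saved links for rot by issuing HEAD requests.
    # The URLs below are placeholders, not real bookmarks.
    import urllib.request
    import urllib.error

    def check_link(url, timeout=10):
        """Return the HTTP status code for a URL, or a description of the failure."""
        request = urllib.request.Request(url, method='HEAD')
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status
        except urllib.error.URLError as error:
            return str(error)

    for url in ['http://example.org/', 'http://example.org/no-longer-here']:
        print(url, check_link(url))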
Closing Keynote
Malcolm Ryan from the University of Greenwich School of Education presented the final keynote entitled, 'Listening and responding to learners' experiences of technology enhanced learning'. Malcolm asked a number of searching questions, including, 'do you believe that technology enhances or transforms practice?' and 'do you know what their experience is?' Malcolm went on to mention something called the SEEL Project (student experience of e-learning laboratory) that was funded by the HEA.
The mention of this project (which I had not heard of before) reminded me of something called the LEX report (which Malcolm later went on to mention). LEX is an abbreviation of: learner experience of e-learning. Two other research projects were mentioned. One was the JISC great expectations report, another was a HEFCE funded Student Perspectives on Technology report. I have made a note of the finding that perhaps students may not want everything to be electronic (and there may be split views about mobile). A final project that was mentioned was the SLIDA project which describes how UK FE and HE institutions are supporting effective learners in a digital age.
Towards the end of Malcolm's presentation I remember a number of key terms, and how these relate to the individual projects. Firstly, there is hearing, which relates to how technology should be used (the LEX report). Listening relates to SEEL. Responding connects to the great expectations report, and finally there is engaging, which relates to a QAA report entitled 'Rethinking the values of higher education - students as change agents?' (pdf).
Malcolm's presentation pointed me directly towards a number of reports that I perhaps need to spend a bit of time studying, whilst at the same time emphasising just how much research has already been done by different institutions.
Workshop Themes
At the end of these event blogs I always try to write something about what I think the different themes are (of course, my themes are likely to be different to those of other delegates!)
The first one that jumped out at me was the theme of theory and models, namely different approaches and ways to understand the e-learning landscape.
The second was the familiar area of user-generated content, which featured in this workshop through the creation of bookmarks and course materials.
Peer assessment was also an important theme (perhaps one of increasing importance?) There is, however, a strong tension between peer assessment and plagiarism, and particularly the notion of collusion (and how to avoid it).
Keeping (loosely) with the subject of assessment, the final theme has to be evaluation, i.e. how can we best determine whether what we have designed (or the service that we are providing) is useful for our learners?
Conclusion
As mentioned earlier, this is the second e-learning workshop I have been to. I enjoyed it! It was great to hear so many presentations. In my own eyes, e-learning is now firmly established. I've heard it said that the pedagogy has still got to catch up with the technology (how to do the best with all of the things that are possible).
Meetings such as these enable practitioners to more directly understand the challenges that different people and institutions face. Many thanks to Deryn Graham from the University of Greenwich and Karen Frazer from HEA.
Comments
RebeccaF: Most dramatic of all London university campuses? I reckon Royal Holloway still beats it: http://personal.rhul.ac.uk/Shared/Images/Founders.jpg

My reply: Well, one of the most dramatic.