Christopher Douce

Generative AI and the future of the OU


On 15 June 2023 I attended a computing seminar about generative AI, presented by Michel Wermelinger.

In some ways the title of his seminar is quite provocative. I did feel that his presentation related to the exploration of a very specific theme, namely, how generative AI can play a role in the future of programming education; a topic which is, of course, being explored by academics and students within the school.

What follows is a brief summary of Michel's talk. As well as sharing a number of really interesting points and accompanying resources, Michel did a lot of screensharing, where he demonstrated what I could only describe as witchcraft.

Generative AI tools

Michel showed us Copilot, which draws on code submitted through GitHub. Copilot is said to use something called OpenAI Codex. The witchcraft bit I mentioned was this: Michel provided a couple of comments in a development environment, which were parsed by Copilot, which generated readable and understandable Python code. There was no messing about with internet searches or looking through instruction books to figure out how to do something. Copilot offered immediate and direct suggestions.
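
To give a flavour of what this looked like (my own hypothetical reconstruction, not Michel's actual example): a programmer types nothing but a comment, and the tool volunteers a complete, working function.

```python
# Hypothetical reconstruction of a Copilot-style exchange: the programmer
# writes only the comment below, and the assistant suggests the function body.

# calculate the average word length of a sentence, ignoring punctuation
def average_word_length(sentence):
    words = [word.strip(".,;:!?") for word in sentence.split()]
    words = [word for word in words if word]
    if not words:
        return 0.0
    return sum(len(word) for word in words) / len(words)

print(average_word_length("Hello, world!"))  # prints 5.0
```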

Copilot isn't, of course, the only tool that is out there. A taxonomy of AI tools is now emerging. There are tools where you pay for access. There are tools that are connected with integrated development environments (IDEs) that are available on the cloud, and there are tools where the AI becomes a pair programmer chatbot. There are other tools, such as learning environments that offer both documentation and the automated assessment of programming assignments.

The big tech companies are getting involved. Amazon has something called CodeWhisperer. Google DeepMind has something called AlphaCode, which has participated in competitive programming competitions, leading to a paper in Nature which asks whether ChatGPT and AlphaCode are going to replace programmers. There's also something called StarCoder, which has similarly been trained on GitHub sources.

AI can, of course, be used in other ways. It could be used to offer help and support to students who have additional requirements. AI could be used to transcribe lectures, and help students navigate across and through learning materials. The potential of AI being a useful learning companion has been a long-held dream, and one that I can certainly remember from my undergraduate days, which were in the last century.

Implications

An important reflection is that Copilot and all these other AI tools are here to stay. It wouldn't be appropriate to try to ban them from the classroom since they are already being used, and they already have a purpose. Michel also mentioned there is already a textbook which draws on Generative AI: Learn AI-assisted Python programming.

Irrespective of what these tools are and what they do, everyone still needs to know the fundamentals. Copilot does not replace the need to understand language syntax and semantics and know the principles of algorithmic thinking. Developers and engineers need to know what is meant by thorough testing, how to debug software, and to write helpful documentation. They need to know how to set breakpoints, use command prompts, and also know things about version and configuration management.

An important question to ask is: how do we assess understanding? One approach is an increasing use of technical interviews, which can be used to assess understanding of technical concepts. This won't mean an academic viva, but instead might mean some practical discussions which both help to assess students' knowledge, and help them to prepare for the inevitable technical interviews which take place in industry.

New AI tools may have a real impact on not only what is taught but how teaching is carried out, particularly when it comes to higher levels of study. This might mean the reformulation of assignments, perhaps developing less explicit requirements to expose learners to the challenge of working with ambiguity, which students must then intelligently resolve.

Since these tools have the potential to give programmers a performative boost, assignments may become bigger and more substantial. Irrespective of how assignments might change, there is an imperative that students must learn how to critically assess and evaluate whatever code these tools might suggest. It isn't enough to accept what is suggested; it is important to ask the question: "does the code that I see here make sense, or present any risks, given what I'm trying to do?"

A term that is new to me is: prompt engineering. This is the need to communicate in a succinct and precise way with an AI, to get results that are practical and useful within a particular context. To get useful results, you need to be clear about what you want.

What is the university doing?

To respond to the emergence of these tools the university has set up something called the Generative AI task and finish group. It will be producing some interim guidance for students and will be offering some guidance to staff, which will include the necessity to be clear about the ethical and transparent use of AI. The guidance is also said to highlight capabilities and limitations. There will also be guidance for award boards and module results panels. The point here is that Generative AI is being looked at.

Michel suggested the need for a working group within the school; a group to look at what papers are coming out, what the new tools are, and what is happening across the sector at other institutions. A further thought was that it might be useful to widen it out to other schools, such as the School of Physical Sciences, and any others which make use of any aspect of coding and software development.

Reflections

Michel’s presentation was a very quick overview of a set of tools that I knew very little about. It is now pretty clear that I need to know a lot more about them, since there are direct implications for the practice of teaching and learning, implications for the school, and implications for the university. There is a fundamental imperative that must be emphasised: students must be helped to understand that a critical perspective about the use of AI is a necessity.

Although I described Michel's demonstration of Copilot as witchcraft, all he did was demonstrate a new technology.

When I was a postgraduate student, a lecturer once told me that one of the most fundamental and important concepts in computing was abstraction. When developers are faced with a problem that becomes difficult, they can be said to ‘abstract up’ a level, to get themselves out of trouble, and towards another way of solving a problem. In some senses, AI tools represent a higher level of abstraction; it is another way of viewing things. This doesn’t, of course, solve the problem that code still needs to be written.

I have also heard that one of the fundamental characteristics of a good software developer or engineer is laziness. When a programmer finds a problem that requires solving time and time again, they invariably develop tools to do their work for them. In other words, why write more code than you need to, when you can develop a tool that solves the problem for you?

My view is that both abstraction and laziness are principles that are connected together.

Generative AI tools have the potential to make programmers lazy, but programmers must gain an appreciation about how and why things work. They also need to know how to make decisions about what bits of code to use, and when. 

It takes a lot of effort to become someone who is effective at being lazy.


Object-oriented programming: seven tips


Over the last few years, I've been tutoring M250 Object-oriented Java programming. During some of the tutorials that I facilitate, I share a set of tips with students. What follows is a brief summary of the tips, and some accompanying notes. I hope these might be helpful to anyone studying M250, or any other OU module that involves a bit of programming.

1. You can't learn programming by reading the course books. You need to do it. You need to spend serious time playing.

It's important to spend some quality time with the language that you're using and the integrated development environment that you're using to manipulate that language. You can only get a proper feel for object-oriented programming and for programming constructs by using them. Get a feel for the words and the punctuation that you're using. Also, instrument your code with print statements, and consider using a debugger to really see what is happening. Play and mess about. Getting yourself in a tangle is all a part of the process. There is another related tip: do one thing at a time.
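
As a small illustration of what I mean by instrumenting your code (a made-up fragment; I'm using Python here, although M250 itself uses Java), a few print statements let you watch a calculation unfold step by step:

```python
# A made-up fragment showing how print statements reveal what a loop is doing.
def sum_of_squares(numbers):
    total = 0
    for n in numbers:
        total += n * n
        print(f"n={n}, n squared={n * n}, running total={total}")
    return total

sum_of_squares([1, 2, 3])  # prints three lines and returns 14
```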

2. Use the examples as a starting point; then go further.

Start to play, and then play a bit more, and see where this takes you. Invariably, you'll end up writing more and more code. This means that you'll get to a point where you need to think about how to make things a bit easier again. If you've found a problem in a textbook, think about how you might alter that problem to solve a slightly different problem, or a more general one.

3. Accept that things are going to be uncomfortable sometimes: it’s impossible to understand everything at once, things will only make sense after you've spent the hours playing and learning.

There’s a lot going on with object-oriented programming. 

There are the key ideas of types (or classes), objects, attributes and member functions. Not to mention, of course, how objects might work with each other to solve problems. Plus, there are constructors, libraries and iterators.

It's a lot to take in, and it isn't a surprise if you start to feel a bit overwhelmed. If you see difficult things and struggle to understand what is going on, accept certain things at face value for the time being; full understanding will come a bit later.
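
To make a few of those ideas concrete, here is a minimal sketch (written in Python for brevity; M250 itself uses Java) of a class, a constructor, attributes and a method:

```python
class Frog:
    """A class (a type) describing frog objects, in the spirit of M250's amphibians."""

    def __init__(self, name):
        # The constructor sets up each new object's attributes.
        self.name = name
        self.position = 0

    def jump(self, distance):
        # A method: sending the 'jump' message changes the object's state.
        self.position += distance

kermit = Frog("Kermit")  # an object: one instance of the Frog class
kermit.jump(3)
print(kermit.name, kermit.position)  # prints: Kermit 3
```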

4. Always make a backup copy.

This relates to the first tip: playing.

When you play with code, you can also mess things up and get yourself in a tangle, especially if you follow tip 2 and build on things you have done earlier. As you figure everything out, make sure you take a backup copy of your code. If you're making lots of changes, you might want to create different versions of your code. You might create a copy, save all your files in a new directory, and call it 'version 1', 'version 2' and so on.

Also, do make sure you save your files in a location that is different from your computer, just in case your computer goes wrong. A bit later on, you might start to use something called GitHub.

5. Try to explain your code to someone else. (Or, get a plant, and call it Dijkstra)

Sometimes coding presents some real puzzles; sometimes there’s something that isn’t quite understood, or something doesn’t quite work as expected. As a developer, I’ve sometimes had bugs which have been both weird and persistent. When this happens, I would “have a chat with Dijkstra”.

Let me explain. I once heard that in a computer lab in Cambridge, there was a houseplant called Dijkstra, named after the famous Dutch computer scientist Edsger Dijkstra. If a student was struggling with some code, and asked the question "why doesn't this work?", they were told to explain their code to Dijkstra. The very act of explaining your code, a step at a time, has the potential to help you to understand what is happening, and what the problem is.

If you have a partner, sibling, or pet, they can all become Dijkstra.

6. If you keep going over the same things time and time again, don’t be afraid to step away from it. Sleep on it, and come back to it with fresh eyes.

In computing, there’s a term called thrashing, which is sometimes used to describe a phenomenon that occurs with computer operating systems. This needs a bit of explanation, so please do bear with me.

Your computer has two types of memory: random access memory, and backing store memory. Random access memory is fast and expensive, but your computer doesn't have very much of it. In contrast, there is typically a lot of backing store memory in your computer (which used to be held on a magnetic disk), which is pretty inexpensive compared to random access memory. Your computer's operating system provides programmers with a lot more memory than is actually available as random access memory. It does this by moving data between the different types of memory.

Thrashing is what happens when your computer operating system causes your computer to spend all its time trying to get things done by moving data between different types of memory, rather than doing the work that needs to be done.

If you find yourself ‘thrashing’, you need to reboot. You need to step away from your code and come back to it after a break.

I remember once having an idea about how to solve a coding problem when I was having a shower. A break can do you the world of good. This point leads me to my final point.

7. Have fun, and be gentle with yourself.

Everyone learns at different speeds; learning isn’t a race, so do be gentle with yourself. It’s important to have fun too. I remember that one of my first object-oriented programs was a simulated card game that was based on a television gameshow. It was fun to write, and it was fun to play. This point about playing takes us back to the first point: you can't learn programming by reading the course books; you need to find the time to play.


Reflections on M250


I’ve just finished tutoring my first presentation of M250 Object-oriented Java programming.

I first applied to tutor on the predecessor to this module back in 2005. At the time I was a full time Java programmer working in industry, writing software that drove some equipment that was used to teach telecommunication principles. 

I wasn’t offered a contract on M250, but I was offered a contract on M364, which was called Fundamentals of Interaction Design. I tutored M364 for a little over ten years. It was a great module; it was well designed, it had a clear structure, and gave students some practical experience of carrying out some really simple usability evaluations.

In 2019, I heard from a colleague that there was an M250 vacancy in the London region. I hesitated; I have a lot on. I also tutor on the project module, TM470, and have a few other OU responsibilities. But since my research at university was about object-oriented programming, I simply couldn't resist the opportunity to play a part in teaching people about object-oriented programming. I applied, was interviewed, and was considered appointable.

Books

In the post I was sent three glossy looking books. In the very early stages of tutoring, I sat down and started to read them, skimming over the activities; a lot of what I was reading was already familiar to me, and I could understand the concepts that were expressed through the amphibian-related activities (frogs and toads were used to introduce the concept of objects and messages).

Through the module website, I found that there were PDF and ePub versions of the books. I downloaded the ePub versions onto my eReader, just so I could carry them around with me a bit more easily.

Getting everything going

At the start of the module, I set up some introduction threads on the tutor group forum and wrote to each student telling them to subscribe to it. I also asked students to get in touch with me to say hello. For those who didn’t reply, I chased them up with a text message and a quick phone call or voicemail. 

My first tutorial

My first ever M250 tutorial took place in a seminar room at the University of Westminster. I was there to support my fellow tutor, Lindsey, who had been allocated to me as my mentor. Two things struck me: she knew terms to describe Java that I had forgotten, and she carried out almost all of her teaching using a combination of whiteboard, pen and paper. This method of teaching programming was one that I approved of; it forces everything to move a whole lot more slowly.

My first solo tutorial

My first ever online introductory tutorial was fun. I prepped for it by looking at what other tutors had done, using sections of the module material and sharing bits of the TMA question. 

During the first tutorial, I tried my best to emphasise the fundamental concepts of object-oriented programming. I asked everyone who came along to look around their immediate environment. We made classes out of those objects, and gave them attributes. I also compared non-OO programming to OO programming, to really emphasise why it's an important subject. I also recorded the tutorial and did two things to follow up: I posted a link to the recording on the tutor group forum, and also sent an email to all students to let them know they could find a link to the recording by visiting the forum.

Whenever I can, I try to connect different things together; tutorials with module materials, and forums with recordings.

My first TMA

The first TMA of a new module means that you never quite know what to expect. I always knew that there would be a lot of support behind the scenes. I subscribed to the tutor forums (in M250, there was one support forum for every TMA), printed out all of the tutor notes (which were comprehensive), along with the TMA question. I also made liberal use of my highlighter to identify bits that I needed to pay attention to.

I quickly realised that students were asked to submit their TMAs in two parts. Firstly, there was the written part (presented within a Word document); then there was some programming code, which was submitted in a zip file. The code in the zip file was also presented in the Word document, which meant I could add teaching comments into the Word document.

Another thing that was new to me was the BlueJ Java programming environment. I soon figured out how it worked: projects were contained within directories, and these directories contained a project file. I easily found the compile button, and discovered another bunch of tools that had been created by the university: something called the OU workspace, which presented a graphical display and a way to dynamically work with Java code.

There was something that really helped me to get going in the very early days, and that was a testing tool that had been created by the module team. Essentially, you run a Java program that compares a specified Java program (i.e. a student's submission) against a predefined definition or specification. In other words, it's a tool that tells you whether a student's code is right or wrong. The tutor's job is to interpret everything: the tool output, the student's submission and the tutor notes, and to provide some sensible teaching comments, along with a mark.
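
The real tool is Java-based and was written by the module team, but the underlying idea can be sketched in a few lines. Here is a loose Python illustration of my own (not the actual tool): run the submission against a predefined specification and report where the two disagree.

```python
# A loose illustration of the idea behind the marking tool: compare a
# submission's behaviour against a predefined specification. The function
# names and the specification format here are made up for the example.

def check_submission(student_function, specification):
    """specification is a list of (arguments, expected_result) pairs."""
    for arguments, expected in specification:
        actual = student_function(*arguments)
        status = "OK" if actual == expected else f"MISMATCH (got {actual})"
        print(f"{arguments} -> expected {expected}: {status}")

# A hypothetical student submission, checked against its specification.
def student_add(a, b):
    return a + b

check_submission(student_add, [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
```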

I soon realised that I could apply a familiar tried and tested marking approach to M250: I could mark one question (or question section) at a time, for all student submissions. The advantage of doing it this way is: (1) consistency, and (2) speed. When you’re doing this, you can put quite a lot of the marking guide into your head and also make sure that you provide consistent comments and feedback for each of the student submissions.

My first additional support session

After marking the first TMA, I noticed that a couple of students might be struggling to understand some of the fundamental concepts of OO programming. A tip-off for this was how some of the Java code was expressed. It might have been things like students not quite understanding the purpose of member variables and how they related to member functions (for example).

I emailed all the students who might be struggling to ask them whether they might be interested in a one-to-one session. A couple of students agreed.

During one of the additional support sessions, which took place in a tool called Adobe Connect, I used screen sharing. Rather than telling students what they needed to do, I asked questions to probe their understanding of some of the fundamental Java and OO concepts. I then used screen sharing, in combination with the BlueJ environment, to do what is usually called ‘live coding’. Essentially, during the tutorial, we co-created some code which explored similar concepts that were explored within the TMA questions.

I had never done any live coding before. I had certainly never done it using BlueJ and Adobe Connect. In some respects, I was taking quite a few risks, but everything seemed to work okay. Object-oriented concepts were communicated and shared through a combination of English and Java.

My first examination preparation session

During my first presentation of M250, something unexpected happened: a global pandemic. What this meant was that the expected M250 written exam was cancelled. This meant that the final assessment score was going to be calculated from the scores of all the TMAs. This was possible, since the TMAs assessed all the key learning outcomes of the module.

Exams are useful, since they enable learners to consolidate their earlier learning. Rather than running an examination preparation session, I'm going to be running what I can only call a module consolidation tutorial. During this final tutorial I'm going to be talking about what would have been assessed, why different questions would have been asked, and how they may relate to studies on other modules.

Reflections

I’ve enjoyed tutoring my first presentation of M250.

Tutoring the module was a bit of a surprise, in the sense that I didn’t expect to become a tutor on M250; I thought the opportunity had passed. I applied, since I felt that I had some hidden skills (knowledge of OO programming and Java) that I could use. 

I enjoyed realising that I remembered how to code and how the key parts of the language worked. I also enjoyed working with the new bits: collection classes and iterators; bits of the language that had been introduced after I had stopped using it on a daily basis.

Although the marking was hard work, I was looking at something that was familiar, which meant I was able to get into the swing of it relatively quickly. I soon learnt to accept that I wasn't going to understand everything that was in the tutor notes (tutor marking instructions) straight away. Understanding, of course, came by playing with code, and looking through the answers that students had submitted.

The real fun bits were the tutorials and the one-to-one sessions. It was in these sessions that I felt that I could really add something as a tutor.

If asked whether there was something I would change for the next presentation, it would be: I would take even more risks during tutorials. Programming has the potential to be a really fun subject. I have the tools to make it fun. It’s going to be up to me to make it so. 


Teaching programming across STEM


In February 2018 I went to a 'Teaching programming across STEM' workshop that was organised by my colleague Michel Wermelinger. The aim of the workshop was to get colleagues from different parts of the STEM faculty together to share experiences about how they teach programming, to raise awareness of each other's plans, to discuss different types of provision, and to share examples of practice.

What follows is a rough summary of the notes that I took during the day, which were augmented by having a quick look at some of the slides that were prepared for the workshop (OU staff link). The aim of these notes is to help me remember what happened, and to provide a future reference for anyone who might be interested in the teaching and learning of programming at the OU. Since there was a six month gap between the event and the writing of this blog, I'm sure I've forgotten some important elements and aspects, but I hope the notes are both pretty accurate and useful.

Introduction

The event was introduced by Michel, who said that the day was split into two parts, a morning ‘supply side’ section (which included a series of talks), and an afternoon ‘the demand side’ section, which included networking and workshop discussions. Michel kicked off the event by talking about OpenLearn materials that contain programming.

OpenLearn materials

OpenLearn is an Open University website that offers free online short courses for anyone who might be interested. It is sometimes used to share excerpts of real OU modules but it also contains self-contained short courses. If you have an interest in an academic subject, the chances are that there will be an OpenLearn course that might tell you a little bit about it. It is, perhaps, not much of a surprise that there are OpenLearn resources about programming.

Simple Coding

Michel introduced us to something called an 'hour of code' introduction to programming using Python 2, also known as Simple Coding (OpenLearn). Simple Coding introduces students to the fundamental concepts of variables, expressions, loops, ifs, lists, and function calls. It works with one problem throughout: keeping and maintaining a restaurant bill.
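
To give a flavour of the kind of code involved (my own reconstruction, written in Python 3 for readability, whereas the course itself used Python 2; this is not the course's actual exercise):

```python
# Variables, lists, loops, ifs and function calls, applied to the kind of
# restaurant bill problem the course works through (my own reconstruction).
bill = [("soup", 3.50), ("pasta", 8.95), ("coffee", 2.20)]

total = 0
for item, price in bill:
    total = total + price
    print(item, price)

if total > 10:
    print("Total with 10% service charge:", round(total * 1.1, 2))
else:
    print("Total:", total)
```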

I made a note that this was a part of the BBC Make It Digital season. To complement this, Michel has written a short blog post about Trinkets. Finally, students are also encouraged to share their code on social media.

Learn to Code for Data Analysis

Another OpenLearn resource is called Learn to Code for Data Analysis (OpenLearn). This course started life as a 4-week, 20 to 30 hour FutureLearn MOOC. It makes use of Python 3, function definitions and loops. It also makes use of the R-like Pandas library, which is used for data analysis. It also uses (I'm copying from my notes here) Jupyter notebooks with Anaconda or cocalc.com.

The course applies something called First Principles of Instruction and adopts a problem-driven approach, where students are given a weekly project to clean, merge and manipulate data. Students are asked to manipulate authentic open data from organisations such as the World Health Organisation and the UN.
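
The weekly pattern of loading, cleaning and merging data looks something like the following minimal sketch (my own, with made-up figures; not the course's actual data or code):

```python
import pandas as pd

# A minimal sketch of the weekly pattern: build (or load) datasets, merge
# them, drop incomplete rows, and inspect the result. The figures are made up.
population = pd.DataFrame({"country": ["UK", "France"],
                           "population": [66_000_000, 67_000_000]})
life = pd.DataFrame({"country": ["UK", "France"],
                     "life_expectancy": [81.2, 82.5]})

merged = pd.merge(population, life, on="country")  # combine the two datasets
merged = merged.dropna()                           # discard incomplete rows
print(merged.sort_values("life_expectancy", ascending=False))
```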

TM112 Introduction to Computing and IT 2

TM112 Introduction to Computing and IT 2 was introduced by Paul Piwek, module chair. Paul explained that TM112 builds on TM111 and prepares students for level 2 study, where students go on to study M250 (which uses Java) and M269 (which makes use of Python), before making their way to TM351 (which is mentioned later).

The module has three themes: essential information technologies, problem solving with Python, and information technologies in the wild. There are three spiral-bound books, so students can put them down next to their computer, and practise typing in code.

Students will be using turtle graphics with Python 3, Baby Pandas (a library that is used for data processing and analysis), Jupyter notebooks and an editor like IDLE. The module places particular emphasis on the teaching of problem solving skills and the construction of algorithms. Students are given programming practice and assessment by using data from the Office for National Statistics.
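
A made-up example of the kind of turtle graphics exercise that is used to practise loops (this is not taken from the module materials):

```python
import turtle

# Draw a square by repeating two steps four times: a classic first loop.
pen = turtle.Turtle()
for side in range(4):
    pen.forward(100)  # move forward 100 units
    pen.left(90)      # turn left through 90 degrees

turtle.done()
```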

There are also formative quizzes with CodeRunner, which are marked for engagement to help students build mental models of what happens at an abstract level when programs are run.

SM123 Physics and Space

Jimena Gorfinkiel introduced SM123 Physics and Space which is studied after students have completed S111 Questions in Science.

Students are given 4 weeks of Python 2 programming that is based on the science they are learning. Currently, there is no other programming in the level 2 and level 3 physics or astronomy pathways. The aim is to help students get a feel for what programming and data analysis are all about. There is no expectation of developing specific competencies; rather, the aim is to help students understand principles of algorithm design.

The module design is built on ideas from other introductory materials, i.e. it makes use of Trinket (trinket.io), and the teaching approach is to scaffold students' learning by providing activities and examples.

TM129 RobotLab

Jon Rosewell introduced TM129 Technologies in practice, a module that has three different 10-point bits: a section on programming, a section on networking, and a section about the Linux operating system.

The programming bit uses a simulator for a small Lego robot, called RobotLab, and robotics is used as a way to introduce students to programming and to provide a useful context. It introduces basic control structures but doesn't introduce students to data structures. Students are asked to run and watch the running of code, adapt code, and complete an open challenge.

Like Scratch, RobotLab is a drag and drop environment, but the environment can also create text programs, which students see when they are exposed to Python code. A comment I noted was that practical labs are important: 'If you have simulation, and you do it well, there are opportunities for learning'.
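
As a rough illustration (mine, not RobotLab's; the sensor is a made-up stand-in for the simulator's own blocks), the control structure behind a simple line follower looks something like this:

```python
import random

# A hypothetical sketch of a simple line-follower control loop; the sensor
# reading is simulated with random numbers purely so the sketch runs.
def read_light_sensor():
    return random.randint(0, 100)  # stand-in for the robot's light sensor

THRESHOLD = 50

for step in range(10):  # a real robot would loop forever
    if read_light_sensor() < THRESHOLD:  # dark: we are over the line
        print("steer left")
    else:                                # light: we have drifted off the line
        print("steer right")
```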

An issue with the approach is that RobotLab is not a recognised language and is now showing its age. Support for RobotLab will finally end with the February 2019 presentation of TM129.

M250 Object-Oriented Java Programming

Anton Dil introduced M250 Object-Oriented Java Programming. In some cases, students study M250 in parallel to M269, which will be described in the next section.

M250 uses Java and adopts an ‘objects first’ approach. Students are introduced to key object-oriented (and Java) concepts, such as protocols and attributes, classes, inheritance, composition, interfaces, access levels and the catching and throwing of errors. Other topics include collections, file input and output. There are also optional sections on design by contract and assertions.

Students use a range of different tools, such as the Java Development Kit (JDK) 7 and a graphical object-interaction environment called BlueJ, which enables students to manipulate objects and visualise relationships between classes. Some of the teaching makes good use of examples, such as illustrating methods using bank accounts, demonstrating classes by creating unexpected types of frogs, and demonstrating a marionette that is made from simple shapes.

Like other OU modules, CodeRunner is used for interactive computer marked assessments. An important part of the assessment is, of course, through a series of TMAs that have increasing weighting. Looking towards the future, an assessment principle may be to have less reading and more writing of code, and to encourage the social dimension in programming. On this point I made a note to myself about whether the concept of pair programming might be something that could be introduced; doing it virtually and at a distance may provide some interesting but unique challenges.

M269 Algorithms, data structures, computability

Michel Wermelinger introduced M269 Algorithms, data structures, computability, a module that gets to the heart of computer science. It introduces students to data structures, queues, how searches work, sets, binary trees, hash tables, graphs, generic techniques, approximation, complexity, big O notation, heuristics, and genetic algorithms. Needless to say, it’s also all about programming. 
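
To give one concrete taste of the module's territory (my own example, not the module's): binary search runs in O(log n) time, because each comparison halves the portion of the list that is left to search.

```python
# My own illustrative example: binary search is O(log n) because each
# comparison halves the portion of the (sorted) list left to search.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # prints 4
```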

The tools used in M269 include Python 3, Komodo Edit, and CodeRunner, which is used for all the TMA questions. For students who haven't come up through TM112, it contains a Python crash course in week 2.

Given its challenging subject matter, M269 is a marmite course; some students respond well to the challenges it presents, whereas others offer more robust opinions. From a personal perspective, I remember studying a similar module when I was an undergraduate in the 1990s. I found it a challenging subject, but I later appreciated its importance and value when I became a professional software developer.

Open University Summer of Code

Neil Smith introduced us to an initiative called Advent of Code, which is described as: "a series of small programming puzzles for a variety of skill levels. They are self-contained and are just as appropriate for an expert who wants to stay sharp as they are for a beginner who is just learning to code. Each puzzle calls upon different skills and has two parts that build on a theme." Neil also told us that there is something called the Google Summer of Code, which students can apply to.

Computing and Communications students are invited to take part in a voluntary programming challenge called the Summer of Code that is designed to give students programming practice. Students are sent a two part problem every day, Monday to Friday, for two weeks. All in all, there will be ten problems. An interesting observation is that if students do two, they will invariably do all ten. Another observation was that some students were passing programming assessments but were not able to solve these problems; perhaps practice is the key, and problem solving can and should be taught explicitly.

TM351 Data management and analysis

Alistair Willis introduced TM351 Data Management and Analysis. M250 and M269 are prerequisites for TM351. TM351 isn't a programming module as such, but it does expect programming competence that is commensurate with a level 3 student. The module explores the data lifecycle: the acquisition, preparation, analysis and presentation of data. Python is used for acquiring and cleaning data, and databases are used for storage. The module also demonstrates simple machine learning, statistical analysis and graph plotting.

TM351 uses Python 3, PostgreSQL, MongoDB, Pandas, matplotlib and Jupyter notebooks. A point that I clearly noted was that students needed to learn how to use a library, and not just a language.

Like M269, it is also a 'marmite module' and presents students with some particular challenges. It requires students to combine different techniques together to form solutions. In some cases students don't have adequate coding skills, and may also lack the critical skills needed to apply the right techniques.

An interesting point I noted was that the Python requirements for TM351 are less than what is required for A-level. Another comment I noted down was: perhaps more needs to be done to help students to prepare for this module, or the preparation needs to be done differently. In some respects, this is where TM112 Introduction to computing and information technology 2 will play an important role.

Python programming in S818

Andrew Norton and Mark Jones introduced S818 Space science, a 60-point module that forms Stage 1 of the MSc in Space Science and Technology (F77). The module presents an introduction to space science and technology, Apollo 11, the Gaia and Rosetta probes, and the Curiosity Mars rover.

S818 is linked to the OpenSTEM lab. The programming that is carried out as a part of the module is linked to the physics that is applied; Python is used as a tool to work through data. Students are directed to “Learn to Code for Data analysis” on OpenLearn, that was previously mentioned by Michel.

During weeks 1 to 6, students are exposed to Jupyter notebooks and Pandas. Examples include a section on space weather and looking at data from space weather satellites. In addition to these activities, students are asked to carry out straight-line fitting to data (SciPy, matplotlib), plot data of increasing complexity (using matplotlib), and find a numerical solution of Kepler's equation in orbital dynamics (I'm not sure what this means). Students are also expected to use Python to handle and present results, even when they aren't explicitly asked to do so.
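
As an aside, Kepler's equation relates a body's mean anomaly M to its eccentric anomaly E through M = E - e sin E; there is no closed-form solution for E, so it has to be solved numerically. A minimal sketch (mine, not the module's) using Newton's method:

```python
import math

# A minimal sketch of solving Kepler's equation M = E - e*sin(E) for E
# using Newton's method; f(E) = E - e*sin(E) - M and f'(E) = 1 - e*cos(E).
def solve_kepler(mean_anomaly, eccentricity, tolerance=1e-10):
    E = mean_anomaly  # a reasonable starting guess for modest eccentricities
    while True:
        f = E - eccentricity * math.sin(E) - mean_anomaly
        E -= f / (1 - eccentricity * math.cos(E))
        if abs(f) < tolerance:
            return E

print(solve_kepler(mean_anomaly=1.0, eccentricity=0.2))  # E in radians
```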

Python and accompanying tools

Tony Hirst from the School of Computing and Communications gave a talk about the different tools and technologies that could be used with Python. One thing Tony did was to explain that Jupyter is an ecosystem of related bits, based on Python. One of those bits is known as IPython.

Echoing earlier presentations, Tony emphasised the importance of libraries and packages. There were packages that could be used to define and simulate circuits. There were packages that related to chemistry, where users could type in the name of a compound and software would ask the web for the structure. There were packages about astronomy and also packages about music, which could work with musical representations and create playable midi files.

We were told about V-REP, a virtual robot experimentation platform, and Binder, a way to connect Jupyter notebooks to GitHub version control software.

I made a note that Tony had also been looking at running software on OpenStack, which is an important part of TM352 Web, Mobile and Cloud technologies.

The demand side

After a break for lunch, it was on to a series of short two minute presentations by 'various artists' that were broadly entitled 'the demand side', for the simple reason that these may be modules or module teams that need to apply programming in some way.

SXPA288 Practical science: physics and astronomy

Sheona Urquhart spoke about the second level physics and astronomy module, SXPA288 Practical science: physics and astronomy. I made a note of some interesting words: "the thing that freaks them out is the terminal window" and "this is not a programming course … Excel is just grim". I'm assuming that this comment is linked to the need to perform data analysis.

T312 Electronics

T312 Electronics, which was introduced by Jane Bromley, is a new module that has just started production. I noted down that there might be an opportunity to draw on the Python electronics libraries that Tony had mentioned, and Python might also be used for hands on experience of signal processing.

M346 Linear statistical modelling

This module was introduced by Karen Vines, and is currently going through a rewrite. The earlier version used some software called Genstat (if I've made a note of this correctly), but there is a plan to move to the R programming language (Wikipedia), which was said to be 'command line'. The emphasis of this module is said to be the statistical techniques rather than the software.

M373 Optimization

Optimisation was introduced by Tim Lowe. The module is all about numerical computing techniques, where 'students use commands written by the module team which implement methods'. I made a note that this is a module that is needed to support a new data sciences degree.

Physical Sciences Level 3

Ulrich Kolb introduced the BSc in Physics and mentioned that students needed programming skills. Students are required to carry out some simple Python coding and perform simple data analysis tasks. Modules are split into 10-15 credit chunks, and these could be linked to programming.

Delivering programming tutorials

This bit of the workshop was delivered by yours truly, where I spoke from the perspective of a staff tutor. I introduced a popular model called TPACK, which categorises different types of knowledge in a simple way: there is pedagogical knowledge, technical knowledge about how to use tools, and knowledge about the content or the subject that is taught. I also mentioned that tools such as screen sharing could be really useful in the teaching of programming. I can't quite remember, but I must have also spoken about the university's group tuition policy.

PG Bioinformatics and cheminformatics

The final presentation of the day was by Mark Hirst, who briefly spoke about the requirements of bioinformatics and cheminformatics modules. There was a need to develop data handling, data analysis and data mining skills. Perhaps there was also an opportunity to use data from genome databases, and scope for a subject that could be called 'advanced coding for the biosciences'.

Discussion notes

The event ended with a wide ranging discussion. One theme was about whether there was the need to explicitly teach different programming paradigms and the subject of comparative programming languages (I have to confess that I might have raised this as a subject, since it was one of my favourite subjects as an undergraduate, and one that I have found really helpful as a professional programmer). Another point was that it is important to acknowledge the tensions between the needs of education and the needs of training.

There were differences: one colleague insisted that we could all use C++, another said that we should use FORTRAN, and a further colleague suggested that Pascal should be used, for the simple reason that a strongly typed language encourages good programmer behaviours. This wide range of opinions suggested that there isn't one language that can suit all our needs.

One interesting point was that our students are, of course, changing. There is now a new computing curriculum for schools, which is something that everyone needs to be aware of.

I also noted down the words: 'the pedagogy of teaching computing is something that is common across schools, and this is something that we can learn from each other'. I made another note about the broad subject of the teaching of programming and how students move from novice to expert, namely that expertise is something that you acquire by doing; a point that links back to my own presentation about the practicalities of delivering programming teaching.

Some concluding questions were: 'how do we teach programming in a cost effective way?' and 'should we set up a working group to co-ordinate the teaching of programming?' A further point was that associate lecturer development is important, as is collaboration between different development communities.

Reflections

I learnt a lot from this event, and it got me thinking about different ways of doing things. Not only did I learn about virtual robots that might be used in modules like TM129, I also started to wonder about the possibility of teaching through robotic kits (The Pi Hut). I also learnt about the importance of R, and about the flexibility and richness of the available libraries.

When I worked in industry, I did some serious coding in C, C++, Visual Basic and have even enjoyed confusing myself with the very many ways to write the same expressions in Perl, but I have yet to seriously get my hands dirty with Python. Thanks to all the presentations that were made during the day, I came away feeling inspired; I felt that I now need to do more to update my programming and development skills.

Acknowledgements

The words shared in this blog ultimately come from each of the presenters. A big shout out to Michel Wermelinger who did a brilliant job putting this event together.



HEA new to teaching workshop


On 19 February 2015, I went to something called the HEA new to teaching workshop which was a part of a larger HEA ‘transitions conference’.  Now, I’ll be the first to admit that I’m not new to teaching, but there was a bit of the workshop that I was really interested in: a section that was about how to teach introductory programming.  The reason for my interest is that teaching programming is pretty difficult: some students excel, whereas other students struggle.

The session was facilitated by Karen Fraser from the HEA.  I've met Karen numerous times before, but I had never been to a session that Karen ran entirely on her own.  Instead, her role had always been to facilitate and introduce other speakers.

The aims of the day were to think about and reflect on our teaching practice, consider different ways of teaching, consider what things we are doing well, and share practice between each other.  This is a quick blog summary of the event.  I’ve written it for a number of reasons: it’s a set of notes that also contains links to useful resources, a way to tell my line managers what I’m getting up to, and to share some personal reflections about the event with Open University and other colleagues.

Professions

Karen opened the session by asking us a couple of questions: ‘is computing a profession?’ and ‘is academia a profession?’  My immediate response to the first question is: yes, because there’s a body called the British Computer Society (BCS.org) which aims to develop the professionalism of those working within the computing and IT industry.

I noted down that the purpose of the HEA is to enhance professionalism in higher education.  There are a number of issues that it addresses: reward and recognition, career progression, and continuing professional development.  In some respects, these areas can be connected to something called the Professional Standards Framework (PSF) where HE professionals can apply to gain different levels of professional recognition.  Karen briefly summarised the PSF, telling us that it contained six aspects of core knowledge, five areas of activity, and four professional values.

Returning to the original question, did we hold the view that higher education is a profession?  From memory, I believe the consensus was that higher education should be viewed as one.  It was also interesting to hear that the HEA has applied for a charter to become a professional society in the same way that the BCS is.

Teaching and learning

It might sound obvious, but one of the key aspects of professionalism in higher education is the need to foster and continually update knowledge and understanding about how students learn, both generally, and within their subject or disciplinary areas.

These key points led us to a discussion about the different types of teaching techniques that we could use in our discipline.  These ranged from the use of role play, to applying a technique called action learning, to demonstrating, such as showing students what code looks like in a debugger.  At this point I had a thought about the virtues of animations.  When I was in industry I learnt a lot when I watched another, more experienced, programmer at work.  This short discussion immediately made me think about what might help students to get to grips with the fundamentals of computing.

In my notes, I made the comment: ‘pair programming: advantages and disadvantages’.  I’m not exactly sure what I meant by this, but gently picking apart this theme immediately suggests a broad range of different issues: the importance of continuous learning within the computing and IT industry, the question of what skills industry is looking for and what universities can do to help, and the importance of soft skills in subjects such as IT and computing.

Teaching introductory programming

The next part of the session moved from considering the academic as a profession to the specifics about teaching of introductory programming.

We discussed some of the problems and challenges: students need to know about different programming languages and tools, and there is also the necessity to develop problem solving skills and increase students' awareness of strategies for programming design and implementation.  A key point was that programming is a creative exercise; it's all about the solving of new problems.  A perpetual challenge is how to map (or translate) real world problems into code.

Karen showed us a slide that asked a single question: ‘where do students struggle and what do you think their problems are?’  During the resulting discussion, I made a note of the following points: as lecturers we can’t teach programming, we can only help students to learn it; students need to put ‘the hours in’ (since programming is like any skilled activity).  Also, the context for learning is important: we need to explain why we’re doing what we’re doing.

Other key points were: the importance of expectations, inexperience and confidence; the importance of how to decompose problems, and asking students whether they are fundamentally interested in the subject.  An interesting question (to students) could be: ‘are you prepared to be confused?’ and asking students to reflect on their own experience with computing devices and their knowledge of hardware and software.

Some other interesting points related to the activity (or exercise) of 'making a cup of tea', used to teach the idea of problem decomposition (this, incidentally, is an exercise that some of our Open University TU100 tutors use at a programming day school); a sketch of where such an exercise can head appears below.  Other skills might include the ability to identify sequences, patterns and steps (along with understanding how to do basic arithmetic, and how to translate it into computer code).  Finally, MOOCs (free on-line courses) were mentioned as a possible way to allow students to acquire background knowledge.  One thought was that perhaps a MOOC might help students transition between A-level and degree level study.
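
Here is my own hypothetical sketch of where the cup of tea exercise heads: an everyday task expressed as an ordered sequence of smaller, named steps.

```python
# My own hypothetical sketch of the 'making a cup of tea' decomposition:
# a big task broken into smaller, named steps, called in order.
def boil_water():
    print("Boil the kettle")

def add_teabag():
    print("Put a teabag in the mug")

def pour_water():
    print("Pour the boiled water into the mug")

def make_tea():
    boil_water()
    add_teabag()
    pour_water()

make_tea()
```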

Another question was introduced: why is teaching (or learning) programming so difficult?  Some tentative suggestions were that programming is a skill, that it is something that is learnt by doing, and that prior programming experience isn't a prerequisite for computing modules.  There is also the challenge of dealing with the syntax (or structure) of programming languages, and the fact that students might not have experience of important concepts, such as data typing.

We were taken through two other slides: thoughts about problems with students (a lack of analytical, reasoning and planning skills?), and thoughts about problems with lecturers (a disconnect in terms of communication, and issues surrounding instructional materials, teaching methods and teaching strategies).  These problems can have impacts: students becoming disillusioned, and a waning of enthusiasm which could lead to a failure to attend practical classes.  But what could we (as lecturers or educators) do?  Some thoughts related to the importance of learning design, early identification of when students get lost, the importance of fast feedback, and the encouragement of reflection.

All this led to a discussion that had the title: 'what delivery techniques could we use to engage students and help them understand difficult concepts?'  The group came up with the following thoughts (amongst others): use of the institutional virtual learning environment and flipped classrooms (Wikipedia), the use of small groups, facilitated debates, use of media stories, the creation of animations to demonstrate ideas, translation of algorithms into 'physical theatre' (pretending students are different values based on height), use of robots, the idea of 'code as a performance' (as something the lecturers, or students, create in front of others, potentially also demonstrating failure), application of peer assessment, use of in-class question and response systems, helping students to create their own resources, and inviting students to present different ideas to each other.

An important point was made: research (I'm not sure which research!) has shown that students have a hierarchy of pedagogical preferences when it comes to learning programming: students like programming lab sessions more than they like working on projects.  Lectures, it seems, aren't viewed as an effective way to teach programming.  Thinking back to my own experience as an undergraduate (when I had to learn a programming language called Pascal), I can completely relate to this.

On the subject of peer assessment, we were introduced to a system called Peerwise (University of Auckland).   I hadn’t heard of this system before.  We were shown a brief introductory video about the system (Peerwise website).   I have heard of other peer assessment systems, such as WebPA which (I understand) used to be funded by JISC.  An idle thought is that it would be interesting to do a comprehensive review of these peer assessment systems (since I seem to think there are a few other systems out there).  

After lunch…

A provocative question was posed in the first session after lunch: should programming be taught in the first year of a degree?  An alternative perspective was that perhaps we ought to first teach other subjects, such as data structures and algorithms before moving onto programming.  This way, students get the opportunity to understand more about some of the fundamental concepts of software and computing.  My own view is one that connects back to earlier discussions, namely, that since programming is a fundamental skill, and it’s something that takes a long time to master, we need to give students the experience of what is meant by programming early on in the curriculum.

The next session was about sharing good practice in lectures.  One of the biggest take away tips from the day was the idea of changing something every fourteen minutes: divide an hour lecture into different sections that are punctuated by videos, run discussion activities or question and answer sessions, get willing students to come to the front of the class, change the entire tenor (or tone) of a lecture by telling a story or an anecdote, or take a bit of time to introduce other resources, such as MOOCs.  Another thought is to ask students to prepare something for a tutorial (but always remember that you've done this!).  On the subject of videos, we were offered an example: a clip entitled The Friendship Algorithm (YouTube) from the comedy series The Big Bang Theory.

The workshop ended with a chat about a range of different issues.  We chatted about the importance of reflection so we can understand more about our performance as lecturers, and also the importance of reflecting on what our students have learnt from our teaching practice.  Another topic was the importance of feedback, and how feedback is perceived by students. 

A final take away point was a reference to a paper by Chickering and Gamson entitled Seven Principles for Good Practice in Undergraduate Education (Washington news centre, PDF).  Although there isn't anything in this paper (which was published around twenty-five years ago) that struck me as substantially new, it does represent a neat set of principles that can be fairly easily remembered and internalised.  When I was looking through the paper, one thought was: 'how might these principles be translated or adapted to the on-line distance education context?' or 'what attributes of a module design might adhere to these principles?'

Final thoughts

Not long after I joined this session, Karen said to me, ‘you’re not new to teaching, are you Chris?’ In some respects this question was a challenge.  It was also a challenge that immediately led to a reflection.  The answer was: ‘I’m not new to teaching, but I’m here to see if there is anything new I can learn’.  

I was there for two reasons.  The first reason is that one of my jobs is to help to induct new tutors to the university and to help to run associate lecturer development sessions, which means it would be useful to know how the HEA does things.  Secondly, as mentioned earlier, I have an interest in the teaching of introductory computing and programming.

The whole day turned out to be useful: Karen’s discussion about the professionalisation of higher education was interesting and informative, and the day turned out to be a useful opportunity to share teaching practice and to learn about new resources.  By the end of the day I ended up coming out of the session with more questions than I went in with.  This, of course, is a sign of a good day.


Psychology of Programming Interest Group: work in progress meeting: Day 2


Teaching programming at a distance

The first presentation of the second day was by yours truly.  I gave a short talk about a university funded project that aims to understand more about the teaching experience of Open University associate lecturers who are tutoring on the TT284 Web Technologies  module.

One of the purposes of the project was to understand issues particular to the learning (and teaching) of programming.  An area of particular interest is the transition between the first level modules (which use a visual programming language) and the second level modules, which require students to pay more attention to other issues, such as language syntax.

I wasn't able to present any firm findings at this stage (since I keep getting sucked into the idiosyncrasies of my day job), except that three themes were emerging: some students can struggle with understanding what PHP is and how it works, there is confusion about JavaScript, and there is the perpetual battle to understand regular expressions (which is, I believe, an issue that pretty much all developers, expert or novice, seem to have).
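
To illustrate why regular expressions cause such grief: even a modest pattern packs a great deal of meaning into very few characters.  A small Python example of my own (not taken from the project data):

```python
import re

# A small example of why regular expressions feel cryptic: this pattern
# matches a UK-style postcode such as "MK7 6AA". Every character matters.
pattern = r"^[A-Z]{1,2}\d{1,2}[A-Z]?\s?\d[A-Z]{2}$"

for candidate in ["MK7 6AA", "SW1A 1AA", "banana"]:
    if re.match(pattern, candidate):
        print(candidate, "looks like a postcode")
    else:
        print(candidate, "does not match")
```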

The question and answer session was interesting.  There was some chat about coding dojos (group coding sessions), the use of MOOCs (FutureLearn and the Khan Academy), and how to get students talking to each other.

Holistic programming teaching at Middlesex

Franco Raimondi’s talk was rather different to all the others: he showed us some robots (real hardware!) that he used in his teaching.  They had an interesting design, pairing Raspberry Pi devices with Arduino microcontrollers.

One question was: why use both?  The Arduinos are used to handle analogue input, but the heart of the control is managed (as far as I can understand it) by the Raspberry Pi devices.  Students then have the challenge of designing and implementing a communication protocol between the Pi and the sub-component.  I personally think this is a great approach: students are exposed to different devices and learn more about their purpose.  When I had a job in industry, one thing that I had to do was figure out how to get one embedded controller talking to another: Franco’s robots would have helped me a lot in figuring out how to do this.  More information about the robots can be found by taking a look at the MIddlesex Robotic plaTfOrm: MIRTO website.
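
I don’t know what the students’ protocols actually looked like, but a minimal sketch of the Pi side of such a link, written in Python using the pyserial library (the port name, baud rate and ‘COMMAND:VALUE’ message format are all my own assumptions), might be something like this:

import serial  # pyserial

# Port name, baud rate and message format are invented for
# illustration; a real robot will differ.
link = serial.Serial('/dev/ttyACM0', baudrate=9600, timeout=1)

def send_command(command, value):
    # e.g. send_command('M', 120) might set a motor speed
    link.write(f'{command}:{value}\n'.encode('ascii'))

def read_reply():
    # Read one newline-terminated reply, e.g. 'IR:512'
    name, _, reading = link.readline().decode('ascii').strip().partition(':')
    return name, reading

send_command('M', 120)
print(read_reply())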

Okay, so there is an interesting robot, but how is it used in practice?  Franco described a series of lectures, design workshops, programming workshops and physical computing workshops.  In the workshops (if my notes are correct!) students are asked to solve different problems, such as writing a line-following algorithm where the robot has to cater for 90 degree turns, and completing different line circuits as fast as possible.  Students could also implement control algorithms, such as PID controllers (Wikipedia) (which again takes me back to the days when I worked in industry), remote control, and the taking of camera images by controlling the Raspberry Pi.
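
For readers who haven’t met them, a PID controller combines proportional, integral and derivative terms to smoothly correct an error.  A textbook sketch in Python (the gains are plucked from the air, and this is not Franco’s code):

class PID:
    # output = Kp*error + Ki*integral(error) + Kd*d(error)/dt
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# For a line follower, 'error' might be how far the line has drifted
# from the centre of the robot's sensor array.
controller = PID(kp=0.8, ki=0.05, kd=0.2)
steering = controller.update(error=0.3, dt=0.02)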

What I found really interesting was that the platform (and the workshops) made use of a programming language called Racket (Wikipedia).  Racket is a language that I had not heard of before, but apparently it has roots in the Lisp language.  In some respects, I commend the choice (because it’s great to expose students to different programming paradigms), but on the other hand, there is something to be said for getting to grips with tools that are used in industry.  I guess this just goes to show that whenever you come along to workshops like these, you always learn new stuff.

Towards the end of Franco’s session, he spoke about a system to record Student Observable Behaviours, which then led on to a discussion about learning objectives.  Apparently, the use of ‘observable student behaviours’ is something that Middlesex uses, perhaps as a part of its assessment strategy.  We were shown a web-based tool that lecturers can use to gather evidence of student engagement and activity.

I don’t know what this relates to, but I also made a note of a place called The Crystal (Crystal website), which was also described as the Siemens technology centre.  As soon as I looked into it, I realised that I had seen it once before: on a cable car ride across the Thames.  I now know how to get to The Crystal if ever I need to visit it!

I enjoyed Franco’s session: he covered a lot of ‘tech stuff’ in a very short time.  Students at Middlesex are clearly challenged and are clearly kept busy! 

One thought is that different computing courses and degrees cover different topics and perspectives.  When I was heading home from the workshop I remembered that The Open University covers a bit about robots too, on a first year undergraduate module with the code TM129 Technologies in Practice (OU website).  Students are also presented with the challenge of creating a line-following robot.  Rather than using real robots, a simulated one is used (but students can get to see real ones if they come along to an engineering day school).

Measuring programming achievement after a first course

The next presentation was by Ed Currie, who presented what were described as ‘thoughts and preliminary research’.  One of the key thoughts (and one that I found most interesting) was why some students find programming so difficult.  One note that I have made is that we can’t teach programming; students can only learn it.  It’s not up to us; it’s up to them, and our job (as lecturers and teachers) is to facilitate the learning.

Ed mentioned the idea of Threshold concepts (Wikipedia) by Meyer and Land.  I’ve made a note of the point that ‘sequence is a threshold concept’ when it comes to programming.  I remembered hearing the phrase before from a colleague who was doing what I think was some research into what happens when students grapple with key ‘threshold concepts’.
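
To see why something as humble as sequence can trip students up, consider a toy Python example (mine, not Ed’s): the same statements in a different order give a different answer.

x = 10
x = x * 2   # double first...
x = x + 1   # ...then add one
print(x)    # 21

y = 10
y = y + 1   # add one first...
y = y * 2   # ...then double
print(y)    # 22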

Two great phrases that I’ve noted down are ‘neo-Piagetian stages’ and ‘flipped classroom’ (Wikipedia).  In some respects the OU has always been doing ‘flipped classrooms’, i.e. students study some material and then go to a face-to-face tutorial to apply what they have learnt, either in terms of solving a problem, or through facilitated discussions.

I don’t know what the context was or where this came from, but I also made the note ‘sharing of learning stories’.  This might have just been an idle idea during Ed’s talk, or something that Ed had said.  When it comes to learning how to do computer programming, I’ve got my own story (which might well be insufferably dull!), and I’m sure that other people have their stories.  Perhaps something could be gained (in terms of learning strategies and approaches) if we find the space to discuss and share how we know what we know.

Reflections on teaching design patterns

The final presentation of the day, and of the workshop, was by Carl Evans, who is a lecturer at Middlesex.  Carl talked about his work on an MSc module and his experience of creating and presenting a module about software design patterns.

In computer science and software engineering, Design Patterns is one of my favourite topics.  A couple of points that Carl made really resonated with me.  One was that ‘industry needs architects, not just programmers’.  Another great point (and one that I totally subscribe to) is that there is an increasing expectation (from industry) that graduates can work with frameworks as well as know how to use programming languages. In some ways, this point connected up with Thomas’s keynote.

Carl mentioned Sun/Oracle certifications, the use of layered architectures, and frameworks called Spring (Wikipedia) and Hibernate (Wikipedia) that I have heard of but have never used in anger.  A quick look into these frameworks shows that design patterns feature pretty prominently.

The real questions are: how do we best teach patterns, and where do we start?  Is there a pattern for how to teach patterns?  I noted down that a refresher about the object-oriented approach is useful before taking students through different categories of patterns, such as object creation patterns, enterprise patterns, data access patterns, and compound patterns (I was writing everything down pretty quickly at this point, so I might not have managed to capture the nuances of everything that was said).
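
By way of illustration (a toy Python sketch of my own, not Carl’s teaching material), an object creation pattern such as the Factory Method hides concrete classes behind a single creation function:

from abc import ABC, abstractmethod

class DataStore(ABC):
    # The 'product': some way of persisting records.
    @abstractmethod
    def save(self, record): ...

class FileStore(DataStore):
    def save(self, record):
        print(f'writing {record!r} to a file')

class MemoryStore(DataStore):
    def save(self, record):
        print(f'keeping {record!r} in memory')

def make_store(kind):
    # The 'factory': callers name what they want and never
    # touch the concrete classes directly.
    return {'file': FileStore, 'memory': MemoryStore}[kind]()

make_store('file').save({'id': 1})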

Carl also told us about a site called the Design Patterns Library, which I don’t think I’ve seen before.  One book that was referenced was Head First Design Patterns, published by O’Reilly.  There seems to be a claim going around that these ‘head first’ books are based on ‘neuroscience’ (but I’ve yet to find out what exactly this means: claims like that immediately make me sceptical!)  Either way, anything that helps to make important technical concepts understandable is a good thing.

Final thoughts

I don’t know how many Psychology of Programming Work In Progress events I’ve been to, but it’s been quite a few; this might have been my fourth or fifth.  I have enjoyed every single one, and I enjoyed this one at Middlesex University too.  It was well organised, friendly and thought provoking.  The talks were really interesting, covering distance learning, errors, notation, robots, the challenge of teaching object-oriented programming and a whole load of other subjects too.  The great thing about these events is that you never know what you’re going to get, which means that you never really know what you’re going to learn (and this is, invariably, a very good thing).

From my perspective, the event helped to strengthen an opinion I have, which is that we need to figure out how to help students (and practising programmers) to best understand and work with software frameworks.  This issue is not only a computing education issue, but also, significantly, a psychology of programming issue.  The first subject that I studied when I discovered this subfield of computing (or of psychology, depending on your ‘home’ discipline) was the topic of program (or software) comprehension.  It’s clear from this short workshop that this continues to be (for me) an important topic.

Christopher Douce

Psychology of Programming Interest Group: work in progress meeting: Day 1

Visible to anyone in the world
Edited by Christopher Douce, Monday, 10 July 2023, 16:42

On 8 January 2015 I went to a mini-workshop: the Psychology of Programming Interest Group (PPIG) Work in Progress meeting.  I’ve had an affiliation with PPIG for what must be at least fifteen years, and I try to visit their meetings whenever I can.  In some sense, returning to the PPIG meetings is like returning to a comfortable academic home: you regain enthusiasm for the research interests that you once held (and have an opportunity to say hello to some familiar faces too).

This 2015 WIP event (as it is colloquially known) was held at Middlesex University in Hendon Town Hall and skilfully organised by Richard Bornat, who is a PPIG community regular.  This short series of two blog posts aims to summarise my own highlights.

Keynote talk: Thomas Green

The opening keynote was by Thomas Green, who is one of the founders of the group.  Thomas told us what the group was about and emphasised the point that the group doesn’t just discuss research into programming, but also the activities that surround programming (and psychology too).  Subjects for investigation have included pair programming and explorations into the sociological and the anthropological.  Other research subjects have included computer science education and pedagogy, and studies into the relationship between personality and programming.

Thomas is known for creating (or discovering) the cognitive dimensions of notations framework (Wikipedia).  His framework can be used to help us think about programming language design and user interfaces.  Thomas described it as ‘ambitious in scope, [and a framework that] addresses any kind of information artefact’. Simply put, Cognitive Dimensions is a set of principles that helps us to think about stuff.

To explain, Thomas gave us a couple of examples.  One of the dimensions is viscosity (which is my personal favourite!)  Viscosity is, of course, an attribute of liquids: the higher the viscosity, the harder it is to push your hand through the liquid (for example) – I think I’ve got that the right way round!  When it comes to the dimensions, viscosity can be understood in terms of ‘changes’ to something (or moving a system from one state to another).  To make one change (to an information artefact), you might have to make a whole bunch of smaller changes before you get to your desired outcome.

Another dimension is: hidden dependencies.  An example of this is the links or connections that might exist between the different cells of a spreadsheet.  You can’t immediately see what the connections between different cells might be, but you need to understand them if you’re going to understand and work with a spreadsheet.
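
A toy Python ‘spreadsheet’ (my own example) makes the dimension concrete: looking at the definition of C1 alone, nothing tells you that its value silently depends on A1 and A2.

# Each 'cell' is a formula that may refer to other cells.
cells = {
    'A1': lambda get: 2,
    'A2': lambda get: 3,
    'B1': lambda get: get('A1') + get('A2'),  # depends on A1 and A2
    'C1': lambda get: get('B1') * 10,         # hidden: also depends on A1, A2
}

def value(name):
    return cells[name](value)

print(value('C1'))  # 50 -- change A1 and C1 quietly changes too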

This is all very well and good, but how does this relate to the programming that we find in the real world?  Thomas gave us a number of examples of computer code used with a content management system (CMS).  Why study content management systems?  Thomas had some good answers: they are widely used, they are often pretty difficult to get your head around (which is certainly true!), and they haven’t been studied very much.

If you’re interested in content management systems, this Wikipedia page presents an amazing list of different content management systems (Wikipedia).  On the same subject of geek lists, this is another favourite of mine: comparison of web application frameworks (Wikipedia).  You can spend hours looking through these different pages.  These two summary links clearly show how big the ‘CMS space’ is.  (As an aside, if you have too much time on your hands, there’s also a List of Cakes and a List of Lists.)

An interesting point that Thomas made (and one that resonates with my own experience) is that they all claim they are ‘easy to use’.  Two examples that were spoken about were Wordpress (Wikipedia) and Drupal (Wikipedia).  For the purposes of Thomas’s presentation, we looked at Perch (CMS website), a CMS that I had never heard of before.  The point was clear: Thomas’s framework can be applied to study CMSs and web frameworks.

After we had been mildly baffled by screens filled with code that used lots of angle brackets, there was a brief question and answer session.  I think I made a comment that I chose a CMS based on the quality of the on-line tuition videos.  My decisions were not based on language efficiency, but on how easily I could see how to create something similar to what I wanted to do.  (I’ve often wondered whether we could look to the murky world of media studies to learn about why some tools become more popular than others: there’s a whole other dimension of CMS systems that could be explored.)  There was also a brief discussion about design patterns, since many CMSs make use of the model-view-controller pattern.

It was pretty thought provoking stuff.  When it comes to content management systems, I can’t help but think there’s an opportunity to use them as a vehicle to conduct research into the creation, development and sustainability of software communities.

Active error: examining error detection and recovery in software development

Tamara Lopez gave the first talk of the day, and within minutes of starting her talk, I had started to remember some research I looked at over fifteen years ago.  Tamara’s research was all about human error and programming.  As Tamara was speaking, I thought to myself, ‘I wonder whether she has heard of a researcher called James Reason’.  The answer came within minutes: of course she had.

Reason wrote a book called Human Error and carried out research into active errors (mistakes that happen in ‘real time’) and latent errors (which remain undetected within a system or product for considerable time).  Have you ever bought a chocolate bar, unwrapped it from its wrapper and then thrown the chocolate bar away?  Have you ever walked into a room and immediately thought, ‘why am I here?’  I recently put my keys in my fridge for no apparent reason.  These, I guess, are examples of active errors.

The aim of Tamara’s research was to perform a naturalistic observation of error in programming, and to gather reports of error occurrence.  Understanding the characteristics of error can, of course, allow us to understand more about it and why it arises.

Tamara used a great method.  She was studying pair programming videos that had been published on the internet through a website called Pairwith.us (website).  The developers were working on a project to adapt some kind of testing tool.  Tamara analysed the errors in terms of incidents and themes.  Some keywords that I picked up from Tamara’s talk were: temporal, material and social.  A good talk and interesting research.

Visual Analytics as End-User Programming

The second research talk was by Advait Sarkar who had travelled from the University of Cambridge.  Advait gave a demonstration of some software that he had put together.  The focus of his prototype appeared to relate to the area of data analytics, specifically, how the area of machine learning might be connected to a spreadsheet environment.

Following this session (and Advait’s demonstration) there was quite a bit of discussion about different machine learning approaches, such as decision trees and neural networks; subjects that I hadn’t really touched on or explored in any great depth since I was an undergraduate.  Advait’s presentation wasn’t really in my area of expertise, but it’s good to be exposed to different areas.

SQ and EQ and programming, revisited

The next talk was by Melanie Coles from Bournemouth University.  I remember Melanie from other PPIG events, so it was great to see her again.  It was interesting to hear that her talk related to some earlier research that she presented at PPIG back in 2007 (if I’ve understood this correctly).

The title of her talk was: ‘SQ and EQ and programming’.  So, what exactly do SQ and EQ mean?  I understand them as rough and broad measures of personality.  EQ is an abbreviation for Empathy Quotient.  Simply put, EQ is a measure of someone’s drive to identify with other people’s feelings and emotions.  SQ, on the other hand, means Systemising Quotient.  It is the extent to which people have a drive to understand the rules governing things.

I’ve made a very rough note that Melanie related both measures to work carried out by Simon Baron-Cohen, who works in the area of autism research.  I’ve made another note in my notepad that there are tensions between these traits, along with the sentence, ‘scientists score higher in AQ than non-scientists’ (AQ being the Autism-Spectrum Quotient).

Some studies seem to suggest a correlation between programming and these traits.  Other studies, on the other hand, don’t show anything.  The message coming through is that you don’t have to be high on the AQ scale to become a programmer.

I don’t know what this means, but I’ve made another note that reads ‘polite grumpiness about stereotypes’, which might have been scribbled down during the question and answer session.  I have no idea who was expressing polite grumpiness, or which stereotypes were being discussed.  I do, however, feel that this expression should still stand and has some validity.  A sensible rule is that if you’re going to take issue with stereotypes, you’ll go a lot further if you politely disagree rather than go around shouting about them.  I should also add that I have no idea how this paragraph relates to Melanie’s very good (and very clear) presentation.  All this said, there were some really interesting ideas and some exceedingly polite discussions.

A great end to the first day!

Christopher Douce

OU e-learning Community – Considering Accessibility

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 4 May 2014, 17:36

On 23 April 2014, I visited the Open University campus to attend an event to share lessons about how the university can support students who have disabilities.  The event took place within a group called the ‘e-learning community’ and had two parts: one part was about the sharing of research findings, and the other was about the sharing of practice.

This blog post aims to summarise (albeit briefly) the four presentations that were made during the day.  It’s intended for a couple of audiences: colleagues within the university, students who might be taking the H810 accessible online learning (OU website) module that I tutor, and anyone else who is remotely interested.

Like many of these blog posts, these event summaries are written (pretty roughly) from notes that I made during the sessions.  (This is a disclaimer to say that there might be mistakes, and that I’m likely to have missed some bits.)

Academic attainment among students with disabilities in distance education

Professor John Richardson, from the OU’s Institute of Educational Technology, gave the first presentation of the day.  John does quantitative research (amongst a whole load of other things), and he began by saying that there is increasing knowledge about the attainment of students who have disabilities, but that this knowledge is fragmented.  John made a really important point, which was that it is patent nonsense to consider all disabled students as a single group; everyone is different, and academic performance (or attainment) is influenced by a rich combination of variables.  These include age, gender, socio-economic status, and prior qualifications (and a whole bunch of others too).

When we look at quantitative data, it’s important to define what we’re talking about.  One of the terms that John clearly defined was the phrase ‘a good degree’.  This, I understand, was considered to be a first or an upper second class honours degree.  John also mentioned something that is unique about the OU: it awards degree classifications by applying an algorithm that uses scores from all the modules that contribute towards a particular degree (whereas in other institutions, the classification comes from decisions made by an examination board).

We were given some interesting stats.  In 2009 there were 196,405 registered students, of which 6.8% declared a disability.  The most commonly disclosed disability was pain and fatigue, followed by dyslexia.  Of the students who declared a disability, 55% declared multiple disabilities.

In 2012 the situation was a little different: there were 175,000 registered students, of which 12% (21,000) declared (or disclosed) a disability.  John said that perhaps this increase might be an artefact of the statistics, but it remains a fact.  He also made the point (raised by Martyn Cooper, a later speaker on the day) that this number of students represents the size of an average European university.  From these statements I personally concluded that supporting students with disabilities is an activity that the university needs to (quite obviously) take very seriously.

If I’ve got this right, John’s research drew upon a 2009 data set from the OU.  There were some interesting findings.  When controlling for other effects (such as socio-economic class, prior qualification and so on), students who had declared pain and fatigue or autistic spectrum disorders gained good degrees at a higher rate than non-disabled students.  Conversely, students who had disclosed dyslexia, specific learning disabilities or multiple disabilities gained a lower percentage of good degrees when compared with non-disabled students.

I’ve made a note of a couple of interesting conclusions.  To improve completion rates, it is a good idea to think about how we can more readily support students who have disclosed mental health difficulties and mobility impairments.  To improve degree classifications, we need to put our focus on students who have disclosed dyslexia and specific learning disabilities.  One take-away thought relates to the university’s reliance on text (which is a subject that crops up in a later presentation).

Quantitative research can only tell us so much; it can tell us that an effect exists, but we need to use other approaches to figure out the finer detail.  Qualitative research, by contrast, can provide detail, but the challenge with qualitative approaches lies with the extent to which findings and observations can be generalised.  My understanding was that we need both to create a rich picture of how the university supports students with disabilities.

Specific learning differences, module development and success

The second presentation was a double act by Sarah Heiser (a colleague from the London region) and Jane Swindells, who works in the disability advisor service.  Jane introduced the session by saying that it was less about research and more about sharing a practitioner perspective.  I always like these kinds of sessions, since I find it easy to connect with the materials and I can often pick up some useful tips to use within my own teaching.

An important point is that dyslexia is an umbrella term for a broader set of conditions, and it has a number of aspects.  It can impact on different cognitive processes, such as working memory, speed of information processing, time management, co-ordination and automaticity of operations.  It can also affect how information is received and decoded.

On-line or electronic materials offer dyslexic learners a wealth of advantages; materials can be accessed through assistive technologies, and users can personalise how content is received or consumed.  An important point that I would add is that the effectiveness of digital resources depends on the user being aware of the possibilities that they offer.  Developing a comprehensive awareness of strategies of use (to help with teaching and learning) is something that takes time and effort.

Sarah spoke about a project where she has been drawing out practice experience from associate lecturers through what I understand to be a series of on-line sessions (I hope I’ve understood this correctly).  Important themes include the challenges that accompany accuracy, text completion, following instructions and time, and the importance of offering reassurance.

I’ve made a note of the term ‘overlearning’.  When I had to take exams I would repeat and repeat the things I had to learn, until I was sick of them.  (This is a strategy that I continue to use to this day!)

One point that I found especially interesting relates to the use of OU Live recordings.  If a tutor records a session, a student who has dyslexia can go over it time and time again, choosing to pick up sections of learning at a time and a pace that suits them.  This depends on two points: the first is the availability of the resource (tutors making recordings), and the second is students being aware that the recordings exist and knowing how to access them.

Towards the end of the session, Sarah mentioned a tool called Languages Open Resources Online, or LORO for short.  LORO allows tutors to share (and discover) different teaching resources.  I was impressed with LORO, in the sense that you can enter a module code and find resources that tutors might (potentially) be able to use within their tutorial sessions.

SeGA guidance: document accessibility/accessible maths and other symbolic languages

The third presentation of the day was from Martyn Cooper, from the Institute of Educational Technology.  Martyn works as a Senior Research Fellow, and he has been involved with a university project called SeGA, short for Securing Greater Accessibility.  A part of the project has been to write guidance documents that can help module teams and module accessibility specialists.  An important point is that each module should have a designated person who is responsible for helping to address accessibility issues within its production.  (But it should also be argued that all members of a module team should be involved too.)

The documents are intended to provide up to date guidance (or, distilled expertise) to promote consistency across learning resources. The challenge with writing such guidance is that when we look at some accessibility issues, the detail can get pretty complicated pretty quickly.

The guidance covers a number of important subjects, such as how to make Word documents, PDFs, and pages that are delivered through the virtual learning environment as accessible as possible.  Echoing the previous talk, Martyn made the point that electronic documents have inherent advantages for people who have disabilities – the digital content can be manipulated and rendered in different ways.

Important points to bear in mind include the effective use of ALT texts (texts that describe images), the use of scalable images (for people who have visual impairments), the effective design of tables, and the use of web links, headings and fonts.  An important point was made that it’s important to use ‘semantic tagging’, i.e. design a document using tags that describe its structure (so it becomes navigable), and deal with its graphical presentation separately.

I noted down an interesting point about Microsoft Word.  Martyn said that it is (generally speaking) a very accessible format, partly due to its ubiquity and the way that it can be used with assistive technologies, such as screen readers.

Martyn also addressed the issue of how to deal with the accessibility of mathematics and other symbolic notations.  A notation system or language can help ideas to be comprehended and manipulated.  An important point was that in some disciplines, mastery of a notation system can represent an important learning objective.  During Martyn’s talk, I remembered a lecture that I attended a few months back (blog) about a notation scheme to describe juggling.  I also remembered that a good notation can facilitate the discovery of new ideas (and the efficient representation of existing ones).

One of the challenges is how to take a notation scheme, which might have inherently visual and spatial properties, and convert it into a linear format that conveys similar concepts to users of assistive technologies, such as screen readers.  Martyn mentioned a number of mark-up languages that can be used to represent familiar notations: MathML and ChemML (Wikipedia) are two good examples.  The current challenge is that these notations are not supported in a consistent way across different internet browsers.  Music can be represented using something called music braille (though only a relatively small percentage of visually impaired people use braille), or MIDI code.

A personal reflection is that there is no silver bullet when it comes to accessibility.  Notation is a difficult issue to grapple with, and it relies on users making effective use of assistive technologies.  It’s also important to be mindful that assistive technology can, in itself, be a barrier all of its own.  Before one can master a notation, one may well have to master a set of tools.

The question and answer session at the end of Martyn’s talk was also interesting.  An important point was raised that it’s important to embed accessibility into the module production process.  We shouldn’t ‘retrofit’ accessibility – we should be thinking about it from the outset.

Supporting visually impaired students in studying mathematics

The final presentation of the day was by my colleague Hilary Holmes, who is a maths staff tutor.  A comment that I made (in my notebook) at the start of Hilary’s presentation is that the accessibility of maths is a challenging problem.  Students who are considering studying mathematics are told (or should be told) from the outset that maths is an inherently visual subject (advice that, I understand, is available in the accessibility guide for some modules).

Key issues include how to describe the notation (which can be inherently two dimensional), how to describe graphs and diagrams, how to present maths on web pages, and how to offer effective and useful guidance to staff and tutors.

First level modules make good use of printed books.  Printed books, of course, present fundamental accessibility challenges, so one solution to the notation (and book accessibility) issue is to use something called a DAISY book, which is a navigable audio book.  DAISY books can be created with either synthesised voices or recorded human voices.  The university has the ability to record (in some cases) DAISY books through a special recording facility, which used to be a part of disabled student services.  One of the problems of ‘speaking’ mathematical notation is that ambiguities quickly become apparent; human readers are more able to interpret expressions, add pauses, and use different tones to help convey different meanings.
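
To give a flavour of the problem (my own example, not one of Hilary’s): read aloud as ‘one plus two over three’, an expression could mean either of the following:

\[ 1 + \frac{2}{3} \qquad \text{or} \qquad \frac{1+2}{3} \]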

Another approach is to use some software called AMIS (AMIS project home), which is an abbreviation for Adaptive Multimedia Information System. AMIS appears to be DAISY reader software, but it also displays text.

Diagrams present their own unique challenges.  Solutions might be to describe a diagram, or to create tactile diagrams, but tactile diagrams are limited in terms of what they can express.  Hilary subjected us all to a phenomenally complicated audio description, which was utterly baffling, and then showed us a complex 3D plot of a series of equations and challenged us with the question: ‘how do you go about describing this?’  I’ve made a note of the following question in my notebook: ‘what do you have to do to get at the learning?’

Another approach to tackling the challenge of diagrams is to use something called sonic diagrams.  A tool called MathTrax (MathTrax website) allows users to enter mathematical expressions and have them converted into sound.  The pitch and character of a note change in accordance with the values that are plotted on a graph.  Two important points: firstly, in some instances, users might need to draw upon the skills of non-medical helpers; and secondly (as mentioned earlier), these tools can take time to master and use.
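
MathTrax itself is far more sophisticated, but the underlying idea can be sketched in a few lines of Python: step along a curve, map each y value to a pitch, and write the result to a WAV file (everything here, from the curve to the frequency range, is my own invention):

import math, struct, wave

RATE = 44100
SECONDS = 3.0
LOW, HIGH = 220.0, 880.0              # map smallest/largest y to these pitches

xs = [x / 100.0 for x in range(-100, 101)]
ys = [x * x for x in xs]              # the curve being 'played': y = x squared
y_min, y_max = min(ys), max(ys)

frames = bytearray()
phase = 0.0
total = int(RATE * SECONDS)
for n in range(total):
    y = ys[n * len(ys) // total]      # step along the curve
    freq = LOW + (HIGH - LOW) * (y - y_min) / (y_max - y_min)
    phase += 2 * math.pi * freq / RATE  # accumulate phase to avoid clicks
    frames += struct.pack('<h', int(16000 * math.sin(phase)))

with wave.open('graph.wav', 'wb') as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(bytes(frames))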

A final point that I’ve noted down is the importance of offering tutors support.  In some situations, tutors might be unsure what is meant by the phrase ‘reasonable adjustment’, and what they might be able to do in terms of helping to prepare resources for students (perhaps with help from the wider university).  Different students, of course, will always have very different needs, and it is these differences that we need to be mindful of.

It was really interesting to hear that Hilary has been involved with something called a ‘programme accessibility guide’.  This is a guide about the accessibility of a series of modules, not just a single module.  It addresses the problem of students starting one module and then discovering that there are fundamental accessibility challenges on later modules.  This is certainly something that would be useful for ICT and computing modules, but an immediate challenge lies with how best to keep such a guide up to date.

Reflections

It was a useful event, especially in terms of being exposed to a range of rather different perspectives and issues (not to mention research approaches).  The presentations went into sufficient detail to really start to highlight the fundamental difficulties that learners can come up against.  I think, for me, the overriding theme was about how best to accommodate differences.  A related thought is that if we offer different types of resources (for all students), there might well be a necessity to share and explain how different types of electronic resources and documents can be used in different ways (and in different situations).

The Languages Open Resources Online website was recently mentioned in a regional development conference I attended a month or two back.  Sarah’s session got me thinking: I wondered whether it would be possible to create something similar for the Maths Computing and Technology faculty, or perhaps, specifically, for computing and ICT modules (which is my discipline).  Sharing happens within modules, but it’s all pretty informal – there might be something to be said for raising the visibility of the work that individual tutors do.  One random thought is that it could be called TOMORO, with the first three letters being an abbreviation for: Technology Or Mathematics.  There are certainly many discussions to be had.

Christopher Douce

European Innovation Academy

Visible to anyone in the world

In July I went to something called the European Innovation Academy.  The idea behind the academy was to get groups of students together with the intention of creating a product or a solution to a problem (by product, think of a mobile app or a digital service of some kind).  As a part of a three-week programme, students were taught about what is meant by innovation and introduced to concepts such as user-centred design and different business models, before being presented with some talks about how to further develop their ideas.  At the end of the third week, participants were encouraged to write a short pitch to sell their product, solution or service to potential investors, with a view to securing further funding.

Making skills visible

A couple of months earlier, I went to a UK Higher Education Academy event (blog) that was all about how best to go about teaching programming to students who want to learn how to develop software for mobile devices.  What struck me was the point that if students want to get ahead, a really good idea is to create some kind of product that could be sold through vendor app stores (such as Google Play, for instance).  The advantage of doing this is that you advertise your skills in a very direct way and can clearly describe what you’ve done and achieved on your CV.

A substantial part of the academy was all about creating something.  As far as I understand it, there was time on the programme to allow students to not only learn about different platforms and tools, but also time to try (as best as possible) to create some prototype software that could be demonstrated to others.  Creating an artefact, as far as I could see, was considered to be a really important aspect.

Taking software further

A number of years ago, I used to have a job as a professional software developer.  Thinking back to those times, I asked myself a fundamental question: ‘what on earth could I potentially say to the participants to help them appreciate some of the challenges inherent in creating software systems and products?’  I’ll put my hand up and say that I’ve always had one foot more firmly in the technology side of things than the business side.

I struck on an idea: to talk not only about software, but also about some of the more human sides of software development.  Software is, of course, a creative product, and there are things that we can do (in terms of structuring how people work together) to get things done.  What we choose to do, however, is fundamentally affected by the type of product that we’re creating; some products or solutions require different methods.

So, what did I talk about?  I had three hours to fill!  Below is a quick summary of what I considered to be the highlights.  The participants might have different views.

Challenges

First of all, I asked the groups some questions to draw out what they considered to be the most important or significant challenges they needed to address.  When you’re going headlong into a development, I thought it might be useful to find a bit of time to step back: to ask the participants about the problems that they were facing, whether they might be able to share some advice about how to solve some of those problems, and how to manage some of the risks that each project group might face.

Interaction design

Since the participants were creating prototypes, I talked a bit about the process of interaction design and the idea of different types of prototypes (i.e. horizontal prototypes and vertical prototypes).  I also spoke about the necessity of considering the user, the task and the environment, since all of these aspects are really important when considering the final usability of a system (and usability will fundamentally influence whether or not a product or idea is accepted).

Processes

You could argue that interaction design is all about process.  I also introduced the idea of software development processes, notably agile development, which emphasises regular and constant communication between developers and stakeholders.  I made the fundamental point that constant communication is a necessity since software is an intangible product; the only way to make software real is to talk about it.  Agile methods facilitate that talking.

Testing

In some software development cultures (and each culture is slightly different), software testing can be an integral component, but it is a subject that can be very easily overlooked.  Software testing is a pretty big subject, covering a huge variety of different techniques and approaches.  When we move from the small to the large, we fundamentally need to make sure that things work as they should (since if things go wrong, our customers don’t get a good customer experience).  I spoke about two important aspects of testing (and highlighted a bunch of others): different types of usability testing, and test-driven development.
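
To give a flavour of test-driven development: the tests are written first, fail, and then drive the implementation into existence.  A toy Python example of my own, using the standard unittest module:

import unittest

def normalise_module_code(raw):
    # The implementation is written *after* the tests demanded it.
    return raw.strip().upper()

class TestNormaliseModuleCode(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(normalise_module_code(' tt284 '), 'TT284')

    def test_uppercases(self):
        self.assertEqual(normalise_module_code('tu100'), 'TU100')

if __name__ == '__main__':
    unittest.main()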

Abstraction

Abstraction is, perhaps, one of the most important and fundamental concepts in computing.  An abstraction could be described as the essence of a concept, without any superfluous detail or ideas.  When our abstractions are right, our software becomes easy to work with.  Abstractions represent a really important way to manage complexity: we need them within our code because programmers can only deal with a limited amount of stuff (and a limited number of connections between parts of a program) at any one time.

One approach to creating software is to create our code in different layers.  Software developers constantly use code libraries, as well as consume data from other information sources.  When talking about abstractions I also introduced the idea of design patterns.  These represent templates of common solutions to coding (and software design) problems that have been shown to occur time and time again.  Coming back to the point about processes and the need for constant communication: if we can put a name against our various types of abstraction (which is something that the concept of a design pattern does for us), this can make communication between developers a whole lot easier.

Version management

When you’re working with code, things can get very complicated very quickly.  There are multiple files and different versions of libraries; you might include a whole bunch of different graphics, or change database structures or web services... and then the bugs start to creep in and give both you and your customers a whole set of headaches.

I felt that it was important to say something about version control and configuration management.  When we’re in the zone of high productivity (when we’re at one with the problem and our tools), creating new products and services, we can quickly lose our own history in the path of continual change.  Version management systems (or whatever you choose to call them) enable aspects of development history to be captured and saved.  One challenge that we need to be aware of is that the use of these tools requires discipline.

Technologies

To create any software of substance, you’ve got to use some technology that already exists.  If you’re creating apps, you’re going to use some kind of integrated development environment (which consists of programming languages, debuggers, code profilers, and a whole bunch of other goodies).  Another subject that I wanted to mention was the whole set of other technologies that developers could use.

One really useful concept is the software framework.  In essence, frameworks can be considered as a set of high-level abstractions that allow developers to solve common problems more efficiently.  A framework can allow you to work more quickly (and hopefully more efficiently) by building on the work of other developers.  Two challenges include: figuring out which framework to choose (and whether it really would help you or not), and then understanding how a framework might work.
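
As a concrete (but hedged) illustration: with a web framework such as Flask (one possibility among many), the routing and HTTP plumbing are inherited from the framework, and the developer supplies only the application-specific behaviour.  A minimal sketch:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello/<name>')
def hello(name):
    # Flask handles request parsing, routing and the response;
    # we only write the behaviour that is unique to our app.
    return jsonify(greeting=f'Hello, {name}!')

if __name__ == '__main__':
    app.run(port=5000)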

Another broad set of technologies that developers might utilise is web services.  Web services can now be used to store data and host applications.  Rather than having to manage their own servers and systems, an app developer might be able to use services that have been developed and deployed by other companies.  The challenge lies in figuring out what they are and making choices between different possibilities.
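
Consuming such a service can be strikingly simple; a sketch using Python’s requests library against an entirely invented endpoint (the URL and its parameters are hypothetical):

import requests

# The endpoint and its parameters are made up for illustration.
response = requests.get('https://api.example.com/v1/weather',
                        params={'city': 'London'}, timeout=10)
response.raise_for_status()   # turn HTTP errors into exceptions
print(response.json())        # the service's data, no servers of our own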

Community

In terms of software, the word community can be interpreted and understood in a number of different ways.  There might be a user community or a developer community, for instance.  You might want to share information about an emerging product through blogs, and direct interested users to these updates through Twitter.  My point was that community, whatever form it might take, is fundamentally important.  Although technology is a necessity, technology won’t develop, change or improve if there isn’t a community of users or developers who are keen on using or enhancing a system.

Another notion of community lies with the area of open source software.  I understand that during earlier parts of the academy, students were introduced to different types of business models.  Some business models work through the use, application and development of open source software.  In some situations, open sourcing a development might be a part of a wider strategy.  If so, then it is fundamentally important to consider how to support and nurture a community that makes use of any software (or service) that is made available to others.

A final connection to the notion of community that came to mind was the importance of partnerships.  Creating effective software and services requires a lot of specialist skills and expertise.  I remember one story from a HEA event that I attended some time ago: I was shown an example of a collaboration between a graphics artist and a programmer that led to the development of a really nice product; an interesting and playable mobile game.  A fundamental point is that sometimes the best work we do is when we work with others.

Reflections

A personal reflection is that putting together this series of talks took up quite a lot of time, but it was pretty good fun thinking about what to include and what not to include.  I asked myself a really simple question: ‘if I were a delegate on this programme, what would I really want to know?’  In retrospect, I fear I might have crammed in too much material, perhaps covering too many ideas or too many technologies in what was a very short space of time.  On the other hand, I think this was the point of the programme: to introduce people to new concepts and ideas, and to allow those on the programme to be fundamentally challenged.

One thing that struck me was that some of the teams gave the impression that they needed more developers; more people who were able to use a software development environment to create new products.  If you’ve never seen an integrated development environment before, the learning curve is practically vertical: it takes time to appreciate their intricacies and idiosyncratic ways.  Three weeks is an impossibly short time to come up with a new innovative idea that actually does something if working with technology isn’t something that you do all the time.

Since I attended the programme during the third week, I wanted to positively tantalise the participants.  I wanted to say to them, ‘you know, all this tech that you’re playing with, and all these cool prototypes that you’re creating using tools that you’ve never used before? Well… this is only the beginning – there’s a whole bigger world of software tech out there ready for when your ideas and inventions become real.’  I hope I managed to expose some of that bigger world of software to some of them.

Christopher Douce

BCS Lecture: The Power of Abstraction

Visible to anyone in the world
Edited by Christopher Douce, Friday, 10 Aug 2018, 14:41

When I was a graduate student at the University of Manchester (or the bit of it that was once known as UMIST) I was once asked to show some potential computer science students around the campus.  At the end of the tour I ushered them to a lecture which was intended to give the students a feel for what things would be like if they came to the university.

The lecture, given by one of the faculty, was all about the notion of abstraction.  We were told that this was a fundamental concept in computing.  In some respects, it felt less of a lecture about computing, and more of a lecture about philosophy.  I had never been to a lecture quite like it and it was one that really stuck in my mind.  When I left the lecture, I thought, 'why didn't I have this kind of lecture when I was an undergraduate?'  As an undergrad I had spent many an hour creating various kinds of computer programs without really being told that there was an essential and fundamental idea that underpinned what I was doing.

When I saw the British Computer Society (BCS) advertising a lecture that was about the 'power of abstraction', I knew that I had to try to make time to come along. The lecture, by Professor Barbara Liskov, was an annual BCS lecture (the Karen Spärck Jones lecture) that honours women in computing research.

All this sounds great, right?  But what, fundamentally, is abstraction?  An 'abstract' at the top of a formal research paper says, in essence, what the paper contains.  Abstraction, therefore, can be thought of as the process of creating a representation of something, and that something might well be a problem of some kind.  Admittedly, this sounds both confusing and vague...

Barbara began her lecture by stating that abstraction is the basis of how we implement computer software.  The real world is, fundamentally, a messy place.   Since computers are ultimately mathematical machines, we need a way to represent problems (using, ultimately, numbers) so that a computer can work with them.  As a part of her lecture, Barbara said that she was going to talk through some developments in the way that people (or computer programmers) could create and work with abstractions.  I was intrigued; this talk wasn't just about a history of programming languages, it was also a history of thought.

So, what history was covered?  We were immediately taken back to the 1970s.  This was a period in computing history when the term 'software crisis' gained currency.  One of the reasons was that it was becoming increasingly apparent that creating complex software systems was a fundamentally difficult thing to do.  It was also apparent that projects were started, became excruciatingly late and were then abandoned, costing astronomical amounts of money.  (It might be argued that this still happens today, but that's a whole other debate which goes beyond this pretty short blog post.)

One of the reasons why software is so fundamentally hard to create is that it is 'mind stuff'.  Software isn't like a physical artefact or product that we can see. The relationships between components can easily become incredibly complicated which can, in turn, make things unfeasibly difficult.  Humans, after all, have limited brain capacity to deal with complexity (so, it's important that we create tools and techniques that help us to manage this).

We were introduced to a number of important papers.  The first was by Dijkstra, who wrote a letter to the Communications of the ACM entitled 'Goto considered harmful'.  'Goto' is an instruction that makes it possible to create very complicated (and unfathomable) software very quickly.  Barbara described the difficulty very clearly: one of the reasons why software is so hard is that there is a fundamental disconnect between how the program text might be read by programmers and how it might be processed or executed by a machine.  If we can create a program representation that bridges the difference between the static (what we say should happen) and the dynamic (what actually happens when the software runs), then things become a whole lot easier.

Another paper that was mentioned was Wirth's 'program development by stepwise refinement'. Wirth is famous for the design of two closely related languages: Pascal and Modula-2. It certainly is the case that it's possible to write software without the 'goto' instruction, but Barbara made the interesting point that it's also possible to write good, well-structured software in bad languages (providing that you're disciplined enough). The challenge is that we're always thinking about trade-offs (in terms of program performance and code economy), so we can easily be lured into doing clever things in incomprehensible ways.

Barbara spoke about the importance of modules whilst mentioning a paper by Parnas entitled 'Information distribution aspects of design methodology'.  One of the great things about modules, other than the fact that they can be used to group bits of code together, is that they enable the separation of the implementation and the interface.  This reminded me of some stuff from my undergrad days and my time spent in industry: modules are connected to the term 'cohesion'.  Cohesion is, simply, the idea that something should do only one thing.  A function that has one name and does two or more things (that are not suggested in its name) is a recipe for confusion and disaster.  But I fear I'm beginning to digress from the lecture and onto one of my 'coding hobbyhorses'.
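
A tiny Python illustration of the cohesion point (my own example, not Barbara's):

# Low cohesion: the name promises one thing, the body does two.
def save_order(order, orders):
    orders.append(order)
    print(f"order {order['id']} saved")  # surprise: it also logs!

# Higher cohesion: each function does exactly what its name says.
def store_order(order, orders):
    orders.append(order)

def log_order(order):
    print(f"order {order['id']} saved")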

Through a short mention of a language called Simula-67 (Wikipedia) we were then introduced to a paper by Liskov and Zilles entitled, 'programming with abstract data types'.  We were told that this paper represented a sketch of a programming language which eventually led to the creation of a language called CLU (Wikipedia), CLU being short for Clusters.

There was one question that Barbara clearly answered: why go to all the trouble of writing a programming language?  It's to understand whether an idea works in practice, and to understand some of the barriers to performance.  Also, whenever a language designer describes a language in natural language, there are always going to be some assumptions that the compiler writer must make.  Only by going through the process of creating a working language are language designers able to 'smoke out' any potential problems.

Just diverting into programming-language speak for a moment: CLU implemented static type checking and used a heap, and it didn't support concurrency, the goto statement or inheritance.  What it did implement was polymorphism (or the use of generics), iterators and exception handling.
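
Those three ideas survive, in recognisable form, in today's languages.  A rough sketch of the Python analogues (noting that Python's generics are advisory type hints, not CLU-style checked types):

from typing import Iterator, TypeVar

T = TypeVar('T')

def first(items: list[T]) -> T:          # polymorphism via generics
    if not items:
        raise ValueError('empty list')    # exceptions
    return items[0]

def evens(limit: int) -> Iterator[int]:  # an iterator
    for n in range(0, limit, 2):
        yield n

try:
    print(first(list(evens(10))))  # 0
    print(first([]))               # raises...
except ValueError as error:
    print(f'caught: {error}')      # ...and is caught here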

Barbara also mentioned a very famous language called Smalltalk, developed by Alan Kay.  Different developments at different times and at different places have all influenced the current generation of programming languages.  Our current object-oriented languages enable programmers to define abstractions, or a representation of a problem in a way that wasn't possible during the earlier days of software.

Research directions

Barbara mentioned two research topics that continue to be of interest.  The first was the question of what might be the most appropriate design of a programming language for novices.  Over the years, candidates have included BASIC (which introduced the dreaded Goto statement), Pascal, and more recently Java.  The challenges of creating a language that helps learners develop computational thinking skills (Wikipedia) include taking account of programming language design trade-offs, such as ease of use vs. expressive power and readability vs. writeability, and deciding how best to deal with modularity and encapsulation.

Another research subject is languages for massively parallel computers.  These days, PCs and tablets, more often than not, contain multiple processor cores (which means that they can, quite literally, be doing more than one calculation at once).  You might have up to four cores, but how might you best design a programming language that allows you to define and solve problems efficiently when you might have hundreds of processors working at the same time?  This immediately took me back to my undergrad days, when I had an opportunity to play with a language called Occam (Wikipedia).
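
Mainstream languages still mostly reach parallelism through libraries rather than language design; a small Python sketch that spreads a calculation across processor cores using the standard library:

from concurrent.futures import ProcessPoolExecutor

def slow_square(n):
    # Stands in for an expensive calculation.
    return n * n

if __name__ == '__main__':
    # By default the pool creates one worker per processor core.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(slow_square, range(10)))
    print(results)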

There was one quote from Barbara's lecture that stood out (for me), and this was when she said, 'you don't get ideas by not working on things'. 

Reflections

I should say at this point that I haven't done Barbara's lecture justice; there were a whole lot of other issues and points that were mentioned that I haven't touched on.  I really enjoyed being taken on a journey that described how programming languages have changed.  I liked the way that the challenges of coding (and the challenge of using particular instructions) led to discussions about modules, then abstract data types, and then, finally, object-oriented programming languages.

It's also possible to take a broader perspective to the notion of abstraction, one that has been facilitated by language design.  During Barbara's lecture, I was mindful of two related subjects that can be strongly connected to the notion of abstraction.  The first of these is the idea of design patterns.

Design patterns (Wikipedia) take their inspiration from architecture. Rather than designing a new building from scratch every time you need one, why not draw on a pre-existing design that has already solved some of the problems that you might come up against? There is a strong parallel with software: developers often have to solve very similar problems time and time again. If we have a template to work from, we can arguably get things done more quickly and cheaply.

Developers can use patterns to gain inspiration about how to go about solving common problems. By using well-understood and well-defined patterns, communication between developers can be enhanced, since abstract concepts can be readily named; patterns permit short-cuts to understanding.
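
To give a concrete flavour (my own sketch, not something from the lecture), the widely used Observer pattern can be named in a single word between developers, yet that one word stands for a whole structure of code along these lines:

    class Subject:
        """The 'observable' half of the Observer pattern."""

        def __init__(self):
            self._observers = []

        def attach(self, observer):
            self._observers.append(observer)

        def notify(self, event):
            for observer in self._observers:
                observer.update(event)

    class Logger:
        """One possible observer; others can be attached freely."""

        def update(self, event):
            print('logged:', event)

    subject = Subject()
    subject.attach(Logger())
    subject.notify('price changed')     # every attached observer is told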

In some cases, patterns are embedded into pre-existing code that developers can use to kick-start a development. This can take the form of a framework: software that solves well-known problems and ultimately enables developers to get on and solve the problems that they really need to solve (as opposed to dealing with stuff such as reading from and writing to databases).

Abstraction has come a long way, even within my own relatively short career as a developer. One of the biggest challenges that developers face is how best to break down a problem into structures that can be represented in a language that a machine can understand. Another challenge lies in understanding the various tools that developers now have at their disposal to achieve this.

Note: The logo at the top of the blog is used to indicate that this post relates to a BCS event; this post is not connected with the BCS in any other way. All mistakes and opinions are my own, rather than those of the OU or the BCS.

Christopher Douce

UCL : Introducing engineering and computing

Visible to anyone in the world

On 12 February 2013 I volunteered at a joint Open University and UCL event which aimed to introduce aspects of computing and engineering to school students. This was the first time I had been involved with this type of event. I have started to view outreach (in the broadest sense) as something that is increasingly important to do (and this is something that I have written about in an earlier blog). So, if you're interested in hearing more about the outreach stuff that I've recently heard about, the previous blog post might (or might not!) be of interest.

Structure

I learnt about this event from a colleague who was canvassing for volunteers. Upon accepting his challenge I quickly discovered that I was to play a tiny part in what was a much bigger event, and soon heard rumours that students were coming to UCL to hear about other subjects such as chemistry and engineering. My own role was to offer some support and guidance to students who wished to learn a little bit about computing and information technology.

Not only was this my first ever time being involved in an outreach or engagement event, it was also my first ever time on the UCL campus: it was massive! I found myself being ushered into a large computer suite in the basement of one of UCL's impressive buildings. Within moments, our lead facilitator and lecturer, Arosha Bandara, started to outline the plan for the day.

The focus of the day was the programming language Sense, a language that is used with the Open University module TU100 My Digital Life, a first level undergraduate module in computing. One of the key aspects of Sense is that it works with a bit of electronics that allows different types of measurements to be made. Arosha talked us through a program that simulated a simple etch-a-sketch game. Students would be asked to make a change to the program so that it would work properly - they were required to do some software maintenance! During the second part of the day, students were required to get together in groups to think of how to use the language and the sensors to do something fun.

The talking bit...

The morning began with Arosha outlining the broad concept of ubiquitous computing (Wikipedia), namely, that computers can be everywhere, can contain sensors, and can be embedded within the environment. Arosha then introduced a programming problem (in the form of an etch-a-sketch game). Everyone was taken through different parts of the Sense programming environment. Key elements such as buttons, instruction palettes and sprites (graphics) were introduced.

Students were then directed to some key parts of the game that accepted inputs from the Sense hardware. They were then shown, step by step, how to make a change to the game to modify the behaviour of an on-screen pen, and could immediately see the effect of changes to their programs. Further modifications included adding some conditions that enabled their game programs to respond to noises (such as clapping!).

The projects bit...

There were loads of things to take in during the first part of the morning.  There was a whole new programming environment, there was the concept that a computer can receive and work with signals from the outside world, and the idea that a program can be formed out of groups of instructions.

The second part of the day was all about being imaginative, thinking about the different kinds of inputs and outputs that the electronics allow, and trying to think of some kind of application or demonstration.  Students were assigned to small groups and were encouraged to come up with different ideas.

The group that I was assigned to came up with the idea of trying to build some kind of 'human sensor', perhaps creating an infra-red trip wire (the Sense board came with a number of different sensors and outputs - one of them being an infrared transmitter or detector).  We collectively thought about the different cables and sensors that we had at our disposal before beginning to play with what kinds of signals (or numbers) we could detect from the outside world.  We got a fair way with this task before our time was up.

Reflections

It was a fun day! Although there was limited time to do real stuff, the tiny team that I was allied to wrote some simple program code that allowed a heat sensor to work. The Sense board represented a connection between the magical world of code and software and the physical world, where measurements could be made.

One of the biggest challenges of the day was to convey such a lot of (often quite difficult) theory in such a short amount of time.  Arosha was charged with telling our students something about the different types of programming constructs, variables and graphics.  Although this was necessary to get to the point where we could all do some fun stuff (modify our program), the way that hardware was used with software certainly facilitated engagement and helped to focus our attention.

I liked the way the idea of ubiquitous computing was used as an introduction, but one additional point might have been to emphasise the extent to which we are surrounded by computers. The moment you receive a telephone call, there is an unknown number of computers all working together to deliver it. There's the computer in your mobile phone, there's a computer in the base station which speaks to other computers... and at the other end, there is a similar situation. Also, turning on the TV means starting up a pretty powerful computer that performs millions of instructions a second, converting signals from one format to another. Their ubiquity and invisibility is astonishing.

What is also astonishing is that the fundamental principles of computer programming that are exposed by the Sense programming language are shared amongst all these devices and systems. In the same way that we have ubiquitous computing, we also have ubiquitous code: computer software that can run anywhere.

Being involved in this day took me back in time to the days when I first got my hands on a computer. Although the form of a computer has changed immeasurably, some things have not changed. Computers remain very particular and pedantic - they require patience. It's also important to remember that learning how to work with code can and should be fun. And when you've created a world out of code and you understand how things work, working with it can be immeasurably rewarding too.

Christopher Douce

Psychology of Programming Interest Group 2012 workshop: London Metropolitan University

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 14 Oct 2020, 11:41

The 24th Psychology of Programming Interest Group workshop was held at London Metropolitan University between 21 and 23 November 2012. I wasn't able to attend the first day of the workshop due to another commitment, but was able to attend the second and third days (which is a shame, since I've heard from other delegates that the first day was pretty good and yielded a number of very thought-provoking presentations and discussions). This blog post is a summary of the days I managed to attend. I'm sharing it with the hope that this summary might be useful to someone.

Day 2: Expertise, learning to program, tools and doctoral consortium

Expertise

The first presentation of the day was entitled 'Thrashing, tolerating and compromising in software development', by Tamara Lopez from the Open University. I understand thrashing to be the application of problem solving strategies in an ineffective and unsystematic way; tolerating to be working with temporary solutions with the intention of moving a solution along to another state; and compromising to be solving a problem but not being entirely happy with the solution. An interesting note that I made during Tamara's presentation relates to the role of feelings. I have also experienced 'thrashing' in the moments before I recover sufficient metacognitive awareness to understand that a cup of tea and a walk is necessary to regain perspective.

The second presentation of the day was by Rebecca Yates, from LERO based at the University of Limerick.  Rebecca's talk was entitled, 'conducting field studies in software engineering: an experience report' and her focus was all about program comprehension, i.e. what happens when programmers start a new job and start to learn an unfamiliar code base.  I made a special note of her points about the importance of going out into industry and the importance of addressing ethical issues. 

One of the 'take away' points that I got from Rebecca's talk was that getting access to people in industry can be pretty tough - the practical issues of carrying out programming research, such as time, restrictions about access to intellectual property and the importance of persuasion (or making the aim of research clear to those who are going to play a part in it) can all be particularly challenging.

Learning to program

Louis Major, from the University of Keele, started the second session with a paper entitled, 'teaching novices programming using a robot simulator: case study protocol'.  Louis told us about his systematic literature review before introducing us to his robot simulator which could be used to create programs to do simple tasks such as line following and line counting.  Louis also spoke about his research method, a case study approach which applied multiple methods such as tests and interviews.

Louis also spoke about the value of robots: they were considered to be appealing, enjoyable and exciting, and robotics (as a whole subject) has a strong connection with the STEM disciplines (science, technology, engineering and mathematics). The advantage of using simulations is that there are fewer limitations in terms of space, cost and technical barriers.

A couple of months after the workshop I was reminded about the relevance of Louis's research after having been tangentially involved in an introductory Open University module, TM129 Technologies in Practice, which also makes use of a robot simulator.  Students are also given the challenge of solving simple problems, including the challenge of creating line following robots. 

The second talk in this part of the workshop was by PPIG regular Richard Bornat. Richard's talk, entitled 'observing mental models in novice programmers', built on earlier work presented at PPIG where Richard and his colleague Saeed had designed a test that, it was claimed, could (potentially) predict whether students were able to grasp some of the principles of programming.

An interesting observation was that when it comes to computer programming, results sometimes have a bi-modal distribution. What this means is that if students pass, they are likely to pass very well. On the other hand, there is also a peak in numbers when it comes to students who struggle. During (and after) his talk, he suggested that some students find some of the concepts connected to programming (such as the assignment operator) fundamentally difficult.
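
A classic example of such a misunderstanding (my illustration, not Richard's actual test) is reading '=' as mathematical equality rather than as an instruction to copy a value:

    # Novices who read '=' as equality are surprised that
    # swapping two variables naively loses a value.
    a, b = 3, 7
    a = b           # a is now 7; the original 3 is gone
    b = a           # b is still 7 - no swap has happened

    # A working swap needs a temporary variable...
    a, b = 3, 7
    temp = a
    a = b
    b = temp
    print(a, b)     # 7 3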

Paul Orlov, who joined us all the way from St. Petersburg, spoke about 'investigating the role of programmers' peripheral vision: a gaze-contingent tool and experimental proposal'. Paul's talk connected with earlier research where experimental tools, such as 'restricted focus viewers', were used in conjunction with program comprehension experiments. Paul's talk inspired a lot of debate and questions. I remember one discussion about the distinction between attention and seeing (and the fact that we can easily learn not to attend to information should we choose not to).

Ben Du Boulay, formerly from the University of Sussex, was our discussant. Ben mentioned that when it comes to interdisciplinary research, conducting systematic literature reviews can be particularly difficult due to the number of different publication databases that researchers have to consider. Connecting with Richard's paper, Ben asked what might be the fundamental misunderstandings that could emerge when it comes to computer programming. Regarding Paul's paper, which connects to the theme of perception and attention, Ben made the point that we can learn how to ignore things and that attention can be focused depending on the task that we have to complete. Ben also commented on earlier discussions, such as the drive to change the current computing curriculum in schools.

One thing that learning programming can do for us is help to teach us problem solving skills. There is a school of thought that learning to program can be viewed in the way Latin once was: that it is inherently good for you. Related points include the importance of the task and its relationship to motivation.

Tools

Fraser McKay from the University of Kent presented 'evaluation of subject-specific heuristics for initial learning environments: a pilot study'. In human-computer interaction (or interaction design), heuristics are a set of rules of thumb that help you to think about the usability of a system. General heuristics, such as those by Nielsen, are very popular (as well as being powerful), but there is an argument that they may not be best suited to uncovering problems in all situations.

Fraser focused on two environments that were considered helpful in the teaching of programming: Scratch (MIT) and Greenfoot. Although this was very much a 'work in progress' paper, it was interesting to learn about the extent to which different sets of heuristics might be used together, and the way in which a new set of heuristics might be evaluated.

Mark Vinkovits presented the work of his co-authors, Christian Prause and Jan Nonnen, which was entitled 'a field experiment on gamification of code quality in Agile development'. Initially I found the term 'gamification' quite puzzling, but I quickly understood it in terms of 'how to make software development into a game, where the output can be appreciated and recognised by others'.

The idea was to connect code development with the use of quality metrics to obtain a score that indicates how well developers are doing. This final presentation gave way to a lot of debate about whether developers might be inclined to write code in such a way as to create high rankings. (There is also the question of whether different domains of application will yield different quality scores.) I really like the concept. Gamification exposes different dimensions of software development, which has the potential to be connected to motivation. It strikes me that the challenge lies in understanding how one might affect the other whilst at the same time facilitating effective software development practice.
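
I didn't note the details of the metrics that were used, so the following Python sketch is entirely my own invention, but it suggests how a naive gamified 'score' might combine a couple of simple measures:

    def quality_score(lines_of_code, documented_functions, total_functions):
        """An invented, naive quality score; a real system would draw on
        much richer metrics (test coverage, static analysis, reviews)."""
        if total_functions == 0:
            return 0.0
        doc_ratio = documented_functions / total_functions
        brevity = 1000 / (1000 + lines_of_code)   # gently penalise sprawl
        return round(100 * (0.7 * doc_ratio + 0.3 * brevity), 1)

    print(quality_score(800, 18, 20))    # a leaderboard-style number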

Doctoral consortium presentations

Before the start of the workshop on Wednesday, a doctoral consortium session was held where students could share ideas with each other and discuss their work with more experienced (or seasoned) researchers. This session was all about allowing students to share their key research questions with a wider audience.

Presentation slots were taken by Louis Major, Fraser McKay, Michael Berry, Alistair Stead, Cosmas Fonche and Rebecca Yates (my apologies if I've missed anyone!) Other research students who were a part of the doctoral consortium included Teresa Busjahn, Melanie Coles, Gail Ollis, Mark Vinkovits, Kshitij Sharma, Tamara Lopez, Khurram Majeed and Edgar Cambranes.

Day 3: Tools and their evaluation and keynotes

Tools and their evaluation

The first presentation of the final day was by Thibault Raffaillac, who presented his research 'exploring the design of compiler feedback'. I enjoyed this presentation, since the feedback that software tools offer developers is fundamental to enabling them to do the job they need to do. A couple of questions that I noted from Thibault's presentation were 'who is the user?' (of the feedback), and what is their expertise. Another note is that compilers (and other tools) tend always to give negative points and information. It strikes me that languages offer an opportunity for programmers to interrogate a code-base. Much food for thought!

Luis Marques Afonso gave the next talk, entitled 'evaluating application programming interfaces as communication artefacts'. Understanding API usability has a relatively long history within the PPIG community. The interesting aspect of Luis's work is that three different evaluation techniques were proposed: the semiotic inspection method (which I had never heard of before), cognitive dimensions of notations (Wikipedia) and discourse analysis (Wikipedia). It was interesting to hear of these different methods - the advantage of using multiple approaches is that each method can expose different issues.

The final paper presentation, entitled 'sketching by programming in the choreographic language agent', was given by Luke Church, University of Cambridge. Luke described working amongst a group of choreographers. It was interesting to hear that the tool (or language) that had been created wasn't all about representing choreography, but instead about potentially enabling choreographers to become inspired by the representations that were generated by the tool. Luke's presentation created a lot of interest and debate.

Keynote: extreme notation design

A computer programming language is a form of notation.  A notation is a system that can be used to represent ideas or actions and can be understood by people (such as music) or machines (as in computer programming), or both.  Thomas Green proposed a set of 'dimensions' or characteristics of notation systems which relate to how people can work with them.  These dimensions can be traded-off against each other depending upon the nature of the particular problem that is to be solved.

One challenge is: how can we understand the characteristics of these trade-offs? Alan Blackwell gave a keynote talk about a programming language that was controversially described as being a hybrid of Photoshop and Excel.

Palimpsest used the idea of different layers, which could contain different elements that could interact with each other (if I understand things correctly). Methodologically speaking, the idea of creating a tool or a language that aims to explore the extremes of language design is an interesting and potentially very powerful one. My understanding is that it allows the language designer to gain a wealth of experience, but also provides researchers with an example. Perhaps there is an opportunity for someone to write a paper that compares and collates the different 'extremities' of language design.

Panel: coding and music

The final session of the workshop was all about programming, music and performance. We were introduced to a phenomenon called 'live coding', which is where programmers 'perform' music by writing software in front of a live audience. The three presentations within this final part of the day were all slightly different, but all very connected.

Alex Mclean

Alex Mclean from the University of Leeds presented two demonstrations and talked about the challenges of live coding. These included the fact that manipulating and working with music through code is an indirect manipulation. Syntactic glitches can interrupt the flow of a performance, and there is the possibility that being wrapped up within the code has the potential to detract from the music.

Live coders can also improvise with musicians who play 'non-programming-language' (or 'real') instruments. Since the notion of 'live' can have different meanings (and can depend on the abstractions that are contained within a language), challenges include the negotiation of time and harmony. Delays can exist between having a musical idea and realising it.

Alex mentioned Scheme Bricks, which was inspired by Scratch (and Sense) and allows you to drag and drop portions of code together. This also made me realise that if there are two live coders performing at the same time, they might use entirely different 'instruments' (or notation systems) from each other.

Thor Magnusson

Thor Magnusson from the University of Brighton introduced us to a language called ixi that has been derived from SuperCollider (Wikipedia). Thor set out to make a language that could be understood by an audience. To demonstrate this, Thor quickly coded a changing set of drum and sound loops in a text editor, using a notation that has some clear and direct connections to music notation. Thor spoke of polyrhythms, and of code to change amplitude, to create harmonics, and to make sound that is musically interesting.

What I really liked was the metaphor of creating agents which 'play' fragments of code (or music). Distortions can be applied to patterns, and patterns can be nested within other patterns. Thor also presented a compelling description of the situations in which the language is used: 'programming in a nightclub, late at night, maybe you've had a few beers; you're performing - you've got to make sure the comma is in the right place'. For those who are interested, you can also see a video recording of Thor giving a live coding performance (YouTube). In my notebook I have written something that Thor must have said: 'I see code as performance; live coding is a link between performance and improvisation'.

Sam Aaron

When Sam began his short talk, I couldn't believe my eyes - he was using a text editor called Emacs! (Wikipedia).  The last time I used Emacs was when I was a postgraduate student where it persistently confused me.  Emacs, however, uses a language called Lisp which is particularly useful for live coding, since it is a declarative language. 

During his talk Sam gave a brief introduction to Overtone. You can see a video of a similar introduction to Overtone on Vimeo. One thing that did strike me was the way in which aspects of music theory could be elegantly represented within code.
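
Sam's code was Lisp rather than Python, but the flavour of the idea can be suggested in a few lines (my own sketch, not Sam's Overtone code): a major scale is just a pattern of semitone intervals applied to a root note.

    # A major scale as a pattern of semitone offsets from a root
    # MIDI note (60 is middle C).
    MAJOR_INTERVALS = [0, 2, 4, 5, 7, 9, 11, 12]

    def major_scale(root):
        return [root + interval for interval in MAJOR_INTERVALS]

    print(major_scale(60))    # [60, 62, 64, 65, 67, 69, 71, 72]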

Discussion

This final part of the workshop gave way to quite a lot of energetic debate. There appeared to be a difference between those who were thinking 'why on earth would you want to do this stuff?' and those thinking 'I think this stuff is really cool!' When it comes to live coding there is the question of who the user of the language is - is it the performer, or is it the listener or viewer (especially if a live coding notation is intended to be understandable by a non-musician-coder)?

But what of the motivations of the people who do all this cool stuff? When it comes to performance there is the attraction of 'being in the moment', of using technology in an interesting and exciting way to create something transitory that listeners might like to dance to. It certainly strikes me that to do it well requires skill, time, persistence and musicality: all the qualities that 'traditional' musicians need. Live coders also face the fundamental challenges of keeping things going when things begin to sound a bit odd, of creating new code structures on the fly, and of moving from one semi-improvised section (by means of programming and musical abstraction) to another.

Other than the performance dimension, there is the intellectual attraction of changing and challenging people's perceptions of how software and programming languages are thought of.   Another dimension is the way that technology can give rise to a community of people who enjoy using different tools to create different styles of music.  All of the tools that were mentioned within the final part of the day are free and open source.  Free code, it can be said, can lead to free musical expression.

Reflections

Like other PPIG workshops, this one had a great mix of formal presentations and more informal doctoral sessions, mixed with many opportunities for discussion. I think this was the first time that the workshop was held at London Metropolitan University. Yanguo Jing, our local conference chair, did a fabulous job of ensuring that everything ran as smoothly as possible. Yanguo also did a great job of editing the proceedings. All in all, a very successful event, and one that was expertly and skilfully organised.

There are two 'take home' points that have stuck in my mind. The first is that programming languages need not only be about programming machines; through their structures, code can also be used as a way to gain inspiration for other endeavours, particularly artistic ones.

The second point is that programming can be a performance, and one that can be fun too. The music session will certainly stick in my mind for quite some time to come. Programming performances are not just about music - they can be about education and creation; code can be used to present and share stories.

Christopher Douce

First Open University Sense Programming Workshop

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 8 Oct 2013, 12:23

The first Open University Sense Workshop was held at the London School of Economics on Saturday 11 November 2012.

Sense is a computer programming language that has been derived from Scratch, a language that was developed at the Massachusetts Institute of Technology. The aim of the Sense workshop was to allow TU100 My Digital Life students to become more familiar with the Sense environment, helping them to learn some of the fundamental principles of computer programming.

This blog post is intended as a summary of the first ever Sense workshop.  It has been written for both students and tutors. If you feel that anyone might find this summary useful, please don't hesitate to distribute widely.

Introductions

The phrase 'computer programming' is one that can easily elicit an anxious response. Programming is sometimes seen as something that is done through a set of mysterious tools. The good news is that once you have gained some understanding of the fundamental principles of programming (and of how to tackle problems and debug programs), the skills that you learn in one language can be transferred to other languages.

Sense is a programming language that uses the same fundamental concepts as languages that are used in industry (such as C++ and Java), but Sense makes the process of writing computer programs (or code) easier by allowing programs to be created from sets of visual building blocks. In essence, Sense is a visual programming language that is broadly analogous to many other languages. The fundamental difference between Sense and other languages is that it helps students to focus on the fundamentals of programming by shielding new programmers from the difficulty of writing program instructions in a notation that can be quite cryptic and difficult to understand.

The overarching intention of the Sense workshop day (that is described here) was to demystify Sense and encourage everyone to have fun. The Sense environment allows programming instructions to be manipulated as a series of lego-like blocks. These snap together to form 'clumps' of instructions which can be attached to either a background (or stage, where things can move about), or sprites (which are, in essence, graphical objects). Through Sense it is (relatively) straightforward to create sets of instructions to build simple animations and games.

The workshop was divided into three different sections. The first was a broad overview of some of the ideas behind programming, followed by a demonstration of how to use the Sense environment. The second section was a presentation containing some useful guidance about how to complete an assignment. The third section was more open... but more of this later.

The lecture bit - stepping towards programming...

The workshop was kicked off by a talk from one of our Open University tutors, Tammy. Tammy made the really good point that 'we can't teach you programming'. The implication is that only a student can learn how to do it, and the best way to learn is, of course, to find the time to play with a programming environment and to tackle, head on, the challenge of grappling with a problem.

Tammy asked a couple of people to come up and draw some shapes on the whiteboard. Different participants drew very different shapes despite being given exactly the same instructions. The point of the exercise was clear: it is absolutely essential to provide sets of instructions that are both completely clear and unambiguous (as otherwise you may well be surprised by the results that you get back).

Tammy talked about the different categories of program instruction: sequence instructions, selection instructions and iteration instructions. Pretty much all programs are composed of these three different types of operation. Put simply, a sequence of instructions is where you do one thing after another. A selection operation is where you choose to do something depending upon the status of a condition (for example, if you are cold, you might turn the heating on). An iteration operation is where you do something a number of times, or until some condition is met.

These sets of operations can be used to describe everyday actions, such as making a cup of coffee, for instance. This simple activity can be split into a sequence of steps, which can include iterations where we check to see if the kettle is boiling. (We might also do some parallel processing, such as making toast whilst the kettle is boiling, but multi-threading is a whole other issue!)
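
Translated into textual code, the coffee example might look something like this Python sketch (purely illustrative, and my own rather than anything shown on the day):

    kettle_checks = [False, False, True]   # pretend sensor readings

    # Sequence: one step after another.
    print('fill the kettle')
    print('switch it on')

    # Iteration: keep checking until a condition holds.
    for boiling in kettle_checks:
        if boiling:
            break
        print('waiting for the kettle...')

    # Selection: choose an action based on a condition.
    mugs = 0
    if mugs == 0:
        print('wash a mug first!')
    else:
        print('pour the water')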

The main points were (1) programming cannot be taught, it can only be learnt by those who do it, (2) there are some fundamental building blocks that can be combined together and nested within each other; you can have a sequence of steps within an iteration, for instance, and (3) programming requires things to be defined and described unambiguously.

The demonstration bit - creating an animation...

The second part of the morning was hosted by Leslie. Building on Tammy's summary of programming, Leslie showed us what it means to actually 'write' a program using the Sense environment.

In some respects, you can create anything within the Sense environment. It provides a set of tools, and it is up to you to come up with an idea and figure out how to combine the pieces together to do what you want to do. In some respects (and getting slightly philosophical for a moment), you can define a whole universe or world in software. You can, in effect, define your own laws of physics. I can't remember who said it, but I have always remembered the phrase 'the universe is mathematical'. Given that computers only understand numbers, the Sense environment allows you to create and represent your own universe (and interact with it in some way).

Leslie's universe was a fish tank. She began by drawing the tank, including water weeds. She then went on to draw a set of different fish characters. Script was then added to move the fish around the screen (in the tank), first in one direction (from left to right), and then in both directions (from side to side). Leslie then added more characters and defined interactions between them using something called the 'broadcast' feature, to alert some of the virtual fish that a bigger and more dangerous fish had arrived in the tank.

What was really great was how she demonstrated how to connect different instructions together (to create sequences), how to have sequences of instructions operate when certain conditions are met (selections), and how to introduce repeat loops (iterations; carrying out the same instructions over and over again).
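
Sense's 'broadcast' feature is essentially a publish-subscribe mechanism. In textual code the same idea might be sketched like this (Python, my own illustration rather than anything from the session):

    handlers = {}

    def when_i_receive(message, action):
        """Register a script to run when a message is broadcast."""
        handlers.setdefault(message, []).append(action)

    def broadcast(message):
        """Trigger every script listening for this message."""
        for action in handlers.get(message, []):
            action()

    when_i_receive('big fish arrives', lambda: print('small fish hides'))
    when_i_receive('big fish arrives', lambda: print('weeds sway'))
    broadcast('big fish arrives')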

The bit about the assignment...

The final 'lecture' part of the day was given by Open University tutor Dave, who took everyone through the structure of the forthcoming assignment (without giving away any of the answers). Dave talked about the use of the online discussion forums, and this gave way to an interesting discussion about the importance of referencing. Other points that were mentioned included the importance of things such as including word counts (on the TMA), and the learning objectives that are used by the module.

The programming bit...

During the afternoon, we split into two groups and then got together in small teams of between two and four people. The intention of the second part of the day was to try to create a small Sense project by huddling around a single laptop on which the Sense environment had been installed. We would work on something for an hour, and then we would present what we had done to the other teams, describing some of the problems and challenges that we had encountered along the way.

Not having had much experience of using Sense, I was very happy to play an active role within one of the teams. One of my main intentions in coming along for the day was to learn more about how to use the language and discover more about what it was capable of. Our team came up with two different ideas: a representation of a car race track, and some kind of athletics game or animation. We settled on the athletics theme and decided we would try to animate a man running around a very simple athletics track. (Our track became a square as opposed to an oval shape, since we decided that re-discovering the mathematics of the circle was probably going to be quite tricky to master in about an hour!)

Within an hour we had drawn some stick figures, got our character doing a really simple 'run' animation, and had our figure run around a really simple athletics track. From memory, one of the challenges was figuring out how to represent program state and have it shared between different scripts that were running within the same sprite (apologies for immediately going into Sense-speak!) Another challenge was to figure out how to represent state with Boolean variables and have those embedded within a continuous loop (but given enough time, I'm sure that we would have cracked it!) A final challenge (and surprise) was discovering that the Sense environment automatically 'remembered' how much a character had been rotated between the different times that we 'ran' our scripts. (We had instances where our running character ran off the side of the screen, much to our surprise!)
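
For anyone who hasn't met the problem, a rough Python analogue of what we were wrestling with (my own sketch, not Sense code) is a Boolean flag shared between one 'script' that sets it and another that checks it inside a loop:

    state = {'running': False}      # shared between both 'scripts'

    def green_flag_script():
        state['running'] = True     # one script flips the flag...

    def animation_script(steps):
        for _ in range(steps):      # ...another checks it continuously
            if state['running']:
                print('move the runner one step')
            else:
                print('stand still')

    animation_script(2)             # prints 'stand still' twice
    green_flag_script()
    animation_script(2)             # now the runner moves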

After our time was up, we were all asked to demonstrate and talk through our various projects. I can remember a simple etch-a-sketch game, a demonstration of some bouncing balls (which bounced at different speeds), a space invader game (where the invader was a cat), a Tom and Jerry animation where Tom chased Jerry across a screen, and an animation that involved a balloon and a plane. It was great to see very different projects: when we code our own, we can easily get into the mindset of just solving our own problem, and seeing the work of others is very refreshing. It was inspiring to see what could be created after an hour of programming.

Reflections

The whole day reminded me of the time when I first tried to learn computer programming, and I still remember that it was a pretty difficult challenge (in my day!) I always wanted to rush ahead and solve the bigger, more exciting problems, but I was often tripped up because I needed to understand the operation of the fundamental instructions and operators (and the way a language worked). In my own experience the only way to really understand how things work is to find the time to play, to explore the various operators and instructions; but finding both the time and the confidence to do this is perhaps a challenge in itself.

All in all, the first Sense Workshop was a fun day. I certainly got a lot out of it, and I hope that everyone else did too. I certainly hope this is going to become a bi-annual event for all our TU100 students. From my 'I've never really used Sense to do anything other than run a demo program' perspective, I came out knowing a lot more than I did when I started. Large parts of Sense were demystified, and I certainly had a lot of fun attending.

Additional resources

After I shared a link to this post, my colleague Arosha (who also came along to the Sense workshop) wrote a short blog post. Arosha is loads more skilled than I am when it comes to Sense programming and has re-created one of the projects that were demonstrated on the day. Thanks Arosha!

Christopher Douce

Mobile Application Development: from curriculum design to graduate employability

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 19 Oct 2021, 11:26

I had never visited the University of Buckingham before. It was on the morning of Tuesday 15 May 2012 that I found myself travelling to Milton Keynes railway station to meet a pre-booked taxi that would whisk me into the unknowns of the Buckinghamshire countryside, towards an event that was intended to share practice about the teaching of mobile technology. Although I had never visited Buckingham, I have heard it spoken of many times before: a radical institution which was founded at approximately the same time as another radical institution, the Open University.

As well as sharing practice about the teaching of mobile application development, another really important theme was employability, and the open question of whether universities are 'teaching the right stuff' to enable graduates to immediately make a contribution in the workplace.

This blog post is a summary of my visit to an HEA event entitled 'Mobile Application Development: from curriculum design to graduate employability'. If I've missed any key points, I encourage fellow participants and delegates to add comments below.

Industry keynote

Lee Stott, an academic evangelist from Microsoft, kicked off the day with a really interesting keynote. Lee is from a part of Microsoft that works with university departments (Microsoft Faculty pages).

Lee emphasised the point that users expect connectivity. I made a note of an interesting quote: 'mobility plus cloud equals opportunity'. It's easy to imagine (or even remember) situations where one gained access to information whilst travelling to solve a problem, such as finding the address of a location or accessing some urgently needed information.

Lee also made the point that mobile devices are our predominant work tools. A tool, of course, might be a phone or a laptop. This is certainly true in my case; I often haul my laptop between the OU's headquarters in Milton Keynes and my home, sometimes using the dead time on a train to do some marking. Another thought that comes to mind is whether mobility is causing work time to encroach on our personal time, but this is a whole other debate (and one that I hope to connect with by writing another blog post about a recent seminar).

The usefulness of an app depends on a combination of its functionality, the functionality of the device, and the availability of a network. Apps need to be not only functional but also graphically appealing. Lee emphasised the importance of designers: not just software designers, but graphic designers too. This connects to an important point, which is that creating good apps is an interdisciplinary activity - a combination of technology, business and art. Writing commercial apps isn't just about writing software that works - apps need to be 'hardened': tested thoroughly and checked for vulnerabilities.

Microsoft, along with other mobile platform vendors (such as Google and Apple), has its own ecosystem of tools, technologies and platforms. Microsoft is but one of many platforms that educators can choose from.

I have to confess (for my sins) that I used to be a software developer who mostly specialised in Microsoft technologies. I used to use .NET, MS SQL and a bunch of other stuff. It has been, however, a few years since I've done this. Lee introduced technologies that were entirely new to me, such as Microsoft Azure (wikipedia) and Microsoft XNA (wikipedia) for Xbox. Lee also mentioned other software that was on the near horizon, such as Windows 8 (wikipedia), which can be used on 'slate' (or tablet) devices.

Lee also touched upon the important subject of recruitment, emphasising that it is important to encourage students to build apps and sell them through app market places, to create a portfolio which can be shown to potential employers.

The question and answer session was interesting. There was some discussion about cross-platform approaches to development, and the fact that when developers go cross-platform they lose some functionality from the original host operating system of a mobile device (or phone). The subject of native code versus multi-platform code was a debate that arose on a number of occasions throughout the day. HTML 5 (wikipedia) was regularly mentioned, along with platforms such as PhoneGap (PhoneGap website).

Another tension that exists, particularly when industry representatives and university representatives debate curriculum, is the difference between education and training. Industry wants people who are fully trained (and would ideally like universities to do this), but the real role of universities when it comes to technology (in my opinion) is to enable students to learn how best to learn and adapt to new tools and situations. Lee made the point that the teaching of fundamentals is essential. I agree. Conveying principles through the use of vendor-specific tools, whilst presenting concepts in a general way so that other technologies can be understood, is a difficult thing to achieve.

Mobile application development: a journey thus far

Harin Sellakewa from the University of Buckingham gave a presentation that described how mobile technology came to be taught, in its current form, at Buckingham. Harin described how some of the curriculum had changed and outlined the introduction of new modules. The use of mobile technology had been explored through a number of projects, including some funded by the EU.

Some of the key learning objectives of a module on mobile software were mentioned: how to design applications (or apps), understanding different components, and learning about various guidelines and specifications. All these learning objectives could then contribute to making an application that could be sold on the free market.

Harin also gave us a number of useful tips: any new module must (of course) satisfactorily complement existing modules; aim to get people involved; speak to different vendors; start with student projects; attend training events that are run by industry; and take the time to network.

A number of different topics were explored through the question and answer session. As well as a discussion about different technologies, an industry representative mentioned the importance of candidates having a portfolio of work to demonstrate to prospective employers. One point that stuck in my mind was that an unfinished application has the potential to work against an applicant; showing something polished and complete is necessary.

Developing Apps in Schools

Aaron Peck teaches computing and ICT at the Royal Latin School, Buckingham, a school just around the corner from the university. Aaron began by speaking about wider discussions surrounding the GCSE computing curriculum, mentioning the OCR GCSE, which was said to contain three key components: programming, a research project and an examination.

Aaron emphasised fun, and mentioned the use of the MIT Scratch (Scratch website) environment. He also went on to speak about mobile devices, a technology that pupils are invariably likely to be familiar with. Here lies an obvious collision of ideas: why not teach programming through the use of mobile devices?

Scratch has, of course, some distinct advantages - it is immediate and gets around the tyranny of fiddly syntax by providing students with a graphical environment in which they can play. Another programming environment that offers a graphical world is the MIT App Inventor (App Inventor website), which allows users to create apps for Android phones.

Students are encouraged to create small projects, which may include a simple calculator, a recipe book or a hangman game. The creation of apps has the potential to open up discussion of wider issues, such as how such developments might be commercialised. I remember an anecdote from Aaron, who was asked by a student how much an app programmer might earn; a testament to his ability to instil enthusiasm and his engaging choices of technology.

There were some advantages to using App Inventor: it can be used on multiple development platforms, it is relatively simple to install, and given that students may have used Scratch during earlier studies, the graphical nature of the programming environment is (potentially) more easily grasped by students.

Aaron isn't stopping at creating apps with App Inventor. He mentioned his intention to try to work with Lego Mindstorms robots through the Android SDK, where it might be possible to create a 'remote control' app using Bluetooth radio. Aaron also mentioned that there was an opportunity to share the workings of HTML and JavaScript with his students. If my memory isn't playing tricks on me, I also seem to recall that he mentioned that one of his students was inspired enough to use C++.

The question and answer session led us to subjects and technologies such as Microsoft Kodu and Microsoft Gadgeteer. Other important issues included addressing the gender imbalance, and how to motivate all student groups, including those who may not have a strong technical bias.

I really enjoyed this talk. Two big parts of the tech were familiar to me: Scratch (or as I know it, Sense) and App Inventor. Both products are used as part of different Open University computing modules: TU100 My Digital Life and TT284 Web Technologies. It was an eye-opener, for me, to see how these products could be used as a way to inspire students at GCSE level.

Mobile Assessment

The use of mobile technology to help teaching and learning seems to be a hot topic at the moment. Joan Lu gave a presentation about the use of mobile technology for assessment, and also mentioned the use of student response systems, making reference to an EU funded project entitled Do-IT. Joan is from the XDIR research group at the University of Huddersfield, which has carried out research projects related to mobile technology.

Designing the mobile syllabus to enhance student employability

Yanguo Jing from London Metropolitan University gave a presentation about his first hand experiences of teaching about mobile technology to his postgraduate students.  It was a really interesting presentation that was packed with useful tips, not just about teaching but also about industrial engagement too.

Returning to the subject of multiple platforms and environments, Yanguo said that initially he tried to teach a little bit about all the major toolsets. He came to the conclusion that this was less than ideal: although students might gain breadth, getting to 'depth' is always a challenge. It was decided, therefore, to focus on one particular platform and to use the experience with that platform to make points that are important for other platforms too. This is a very sensible practical decision; there is only so much detail that a lecturer can hold in his or her head at any one time.

Understanding mobile isn't just about understanding the technology and the fundamentals of creating executable code that runs on a device; it is also about understanding the surrounding business and economic area. Connecting back to the idea of creating marketable apps that Harin touched upon in his earlier presentation, Yanguo said that he encourages his students to enter application competitions, or appathons. He also mentioned that students were encouraged to attend an industry conference, DroidCon, to gain first-hand experience of what is happening within industry. It was interesting to hear that Yanguo is a part of an industry liaison group. Not only does this facilitate a connection between academics and industry, it can also act as a connection between industry and students too.

Finally, it is also perhaps worth mentioning that Yanguo is helping to organise a related HEA event on mobile technology on 15 June 2012, entitled Workshop on Teaching and Learning Programming for Mobile and Tablet Devices.  It sounds like it's going to be a great event!

Programming with iOS

Gordon Eccleston from Robert Gordon University, Aberdeen, shared some of his experiences of teaching using Apple's iOS. This platform enabled students to learn something about HCI principles and also about object-oriented programming (through the use of Objective-C).

Gordon offered a key tip which echoed earlier discussions in the event.  He said, 'keep your modules as generic as possible'.  Inspiration and information that informed the creation of his module included looking at different text books and short courses that were designed for industry.  Studying the documentation provided by the vendor can be a very useful source of materials that can help to guide or inform the creation of aspects of a module.

Gordon spoke about lab-based teaching (in a lab containing lots of Apple kit) and student course work. Gordon then went on to present a brief overview of a number of different student projects. The value of projects cannot be overstated. A good project connects the technology with broader issues of business, and also gives the student some good materials that can be immediately demonstrated to a potential employer (I have this image of an interviewee handing their phone to an interviewer whilst saying, 'this is what I've done'). One project that stuck in my mind was an app that illustrated a fashion portfolio, which demonstrates a connection between apps and marketing.

Gordon's session inspired a really interesting question and answer session. One point was that PC (or Mac) based simulators are all very well, but it's also important (as well as rewarding) to allow students to run their software on actual devices (such as an iPod Touch). For one thing, it allows developers to gain access to device-only peripherals, such as accelerometers and other sensors, that they wouldn't otherwise have access to.

Reflection of curriculum design and delivery in mobile computing

Khawar Hamed from the University of Staffordshire spoke about his experiences of curriculum design. Khawar's presentation reminded me that an app sits at the top of a technology pyramid. Along with the operating system of a device, apps are perhaps the most visible software artefacts that users interact with. Underneath the app and beyond the phone there is a sophisticated digital infrastructure that enables devices to work. Some of the modules that Khawar mentioned allow students to begin to study these underlying technologies. Another point is that mobility isn't just about technology; it's also about enabling organisations to achieve their objectives.

Khawar touched upon other issues, such as the importance of getting the right name for a course or programme. Since the names and phrases used to describe technology can change relatively quickly, perhaps the names of modules and programmes should be prepared to change too? An important point was to always seek industrial involvement wherever possible. Connecting to this point, Khawar mentioned an organisation called The Wireless University Forum.

One really interesting debate that emerged from this presentation centred upon whether an institution should provide devices that students can transfer code to.  The answer was a resounding 'yes'.  Not everyone will have an Android phone, or an iPhone (or even a smartphone, although this is something that is changing).  Plus, providing a device delineates between what is a 'learning' device and what is a 'personal' device.

Mobile app development - creativity, skills and evidence

The final talk of the day was a second keynote. Andrew Lapham, from Yell Labs, gave an enthusiastic presentation about the work that his team carries out and the characteristics he looks for in potential employees. Key points included the ability to be creative and generate new and interesting ideas, strong communication skills (the ability to communicate those ideas and to persuade others of their merit), and an underlying enthusiasm for technology and what it might be able to achieve.

The notion of having a portfolio of evidence was also touched upon. Whilst demonstrating apps or talking through a pet project is impressive, what is more impressive is having evidence that your own product or code has been marketed. This might include having a blog about a product, and also gathering some evidence about how your customers view your product.

Reflections

There was one thing that surprised me about this day, which was the exceptionally strong focus on apps. In retrospect, it shouldn't have been a surprise at all. Apps are the way to consume software on mobile devices.

I certainly sense that teaching programming for mobile devices isn't easy. Each platform comes attached to an ecology of tools (and a whole set of accompanying vocabulary) and techniques. Teaching everything just isn't an option, but teaching in depth is surely the right way to go. Educators will therefore have to choose a platform and figure out how to connect that technology choice to wider principles, to enable graduates to more readily get to grips with the new environments they will inevitably face.

One really interesting question is whether mobility, and the technology that goes with it, is changing software engineering. It's not a question that seems to have an easy answer, but perhaps user-facing apps require different design methods than the lower-level software that supports the networking infrastructure; perhaps those who have stronger connections with the industry would be able to comment.

A final reflection relates to the creation of a portfolio that can help during the recruitment process. The importance of a personal portfolio was emphasised at a recent HEA event at the University of Greenwich about gaming and animation. Employers like to see what applicants have done. Furthermore, a portfolio offers opportunities for employers to find out about the difficulties that applicants faced and how they were overcome.

When it comes to being an app developer, the message was clear: a portfolio of well-crafted working apps was clearly something that employers would like to see.

Congratulations to Buckingham for running a fun and thought provoking event!

Christopher Douce

Exploring Sense

Visible to anyone in the world
Edited by Christopher Douce, Monday, 24 Mar 2014, 14:13

Last weekend I attended an event known as a Sense development session, hosted at the Open University in the South East offices in East Grinstead.  Sense is, of course, the graphical programming language that is used to teach the fundamentals of programming in a new module entitled TU100 My Digital Life.

Whilst TU100 discusses a whole range of issues (such as privacy, mobility and ubiquity) and allows different skills to be developed, programming remains an important subject and one that some students find difficult. 

The main objective of the event was to enable associate lecturers to get together to share their experiences of using the Sense software.  Before the main Sense session, another tool was demonstrated and discussed: Jing.

Jing

The Open University provides and supports a number of different digital tools, such as its Moodle based virtual learning environment, synchronous discussion tools  and image sharing software (such as the kind of software used on TU100, as well as other modules such as U101 Design Thinking and T189 Digital Photography).  Sometimes, however, it is possible to make use of freely available tools that are just 'out there' (on the cloud) to facilitate teaching and learning.

Jing is one of those tools.  At the start of the session, Graham Eaton demonstrated how Jing (Techsmith website) can be used to create simple and effective demonstrations to show students how to make use of different applications.  One of the really nice features of an application such as Jing is that it also allows you to make voice recordings: you can talk through how you use something.  When you are done, you can share your digital recording with others by uploading the results to a shared website.

Graham went further than just saying that 'Jing is a tool that allows you to quickly make screen casts'.  Using MS Paint, a graphics tablet and Jing, Graham demonstrated that it is possible to create customised 'chalk board' animations which can be used to explain simple mathematical principles.

There is, of course, a drawback: cost.  The demo version (which is free to use) doesn't permit editing and limits recordings to five minutes.  These five minutes, however, may make the difference between understanding a principle and not understanding a principle.

An important (implicit) point was that we have different tools at our disposal, and it's up to us to find a blend of the different tools that we feel comfortable using.  Educational practice sessions such as these may inspire us to investigate and decide upon our own blend of tools (and allow us to think differently about new possibilities).

Introducing Sense

The Sense part of the day was facilitated by Diane Brewster and Michelle Dewey.  Diane kicked off the first activity, which tried to answer the question, 'what are the problems of teaching programming to novices?'  From three groups we arrived at a number of answers, which I'll do my best to summarise.

Firstly there were the broad skills, such as thinking algorithmically and being able to 'abstract' the essence of a problem so it can be translated into code.  This was connected to the challenge of looking at (and understanding) the logic of problems.  The issue of syntax was also mentioned, along with acquiring knowledge (and understanding) of different programming structures and how they might be used.

Knowing how (and where) to look things up was considered to be an important skill, as were techniques (and strategies) for testing and debugging.  A final general point that was discussed was that some students who had learnt how to program using one programming paradigm (Wikipedia) might find it difficult to learn a programming language that uses a different paradigm.

Diane took us through a presentation that aimed to answer the question 'why has Sense been developed and what is its pedigree?'  We were told about the Scratch language (MIT), a programming language called Alice (Alice website), and a microcontroller called the Arduino (Wikipedia).  Sense is, of course, a version of Scratch that the Open University has modified.  The differences are that it has a small number of different programming constructs, and that it can be interfaced with some Arduino-based physical hardware.

Towards the end of this first session, we were then assigned into mixed groups and asked to consider how to write a small program using different coloured post-it notes.  (Some of us were programmers, others were not!)

Playing with Sense

Before we were allowed into a lab filled with computers, we were introduced to a number of other Sense concepts, such as the notion of 'broadcast', or sending messages from one component of a Sense program to another.  There was some discussion about the stage metaphor, and a presentation of a simple maze game.  In keeping with this metaphor, something new for me was the idea that a sprite (a graphical object on the screen) can have different costumes.
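The broadcast idea is easy to sketch in code.  What follows is a minimal illustration in Python (not how Sense itself is implemented, and the names are my own invention): sprites register an interest in a named message, and a single broadcast triggers every registered handler.

    class Stage:
        """A minimal broadcast hub, loosely analogous to Sense's stage."""
        def __init__(self):
            self.listeners = {}

        def listen(self, message, handler):
            # A sprite registers interest in a named message.
            self.listeners.setdefault(message, []).append(handler)

        def broadcast(self, message):
            # Every registered handler reacts to the one broadcast.
            for handler in self.listeners.get(message, []):
                handler()

    stage = Stage()
    stage.listen('game over', lambda: print('sprite 1: hide'))
    stage.listen('game over', lambda: print("sprite 2: switch to 'game over' costume"))
    stage.broadcast('game over')

The appeal of the pattern is that the sprites never refer to one another directly; they only agree on the name of a message.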

The final part of the day was dedicated to about an hour of 'tinkering'.  It is felt that Sense is one of those things that you can only get to grips with properly if you spend a bit of time 'messing around' with it.  By messing around, this might mean creating new programs, or changing existing programs.

Despite not having had much time to tinker before, and perhaps because I am a former software developer, some of the constructs (and the graphical palettes that held these constructs) soon became familiar to me.  What was apparent was that I had to do quite a bit of looking and searching, but by the end of the hour I roughly knew where I needed to look (and what colour of programming construct to look for) to do the things that I wanted to do.

Final points

I took away a number of points from this session.  The first was a reminder that the teaching and learning of programming is not just about programming itself.  It is all very well knowing about different programming constructs and understanding what they do, but it is a whole other challenge to know how to decompose a problem into discrete steps that a computer can execute.

Researchers who have studied the psychology of programming have explored the notion of a programmer's cognitive strategy.  As well as a programming strategy there is also the notion of a programmer's tactic: something that a programmer might do to help them understand or get to grips with a problem, or to understand what a computer is doing when faced with a buggy program.

Teaching programming isn't only about teaching the constructs, but also about exposing and sharing (or even 'bootstrapping', to take a computing analogy) these tactics.  I clearly remember a discussion about using the Plan Do Check Act, or PDCA, cycle (Wikipedia) to help users of Sense understand what needs to be done.

Another important point (and one that I've mentioned before) is the need for both students and tutors alike to find the time to 'tinker', to explore what is possible within a programming language or environment.  Tinkering facilitates the development of strategies and tactics.

My own view is that programming isn't something that is just about making sets of instructions to get a machine to do stuff; it is also about facing up to the sometimes difficult challenge of problem solving.  Programming is an intrinsically creative activity, and this is something that is easily forgotten.  To be creative, we need to find the time to play and tinker.  This is something that is easily forgotten too.

Christopher Douce

PPIG 2011

Visible to anyone in the world
Edited by Christopher Douce, Friday, 1 Dec 2017, 14:21

I recently had the pleasure of attending the PPIG 2011 workshop, held between 6 and 8 September.  As I might have mentioned in an earlier blog, PPIG is an abbreviation for the Psychology of Programming Interest Group.  There used to be an American equivalent called ESP (Empirical Studies of Programmers), but this community seems to have disappeared.  PPIG, on the other hand, is going strong.

This year it was held in the University of York computer science department.  The department had moved since I was last there, forcing me to circumnavigate the campus and arrive at a time when almost all the tea and sandwiches had disappeared.  Thomas Green gave an opening address, and then it was swiftly on to the first presentation.

Mathematics and Visual Impairments

Alistair Edwards gave a talk entitled 'new approaches for mathematics in blind students'.  Mathematics, of course, relies on visual notations.  These notations, it was argued, are an integral part of working with maths.

Alistair holds the view (or, should I say, I understand that he holds the view) that there is more to it than just using an appropriate notation, or having that notation converted into another form.  We externalise parts of our working memory by using pen and paper.  Also, the idea of cancelling (or balancing) an equation by crossing off similar terms from both sides can be viewed as a visual manipulation.

Alistair mentioned a couple of projects, one of which was called Lambda, an abbreviation for Linear Access to Mathematics for Braille Device and Audio-synthesis.  Here, the challenges began to become clear: I didn't know this, but different countries have different braille notations for mathematics.  There is, of course, the issue that placing an interface between the user and a notation immediately introduces cognitive barriers, which has the potential to make things more difficult to understand.  Users, it was argued, need more direct forms of interaction when working with mathematics.

All in all, a very thought provoking introduction, and it made me wonder how Green's cognitive dimensions of notations framework might be used to analyse interactions with notational systems (such as mathematics or programming languages) by users who may have different modality preferences (e.g. auditory over visual).

New Tech

The first main session of the workshop was called New Tech.  I'll do a very quick run through of the papers from each session.  Jon Rimmer presented the shyness project, and introduced the beguiling ambient computing device called the 'subtle stone', whereby class participants can communicate their emotional state to the lecturer by a click of a juggling ball.  Jon deftly directed us to a series of interesting papers about shyness.

The following paper was closely connected.  It was entitled 'self-reporting emotional experiences in a computing lab', by Judith Good (et al).  We programmers go through a whole spectrum of different emotions during a programming session, from delight through to wishing to jump on our laptops (although I don't recommend this).  This is an interesting direction: emotion connects to motivation, and traditionally the PPIG community has focussed more upon the cognitive.

Chris Roast, from Sheffield Hallam University, took us on a tour of a tool that helped to create different internationalised film posters: software localisation through abstraction.

One of the papers that I most enjoyed was by Chris Martin from the University of Dundee, who took dancing robots on a road trip to a number of different schools to further explore whether a robot dance workshop can inspire interest in a technical subject such as computer programming.  Or, does using robots 'sugar coat' a difficult subject, or is there more to it than this?  By robots, imagine small buggies.

In fact, Chris's robots use the same Arduino microcontroller that is central to the TU100 Senseboard.  By dance, think of an activity that has a low threshold to success, is creative, transcends culture, and where performance can be valued over competition (I made notes during this bit of Chris's talk).

The two other papers in this section described how to make music with a dry marker pen, a whiteboard and a mobile phone by defining your own musical notation (which was pretty cool), and considered the challenges that users of mobile spreadsheets face.

Human and algorithmic complexity

The next presentation was a 'work in progress' paper by yours truly.  I talked about a project (that has been making very slow progress, due to a myriad of different reasons) to explore whether it is possible to link measurements of program complexity to physiological measurements that are known to indicate human cognitive load. 

The workshop participants gave me a whole number of things to think about, including the names of a number of researchers, as well as a clear message of: 'stop procrastinating, just get on with it'.  I'm certainly going to take that last piece of advice.

The following presentation was about the challenge of working with test data.  A subject that can easily cause terror to many a software developer.

Language Formalities

Wednesday yielded a change of plan.  Thomas Green ran a session entitled, 'how to design a notation'.  A programming language is, of course, a form of notation.  Thomas posed interesting questions, like, 'how might we invent a different type of musical notation?', which led on to other questions such as a notation's level of abstraction (what each element can represent), what you might wish to do with a notation, its overall purpose, and how you might offload to external representations.

A theme underpinning all this debate was one that is familiar to many human-computer interaction and interaction design researchers: the idea of trading one thing off against another.

Giora Alexandron then presented his paper entitled 'programming with the user in mind' which led us towards considering something called live sequence charts, which is an extension of UML sequence diagrams.  We were introduced to the Play Engine and the PlayGo IDE.

This was followed by a presentation by Ahmad Taherkhani, who spoke about 'automatic algorithm recognition based on programming schemas'.  I really liked this paper, since it was a new angle on some very early psychology of programming themes.  I particularly liked how Ahmad was attempting to use existing theories to engineer a solution that may not only have practical use (in terms of providing tools to help educators to understand the programming code that students write), but where the act of programming a solution also has the potential to allow us to learn more about the theories being applied.

The final paper in this session was a work in progress paper by Khuong A Nguyen, who used a human-computer interaction technique called cognitive walkthrough to learn more about the NXT-G visual programming language that can be used with the Lego Mindstorms hardware.

HCI for the future

Following a tour of a HCI and accessibility lab (which resembles a small apartment), a short panel discussion took place, with Nicholas Merriam and Luke Church taking centre stage.  Nicholas's perspective was especially welcome, since he spoke about the challenge of working with low level embedded software and the role that timing visualisation tools may play when attempting to solve programming problems that seem particularly intractable.  This linked back well to the earlier discussions of notation.

Learners and language design

This session contained two papers.  The first, by Ahmed Alardawi of Sheffield Hallam, aimed to explore the effect that object-oriented programming language class structure has on the comprehension of computer programs.  What was great about this research was that it clearly made use of existing work, and partially replicated earlier studies, thus adding to the evidence.  Babak Khazaei, also from Sheffield Hallam, presented an empirical study on the influence of OCL, an abbreviation for Object Constraint Language (wikipedia).

Motivation and affect

This part of the workshop contained a single presentation, by Rein Sach, from the Department of Computing at the Open University. 

Rein's paper was entitled, 'what makes software engineers go that extra mile?'  Rein asked software engineers what it was about their work that they enjoyed.  This gave way to an interesting discussion about the perception of the nature of programming work.  Even though developers might have to battle with misbehaving operating systems and maintain servers from time to time, perhaps these activities need to be considered as work rather than nuisances that get in the way of the real task, which is doing the intrinsically rewarding and creative work of creating code.

Invited paper

Thursday kicked off with a presentation by Gerrit van der Veer, from the Open University in the Netherlands and the Free University, Amsterdam.  Gerrit's presentation was about different aspects of design, and how technology might be used to gather debate surrounding the artefacts that are created during the design process, through a system called CAM, meaning Co-operative Artefact Memory.  A barcode sticker could be attached to different artefacts, such as a sketch or a physical prototype.  Design groups could then use these stickers, with mobile phones, to access a shared Twitter stream that relates to each object, allowing views and ideas to be shared.

Two thoughts came to mind during Gerrit's presentation.  Firstly, I wondered about the extent of the similarities between design practice (in different disciplines) and what occurs within software development practices that use agile methods, such as eXtreme Programming.  Secondly, CAM reminded me of a tool called OpenDesignStudio, used for an Open University design course called U101, Design Thinking.

Another part of Gerrit's presentation was all about service design (i.e. design which yields a product that is not a tangible item).  Gerrit pointed us to a number of resources, such as the Service Design Tools site, and mentioned the importance of culture, by referring to the work of Hofstede (which is studied in the M364 Interaction Design Open University course). 

Cognitive considerations

The final session of the workshop contained two presentations that related to the cognitive dimensions of notations framework, papers by Anna Bobkowska, Maria Kutar and John Muirhead.  Anna introduced a language for the processing of multimedia streams through the use of a visual language.  Maria and John's presentation explored how a cognitive dimensions questionnaire might be used by non-experts.

Miguel Monteiro went on to speak about the cognitive foundations of modularity.  Miguel referred to a programming paradigm called Aspect-oriented programming (wikipedia), a subject that I have heard of many times, but one that I have not explored in a great deal of depth.  Learning more about AOP is certainly something to do at some point.

Qualitative and Quantitative

The final presentation of the workshop was by Gordon Fletcher from the University of Salford.  Gordon's presentation was entitled 'methods and practicalities of qualitative research', but it was so much more than this.  Gordon spoke about data collection in different communities, and mentioned the concept of biographical research (which made me wonder whether anyone has thought about applying this technique, perhaps with regards to exploring motivation or software related careers).

I came away with a number of messages, namely, it can be relatively easy to gather qualitative data, but figuring out what to do with it is a whole other issue.  Also, both quantitative and qualitative research can be both systematic and rigorous; these different approaches to research have a lot in common.  An interesting quote was, 'the method has to fit the researcher even more than it has to fit the research'.

Gordon's presentation gave way to a memorable debate on the use of terms.  Undoubtedly the use of language will remain a perpetual challenge when carrying out multidisciplinary research.

Themes

A number of diverse themes were evident within the PPIG '11 workshop, representing its broad membership.  There was a strong theme of computing education and pedagogy.  Programming and educational motivation was also apparent, mixed in with the use and design of visual programming languages.  This connected to the important theme of cognitive dimensions, notational systems and notational design. 

Two interesting inclusions were links to the broader subjects of design and accessibility.  Human-computer interaction and interaction design remain important themes too.  A final theme (perhaps one that isn't as strong as in previous years) is the application of ethnographic methods to further understand the activity of programming.  It was great to hear from a broad spread of presenters who are exploring many different research areas.

Christopher Douce

Psychology of Programming

Visible to anyone in the world
Edited by Christopher Douce, Friday, 10 Aug 2018, 14:39

Ever since July 2001 I have edited (off and on) the Psychology of Programming Interest Group newsletter.  The group, known as PPIG has been in existence since 1987.  Its purpose is to provide an interdisciplinary academic forum for researchers who are interested in the intersection between computer programming and psychology.

PPIG can be described as a pretty broad church.  On one hand there are those who aim to explore program comprehension and the relationship between notation systems and programming languages, whereas other researchers have been performing ethnographic studies and considering the different types of research methods that could be used.

Some of the questions that the PPIG community have been exploring resonated strongly with my doctoral research which was all about understanding how computer programmers go about maintaining computer software. 

I will probably always remember the moment when I started to be interested in the link between computer programming and psychology, particularly cognitive psychology.  I studied computer science as an undergraduate.  Our lecturers asked us to do a time limited summative programming assignment.  What I mean by this is that my cohort and I were all corralled into a tired computer lab, given a sheet of program requirements and a Pascal compiler, and told to get on with it (and, no, we couldn't talk to each other).

When we had finished our programs, we had to print them out using a dot matrix printer (which was, of course, situated within its own sound proof room), and give the fruits of our labour to our instructor who exuded a unique mixture of boredom and mild bewilderment at the stress that he had caused.

What struck me was that some students had finished and had left the laboratory to go to the union bar within twenty minutes, whereas others were pulling out their hair four hours later and still didn't have a working program.  This made me ask the questions, 'why was there such a difference between the different programmers?', and 'what exactly do we do when we write computer software?'

I seem to remember that this was in our first year of our degree.  Our computing lecturers had another challenge in store for those of us who made it to our second year: a software maintenance project.

The software maintenance project comprised one third role play and two thirds utter confusion.  Our team of four students was presented with a printout of around forty thousand lines of wickedly obscure FORTRAN code and given another challenging requirements brief.  We were then introduced to a fabulous little utility called GREP, and again told to get on with it.

This project made me ask further questions: 'how on earth do we understand a lot of unfamiliar code quickly?', and 'what is the best way to make effective decisions?'  These and other questions stuck with me, and eventually I discovered PPIG.

So, during my week on study leave I compiled the latest edition of the PPIG Newsletter.  The next annual workshop is to take place at the University of York, and I hope to attend.  If I have the time, I'll try to write a short blog post about it and the themes that emerge.

Work-in-Progress Paper

When I was done with the newsletter I turned my attention to a research idea that I have been trying to work on for well over a year with an esteemed collaborator from Royal Holloway, University of London.

As well as studying a number of different programming languages during my undergraduate years I was also introduced to the all-encompassing subject of software engineering. In engineering there is a simple idea that if you can measure something, that something can be controlled. One of the difficulties of managing software projects is that software is intrinsically intangible: it isn't something you can physically touch or see. It's impossible to ascertain, at a glance, how your developers are getting along or whether they are experiencing difficulties. To get round the problem researchers have proposed software complexity metrics.

Having a complexity metric can be useful (to some extent).  If you apply a complexity metric to a program, the bigger the number, the more trouble a developer might be faced with (and the more money spent).  Researchers have proposed a number of different metrics which measure different aspects of a program.  One (Halstead's) counts the linguistic parts of a program, such as its operators and operands; another (McCabe's cyclomatic complexity) counts the number of unique paths of execution that a program might have.
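To make the path-counting idea concrete, here is a rough sketch in Python that approximates cyclomatic complexity by counting decision points in a program's syntax tree.  It is a simplification (real tools are far more careful about which constructs count), but it shows the flavour of the measurement.

    import ast
    import textwrap

    def cyclomatic_complexity(source):
        """Very rough cyclomatic complexity: one base path plus one per decision point."""
        decision_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                          ast.ExceptHandler, ast.BoolOp)
        return 1 + sum(isinstance(node, decision_nodes)
                       for node in ast.walk(ast.parse(source)))

    sample = textwrap.dedent('''
        def classify(n):
            if n < 0:
                return 'negative'
            elif n == 0:
                return 'zero'
            return 'positive'
    ''')
    print(cyclomatic_complexity(sample))  # 3: one base path plus two branches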

Another type of metric, called a spatial complexity metric, has sprung from an understanding that programmers use different types of short term memory during program comprehension and development.  The idea behind this metric, which was first published at a PPIG workshop back in 1999, was to propose a measure inspired by the psychology of the programmer.

The work in progress paper describes a number of experiments that aim to explore whether there might be a correlation between different software complexity metrics and empirical measurements of cognitive load taken from an eye tracking program comprehension study.  The research question is: are program complexity metrics psychologically valid?
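As a sketch of the kind of analysis being proposed (the numbers below are invented purely for illustration; they are not data from the study), one could compute a rank correlation between metric scores and a measure of cognitive load, such as mean fixation duration from an eye tracker:

    from scipy.stats import spearmanr

    # Invented illustrative numbers, one pair per program fragment shown to
    # participants; these are not results from any experiment.
    complexity_scores = [3, 7, 4, 12, 9, 5]            # e.g. cyclomatic complexity
    mean_fixation_ms = [210, 340, 250, 480, 390, 270]  # an eye-tracking proxy for load

    rho, p_value = spearmanr(complexity_scores, mean_fixation_ms)
    print('Spearman rho = %.2f, p = %.3f' % (rho, p_value))

A strong positive correlation across many fragments and participants would be one piece of evidence that a metric reflects something psychologically real.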

Of course, writing up a research idea is a lot easier than carrying it out!  This said, I do hope to share some of the research materials that may be used within the studies through this blog when they are available.

Christopher Douce

Aegis Project : Open accessibility everywhere

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:20

Aegis project logo: Open accessibility everywhere - groundwork, infrastructure, standards

I recently attended a public dissemination event that was held by the AEGIS project, hosted by the European headquarters of the company that developed the Blackberry, Research in Motion.

The Aegis project has a strapline containing three interesting keywords: groundwork, infrastructure and standards.  When I heard about the project from a colleague, I was keen to learn what lay hidden underneath these words and how they connect to the subject of accessibility.

The Aegis project website states that it 'seeks to determine whether 3rd generation access techniques will provide a more accessible, more exploitable and deeply embeddable approach in mainstream ICT (desktop, rich Internet and mobile applications)' and goes on to say that the project will explore these issues through the development of an Open Accessibility Framework (or OAF).  This framework, it is stated, 'provides embedded and built-in accessibility solutions, as well as toolkits for developers, for "engraving" accessibility in existing and emerging mass-market ICT-based products'.  It goes on to state that the users of assistive technologies will be placed at the centre of the project.

The notion of the 'generations' of access techniques is an interesting concept that immediately jumped out at me when reading this description (i.e. what is the third generation and what happened to the other two generations?), but more of this a little later on.

Introductory presentations

The dissemination day began with a couple of contextualising presentations that outlined the importance of accessibility.  A broad outline of the project was given by the project co-ordinator, who emphasised the point that the development of accessibility requires the co-operation of a large number of different stakeholders, ranging from expert users of assistive technology (AT) to tutors and developers.

There was a general call for those who are interested in the project to 'become involved' in some of the activities, particularly with regards to understanding different use cases and requirements.  I'm sure the project co-ordinator will not be offended if I provide a link to the project contacts page.

AT Generations

The next presentation was made by Peter Korn of Sun Microsystems who began by emphasising the point that every hour (or was it every second?) hundreds of new web pages are created (I forget the exact figure he presented, but the number is a big one).  He then went on to outline the three generations of assistive technologies.

The first generation of AT could be represented by the development of equipment such as the Optacon (wikipedia), an abbreviation for Optical to Tactile Converter.  This was the first time I had heard of this device, and it represented the first 'take away' lesson of the day.  The Wikipedia page looks to be a great summary of its development and its history.

One thing this first generation lacked was an explicit link to a personal computer.  The development of the PC gave way to a new, second generation of AT that served a wider group of potential users.  This generation saw the emergence of specialist AT software vendors, such as companies who develop products such as screen readers and screen magnifiers.  Since computer operating systems continue to develop and hardware is continually changing (in terms of increases in processing power), this places unique pressures on the users of assistive technology.

For some AT systems to operate successfully, developers have had to apply a number of clever tricks.  Imagine a brand new application package, such as a word processing program, that had been developed for the first generation of PCs, for example.

The developers of such an application would not have been able to write code in a way that exposes elements of the display to users of assistive technology.  One solution for AT vendors was to rely on tricks such as reading 'video memory' to convert the on-screen display into a form, such as synthetic speech, that could be presented to users with visual impairments.

The big problem of this second generation of AT is that when there is a change to the underlying operating system of a computer it is possible that the 'back door' routes that assistive technologies may use to gain access to information may become closed, making AT systems (and the underlying software) rather brittle.  This, of course, leads to a potential increase in development cost and no end of end user frustration.

The second generation of AT is said to have existed between the late 1980s and the early 2000s.  The third generation of AT aims to overcome these challenges, since operating systems and other applications have begun to provide a series of standardised Accessibility Application Programming Interfaces (AAPIs).

This means that different suppliers of assistive technology can write software that uses a consistent interface to find out what information could be presented to an end user.  An assistive technology, such as a screen reader, can ask a word processor (or any other application) questions about what could be presented.  An AAPI could be considered as a way for one system to ask questions about another.
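The question-asking idea can be sketched in a few lines of code.  The fragment below is not any real accessibility API (UI Automation, AT-SPI and the Java Accessibility API are all much richer than this), just an illustration in Python of the principle: every widget exposes the same small interface, and an assistive technology interrogates it rather than scraping the screen.

    from dataclasses import dataclass

    @dataclass
    class AccessibleInfo:
        role: str        # e.g. 'button' or 'text field'
        name: str        # the label announced to the user
        value: str = ''  # current contents, if any

    class TextField:
        def __init__(self, label, contents=''):
            self.label = label
            self.contents = contents

        def get_accessible(self):
            # The uniform 'question' that any assistive technology may ask.
            return AccessibleInfo(role='text field', name=self.label,
                                  value=self.contents)

    def screen_reader(widget):
        # No screen scraping: the widget describes itself.
        info = widget.get_accessible()
        print('%s, %s: %s' % (info.name, info.role, info.value or 'empty'))

    screen_reader(TextField('Document title', 'Draft report'))

Because the interrogation goes through a published interface rather than a back door, a change to the underlying operating system no longer breaks the assistive technology.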

Other presentations

Whilst an API can, in some respects, represent one type of standard, there are a whole series of other standards, particularly those from the International Organization for Standardization (ISO) (and other standards bodies), that can provide useful guidance and assistance.  A further presentation outlined the complex connections between standards bodies and underlined their connection to the development of systems and products for people with disabilities.

A number of presentations focussed on technology.  One demonstration used a recent release of the OpenSolaris operating system (which makes use of the GNOME desktop system) to demonstrate how the Orca screen reader can be used in conjunction with application software such as OpenOffice.

With all software systems, there is often loads of magic stuff happening behind the scenes.  To illustrate some of this magic (like the AAPI being used to answer questions), a Gnome application called Accerciser was used.  This could be viewed as a software developer utility.  It is intended to help developers to 'check if an application is providing correct information to assistive technologies'.

OpenOffice can be extended (as far as I understand) using the Java programming language.  Java can be considered as a whole software development framework and environment.  It is, in essence, a virtual machine (or computer) running on a physical machine (the one that your operating system runs on).

One of the challenges that the developers of Java had to face was how to make its user interface components accessible to assistive technology.  This is achieved using something called the Java Access Bridge.  This software component, in essence, 'makes it possible for a Windows based Assistive Technology to get at and interact with the Java Accessibility API'.

On the subject of Java, one technology that I had not heard of before is JavaFX.  I understand this to be a Java based language that has echoes of Adobe Flash and Microsoft Silverlight about it, but I haven't had much time to study it.  The 'take home' message is that rich internet applications (RIA) need to be accessible too, and having a consistent way to interface with them is in keeping with the third generation approach to assistive technologies.

Another presentation made use of a Blackberry to demonstrate real time texting and show the operation of an embedded screen reader.  A point was made that the Blackberry makes extensive use of Java, which was not something that I was aware of.  There was also a comment about the importance of long battery life, an issue that I have touched upon in an earlier post.  I agree, there is nothing worse than having to search for power sockets, especially when you rely on technology.  This is even more important if your technology is an assistive technology.

Towards the fourth generation

Gregg Vanderheiden gave a very interesting talk where he mentioned different strategies that could be applied to make systems accessible, such as making adaptations to an existing interface, providing a parallel interface (for example, you can carry out the same activities using a keyboard or a mouse), or providing an interface that allows people to 'plug in' or make use of their own assistive technology.  One example of this might be to use a software interface through an API, or to use a hardware interface, such as a keyboard, through the use of a standard interface such as USB.

Gregg's talk made me think about an earlier question that I had asked during the day, namely, 'what might constitute the fourth generation of assistive technologies?'  In many respects this is an impossible question to answer, since we can only identify generations once they have passed.  The present, and especially the future, will always remain perpetually (and often uncomfortably) fuzzy.

One thought that I had firmly connects to the subjects of information pervasiveness and network ubiquity.  Common household equipment such as central heating systems and washing machines often continues to remain resolutely unfathomable to many of us.  I have heard researchers talking about the notion of 'networked homes', where it is possible to control your heating system (or any other device) through your computer.

I remember hearing a comment from a delegate who attended the Open University ALPE project workshop who said, 'the best assistive technologies are those that benefit everyone, regardless of disability, such as optical character recognition'.  But what of a home of networked household goods which can potentially offer their own set of wireless accessible interfaces?  What benefit can such products provide for users who do not have the immediate need for an accessible interface?

The answer could lie with an increasing awareness of the subject of energy consumption and management.  Washing machines, cookers and heating systems all consume energy.  Exposing information about the energy consumption of different products could allow households to manage energy expenditure more effectively.  In my view, the need for 'green' systems and devices may facilitate the development and introduction of products that could potentially contain lightweight device-level accessibility APIs.

Further development directions

One of the most interesting themes of the day was the idea of the accessibility API that has made the third generation of assistive technologies what they are today.  A minor comment that featured during the day was the question of whether we might be able to make our software development tools and environments accessible.  Since accessibility and usability are intrinsically connected, a related question arises: 'are the current generation of accessibility APIs as usable as they can be?'

Put another way, if the accessibility APIs themselves are not as usable as they could be, this might reduce the number of software developers who may make use of them, potentially reducing the accessibility of end user applications (and frustrating the users who wish to make use of assistive technologies).

Taking this point, we might ask, 'how could we test (or study) the usability of an accessibility API?'  Thankfully, some work has already been carried out in this area and it seems to be a field of research that is becoming increasingly active.  A quick search yields a blog post which contains a whole host of useful resources (I recommend the Google TechTalk that is mentioned).  There is, of course, a presentation on this subject that I gave at an Open University conference about two years ago, entitled Connecting Accessibility APIs.

It strikes me that a useful piece of research to carry out is to explore how to conduct studies to evaluate the usability of the various accessibility APIs and whether they might be able to be improved in some way.  We should do whatever we can to try to smooth the development path for developers.  Useful tools, in the form of APIs, have the potential to facilitate the development of useful and accessible products.

And finally...

Towards the end of the day delegates were told about a site called RaisingTheFloor.net (RTF).  RTF is described as a consortium of organizations, projects and individuals from around the world 'that is focused on ensuring that people experiencing disabilities, literacy problems, or the effects of aging are able to access and use all of the information, resources, services and communities available on or through the Web'.  The RTF site provides a wealth of resources relating to different types of assistive technologies, projects and stakeholders.

We were also told about an initiative that is a part of Aegis, called the Open Accessibility Everywhere Group (OAEG).  I anticipate that more information about OAEG will be available in due course.

I also heard about the BBC MyWebMyWay site.  One of the challenges for all computer users is learning and knowing about the range of different ways in which your system can be configured and used.  Sites like this are always a pleasure to discover.

Summary

It's great to go to project dissemination events.  Not only do you learn about what a project aims to achieve, but the presentations can often inspire new thoughts and point you toward new (and interesting) directions.  As well as learning about the Optacon (which I had never heard of before), I also enjoyed the description of the different generations of assistive technologies.  It was also interesting to witness the various demonstrations and be presented with a teasing display of the complexities that very often lie hidden amidst the operating system of your computer.

The presentations helped me to connect the notions of the accessibility API and pervasive computing.  It also reminded me of some research themes that I still consider to be important, namely, the usability of accessibility APIs.  In my opinion, all these themes represent interesting research directions which have the fundamental potential of enhancing the accessibility and usability of different types of technologies.

I wish the AEGIS project the best of luck and look forward to learning more about their research findings.

Acknowledgements

Thanks are extended to Wendy Porch who took the time to review an earlier draft of this post.

Christopher Douce

Source code accessibility through audio streams

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 28 June 2023, 10:28

A screenshot of some source code being edited by a software developer

One of my colleagues volunteers for the Open University audio recording project.  The audio recording project takes course material produced by course teams and makes audio (spoken) equivalents for people with visual impairments.  Another project that is currently underway is the digital audio project which aims to potentially take advantage of advances in technology, mobile devices and international standards.

Some weeks ago, my colleague tweeted along the lines of 'it must be difficult for people with visual disabilities to learn how computer programs are written and structured' (I am heavily paraphrasing, of course!)  As soon as I read this tweet I began to think about two questions.  Firstly, how do I go about learning how a fragment of source code works?  Secondly, what might be the best way to convert a function or a 'slice' of programming code into an audio representation that helps people to understand what it does and how it is structured?

Learning from source code

How do I learn how a fragment of source code works?  More often than not I view code through an integrated development environment, using it to navigate through the function (or functions) that I have to learn.  If I am faced with some code that is really puzzling I might reach for some search tools to uncover the connections between different parts of the system.

If the part of the code that I am looking at is quite small and extremely puzzling, I might go as far as to grab a pen and paper and begin to sketch out some notes, taking down some of the expressions that appear to be troubling and maybe splitting these apart into their constituent components.  I might even try to run the various code fragments by hand.  If I get really confused I might use the 'immediate' window of my development environment to ask my computer to give me some hints about the code I am currently examining.

When trying to understand some new source code my general approach is to try to have a 'conversation' with it, asking it questions and looking at it from a number of different perspectives.  In the psychology of programming literature some researchers have written about developers using 'top down' and 'bottom up' strategies.  You might have a high level hypothesis about what something does on one hand, but on the other, sections of code might help you to understand the 'bigger picture' or the intentions behind a software system.

In essence, I think understanding software is a really hard task.  It is harder and more challenging than many people seem to imagine.  Not only do you have to understand the language that is used to describe a world, but you also have to understand the language of the world that is described.  The world of the machine and the world of the problem are intrinsically and intimately connected through what can sometimes seem an abstract collection of words and symbols.  Your task, as a developer, is to make sense of two hidden worlds.

I digress slightly... If learning about computer programming code is a hard task, then it is likely to be harder still for people with visual impairments.  I cannot imagine how difficult it must be to be presented with a small computer program or a function that has been read out to you.  Much of the 'secondary notation', such as tabbing and white space, can easily be lost if there are no mechanisms to present it through another modality.  There is also the danger that your working memory may quickly become overwhelmed with the names of identifiers and unfamiliar sounding functions.

Assistive technology for everyone

The tasks of learning the fundamentals of programming and learning about a program are different, yet related.  I have heard it said that people with disabilities are given real help when technologies are created that are useful to a wide audience.  A great example of this is optical character recognition.  Whilst OCR technology can save a great deal of typing, it has also created tools that enable people with low vision to scan and read their post.

Bearing the notion of 'a widely applicable technology' in mind, could it be possible to create a system that creates an interactive audio description that could potentially help with the teaching of some of the concepts of computer programming for all learners?

Whenever I read code I immediately begin to translate the notation of the code into my own 'internal' notation (using different types of memory, both internal and external, such as scraps of paper!) to iteratively internalise and make sense of what I am being presented with.  Perhaps equivalents of programming code could be created in a form that can be navigated.  Code is not something that you read in a linear fashion; code is something you work with.

If an interesting and useful (and interactive) audio equivalent of programming code could be created, then these alternative forms might prove useful to all students, not only to learners who necessarily require auditory equivalents.

Development directions

There are a number of tools that could help us to create what might amount to 'interactive audio descriptions of programming code'.  The first is the idea of plan or schema theory (wikipedia) – the notion that your understanding of something is drawn from previous experience.  Some theorists from the psychology of programming have extended and drawn upon these ideas, positing notions such as 'beacons': key lines of code that signal what a program does.

Another is Green's Cognitive Dimensions framework (wikipedia).  Another area to consider looking at is the interesting sub-field of Computer Science Education research.  There must be other tools, frameworks and ideas that can be drawn upon.

Have you got a sec?

Another approach that I sometimes take when trying to understand something is to ask other, more experienced people for help.  I might ask the question, 'what does this section represent?' or, 'what does this section do?'  The answers from colleagues can be instrumental in helping me to understand the purpose behind fragments of programming code.

Considering browsing

I can almost imagine an audio code browser that has functionality to change between different levels of abstraction.  At one level, you may be able to navigate through sets of different functions and hear descriptions of what they are intended to do and what they expect to receive by way of parameters (which could be provided through comments).  At another level there may be summaries of groups of instructions, like loops, with descriptions that might sound like, 'a foreach loop that contains four other statements and a call to two functions'.  Finally, you may be able to tab into a group of statements to learn about what variables are manipulated, and how.
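As a speculative sketch of the top level of such a browser (assuming Python source code and using Python's own ast module; the spoken output and the navigation are left to the imagination), summaries of this kind are not difficult to generate:

    import ast
    import textwrap

    def summarise(source):
        """Produce spoken-style, one-sentence summaries of each function."""
        sentences = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                params = ', '.join(a.arg for a in node.args.args)
                loops = sum(isinstance(n, (ast.For, ast.While))
                            for n in ast.walk(node))
                calls = sum(isinstance(n, ast.Call) for n in ast.walk(node))
                sentences.append('function %s taking %s, containing %d loop(s) '
                                 'and %d call(s)'
                                 % (node.name, params or 'no parameters',
                                    loops, calls))
        return sentences

    sample = textwrap.dedent('''
        def total(prices):
            result = 0
            for p in prices:
                result = result + p
            return result
    ''')
    for sentence in summarise(sample):
        print(sentence)  # 'function total taking prices, containing 1 loop(s) and 0 call(s)'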

Of course this is all very technical stuff, and it could be stuff that has already been explored before.  If you know of similar (or related) work, please feel free to drop me a line!

Acknowledgement: random image of code by elliotcable, licensed under creative commons, discovered using Flickr.

Christopher Douce

Exploring Moodle forums

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:08

A set of spanners loosely referring to moodle tools and debugging utilities

Following on from the previous post, this post describes my adventures into the Moodle forums source code.

Forums, I understand, can be activities (a Moodle term) that can be presented within individual weeks or topics. I also know that forums can be presented through blocks (which can be presented on the left or right hand side of course areas).

To begin, and remembering the success that I had when trying to understand how blocks work, I start by looking at what the database can tell me and quickly discover quite a substantial number of tables.  These are named: forum (obviously), forum_discussions, forum_posts, forum_queue, forum_ratings (ratings is not something that I have used within the version of Moodle that I am familiar with), forum_read, forum_descriptions, forum_subscriptions and forum_track_prefs.

First steps

Knowing what some of the data tables are called, I put aside my desire to excitedly eyeball source code and sensibly try to find some documentation.

I begin by having a look at the database schema introduction page (Moodledocs), but find nothing that immediately helps.  I then discover an end user doc page that describes the forum module (and the different types of forum that are on offer in Moodle).  I then uncover a whole forum documentation category (Moodledocs) and I'm immediately assaulted by my own lack of understanding of the capabilities system (which I'll hopefully blog about at some point in the future – one page that I'll take note of here is the forum permissions page).

From the forums category page I click on the various 'forum view pages', which hints that there are some strong connections with user settings.

Up to this point, what have I learnt?

I have learnt that Moodle permits only certain users to carry out certain actions within Moodle forums.  I have also learnt that Moodle forums have different types.  These, I am led to believe (according to this documentation page), are: standard, single discussion, each person posts one discussion, and question and answer.  I'm impressed: I wasn't expecting so much functionality!

So, can we discover any parallels with the database structures?

The forum table contains fields which are named: course, type, name and description, followed by a whole other bunch of fields I don't really understand.  The course field associates a forum with a course (I'm assuming that somewhere in the database there will be some data that connects the forum to a particular part or section of a course), and the type field (which is, interestingly, an enumerated type) can hold data values that roughly represent the forum types that were mentioned earlier.

A brief look at the code

I remember that the documentation that I uncovered told me that 'forums' is a module.  In the 'mod' directory I notice a file called view.php.  Other interesting files are named: post.php, lib.php, search.php and discuss.php.  View.php seems to be one big script which contains a big case statement in the middle.  Post.php looks similar, but has a beguiling sister called post_form which happens to be a class.  Lib, I discover, is a file of mystery that contains functions and fragments of SQL and HTML.  Half of the search file seems to retrieve input parameters, and discuss is commented as, 'displays a post, and all the posts below it'.

Creating test data

To learn more about the data structures I decide to create some test data by creating a forum and making a couple of posts.  I open up an imaginatively titled course called 'test' and add an equally imaginatively titled forum called 'test forum'.  When creating the forum I'm asked to specify a forum type (the options are: single simple discussion, Q and A forum, standard forum for general use).  I choose the standard forum and choose the default values for aggregate type and time period for blocking.  The aggregate type appears to be related to functionality that allows students to grade or rate posts.

When the forum is live, I then make a forum post to my test forum that has the title 'test post'.

Reviewing the database

The action of creating a new forum appears to have created a record in the forum table which is associated with a particular course, using the course id.  The act of adding a post to the test forum has added data to forum_discussions, where the name field corresponds to the title of my thread: 'test post'.  A link is made with the forum table through a foreign key, and a primary key keeps track of all the discussions held by Moodle.

The forum_posts table also contains data.  This table stores the text that is associated with a particular post.  There is a link to the discussion table through a discussion id number.  Other tables that I looked at included forum_queue (not quite sure what this is all about yet), forum_ratings (which probably stores stuff depending on your forum settings), and forum_read, which simply stores an association between user id, forum id, discussion id and post id.

One interesting thing about forums is that they can have a recursive structure (you can send a reply to a reply to a reply and so on).  To gain more insight into how this works, I send a reply to myself which has the imaginative content, 'this is a test post 2'.

Unexpectedly, no changes are made to the forum_discussions table, but a new entry is added to the forum_posts table.  To indicate hierarchy a 'parent' field is populated (where the parent relates to an earlier entry within the forum_posts table).  I'm assuming that the sequence of posts is represented by the 'created' field which stores a numerical representation of the time.
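The parent field is enough to rebuild the whole tree of replies.  Here is a sketch (in Python rather than PHP, using invented rows that mirror my test posts; the real table holds many more columns) of how the hierarchy might be walked:

    # Invented rows that mirror the test data above: (id, parent, subject).
    # A parent of 0 marks the first post of a discussion.
    posts = [
        (1, 0, 'test post'),
        (2, 1, 'this is a test post 2'),
        (3, 2, 'a further reply'),
    ]

    def print_thread(parent=0, depth=0):
        # Depth-first walk: each post is printed beneath the post it replies to.
        for post_id, parent_id, subject in posts:
            if parent_id == parent:
                print('  ' * depth + subject)
                print_thread(post_id, depth + 1)

    print_thread()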

Tracing the execution flow

These experiments have left me with three questions to explore:

  1. What happens within the world of Moodle code when the user creates a new forum?
  2. What happens when a user adds a new discussion to a forum?
  3. What happens when a user posts a reply?

Creating a new forum

Creating a new forum means adding an activity.  To learn about what code is called when a forum is added, I click on 'add forum' and capture the URL.  I then give my debugger the same parameters that are called (id, section, sesskey and add) and then begin to step through the course/mod.php script.  The id number seems to relate to the id of the course, and the add parameter seems to specify the type of the activity or resource that is to be added.

I quickly discover a redirect to a script called modedit.php, where the parameters add=forum, type= (empty), course=4, section=1, return=0.  To further understand what is going on, I stop my debugger and start modedit.php with these parameters.

There is a call to the database to check the validity of the course parameter, the fetching of a course instance, something about the capability, and the fetching of an object that corresponds to a course section (a call to get_course_section in the course/lib code).  Data items are added to a $form variable (which my debugger tells me is a global).  There is then the instantiation of a class called mod_forum_mod_form (which is defined within mod/forum/mod_form.php).  The definition function within mod_forum_mod_form defines how the forum add or modification form will be set out.  There is then a connection between the data held within $form and the form class that stores information about what will be presented to the user.

After the forum editing interface is displayed, clicking the 'save and return to course' button (for example) causes a postback to the same script, modedit.php.  Further probing around reveals a call to forum_add_instance within forum/lib.php (different activities will have different versions of this function) and forum_update_instance.  At the end of the button clicking operation there is a redirect to a script that shows any changes that have been made.

The code to add a forum to a course will be similar (in operation) to the code used to add other activities. What is interesting is that I have uncovered the classes and script files that relate to the user interface forms presented to the user.

Adding a new discussion

A new discussion can be added by clicking on the 'Add a new discussion topic' button once you are within a forum. The action of clicking on this button is connected to the forum/post.php script. The main parameter associated with this action is the forum id (forum=7, for example).

It's important to note the use of the class mod_forum_post_form contained within post_form.php, which represents the structure of the form into which the user enters discussion information.

The code checks the forum id and then finds out which course it relates to.  It then creates the form class (followed by some further magic code that I quickly stepped through).

The action of clicking on the 'post to forum' button appears to send a postback (along with all of the contents of the form) to post.php (the same script used to create the form). When this occurs, a message is displayed and then a redirect occurs to the forum view summary. But where in the code is the database updated? One way to find out is to search for the redirect. Whilst browsing through the code I stumble across a comment that says 'adding a new discussion'. The database appears to be updated through a call to forum_add_discussion.
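
Piecing the clues together, I imagine the call looks something like this sketch (the field names are my guesses based on the tables examined earlier, not a copy of post.php):

    // Build a minimal discussion object and hand it to the forum library,
    // which inserts rows into forum_discussions and forum_posts.
    $discussion = new stdClass();
    $discussion->course  = $course->id;
    $discussion->forum   = $forum->id;
    $discussion->name    = 'test post';           // becomes forum_discussions.name
    $discussion->message = 'this is a test post'; // becomes the first forum_posts row
    $discussionid = forum_add_discussion($discussion);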

Posting a reply to a discussion

The post.php script is also used to save replies to discussions (as well as adding new discussions) to the database. When a user clicks on a discussion (from a list of discussions created by discuss.php), the links to send replies are represented by calls to post.php with a reply parameter (along with a post number, i.e. post.php?reply=4). The action of clicking on this link presents the previous message, along with the form where the user can enter a response.

Screen grab of user sending a reply to a forum discussion

To learn more about how this code works, I browse through the forum's lib file and uncover a function called forum_add_new_post. I then search for this in post.php and discover a portion of code that handles the postback from the HTML form. I don't explore any further, having learnt (quite roughly) where the various pieces of code magic seem to lie.

Summary

The post.php script does loads of stuff. It weighs in at around seven hundred lines and contains some huge conditional statements.

Not only does post.php appear to manage the adding of new discussions to a forum, it also appears to manage the adding, editing and deletion of forum messages. To learn how this script is structured I haven't been able to look at function definitions (because it doesn't contain any); instead I have had to read comments. Comments, it has been said, can lie, whereas code always tells the truth. More functions would have helped me to learn the structure of the post.php script more quickly.

The creation of the user interfaces is partially delegated to the mod and post form classes. Database updates are performed through the forum/lib.php file. I like some of the function abstractions that are beginning to emerge, but any programming file that contains both HTML and SQL indicates there is more work to be done. The reason for this aesthetic (and personal) opinion is simple: keeping these two types of code separate has the potential to help developers become quickly familiar with where certain types of software actions are performed. This, in turn, has the potential to save developer time.

One of the central areas of functionality that forum developers need to understand is how Moodle creates and uses forms. This remains an area of mystery to me, and one that I hope to continue to learn about. Another area that I might explore is how PHP has been used to implement different forum systems, so I can begin to get a sense of how PHP is written by different groups of developers.

Acknowledgements: Photograph licenced under creative commons by ciaron, liberated from Flickr.

Christopher Douce

How Moodle block editing works: database (part 2)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 18:05

Pattern of old computer tapes intended to represent databases

This is a second blog entry about how Moodle manages its blocks (which can be found either at a site level or at a course level).  In my previous post I wrote about the path of execution I discovered within the main Moodle index.php file.  I discovered that the version of Moodle that I was using presented blocks using tables, and that blocks made use of some interesting object-oriented features of PHP to create the HTML code that is eventually presented to the end user.

This post has two objectives: first, to present something about the database structures that are used to store information about which blocks are stored where; and second, to explore what happens when an administrator clicks on the various block editing functions. The intention is to understand Moodle in greater detail and to uncover a little more of how it has been designed.

Blocks revisited

Screen grab of the latest news block with moving and deletion editing icons

Blocks, as mentioned earlier, are pieces of functionality that can sit on the left hand or right hand borders of courses (or the main Moodle site page).  Blocks can present a whole range of functions ranging from news items through to RSS feeds.

Blocks can be moved around within a course page with relative ease by using the Moodle edit button.  Once you click on ‘edit’ (providing it is there and you have the appropriate level of permissions), you can begin to add, remove and move blocks around using a couple of icons that are presented.  Clicking on the left icon moves the block to the left hand margin, clicking the down arrow icon changes its vertical position and so on.

One of my objectives with this post is to understand what happens when these various buttons are clicked on.  What I am hoping to see are clearly defined functions which will be called something like moveBlockUp, moveBlockDown or deleteBlock.

Perhaps with future versions it might be possible to have a direct manipulation interface (wikipedia) where, rather than having buttons to press, users will be able to drag blocks around to rapidly customise course displays. Proposing ideas and problems to be solved is a whole lot easier than going ahead and solving them. Also, to happily prove there's no such thing as an original thought, I have recently uncovered a Moodle documentation page. It seems that this idea has been floating around since 2006.

Before I delve into trying to uncover how each of the Moodle block editing buttons work, it is worthwhile spending some time to look at how Moodle remembers what block is placed where.  This requires looking at the database.

Remembering block location

I open up my database manipulation tool (SqlYog) and begin to browse through the database tables that are used with Moodle.  I quickly spot a bunch of tables that contain the name block.  One that seems to be particularly relevant is a table called block_instance.

The action of creating a course (and adding blocks to it) seems to create a whole bunch of records in block_instance. Block_instance appears to be the table that Moodle uses to remember which block should be displayed, and where.

The graphic below is an excerpt from the block_instance data table:

Fragment of the block_instance datatable showing a number of different fields

The field weight seems to relate to the vertical order of blocks on the screen (I initially wondered whether it related in some way to graphical shading, thinking of the way that HTML uses the term weight). Removing a block from a course seems to change the data within this table.

The blockid field seems to link each entry within block_instance to data items held within the blocks table:

Fragment of the blocks table, showing the field headings and the data items

The names held within the name field (such as course_summary) are connected to the programming code that relates to a particular block. The cron (and lastcron) fields relate to regular processes that Moodle must execute. With the default installation of Moodle everything is visible, and at the time of writing I have no idea what multiple means.

Returning to block_instance, does the pageid field relate to the id used in the course? Looking at the course table seems to add weight to this hypothesis.

I continue my search for truth by rummaging around in the Moodle documentation, discovering a link to the database schema and uncovering some Block documentation that I haven't seen before (familiarity with material is a function of time!). This provides a description of the block development system as described by the original developer.

Knowing that these two tables are used to store block locations, my question from this point onwards is: how do these tables get updated?

Database updates

To answer this question I applied something that I call 'the law of random code searching': if you don't know what to look for and you don't know how things work, carry out a random code search and see what the codebase tells you. Using my development environment I search to find out where the block_instance data table is updated.

Calls to the database are spread out over a number of files: blocks, lib, accesslib, blocklib, moodlelib, and chat/lib (amongst others). This seems to indicate that there is quite a lot of coupling between the different sections of code (which is probably a bad thing when it comes to understanding the code and carrying out maintenance).

Software comprehension is sometimes an inductive process.  Occasionally you just need to read through a code file to see if it can yield any clues about its design, its structure and what it does.  I decided to try this approach for each of the files my search results window pointed to:

Accesslib
Appears to handle access control (or permission management) for parts of Moodle. The comments at the top of the file mention the notion of a 'context' (which is a badly overloaded word), but provide no clue as to the context in which context is used. The only real definition that I can uncover is in the database description documentation, which states that 'a context is a scope in Moodle, for example the whole system, a course, a particular activity'. In AccessLib, there are some hardcoded definitions for different contexts, i.e. CONTEXT_SYSTEM, CONTEXT_USER, CONTEXT_COURSECAT and so on.

The link to the block_instance table lies within a huge function called create_context, which updates a database table of the same name (context). I've uncovered a forum explanation that sheds a little more light on the matter but, to be honest, the purpose of these functions is going to take some time to uncover. There is a clue that the records held within the context table might be cached for performance reasons. Moving on…

Moodlelib

Block_instance is mentioned in a function named remove_course_contents which apparently 'clears out a course completely, deleting all content but don't delete the course itself'. When this function is called, modules and blocks are removed from the course. Moodlelib is described as the 'main library file of miscellaneous general-purpose Moodle functions' (??), but there is a reference to another library called weblib, which is described as containing 'functions that provide web output'.

Blocks
A comment at the top of the blocks.php file states that it 'allows the admin to configure blocks (hide/show, delete and configure)'. There is some code that retrieves instances of a block and then deletes the whole block (but in what 'context' this is done is not clear at the moment).

Blocklib
The file contains the lion's share of references to the block_instance table. It is said to include 'all the necessary stuff to use blocks in course pages' (whatever that means!). At the top there are some constants for actions corresponding to moving a block around a course page. Database calls can be found within blocks_delete_instance, blocks_have_content, blocks_print_group and so on. The blocks_move_block function seems to adjust the contents of the database to take account of movement. There also appears to be some OO type magic going on that I'm not quite sure about. Perhaps the term 'instance' is being used in too many different ways. I would agree with the coder: blocklib does all kinds of 'stuff'.

Lib files
References to block_instance can be found in the lib files for three different blocks: chat, lesson and quiz. The functions that contain the call to the database relate to the removal of an 'instance' of these blocks. As a result, records from the block_instance table are removed when the functions are called.

So, what have I learnt by reading all this stuff? I've seen how the database stores things, discovered the slippery notion of a course context (and mysterious paths), and I now know the names of some of the files that do the block editing work, but I'm not quite sure how they do it. There is quite a lot of complexity that has not yet been uncovered and understood.

Digressions

I have a cursory glance through the lib folder to see what else I can discover and find an interestingly named script file entitled womenslib.php.  Curious, I open it and see a redirect to a wikipedia page.  The Moodle developers obviously have a sense of humour but unfortunately mine had failed!  This minor diversion was unwelcome (humour failure exception), costing me both time and ‘head’ space!

Bizarrely, I also uncover a seemingly random list of words (wordlist.txt) that begins: 'ape, baby, camel, car, cat, class, dog, eat…' and so on. Wondering whether one of the developers had attended the famous Dali school of software engineering, I searched for a file reference to this mysterious 'wordlist'.

It appeared that our mysterious list of words was referenced in the lib\setup.php file, where a path to our wordlist was stored in what I assumed to be a Moodle configuration variable. How might this file be used? It appears it is used within a function called generate_password.

Thankfully the developers have been kind enough to say where they derived some of their inspiration from. The presence of the wordlist is explained by the need for a function that creates pronounceable automatically generated passwords (but perhaps only in English?)
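
The underlying idea seems simple enough. Here is a minimal sketch of the approach (not Moodle's actual code; the name of the configuration variable holding the wordlist path is my assumption):

    // Glue a couple of random dictionary words together to make a password
    // that a human stands a chance of pronouncing and remembering.
    $words    = file($CFG->wordlist);             // one word per line
    $password = trim($words[array_rand($words)])
              . trim($words[array_rand($words)])
              . rand(10, 99);                     // e.g. 'apebaby42'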

This was all one huge digression. I pulled myself together just enough to begin to uncover what happens when a user clicks on the block move up, move down, or delete buttons when a course is running in edit mode.

Button click action

Returning to the task in hand, I add two blocks (both in the right hand column, one situated on top of the other) to my local Moodle site with a view to understanding the function code that contributes to the moveBlockUp and deleteBlock functionality.

Screen grab of two blocks added to the right hand column of the site page

I take a look at the links that correspond to the move up and delete icons. I notice that the action of clicking sends a bunch of parameters to the main Moodle index.php. The parameters are sent via GET (which means they are sent as part of the hypertext link). They are: instanceid (which comes straight out of the block_instance table), sesskey (which reminds me, I really must try to understand how Moodle handles sessions (wikipedia) at some point), and a blockaction parameter (which is either moveup or delete in the case of this scenario).
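
Putting those together, the link behind the 'move up' icon looks something like this (the values shown are hypothetical):

    index.php?instanceid=12&sesskey=AbCd1234&blockaction=moveup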

The question here is: what happens within index.php?  Luckily, I have a debugger that will be able to tell me (or, at least, help me!)

I log in as an administrator through my debugger.  When I have established a session, I then add some breakpoints on my index.php code and launch the index.php code using the parameters for ‘move activity upwards’.

Index.php begins to execute, and a call to page_create_object is made. It looks like a new object is created. An initialisation function within the page_base class is called (contained within pagelib). A blocks_setup function is called and block positions from the block_instance table are retrieved. After some further tracking I end up at a function called blocks_execute_url_action. The instanceid is retrieved and a call is made to blocks_execute_action, where the block action (moveup or delete) is passed in as a parameter along with the block instance record that has just been retrieved from the database.

In blocks_execute_action a 'mother of all switch statements' makes a decision about what should be done next. After some checks, two update commands are issued to the database through the update_record function, updating weight values (to change the order of the respective blocks). With all the database changes complete, a page redirect occurs to index.php. Now that the database has the correct representation of where each block should be situated, index.php can go ahead and display them.
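
In other words, the reordering seems to boil down to something like the following sketch ($moving and $neighbour stand for two records fetched from block_instance; the surrounding checks are omitted):

    // Swap the vertical positions of two blocks by exchanging their weights.
    $moving->weight    = $moving->weight - 1;
    $neighbour->weight = $neighbour->weight + 1;
    update_record('block_instance', $moving);
    update_record('block_instance', $neighbour);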

Is the same mechanism used for course pages?

A very cursory understanding tells me that the course/view.php script has quite a lot to do with the presentation of courses, and at this point an understanding of it is proving elusive. Let's see what I can find.

Screen grab of block editing icons on a course page

Initially it does seem that the index.php script controls the display of the main Moodle site and the course/view.php script controls the course display. Moving the mouse over the 'move block up' icons reveals a hyperlink to the view.php script with GET parameters of: id (which corresponds to the course number held within the course data table), instanceid (which corresponds to a record within the block_instance table), and sesskey and blockaction parameters (as with index.php).

To get a rough understanding of how things work, I do something similar to before: open up a session through my debugger and launch view.php with this bunch of parameters. The view.php code is striking: it doesn't seem to be very long, nor does it produce any HTML, so it looks like there's something subtle going on.

In view.php, there are some parameter safety checks, followed by some context_instance magic and a check of the session key, and then calls to the familiar page_create_object (mentioned in the earlier section). Blocks_setup is then called, followed by blocks_get_by_page_pinned and blocks_get_by_page, which ask the database which blocks are associated with this particular page (which is a course page).

As earlier, there is a call to blocks_execute_url_action, which updates the database to carry out the action that the administrator clicked on. At the end of the database update there is a redirect. Instead of going to index.php, the redirect is to view.php along with a single parameter which corresponds to the course id.

This raises the question: what happens after the view.php redirect?

Redirect to view.php

View.php makes a call to the database to get the data that corresponds to the course id number it has been given. There is then a check to make sure that the user who is requesting the page is logged into Moodle, and eventually our old friends page_create_object and blocks_setup are called. This time, since no buttons have been clicked on, we don't redirect to another page.

Towards the end of view.php we can begin to see some magic that produces the HTML presented to the user. There is a call to print_header. There is then a script include (using the PHP keyword 'require') which creates the bulk of the page presented to the user, building the HTML for the individual blocks. When running within my debugger, the script course/format/weeks/format.php was included. The script that is chosen depends on the format of the course that has been selected. When complete, view.php adds the footer and the script ends.

Summary

So, what have I learnt from all this messing about?

It seems that (broadly speaking) the code used to move blocks around on the main Moodle site is also used to move blocks around on a course page. Perhaps this isn't too surprising, but it is reassuring. I still have no idea what 'pinned blocks' means or what the corresponding data table is for, but I'm sure I'll figure it out in time!

Another thing that I have learnt is that the course view and the main index.php pages are built in different ways. As a result, if I ever need to change the underlying design or format of a course, I now know where to look (not that I think this is something I'll ever need to do!)

I have seen a couple of references to AJAX (MoodleDocs) but I have to confess that I am not much wiser about what AJAX style functionality is currently implemented within the version of Moodle I have been playing with.  Perhaps this is one of those other issues that will become clearer with time (and experience).

One thing, however, does strike me: the database and the user interface components are very closely tied together (or closely coupled), which may make change difficult in some cases. One of the things on my perpetual 'todo' list is to have a long hard look at the Fluid Project, but other activities must currently take precedence.

This pretty much concludes my adventure into the world of Moodle blocks. There’s a whole load of Moodle related stuff that I hope to look at (and hopefully describe) at some point in the future: groups, roles, contexts, and forums.  Wish me luck!

Acknowledgements: Image from lifeontheedge, licenced under Creative Commons.

Christopher Douce

How Moodle block editing works: displaying a block (part 1)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 17:58

A photograph of a car engine

One of the great things about Moodle (other than the number of databases it can use!) is the way that courses can be easily created and edited.  One of its best features is the edit button that can be found at the top of many pages.  Administrators and course managers can push this button and quickly add and remove functionality to redesign a course or a site within seconds.

This blog post is the first in a series of two (but might even extend to three) that aims to answer the question: how does the Moodle block editing magic work? To answer this question I found it useful to split the big question into a number of smaller ones: how are blocks presented to the user? how are block layouts stored in the Moodle database? and what happens when the user clicks on the edit button and makes changes to the layout of a site or a course?

There are two reasons for wanting to answer these questions. The first is that knowing something about this key part of Moodle might teach me more about its architecture, which could help if I ever have to make changes as part of the EU4ALL project. The second is pure curiosity, particularly regarding the database tables and structures: I would like to know how they work.

There are two broad approaches that I could take to answer these questions: look at things from the top down, or from the bottom up. I could either look at how the user interfaces are created, or I could have a look at the database to see whether I can find the tables that hold the data used when the Moodle user interface is created. In the end I used a combination of top down and bottom up approaches to understand a bit of what is going on.

This post will present what I have learnt about how Moodle presents blocks.  The second post will be about what I have found out about the database and how it works (in relation to Moodle blocks) and what happens when you click on the various block editing icons.

There will be loads of detail which will remain unsaid and I'll be skipping over loads of code subtleties that I haven't yet fully understood! I'll also be opinionated, so advance apologies to any Moodle developers who I might inadvertently offend. I hope my opinions are received in the positive spirit in which they are intended.

Introducing blocks

Blocks are bits of functionality which sit on either side of a Moodle site or course.  They can do loads of stuff: provide information to students about their assignment dates, and provide access to discussion forums.  When first looking at Moodleworld, I had to pause a moment to distinguish between blocks, resources and activities.  Blocks, it might be argued, are pieces of functionality that can support your learning, whilst activities and resources may be a central part of your learning (but don’t quote me on that!)

Screen grab of an administrator editing a Moodle course

Not long after starting to look at the blocks code, I discovered a developer page on the subject. This was useful. I soon found out that there are apparently plans to improve the block system for the next version of Moodle. The developers have created an interestingly phrased bug to help guide the development of the next release. This said, all the investigations reported here relate to version 1.9+, so things may very well have moved on.

Looking at Index

Blocks can be used in at least two different ways: on the main Moodle site area (which is seen when you enter a URL which corresponds to a Moodle installation) and within individual courses.  I immediately suspect that there is some code that is common between both of them.  To make things easy for myself, I’ve decided (after a number of experiments) to look at how blocks are created for a Moodle site.

To start to figure out how things work, the first place that I look at is the index.php file. (I must confess that I actually started by trying to figure out what happens when you click on the editing button, but this proved to be too tough, so I backtracked…)

So, what does the index.php file do? I soon discover a variable called $PAGE and ask the innocuous question: 'why are some Moodle variables in UPPERCASE and others in lowercase?' I discover an answer in the Moodle coding guidelines: anything that is in uppercase appears to be a global variable. I try to find a page that describes the purpose of the different global variables, but I fail, instead uncovering a reference to session variables, leaving me wondering what the $PAGE class is all about.

Pressing on I see that there are some functions that seem to calculate the width of the left and the right hand areas where the blocks are displayed.  There is also some code that seems to generate some standard HTML for a header (showing the Moodle instance title and login info).

The index page then changes from PHP to HTML and I'm presented with a table. This surprises me a little. Tables shouldn't really be used for formatting and should instead only be used to present data. It seems that the table is used to format the different portions of the screen, dividing it into a left hand bunch of columns, a centre part where stuff is displayed, and a right hand column.

It appears that the code that co-ordinates the printing of the left and right blocks is very similar, with the only difference being different parameters to indicate whether things appear on the left, and things appear on the right.

The index file itself doesn’t seem to display very much, so obviously the code that creates the HTML for the different blocks is to be found in other parts of the Moodle programming.

Seeding program behaviour

To begin to explore how different blocks are created I decide to create some test data.  I add a single block to the front page of Moodle and position it at the top on the right hand side:

Screen shot showing empty news block

Knowing that I have one block that will be displayed, I can then trace through the code when the 'create right hand side' code is executed, using my NuSphere debugger to see what is called and when.

One thing that I’m rather surprised about is how much I use the different views that my debugger offers.  It really helps me to begin learn about the structure of the codebase and the interdependencies between the different files and functions.

Trying to understand the classes

It soon becomes apparent that the developers are making use of some object-oriented programming features. In my opinion this is exactly the right thing to do. I hold the view that if you define the problem in the right way then its solution (in terms of writing the code that connects the different definitions together) can be easy, providing that you write things well (this said, I come from a culture of Java and C#, brought up initially on a diet of Pascal).

After some probing around, there seem to be two libraries that are immediately important to know about: weblib and blocklib. The comments at the top of weblib describe it as a 'library of all general-purpose Moodle PHP functions and constants that produce HTML output'. Blocklib is described as 'all the necessary stuff to use blocks in course pages'.

In index, there is a call to a function called blocks_setup (blocks, I discover, can be pinned true, pinned both, or pinned false; block pinning is associated with lessons, something that I haven't studied). This function appears to call another function named blocks_get_by_page (passing it the $PAGE global). This function returns a data structure that contains two arrays: one called l and the other called r. I'm assuming here that the array data has been pulled from the database.

The next function that follows is called blocks_have_content. This function does quite a bit. It takes the earlier data structure, translates the block number (which indicates which block is to be displayed on the page) into a block name through a database call, and then uses this name to instantiate an object whose class is named after the block (it does this by prepending 'block_' to the name).

There is something to be cautious about here: there is a dependency between the contents of the database (which are added to when the Moodle database is installed) and the name of the class.  If either one of these were to change the blocks would not display properly.

The class that corresponds to the news block is named 'block_news_items'. This class is derived from (or inherits from) another class called block_base, which is defined within the file moodleblock.class.php. A similar pattern is followed with other blocks.
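
The instantiation trick relies on PHP's ability to use a variable as a class name. A minimal sketch of what seems to be going on:

    $blockname = 'news_items';            // name fetched from the blocks table
    $classname = 'block_' . $blockname;   // gives 'block_news_items'
    $block     = new $classname();        // PHP instantiates the class by name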

Is_empty()

Following the program flow, there is a call to a function called is_empty() within blocklib.php. This code strikes me as confusing, since is_empty should only be doing one thing. Is_empty appears to have a 'side effect' of storing the HTML for a block (which comes from a call to get_content) in a variable called 'content'. Functions should only do what they say they do. Anything else risks increasing the burden of program comprehension and maintenance.

The Moodle codebase contains several versions of get_content, one for each of the different blocks that can be displayed.  The version that is called depends on which object Moodle is currently working through.  Since there is only one block, the get_content function within block_news_items is called.  It then returns some HTML that describes how the block will be presented. 

This HTML is stored in the structure which originally described which block goes where. If you look through the pageblocks variable, the HTML can be found by going to either the left or right array, looking in the 'obj' field, then going to 'content'. In 'content' you will find a further field called 'text' that contains the HTML to be displayed.

When all the HTML has been safely stored away in memory it is almost ready to be printed (or presented to a web client). 

Calls to print_container_start() and print_container_end() delineate a call to blocks_print_group. In this function there is a database call to check whether the block is visible, and then a call to _print_block() is made. This is a member function of a class, as indicated by the leading underscore. The _print_block() function can be found within the moodleblock.class file. This function (if you are still following either me or the code!) makes a call to the print_side_block function (one of those general purpose PHP functions) contained within weblib.php.

Summary and towards part 2

I guess my main summary is that to create something that is simple and easy to use can require quite a lot of complicated code.

My original objective was to try to understand the mechanisms underpinning the editing and customising of courses (particularly blocks), but I have not really looked at the differences between how blocks are presented within the course areas and how blocks are presented on the main site. Learning about how things work has been an interesting exercise. One point that I should add is that, from an accessibility perspective, the use of tables for layout purposes should ideally be avoided.

What is great is that there is some object-oriented code beginning to appear within the Moodle codebase. What is confusing (to me, at least) is the way that some data structures can be so readily changed (or added to) by PHP. I hold the opinion that stronger data types can really help developers to understand the code that they are faced with, since they constrain the actions that can be carried out on those types. I also hold the view that stronger data typing can really help your development, since you give your development tools more of an opportunity to help you (by presenting you with autocomplete or intellisense options), but these opinions probably reflect my earlier programming background and experience.

On the subject of data types, the next post in this series will be about how the Moodle database stores information about the blocks that are seen on the screen.  Hopefully this might fill the gaps of this post where the word ‘database’ is mentioned.

Acknowledgement: Picture by zoologist, from Flickr. Licenced under creative commons.

Christopher Douce

Understanding Moodle localisation

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 13:19

Moodle logo

Another Moodle activity that I've been tasked with is: 'ensure that different users are presented with user interfaces that match their language choices'.

I understand that software localisation (or internationalisation) is an industry in its own right, replete with its own tools and practices. When you scratch the surface of the subject you're immediately presented with different character sets, fonts and issues of text direction (whether text flows from left to right or vice versa).

My question is: how is Moodle localised into different languages, and does it use any approaches that could be considered to be common between other systems?

This post will only scratch the surface of what is an interesting (and often rather challenging) subject. What is the Moodle approach to dealing with plurals, for example? There's also the issue of how internet browsers send their locale settings to web servers and application engines... Before I've even started with this post, I'm heading off topic!

Let's begin by considering three different perspectives: the student's perspective, the maintainer's perspective and the developer's perspective.

Students perspective

A student shouldn't really need to concern themselves with their locale settings, since the institution in which they are enrolled is likely to use a sensible default. But if students wish to change the LMS interface language (and providing your particular Moodle installation permits the changing of user preferences), a student user could click on the name hyperlink they see after logging on, click on the 'Edit Profile' tab and search for the 'preferred language' drop down box.

In my test installation, I initially had only one language installed: English (en). In essence, my students are not presented with a choice. I might, at some point during my project need to offer 'student users' a choice of four different languages: German, Italian, Greek and Spanish. Obviously something needs to be done, leading us swiftly to the next perspective.

Maintainers perspective

I log out from my test student account and log back in as an administrator and discover something called a 'Language' menu, under which I discover a veritable treasure trove of options.

The first is entitled 'Language Settings'. This allows an administrator to choose the default language for a whole installation and also to do other things such as limit the choice of languages that users can choose.

The second menu option is entitled 'Language Editing'. It appears that this option allows you to edit the words and phrases (or strings) that appear on the screen of your interface. The link between a 'bit on a screen' and a language specific description is achieved by an identifier, or a 'placeholder' that indicates that 'this piece of text should go here'.

What is interesting is that individual strings are held within Moodle programming files. This makes me wonder whether the action of editing the strings causes some internal programming code to change. This process is mysterious, but interesting.

As a useful aside (which relates to an earlier project-related post), I click on 'resource.php' to see what identifiers (and text translations) I can find. I see loads of entries, including names for resource types, which are numbered. Clearly, when adding new functionality, a developer needs to understand how software localisation occurs.

Continuing my maintainer's perspective exploration (after being a little confused as to what 'new file created' means after choosing to view the 'resource.php' translation page), I click on the 'Language Packs' option. Here I am presented with a screen that tells me which language packs I have installed. By default, I only have a single language pack: English (EN). Underneath, I see a huge list of other language packs, along with a corresponding 'download' link. Apparently, because of a problem connecting to the main Moodle site (presumably because one of my development machines is kindly shielded from the world and its various nasties), things won't install automatically, and I have to save (unzipped) language packs to a directory called 'moodledata/lang'.

Let's see what happens by downloading and unzipping the language packs I need.

After unzipping the language packs, I hit my browser's 'refresh' button. As if by magic, Moodle notices the presence of the new packs and presents you with a neat summary of what you have installed.

Developers perspective

So, how does this magic work, and what does a developer have to know about localisation in Moodle?

One place to start is by exploring the anatomy of a downloaded language pack, asking the questions: 'what does it contain, and how is it structured?' Of the four packs that I have downloaded, the German pack looks by far the most interesting in terms of its file size. So, what does it contain?

The immediate answer is simply: files and directories. In the German pack I see three folders: doc, help and fonts. The doc and fonts folders do not contain very much, mostly readme files, whereas the help folder contains a whole load of subfolders. These subfolders contain files of what appear to be fragments of HTML that are read by PHP code and presented to the user. At this point I can only assume that Moodle reads different help files (and presents different content to the user) depending upon the language that a user has selected.

At the root of a resource pack I see loads of PHP files. Some of these have similar file names, i.e. some begin with quiz, and presumably correspond to the quiz functionality, and others begin with repository, enrol and so on (my programmer sense is twitching, wondering whether this is the most efficient way to do things!)

A sample of a couple of these PHP files shows that they are simply definitions of localised strings, stored in an associative array indexed by a name. Translated into 'human speak', there's a fixed 'programming world' name which is linked to a 'language world' equivalent. You might ask why 'language localisation' is done this way. The answer is: to avoid having to make many different versions of the same Moodle programming code, which would be more than a nightmare to maintain and keep track of.
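
For example, a single entry in one of these string files takes roughly this form (the identifier is one that appears later in this post; the wording is the English default):

    $string['topicoutline'] = 'Topic outline';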

A number of questions crawl out of the woodwork. The main one is 'how are the contents of these resource packs used when Moodle is running?', but there is the earlier question of 'what happens when you make a change to a translation?' that needs answering first. Both are related.

Moodle has two areas where localisation records are stored. The first can be described as a 'master' area. This is held within the 'programming code' area of Moodle, within a directory unsurprisingly named 'lang'. This contains files of identifiers and strings for the default language, which is English. The second area is a directory, also called 'lang', which can be found within the Moodledata directory. Moodledata is a file area that can be modified by the PHP engine (the language that Moodle itself is written in). Moodledata can store course materials and other data that is easier to keep in a 'file storage' area than in the main Moodle database.

As mentioned earlier, language packs are stored to the Moodledata area. If a user chooses to edit a set of localised strings, a new version of the edited 'string set' is written as a new file to a directory that ends with '_local'. In essence, three different language resources can exist: the 'master' language held within the programming area, the installed 'language pack', and any changes made to the edited language pack.

During earlier development work, I created a new resource category called an 'adaptable resource'. After installing the German resource pack, Moodle can compare it against the 'master' language pack and tell you whether any translations are missing.

Screen grab of Moodle reporting missing translations for a language pack

After making the changes, the newly translated words are written to a file. This file takes the form of a set of identifier definitions which are then read by the Moodle PHP engine. Effectively, Moodle writes its own programming script.

Using this framework, developers shouldn't have to worry too much about how to 'localise' parts of their systems, but before stating that I understand how 'localisation' works, there's one more question to ask.

How does Moodle choose which string to use?

When viewing a course you might see the 'topic outline' heading. How does Moodle make a choice about which language pack to use? I begin my search by looking through the code that appears to present the course page, 'course/view.php'. There isn't anything in there that can directly help me, so I look further, stumbling upon a file within a 'topics' sub-directory called 'format.php'.

In the format file I discover a function called get_string, which references an identifier called 'topicoutline'. This is consistent with the documentation that I uncovered earlier. The get_string function is the magic function that makes the choice about where your labels come from.
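
Calling it is pleasingly simple. A minimal sketch:

    // Fetch the localised text for the current user's language.
    echo get_string('topicoutline');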

Get_string is contained within a file called 'moodlelib.php' which is, perhaps unsurprisingly, contained within a directory called 'lib'. Moodlelib is a huge file, weighing in at about eight thousand lines. It is described (in the comments) as a file that contains 'miscellaneous general-purpose Moodle functions'.

Get_string is a big function. One of the first things it does is figure out which language is currently set by looking at different variables. It then creates a list of places where localised strings can be found. The list begins with the location where language packs are installed, followed by areas within the Moodle codebase that are installed by default. It then checks to see whether any 'local' (or edited) versions of the strings have been created (as a result of a user editing the language packs). When the function knows which file the strings are held in, Moodle reads (includes) the file, caches the contents of the 'string file' in a static variable (so Moodle doesn't have to read the file every time it needs to fetch a string) and returns the matching localised string.

In the middle of this function there is extra magic to present sensible error messages if no strings are found, and other code to help with backwards compatibility with earlier versions of Moodle. It also seems to check for something called 'parent languages', but I've steered clear of this part of the code.

Testing language installation

Has all my messing around with languages worked? Can I now assign different users different languages? (Also, can users choose their own language preferences?) There is only one way to find out. Acting as an administrator I created a new user and set the user's default language to Italian. I logged out and logged in using the new user account.

Moodle in Italian

It seems to work!

The one thing that I have not really explored is whether Moodle will automatically detect the language a user has configured in their internet browser. A little poking around indicates that Moodle can indeed be clever and change its language dynamically, by using the hidden 'language' information that is sent to a web server whenever an HTTP request is made.

The 'dynamic language adaptation' functionality is turned on by default, and a switch to turn it on and off can be found within the 'language settings' menu that the administrator can use.

The fact that Moodle can dynamically change in response to browser (and potentially operating system) settings is interesting. One of the things that the EU4ALL project is exploring is whether it might be possible to tell web-based systems that certain categories of assistive technology are being used. This may open up the possibility of user interfaces that are more directly customised to users' individual needs and preferences.

Other 'languages'

I've described (rather roughly) how Moodle takes care of software localisation, but how is it handled in other programming languages? I've used Java and the .NET framework in the past, and each provides its own way to facilitate localisation.

Java makes use of something called a resource bundle (Sun Microsystems). dotNET, on the other hand, uses something called resource files (Code Project). One question remains: is there a generally recommended approach for PHP, the language on which Moodle is based? As with so many things in software, there is more than one way to get the same result.

The author of the PHP Cookbook describes another way to think about localisation. This approach differs in the sense that it focuses more on demonstrating localisation using object-orientation (an approach that Moodle has historically tried to steer away from, although this seems to be changing), and doesn't really address how a user might be able to edit or change their own strings should they not like what they see.

Conclusions

Software localisation, like accessibility, is a subject that software developers and web designers need to be aware of. This rather long post has outlined how software localisation is broadly achieved in Moodle. Much, however, remains unsaid. Issues such as plurals, right-to-left scripts and multi-byte character sets have been carefully sidestepped.

What is clear is that Moodle appears to have a solid infrastructure for localisation which seems to work, and which provides site maintainers with the ability to add different languages without too many headaches. Also, whilst browsing the documentation site I stumbled across a documentation page that hints at potential future localisation developments.

Although I have mentioned one other way to approach localisation within PHP it might be useful at some point to explore how comparable learning management systems tackle the same problem, perhaps also looking at how localisation is handled in other large projects.

Localisation will always be something that developers will need to address. Whenever new functionality is introduced, developers will obviously make provision to ensure that whatever is developed is understandable to others.

Christopher Douce

Exploring how to call SOAP webservices using PHP (and Moodle)

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 21 July 2010, 13:00

Illustration of web services (modified from Wikipedia)

This post describes my second bash at understanding PHP and SOAP web services, work carried out off and on over the last couple of weeks. The first time I connected PHP to an externally hosted web service I used a script that was external to Moodle. Now my challenge is slightly different: to try to get Moodle calling external web services.

Just to make sure I understand everything, I'm going to present some background acronyms, try to remember what pages I looked at before, then step towards uncovering the parts of Moodle that are in some way connected to the magic of web services.

Background information

I'm required to interface to web services that use the SOAP protocol (wikipedia). SOAP is, I am led to believe, an abbreviation for Simple Object Access Protocol. In a nutshell, SOAP allows you to send a message from one computer to another, telling it to do stuff, or asking it a question. In return, you're likely to get a response that either tells you what you wanted or indicates why your request failed. SOAP is one of many different techniques that you can use to pass messages from one computer to another over the internet.

Another technique, which is simpler (and faster) but has some limitations that SOAP gets round, is REST (wikipedia). More information on this 'architectural style' can be found quite easily by doing a quick internet search. My focus is, however, SOAP.

So, assuming that one computer exposes (or makes available) a web service to another computer, how do other computers know how to call the service? In other words, what parameters or data does a particular service expect? The answer is that the designers of a SOAP service use a language that describes the format of the messages that the SOAP server (or service) will accept. This language is called WSDL, or Web Services Description Language (wikipedia).

Each SOAP server (or service) has a web address. If you need to find out what data a SOAP service requires, you can usually ask it by adding ?wsdl after the service name. This description, which is presented in a computer readable structure, can sometimes help you to build a SOAP call – a request from your computer to another.

Very often (in my limited experience of this area), the production and use of this intermediate language is carried out using layers of software tools and libraries. At one end, you will describe the parameters that you will process, and some magic programming will take your description (which you give in the language of your choice) and convert it into a difficult to read (for humans!) WSDL equivalent. But all this is a huge simplification, of course! And much can (and will) go wrong on the journey to get SOAP web services working.

A web service can be a building block of a Service Oriented Architecture (again, wikipedia), or SOA. In the middle, between different web services you can use the mysterious idea of middleware to connect different pieces of software together to manage the operation of a larger system, but this is a whole level of complexity which I'm very happy to avoid at this point!

Stuff I looked at earlier

The first place that I looked was in a book! Specifically, the PHP Cookbook.

Chapters 14, consuming web services, and 15, building web services looked to be of interest, specifically the sections entitled 'calling a SOAP method with/out WSDL'. Turning to this section I was presented immediately with a number of possibilities of how to make SOAP calls since there are a number of different implementations depending upon the version of PHP that you're using.

Moodle, as far as I understand, can work with version 4.3 of PHP, but moves are afoot to move entirely towards version 5. My reference suggested it's perhaps best to use the bundled SOAP extension as opposed to the other libraries (PEAR::SOAP or NuSoap), since it is faster and more compatible with the standards, it comes bundled automatically, and exceptions (special case errors) that occur within SOAP are fed into corresponding PHP exception constructs to make programs (theoretically!) easier to read.

Consuming services

On my first attempt to call a web service, I ran into trouble straight after starting! All my code was failing for a mysterious reason and my debugger wasn't giving me anything useful. After doing some searching and finding some on-line documentation I gave the PEAR library a try, but ended up just as confused. I ended up asking one of my illustrious colleagues for help, who suggested that I should add an additional parameter to my original attempts with the PHP extension to take account of the local network setup.

Calling seemed to be quite easy. I could create something called a SOAP client, tell it which address I want to call, give it some options, and make a call by sending my client a message which has the same name as the web service operation I want to call, optionally loaded up with all my parameters. To see more of what came back, I put some of the client variables into temporary variables so I could more easily watch what was coming back in my debugger.
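
A minimal sketch of the pattern, assuming a hypothetical service address and operation name:

    // Create a client from the service's WSDL description, then call an
    // operation as if it were an ordinary method on the client object.
    $client = new SoapClient('http://example.com/service.php?wsdl',
                             array('exceptions' => true));
    try {
        $result = $client->getCourseList(array('category' => 1));
        var_dump($result);
    } catch (SoapFault $fault) {
        echo 'SOAP fault: ' . $fault->getMessage();
    }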

Producing services

Now that I (more or less) knew how to call web services using PHP, it struck me that it might be useful to see how it might be possible to present web services using PHP. This was found in the next chapter of the book.

To maintain consistency, I asked the question: how might I create some WSDL that describes a service? Unfortunately, there is not an easy answer to this one. Although the integral SOAP libraries don't directly offer support to do this, there are some known techniques and utilities that can help.

One of the big differences between PHP and the WSDL language is that PHP is happy to just go ahead and do things with data without having to know exactly what form (or type) the data takes. You only get into trouble when you ask PHP to carry out operations on a data item that doesn't make sense.

WSDL, on the other hand, describes everything, giving both the name of a data item and its type. Because of this, you can't directly take a PHP data structure and use it to create WSDL. To get round this difference one approach is to provide this additional information in the form of a comment. Although comments are intended to help programmers, they can also be read by other computer programs. By presenting data type information in the form of a comment, an intermediate program can create WSDL structures without too much trouble, saving developer time and heartache. This approach is used by both the NuSoap library and code that works with PHP 5. But I digress...
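
To make the idea concrete, here is a sketch of the comment-based approach (the function itself is illustrative, not part of Moodle or NuSoap):

    /**
     * Return a greeting for the named caller. The docblock tags below
     * supply the type information that a WSDL generator needs but that
     * PHP itself does not require.
     *
     * @param string $name the caller's name
     * @return string the greeting
     */
    function greet($name) {
        return 'Hello, ' . $name;
    }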

Moodle web services code

There appear to be some plans to expose some of the Moodle functionality via a series of web services, enabling Moodle to be connected to and used with a range of external applications. There is also a history of connecting Moodle to external assessment systems using web services.

A grep through the Moodle codebase (for 1.9) reveals a library called (perhaps unsurprisingly) soaplib. There appears to be some programming logic which makes a decision about which SOAP interface library to use, depending upon the version of PHP: use the native version if PHP 5 is used, otherwise NuSoap.
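
I imagine the decision looks something like this sketch (not Moodle's actual code; the NuSoap file path is an assumption):

    // Prefer the native SOAP extension on PHP 5; fall back to NuSoap otherwise.
    if (version_compare(phpversion(), '5.0.0', '>=')) {
        // PHP 5: SoapClient and friends are built in.
    } else {
        require_once($CFG->libdir . '/nusoap.php');
    }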

I'm guessing that the need to use the NuSoap library will gradually disappear at some point, but a guess is totally different from finding out whether this is really going to happen.

One way to find out what is going on and what lies in store for the future is to explore the on-line discussion forums; I quickly find a forum that is dedicated to discussing Moodle web services. It appears there are two interesting developments: something called the Moodle Network (which, at first glance, allows you to share resources between different instances of Moodle), and a non-core Moodle code contribution called the OKTech Web Services. After a little poking around it's possible to find some documentation that describes this development in a little more detail.

I also discovered a documentation page entitled Web services API, but it relates to XML-RPC (wikipedia) rather than SOAP. My head is beginning to hurt!

Returning to the Moodle core SOAP library, I ask the question: what uses soaplib? One way to find out is to search for calls to the functions that it contains. I have to confess, I didn't find anything. But what I did find was a discussion.

It turns out it was added as a result of work carried out at the University of York in the UK for a project called Serving Maths, which created something called the Remote Question Protocol (RQP). The initial post mentions concerns about not being able to make use of some of the additional parameters that the PHP 5 library provides. This is a concern that I share.

Next steps

I've more or less finished my whistlestop tour of Moodle components and code that relate to web services type stuff. I'm sure there is more lurking out there that I haven't discovered yet. But what of a conclusion?

Since I'm not planning on using Moodle to expose any web services I can thankfully sidestep some of the more difficult discussions I've uncovered.

Also, since there isn't much in the way of existing SOAP utility code that I can build upon, and I know roughly how to call web services using the magic functions that are provided in PHP 5, I'm going to try to add some lines of code more or less directly to Moodle. But before I do this, like every good developer, I'll test things out using a test harness to explore how my target services behave.

Image: modified from wikipedia
