Ever since July 2001 I have edited (off and on) the Psychology of Programming Interest Group newsletter. The group, known as PPIG, has been in existence since 1987. Its purpose is to provide an interdisciplinary academic forum for researchers interested in the intersection of computer programming and psychology.
PPIG can be described as a pretty broad church. On one hand, there are those who explore program comprehension and the relationship between notation systems and programming languages; on the other, there are researchers who conduct ethnographic studies and consider the different research methods that could be used.
Some of the questions that the PPIG community has been exploring resonated strongly with my doctoral research, which was all about understanding how computer programmers go about maintaining software.
I will probably always remember the moment when I became interested in the link between computer programming and psychology, particularly cognitive psychology. I studied computer science as an undergraduate, and our lecturers asked us to complete a time-limited, summative programming assignment. What I mean by this is that my cohort and I were all corralled into a tired computer lab, given a sheet of program requirements and a Pascal compiler, and told to get on with it (and, no, we couldn't talk to each other).
When we had finished our programs, we had to print them out on a dot matrix printer (which was, of course, situated in its own soundproof room) and hand the fruits of our labour to our instructor, who exuded a unique mixture of boredom and mild bewilderment at the stress he had caused.
What struck me was that some students had finished and left the laboratory for the union bar within twenty minutes, whereas others were pulling out their hair four hours later and still didn't have a working program. This made me ask: 'why was there such a difference between programmers?', and 'what exactly do we do when we write computer software?'
I seem to remember that this was in the first year of our degree. Our computing lecturers had another challenge in store for those of us who made it to the second year: a software maintenance project.
The software maintenance project comprised one third role play and two thirds utter confusion. Our team of four students was presented with a printout of around forty thousand lines of wickedly obscure FORTRAN code and given another challenging requirements brief. We were then introduced to a fabulous little utility called grep, and again told to get on with it.
This project made me ask further questions: 'how on earth do we understand a lot of unfamiliar code quickly?', and 'what is the best way to make effective decisions?' These and other questions stuck with me, and eventually I discovered PPIG.
So, during my week on study leave I compiled the latest edition of the PPIG Newsletter. The next annual workshop is to take place at the University of York, and I hope to attend. If I have the time, I'll try to write a short blog post about it and the themes that emerge.
When I was done with the newsletter I turned my attention to a research idea I have been trying to work on for well over a year with an esteemed collaborator from Royal Holloway, University of London.
As well as studying a number of different programming languages during my undergraduate years, I was introduced to the all-encompassing subject of software engineering. In engineering there is a simple idea: if you can measure something, you can control it. One of the difficulties of managing software projects is that software is intrinsically intangible: it isn't something you can physically touch or see. It's impossible to ascertain, at a glance, how your developers are getting along or whether they are experiencing difficulties. To get around this problem, researchers have proposed software complexity metrics.
Having a complexity metric can be useful (to some extent). If you apply a complexity metric to a program, the bigger the number, the more trouble a developer might face (and the more money spent). Researchers have proposed a number of different metrics, each measuring a different aspect of a program: one counts the linguistic parts of a program, while another counts the number of unique paths of execution a program might have.
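The path-counting idea can be sketched in a few lines of Python. This is a minimal, illustrative take on a McCabe-style cyclomatic complexity count (decision points plus one), not any specific metric from the literature; the function name and the set of decision nodes here are my own simplifications.

```python
import ast

# Node types treated as decision points in this sketch. Real tools
# handle many more constructs (match statements, comprehensions, etc.).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add one."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return 'negative'
    elif n == 0:
        return 'zero'
    return 'positive'
"""
print(cyclomatic_complexity(code))  # if/elif yield two decision points -> 3
```

The intuition carries over directly: a larger count means more paths a maintainer must hold in mind while reading the code.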
Another type of metric, the spatial complexity metric, springs from an understanding that programmers use different types of short-term memory during program comprehension and development. The idea behind this metric, first published in a PPIG workshop back in 1999, was to propose a measure inspired by the psychology of the programmer.
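To make the "spatial" idea concrete, here is a hypothetical sketch: it measures the average distance, in lines, between where a function is defined and where it is called, on the intuition that larger distances place more load on a reader's short-term memory. This is my own illustration of the general flavour, assuming Python source, and is not the metric from the 1999 paper.

```python
import ast

def average_call_distance(source: str) -> float:
    """Average line distance between function definitions and their calls.

    A hypothetical spatial-style measure for illustration only.
    """
    tree = ast.parse(source)
    # Map each top-level or nested function name to its definition line.
    def_lines = {node.name: node.lineno
                 for node in ast.walk(tree)
                 if isinstance(node, ast.FunctionDef)}
    # Collect distances for calls to locally defined functions.
    distances = [abs(node.lineno - def_lines[node.func.id])
                 for node in ast.walk(tree)
                 if isinstance(node, ast.Call)
                 and isinstance(node.func, ast.Name)
                 and node.func.id in def_lines]
    return sum(distances) / len(distances) if distances else 0.0

example = "def helper():\n    pass\n\n\nresult = helper()\n"
print(average_call_distance(example))  # definition on line 1, call on line 5 -> 4.0
```

A measure like this is cheap to compute, which is part of what makes the question of whether such metrics are psychologically valid worth asking.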
The work-in-progress paper describes a number of experiments that aim to explore whether there might be a correlation between different software complexity metrics and empirical measurements of cognitive load taken from an eye-tracking program comprehension study. The research question is: are program complexity metrics psychologically valid?
Of course, writing up a research idea is a lot easier than carrying it out! This said, I do hope to share some of the research materials that may be used within the studies through this blog when they are available.