Christopher Douce

Visit to PPIG 2018

Edited by Christopher Douce, Tuesday, 4 Dec 2018, 15:00

On 7 September 2018 I took a break from timetabling and interviewing tutors for a web technologies module and visited a workshop run by the Psychology of Programming Interest Group, which was being held in the Art Workers’ Guild, London. The workshop took place in an amazing room which was packed with portraits.

Due to work commitments, I was only able to attend the morning of 7 September. Because my visit was so brief, I wasn’t going to write a blog post, but I was reminded of the event (and one of the presentations) by an email that was sent to me by the Association for Computing Machinery (ACM). I’ll explain why in a moment, but what follows is a very quick sketch of what happened in the bit of PPIG that I attended.

Growing Tips, Sprawling Vines and other presentations

The first presentation that I saw was by Luke Church from the University of Cambridge. I noted down the words “what does it feel like to work with the materials of notations?” (which, of course, refers to the idea of working with programming languages). Luke had previously introduced me to languages for choreography. This time he was talking about a programming language called Autodesk Dynamic Studio. He also mentioned a term that I hadn’t heard of before: diachronics in notation design; the link between time and language. I also remember that Luke showed us a series of animations that illustrated the development of software systems.

Two other presentations followed: one was about different forms of data representation, and another explored whether it might be possible to detect programmer frustration using unobtrusive sensors, so that a teaching environment might be able to provide hints and tips.

Conjuring Code

PPIG is often a workshop that produces surprises; this workshop was no exception. The next part of the workshop was presented by two magicians: Will Houstoun and Marc Kerstein. I noted down the phrase: “what kind of tricks can you do in the digital space?” I learnt of a topic called ‘magic theory’. Digital magic could be considered as a combination of the analogue and the digital. Code can be used to create a magic effect, or ‘magic’ could influence code in a way that isn’t clear to the viewer. I noted down an important point: “the magician goes to an unfeasible amount of effort to make things work”.

Explicit direct instruction in programming education

The final presentation I attended was by Felienne Hermans. I made a note that Felienne has a PhD in software engineering but had to ‘teach kids in a local community centre’. I also noted down that she said ‘as a learner [of computing], I wasn’t taught in class…’, that she hadn’t appreciated how important syntax was, and that the kids in the community centre were struggling with the small stuff.

To learn more, Felienne asked an important question: ‘how do we teach other things?’ such as reading and mathematics. This question led her to the Oxford Handbook of Reading, where she uncovered different schools of thought, such as the phonics approach versus the whole language approach to learning to read. In maths education, I noted down the dilemma of explanation and practice versus exploration and problem solving. This leads to another important question: where are the controversies in computer science education? In other words: “let’s start a fight”.

During Felienne’s presentation I noted down a few more things, such as “skills beget ideas”, and a comment about the “rote practice of syntax”, which is something that I had to go through as a teenager when I copied out programs that were printed in computer magazines (an activity that helped me to develop ‘moral fibre’). Another comment that I noted down was that the “sensorimotor level is syntax”, and that “motivation leads to skills”.

After the event…

Two months after the event, the following note appeared in my inbox, as a part of the ACM circular that I mentioned earlier: “evidence is growing that students learn better through direct instruction rather than through a discovery-based method, where students are expected to figure things out for themselves. In general, it is possible to define direct instruction as explanation followed by a lot of focused practice. . . . In fact, direct instruction works especially well for weaker pupils. . . .  In short, they should teach students directly and reduce the amount of design and problem solving that they ask students to do."

This paragraph relates to a Communications of the ACM blog post by Mark Guzdial, Direct Instruction is Better than Discovery, but What Should We be Directly Instructing? (cacm.acm.org). It also relates directly to a blog post by Felienne, Programming and direct instruction (Felienne.com).

Reflections

I really liked what Felienne said about looking to other subject areas for inspiration. As a doctoral student, I remember gatecrashing a tutorial session (with permission) about the psychology of reading. The group wasn’t looking so much at how to teach reading, but more at the detail of the cognitive processes that guide reading (I was studying the area of program comprehension at the time). I also agree with her point about having a discussion or a debate about approaches to the teaching and learning of programming.

At the time I wrote this blog post, an interesting seminar entitled “The Computing Education Revolution in England: Four years on” was being hosted in the OU School of Computing and Communications. The seminar related to research into the recent changes to the computing and IT GCSE curriculum. This coincidence emphasises how important it is to think not only about what is taught, but also about how that teaching takes place.

Christopher Douce

Psychology of Programming Interest Group: work in progress meeting: Day 2

Edited by Christopher Douce, Monday, 10 July 2023, 16:43

Teaching programming at a distance

The first presentation of the second day was by yours truly.  I gave a short talk about a university-funded project that aims to understand more about the teaching experience of Open University associate lecturers who are tutoring on the TT284 Web Technologies module.

One of the purposes of the project was to understand issues particular to the learning (and teaching) of programming.  An area of particular interest is the transition between the first level modules (which use a visual programming language) and the second level modules, which require students to pay more attention to other issues, such as language syntax.

I wasn’t able to present any firm findings at this stage (since I keep getting sucked into the idiosyncrasies of my day job), except that three themes were emerging: some students can struggle to understand what PHP is and how it works, there is confusion about JavaScript, and there is the perpetual battle to understand regular expressions (which is, I believe, an issue that pretty much all developers, expert or novice, seem to have).
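
To illustrate why regular expressions are such a perpetual battle (this is a quick Python sketch of my own, rather than any PHP from the module materials), even a rough check for a UK-style postcode ends up as a dense little string of symbols:

```python
import re

# A rough pattern for a UK-style postcode (illustrative only, not the official
# rules): one or two letters, one or two digits, an optional letter, a space,
# a digit, then two letters.
POSTCODE = re.compile(r"^[A-Z]{1,2}\d{1,2}[A-Z]?\s\d[A-Z]{2}$", re.IGNORECASE)

for candidate in ["MK7 6AA", "SW1A 1AA", "banana"]:
    print(candidate, "->", bool(POSTCODE.match(candidate)))
```

Every character class, quantifier and anchor carries meaning, and there is little in the notation itself to help a novice see where one idea stops and the next one starts.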

The question and answer session was interesting.  There was some chat about coding dojos (group coding sessions), the use of MOOCs (FutureLearn and the Khan Academy), and how to get students talking to each other.

Holistic programming teaching at Middlesex

Franco Raimondi’s talk was rather different to all the others: he showed us some robots (real hardware!) that he used in his teaching.  They had an interesting design: Raspberry Pi devices connected to Arduino microcontrollers.

One question was: why use both?  The Arduinos are used to control analogue input, but the heart of the control is managed (as far as I can understand it) by the Raspberry Pi devices.  Students then have the challenge of designing and implementing a communication protocol between the Pi and the sub-component.  I personally think this is a great approach: students are exposed to different devices and learn more about their purpose.  When I had a job in industry, one thing that I had to do was figure out how to get one embedded controller talking to another: Franco’s robots would have helped me a lot to figure out how to do this.  More information about the robots can be found by taking a look at the MIddlesex Robotic plaTfOrm: MIRTO website.

Okay, so there is an interesting robot, but how are they used in practice?  Franco described a series of lectures, design workshops, programming workshops and physical computing workshops.  In the workshops (if my notes are correct!) students are asked to solve different problems, such as writing a line following algorithm where the robot has to cater for 90 degree turns, and completing different line circuits as fast as possible.  Students could also implement control algorithms, such as PID controllers (Wikipedia) (which again takes me back to the days when I worked in industry), remote control, and the taking of pictures by controlling the Raspberry Pi camera.
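
As a reminder of what a PID controller involves (a minimal Python sketch of my own, not code from Franco's workshops, and the gain values are invented), the control output combines proportional, integral and derivative terms of the error between where the robot should be and where it actually is:

```python
class PIDController:
    """Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# For a line follower, the 'error' might be the offset of the line from the
# centre of the robot's sensor array; the output steers the wheels.
pid = PIDController(kp=0.8, ki=0.05, kd=0.2)
steering = pid.update(error=0.3, dt=0.02)
print(steering)
```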

What I found really interesting was that the platform (and the workshops) made use of a programming language called Racket (Wikipedia).  Racket is a language that I had not heard of before, but apparently it has roots in the Lisp language.  In some respects, I commend the choice (because it’s great to expose students to different programming paradigms), but on the other hand, there is something to be said for getting to grips with tools that are used in industry.  I guess this just goes to show that whenever you come along to workshops like these, you always learn new stuff.

Towards the end of Franco’s session, he spoke about a system to record Student Observable Behaviours, which then led onto a discussion about learning objectives.  Apparently, the use of ‘observable student behaviours’ is something that Middlesex use, perhaps as a part of their assessment strategy.  We were shown a web-based tool that lecturers can use to gather evidence of student engagement and activity.

I don’t know what this relates to, but I also made a note of a place called The Crystal (Crystal website), which was also described as the Siemens technology centre.  As soon as I looked into it, I realised that I had seen it once before: on a cable car ride across the Thames.  I now know how to get to The Crystal if ever I need to visit it!

I enjoyed Franco’s session: he covered a lot of ‘tech stuff’ in a very short time.  Students at Middlesex are clearly challenged and are clearly kept busy! 

One thought is that different computing courses and degrees cover different topics and perspectives.  When I was heading home from the workshop I remembered that The Open University covers a bit about robots too, on a first year undergraduate module, TM129 Technologies in Practice (OU website).  Students are also presented with the challenge of creating a line following robot.  Rather than using real robots, a simulated one is used (but students can get to see real ones if they come along to an engineering day school).

Measuring programming achievement after a first course

The next presentation was by Ed Currie who presented what were described as ‘thoughts and preliminary research’.  One of the key thoughts (and one that I found most interesting) was why some students find programming so difficult.  One note that I made is that we can’t teach programming; students can only learn it.  It’s not up to us; it’s up to them, and our job (as lecturers and teachers) is to facilitate the learning.

Ed mentioned the idea of threshold concepts (Wikipedia) by Meyer and Land.  I’ve made a note of the point that ‘sequence is a threshold concept’ (when it comes to programming).  I remembered hearing the phrase before from a colleague who was doing, I think, some research into what happened when students grappled with key ‘threshold concepts’.

Two great phrases that I’ve noted down are ‘neo-Piagetian stages’ and ‘flipped classroom’ (Wikipedia).  In some respects the OU has always been doing ‘flipped classrooms’, i.e. students study some material and then go to a face-to-face tutorial to apply what they have learnt, either in terms of solving a problem, or through facilitated discussions.

I don’t know what the context was or where this came from, but I also made the note ‘sharing of learning stories’.  This might have just been an idle idea during Ed’s talk, or something that Ed had said.  When it comes to learning how to do computer programming, I’ve got my own story (which might well be insufferably dull!), and I’m sure that other people have their stories.  Perhaps something could be gained (in terms of learning strategies and approaches) if we find the space to discuss and share how we know what we know.

Reflections on teaching design patterns

The final presentation of the day, and of the workshop, was by Carl Evans, who is a lecturer at Middlesex.  Carl talked about his experience of creating and presenting an MSc module about software design patterns.

In computer science and software engineering, design patterns are one of my favourite topics.  A couple of points that Carl made really resonated with me.  One was that ‘industry needs architects, not just programmers’.  Another great point (and one that I totally subscribe to) is that there is an increasing expectation (from industry) that graduates can work with frameworks as well as know how to use programming languages. In some ways, this point connected up with Thomas’s keynote.

Carl mentioned Sun/Oracle certifications, the use of layered architectures, and frameworks called Spring (Wikipedia) and Hibernate (Wikipedia) that I have heard of, but have never used in anger.  A quick look into these frameworks shows that design patterns feature pretty prominently.

The real questions are: how do we best teach patterns, and where do we start?  Is there a pattern about how to teach patterns?  I noted down that a refresher about the object-oriented approach is useful before taking students through different categories of patterns, such as object creation patterns, enterprise patterns, data access patterns, and compound patterns (I was writing everything down pretty quickly at this point, so I might not have captured the nuances of everything that was said).
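
As a rough illustration of what an ‘object creation pattern’ looks like in practice (a minimal Python sketch of my own, not an example from Carl’s module), here is a simple factory function that hides which concrete class gets instantiated:

```python
from abc import ABC, abstractmethod
import json


class Exporter(ABC):
    """Common interface that all exporters share."""

    @abstractmethod
    def export(self, data: dict) -> str: ...


class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)


class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{key}={value}" for key, value in data.items())


def make_exporter(kind: str) -> Exporter:
    # Callers ask for an exporter by name and never mention the concrete
    # classes, so adding a new format is a change in one place only.
    exporters = {"json": JsonExporter, "csv": CsvExporter}
    return exporters[kind]()


print(make_exporter("json").export({"module": "design patterns"}))
```

The value of the pattern is arguably less in the code itself than in the vocabulary it gives a team: ‘use a factory here’ communicates a design decision in three words.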

Carl also told us about a site called the Design Patterns Library, which I don’t think I’ve seen before.  One book that was referenced was Head First Design Patterns, published by O’Reilly.  There seems to be a claim going around that these ‘head first’ books are based on ‘neuroscience’ (but I’ve yet to find out exactly what this means: claims like that immediately make me sceptical!)  Either way, anything that helps to make important technical concepts understandable is a good thing.

Final thoughts

I don’t know how many Psychology of Programming Work In Progress events I’ve been to, but it’s been quite a few.  This might have been my fourth or fifth.  I have enjoyed every single one, and I enjoyed this one at Middlesex University too.  It was well organised, friendly and thought provoking.  The talks were really interesting, covering distance learning, errors, notation, robots, the challenge of teaching object-oriented programming, and a whole load of other subjects too.  The great thing about these events is that you never know what you’re going to get, which means that you never really know what you’re going to learn (and this is, invariably, a very good thing).

From my perspective, the event helped to strengthen an opinion I have, which is that we need to figure out how to help students (and practising programmers) to best understand and work with software frameworks.  This issue is not only a computing education issue, but also, significantly, a psychology of programming issue.  The first subject that I studied when I discovered this subfield of computing (or of psychology, depending on your ‘home’ discipline) was the topic of program (or software) comprehension.  It’s clear from this short workshop that this continues to be (for me) an important topic.

Christopher Douce

Psychology of Programming Interest Group: work in progress meeting: Day 1

Edited by Christopher Douce, Monday, 10 July 2023, 16:42

On 8 January 2015 I went to a mini-workshop: the Psychology of Programming Interest Group (PPIG) Work in Progress meeting.  I’ve had an affiliation with PPIG for what must be at least fifteen years and I try to visit their meetings whenever I can.  In some sense, returning to the PPIG meetings is like returning to a comfortable academic home: you regain enthusiasm for the research interests that you once held (and have an opportunity to say hello to some familiar faces too).

This 2015 WIP event (as it is colloquially known) was held at Middlesex University in Hendon Town Hall and skilfully organised by Richard Bornat, who is a PPIG community regular. This short series of two blog posts aims to summarise my own highlights.

Keynote talk: Thomas Green

The opening keynote was by Thomas Green, who is one of the founders of the group.  Thomas told us what the group was about and emphasised the point that the group doesn’t just discuss research into programming, but also the activities that surround programming (and psychology too).  Subjects for investigation have involved pair programming and explorations into the sociological and anthropological.  Other research subjects have included computer science education and pedagogy, and studies into the relationship between personality and programming.

Thomas is known for creating (or discovering) the cognitive dimensions of notations framework (Wikipedia).  His framework can be used to help us think about programming language design and user interfaces.  Thomas described it as ‘ambitious in scope, [and a framework that] addresses any kind of information artefact’. Simply put, Cognitive Dimensions is a set of principles that helps us to think about stuff.

To explain, Thomas gave us a couple of examples. One of the dimensions is viscosity (which is my personal favourite!)  Viscosity is, of course, an attribute of liquids: the higher the viscosity, the harder it is to push your hand through a liquid (for example) – I think I’ve got that the right way round!  When it comes to the dimensions, this can be understood in terms of ‘changes’ to something (or moving a system from one state to another).  To make one change (to an information artefact), you might have to make a whole bunch of smaller changes before you get to your desired outcome.
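
A toy illustration of viscosity in code (a Python sketch of my own, not an example from the keynote): in the first representation below, a single conceptual change, such as adding a ‘year’ field to the records, means editing every line that unpacks one of the tuples; in the second, the same change is largely one local edit, so the notation is less viscous.

```python
from dataclasses import dataclass

# Higher viscosity: records are bare tuples, so adding a 'year' field means
# revisiting every line that unpacks one of them.
books = [("Human Error", "Reason"), ("Design Patterns", "Gamma et al.")]
for title, author in books:
    print(f"{title} by {author}")

# Lower viscosity: the record structure is declared in one place, so adding
# a field is one change to the class, and code that ignores it still works.
@dataclass
class Book:
    title: str
    author: str

library = [Book("Human Error", "Reason"), Book("Design Patterns", "Gamma et al.")]
for book in library:
    print(f"{book.title} by {book.author}")
```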

Another dimension is: hidden dependencies. An example of this is the links or connections that might exist between the different cells of a spreadsheet. You can’t immediately see what the connections between cells are, but you need to understand them if you’re going to understand and work with a spreadsheet.

This is all very well and good, but how does this relate to programming that we find in the real world?  Thomas gave us a number of examples of computer code used with a content management system (CMS).  Why study content management systems?  Thomas had some good answers: they are widely used, they are often pretty difficult to get your head around (which is certainly true!), and they haven’t been studied very much.

If you’re interested in content management systems, this Wikipedia page presents an amazing list of different content management systems (Wikipedia).  On the same subject of geek lists, this is another favourite of mine: comparison of web application frameworks (Wikipedia).  You can spend hours looking through these different pages.  These two summary links clearly show how big the ‘CMS space’ is.  (As an aside, if you have too much time on your hands, there’s also a List of Cakes and a List of Lists.)

An interesting point that Thomas made (which is one that resonates with my own experience) is that they all claim they are ‘easy to use’.  Two examples that were spoken about were Wordpress (Wikipedia) and Drupal (Wikipedia).  For the purposes of Thomas’s presentation, we looked at Perch (CMS website), a CMS that I had never heard of before.  The point was clear: Thomas’s framework can be applied to study CMSs and web frameworks.

After being mildly baffled by screens filled with code that used lots of angle brackets, there was a brief question and answer session.  I think I made a comment that I chose a CMS based on the quality of the on-line tuition videos.  My decisions were not based on language efficiency, but on how easily I could see how to create something similar to what I wanted to do.  (I’ve often wondered about whether we could look to the murky world of media studies to learn about why some tools become more popular than others: there’s a whole other dimension of CMS systems that could be explored.)  There was also a brief discussion about design patterns, since many CMSs make use of the model-view-controller pattern.

It was pretty thought provoking stuff.  When it comes to content management systems, I can’t help but think there’s an opportunity to use them as a vehicle to conduct research into the creation, development and sustainability of software communities.

Active error: examining error detection and recovery in software development

Tamara Lopez gave the first talk of the day, and within minutes of starting her talk, I had started to remember some research I looked at over fifteen years ago.  Tamara’s research was all about human error and programming.  As Tamara was speaking, I thought to myself, ‘I wonder whether she has heard of a researcher called James Reason.’  The answer came within minutes: of course she had.

Reason wrote a book called Human Error and carried out research into active errors (mistakes that happen in ‘real time’) and latent errors (which remain undetected within a system or product for a considerable time).  Have you ever bought a chocolate bar, unwrapped it from its wrapper and then thrown the chocolate bar away?  Have you ever walked into a room and immediately thought, ‘why am I here?’  I recently put my keys in my fridge for no apparent reason.  These, I guess, are examples of active errors.
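
To put this into programming terms (a toy Python example of mine, not something from Tamara’s data), a latent error is the kind of defect that sits quietly in a codebase until an unusual input finally exposes it:

```python
def average(marks):
    # Latent error: this works for every non-empty list and could sit
    # undetected for months, until the first empty list arrives and the
    # division by zero finally surfaces.
    return sum(marks) / len(marks)

print(average([65, 72, 58]))        # works fine
try:
    print(average([]))              # the latent error becomes active
except ZeroDivisionError as error:
    print("latent error exposed:", error)
```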

The aim of Tamara’s research was to perform a naturalistic observation of error in programming, and gather reports of error occurrence.  Understanding the characteristics of error can, of course, allow us to understand more about it and why error arises.

Tamara used a great method: she studied pair programming videos that had been published on the internet through a website called Pairwith.us (website).  The developers were working on a project to adapt some kind of testing tool.  Tamara analysed the errors in terms of incidents and themes.  Some keywords that I picked up from Tamara’s talk were temporal, material and social.  A good talk and interesting research.

Visual Analytics as End-User Programming

The second research talk was by Advait Sarkar who had travelled from the University of Cambridge.  Advait gave a demonstration of some software that he had put together.  The focus of his prototype appeared to be data analytics: specifically, how machine learning might be connected to a spreadsheet environment.

Following this session (and Advait’s demonstration) there was quite a bit of discussion about different machine learning approaches, such as decision trees and neural networks; subjects that I hadn’t really touched on or explored in any great depth since I was an undergraduate.  Advait’s presentation wasn’t really in my area of expertise, but it’s good to be exposed to different areas.

SQ and EQ and programming, revisited

The next talk was by Melanie Coles from Bournemouth University.  I remember Melanie from other PPIG events, so it was great to see her again.  It was interesting to hear that her talk related to some earlier research that she presented at PPIG back in 2007 (if I’ve understood this correctly).

The title of her talk was ‘SQ and EQ and programming’.  So, what exactly do SQ and EQ mean?  I understand them as rough and broad measures of personality.  EQ is an abbreviation for Empathy Quotient. Simply put, EQ is a measure of someone’s drive to identify with other people’s feelings and emotions. SQ, on the other hand, means Systemising Quotient. It is the extent to which people have a drive to understand the rules governing things.

I’ve made a very rough note that Melanie related both measures to work carried out by Simon Baron-Cohen, who works in the area of autism research.  I’ve made another note in my notepad that there are tensions between these traits, along with the sentence, ‘scientists score higher in AQ [the Autism-Spectrum Quotient] than non-scientists’.

Some studies seem to suggest a correlation between programming and these traits.  Other studies, on the other hand, don’t show anything.  The message coming through is that you don’t have to be high on the AQ scale to become a programmer.

I don’t know what this means, but I’ve made another note that reads ‘polite grumpiness about stereotypes’, which might have been scribbled down during the question and answer session.  I have no idea who was expressing polite grumpiness, or which stereotypes were being discussed.  I do, however, feel that this expression should still stand and has some validity.  A sensible rule is that if you’re going to take issue with stereotypes, you’ll go a lot further if you politely disagree rather than go around shouting about them.  I should also add that I have no idea how this paragraph relates to Melanie’s very good (and very clear) presentation.  All this said, some really interesting ideas and (some exceedingly polite) discussions.

A great end to the first day!

Christopher Douce

Psychology of Programming Interest Group 2012 workshop: London Metropolitan University

Edited by Christopher Douce, Wednesday, 14 Oct 2020, 11:41

The 24th Psychology of Programming Interest Group workshop was held at London Metropolitan University between 21st and 23rd November 2012.  I wasn't able to attend the first day of the workshop due to another commitment, but was able to attend the second and third days (which is a shame, since I've heard from the other delegates that the first day was pretty good and yielded a number of very thought-provoking presentations and discussions).  This blog post is a summary of the days I managed to attend.  I'm sharing this post with the hope that this summary might be useful to someone.

Day 2: Expertise, learning to program, tools and doctoral consortium

Expertise

The first presentation of the day was entitled, 'Thrashing, tolerating and compromising in software development' by Tamara Lopez from the Open University.  I understand thrashing to be the application of problem solving strategies in an ineffective and unsystematic way; tolerating to be working with temporary solutions with the intention of moving a solution along to another state; and compromising to be solving a problem without being entirely happy with the solution.  An interesting note that I made during Tamara's presentation relates to the use of feelings.  I have also experienced 'thrashing' in the moments before I recover sufficient metacognitive awareness to understand that a cup of tea and a walk is necessary to regain perspective.

The second presentation of the day was by Rebecca Yates, from LERO based at the University of Limerick.  Rebecca's talk was entitled, 'conducting field studies in software engineering: an experience report' and her focus was all about program comprehension, i.e. what happens when programmers start a new job and start to learn an unfamiliar code base.  I made a special note of her points about the importance of going out into industry and the importance of addressing ethical issues. 

One of the 'take away' points that I got from Rebecca's talk was that getting access to people in industry can be pretty tough - the practical issues of carrying out programming research, such as time, restrictions about access to intellectual property and the importance of persuasion (or making the aim of research clear to those who are going to play a part in it) can all be particularly challenging.

Learning to program

Louis Major, from the University of Keele, started the second session with a paper entitled, 'teaching novices programming using a robot simulator: case study protocol'.  Louis told us about his systematic literature review before introducing us to his robot simulator which could be used to create programs to do simple tasks such as line following and line counting.  Louis also spoke about his research method, a case study approach which applied multiple methods such as tests and interviews.

Louis also spoke about the value of robots: they are considered to be appealing, enjoyable and exciting, and robotics (as a whole subject) has a strong connection with the STEM disciplines (science, technology, engineering and mathematics).  The advantage of using simulations is that there are fewer limitations in terms of space, cost and technical barriers.

A couple of months after the workshop I was reminded about the relevance of Louis's research after having been tangentially involved in an introductory Open University module, TM129 Technologies in Practice, which also makes use of a robot simulator.  Students are also given the challenge of solving simple problems, including the challenge of creating line following robots. 

The second talk in this part of the workshop was by PPIG regular Richard Bornat.  Richard's talk, entitled 'observing mental models in novice programmers', built on earlier work that was presented at PPIG, where Richard and his colleague Saeed had designed a test that, it was claimed, could (potentially) predict whether students were able to grasp some of the principles of programming.

An interesting observation was that when it comes to computer programming the results sometimes have a bi-modal distribution.  What this means is that if students pass, they are likely to pass very well.  On the other hand, there is also a peak in numbers when it comes to students who struggle.  During (and after) his talk, he suggested that some students find some of the concepts that are connected to programming (such as the assignment operator) fundamentally difficult.
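
The classic illustration of this difficulty (my own example, not one from Richard's talk) is that assignment reads like an equation to many novices, when it is really an instruction to overwrite a value:

```python
# Read as algebra, 'count = count + 1' looks impossible: nothing equals
# itself plus one. Read as an instruction, it means 'take the current value
# of count, add one, and store the result back in count'.
count = 10
count = count + 1
print(count)  # 11

# The same mental model trips up the naive variable swap: the first
# assignment overwrites a, so its original value is lost.
a, b = 1, 2
a = b   # a is now 2
b = a   # b stays 2
print(a, b)  # 2 2, not the intended 2 1
```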

Paul Orlov, who joined us all the way from St. Petersburg, spoke about 'investigating the role of programmers' peripheral vision: a gaze-contingent tool and experimental proposal'.  Paul's talk connected with earlier research where experimental tools, such as 'restricted focus viewers', were used in conjunction with program comprehension experiments. Paul's talk inspired a lot of debate and questions.  I remember one discussion about the distinction between attention and seeing (and that we can easily learn not to attend to information should we choose not to).

Ben Du Boulay, formerly from the University of Sussex, was our discussant.  Ben mentioned that when it comes to interdisciplinary research, conducting systematic literature reviews can be particularly difficult due to the number of different publication databases that researchers have to consider.  Connecting with Richard's paper, Ben asked what fundamental misunderstandings might emerge when it comes to computer programming.  Regarding Paul's paper, which connects to the theme of perception and attention, Ben made the point that we can learn how to ignore things and that attention can be focussed depending on the task that we have to complete.  Ben also commented on earlier discussions, such as the drive to change the current computing curriculum in schools.

One thing that learning programming can do for us is help to teach us problem solving skills.  There is a school of thought that learning programming can be viewed as how Latin was once viewed; that learning to program is inherently good for you. Related points include the importance of task and the relationship to motivation.

Tools

Fraser McKay from the University of Kent presented 'evaluation of subject-specific heuristics for initial learning environments: a pilot study'.  In human-computer interaction (or interaction design), heuristics are a set of rules of thumb that help you to think about the usability of a system.  General heuristics, such as those by Nielsen, are very popular (as well as being powerful), but there is the argument that they may not be best suited to uncovering problems in all situations.

Fraser focused on two environments that were considered helpful in the teaching of programming: Scratch (MIT) and Greenfoot.  Although this was very much a 'work in progress' paper, it is interesting to learn about the extent to which different sets of heuristics might be used together, and the way in which a new set of heuristics might be evaluated.

Mark Vinkovits presented the work of his co-authors, Christian Prause and Jan Nonnen, which was entitled, 'a field experiment on gamification of code quality in Agile development'.  Initially I found the term 'gamification' quite puzzling, but I quickly understood it in terms of, 'how to make software development into a game, where the output can be appreciated and recognised by others'.

The idea was to connect code development with the use of quality metrics to obtain a score that indicates how well developers are doing.  This final presentation gave way to a lot of debate about whether developers might be inclined to develop software code in such a way as to create high rankings.  (There is also the question of whether different domains of application will yield different quality scores.)  I really like the concept.  Gamification exposes different dimensions of software development, which have the potential to be connected to motivation.  It strikes me that the challenge lies with understanding how one might affect the other whilst at the same time facilitating effective software development practice.

Doctoral consortium presentations

Before the start of the workshop on Wednesday, a doctoral consortium session was held where students could share ideas with each other and discuss their work with more experienced (or seasoned) researchers.  This session was all about allowing students to share their key research questions with a wider audience.

Presentation slots were taken by Louis Major, Fraser McKay, Michael Berry, Alistair Stead, Cosmas Fonche and Rebecca Yates (my apologies if I've missed anyone!)  Other research students who were a part of the doctoral consortium included Teresa Busjahn, Melanie Coles, Gail Ollis, Mark Vinkovits, Kshitij Sharma, Tamara Lopez, Khurram Majeed and Edgar Cambranes.

Day 3: Tools and their evaluation and keynotes

Tools and their evaluation

The first presentation of the final day was by Thibault Raffaillac, who presented his research, 'exploring the design of compiler feedback'.  I enjoyed this presentation, since the feedback that software tools offer developers is fundamental to enabling them to do the job that they need to do.  A couple of questions that I've noted from Thibault's presentation are 'who is the user?' (of the feedback), and what is their expertise?  Another note is that compilers (and other language tools) tend always to give negative points and information.  It strikes me that languages offer an opportunity for programmers to interrogate a code-base.  Much food for thought!

Luis Marques Afonso gave the next talk, entitled 'evaluating application programming interfaces as communication artefacts'.  Understanding API usability has a relatively long history within the PPIG community.  The interesting aspect of Luis's work is that three different evaluation techniques were proposed: the semiotic inspection method (which I had never heard of before), cognitive dimensions of notations (Wikipedia) and discourse analysis (Wikipedia).  It was interesting to hear of these different methods - the advantage of using multiple approaches is that each method can expose different issues.

The final paper presentation, entitled 'sketching by programming in the choreographic language agent' was given by Luke Church, University of Cambridge.  Luke described working amongst a group of choreographers.  It was interesting to hear that the tool (or language) that had been created wasn't all about representing choreography, but instead potentially enabling choreographers to become inspired by the representations that were generated by the tool.  Luke's presentation created a lot of interest and debate.   

Keynote: extreme notation design

A computer programming language is a form of notation.  A notation is a system that can be used to represent ideas or actions and can be understood by people (such as music) or machines (as in computer programming), or both.  Thomas Green proposed a set of 'dimensions' or characteristics of notation systems which relate to how people can work with them.  These dimensions can be traded-off against each other depending upon the nature of the particular problem that is to be solved.

One challenge is: how can we understand the characteristics of these trade-offs?  Alan Blackwell gave a keynote talk about a programming language that was controversially described as being a hybrid of Photoshop and Excel.

Palimpsest used the idea of different layers, which could contain different elements that could interact with each other (if I understand things correctly).  Methodologically speaking, the idea of creating a tool or a language that aims to explore the extremes of language design is an interesting and potentially very powerful one.  My understanding is that it allows the language designer to gain a wealth of experience, but also provides researchers with an example.  Perhaps there is an opportunity for someone to write a paper that compares and collates the different 'extremities' of language design.

Panel: coding and music

The final session of the workshop was all about programming, music and performance.  We were introduced to a phenomenon called 'live coding', where programmers 'perform' music by writing software in front of a live audience. The three presentations in this final part of the day were all slightly different but all very connected.

Alex McLean

Alex McLean from the University of Leeds presented two demonstrations and talked about the challenges of live coding.  These include the fact that manipulating and working with music through code is an indirect manipulation.  Syntactic glitches can interrupt the flow of performance, and being wrapped up within the code has the potential to detract from the music.

Live coders can also improvise with musicians who play 'non-programming language' (or 'real') instruments.  Since the notion of 'live' can have different meanings (and can depend on the abstractions that are contained within a language), challenges include the negotiation of time and harmony.  Delays can exist between having a musical idea and realising it.

Alex mentioned Scheme Bricks, which was inspired by Scratch (and Sense) and allows you to drag and drop portions of code together.  This also made me realise that if there are two live coders performing at the same time, they might use entirely different 'instruments' (or notation systems) to each other.

Thor Magnusson

Thor Magnusson from the University of Brighton introduced us to a language called ixi that has been derived from SuperCollider (Wikipedia).  Thor set out to make a language that could be understood by an audience.  To demonstrate this, Thor quickly coded a changing set of drum and sound loops in a text editor, using a notation that has some clear and direct connections to music notation.  Thor spoke of polyrhythms and of code to change amplitude, to create harmonics and sound that is musically interesting.

What I really liked was the metaphor of creating agents which 'play' fragments of code (or music).  Distortions can be applied to patterns, and patterns can be nested within other patterns.  Thor also presented a compelling description of the situations in which the language is used: 'programming in a nightclub, late at night, maybe you've had a few beers; you're performing - you've got to make sure the comma is in the right place'.  For those who are interested, you can also see a video recording of Thor giving a live coding performance (YouTube).  In my notebook I have written something that Thor must have said: 'I see code as performance; live coding is a link between performance and improvisation'.

Sam Aaron

When Sam began his short talk, I couldn't believe my eyes - he was using a text editor called Emacs! (Wikipedia).  The last time I used Emacs was when I was a postgraduate student, when it persistently confused me.  Emacs, however, uses a language called Lisp, which is particularly useful for live coding, since it is a declarative language.

During his talk Sam gave a brief introduction to Overtone.  You can see a video of a similar introduction to Overtone on Vimeo.  One thing that did strike me was the way in which aspects of music theory could be elegantly represented within code.
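
To give a flavour of what I mean (a Python sketch of my own; it has nothing to do with Overtone's actual API, which is built on Clojure), scales and chords reduce rather neatly to arithmetic over semitone offsets:

```python
# MIDI note 60 is middle C; a major scale is a fixed pattern of semitone steps.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]

def major_scale(root_midi_note):
    """Return the MIDI note numbers of a major scale starting at the root."""
    return [root_midi_note + step for step in MAJOR_SCALE_STEPS]

def triad(scale, degree):
    """Build a triad on a scale degree (0-based) by stacking thirds."""
    doubled = scale + [note + 12 for note in scale]  # extend one octave up
    return [doubled[degree], doubled[degree + 2], doubled[degree + 4]]

c_major = major_scale(60)
print(c_major)            # [60, 62, 64, 65, 67, 69, 71]
print(triad(c_major, 0))  # C major chord: [60, 64, 67]
print(triad(c_major, 5))  # A minor chord: [69, 72, 76]
```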

Discussion

This final part of the workshop gave way to quite a lot of energetic debate.  There appeared to be a difference between those who were thinking, 'why on earth would you want to do this stuff?' and, 'I think this stuff is really cool!'  When it comes to live coding there is the question of who is the user of the language - is it the performer, or is it the listener, or viewer (especially if a live coding notation is intended to be understandable by a non-musician-coder)?

But what of the motivations of the people who do all this cool stuff?  When it comes to performance there is the attraction of 'being in the moment', of using technology in an interesting and exciting way to create something transitory that listeners might like to dance to.  It certainly strikes me that to do it well requires skill, time, persistence and musicality; all the qualities that 'traditional' musicians need.  Live coders also face the fundamental challenge of keeping things going when things begin to sound a bit odd, of creating new and creative code structures on the fly, and of moving from one semi-improvised section (shaped by means of programming and musical abstraction) to another.

Other than the performance dimension, there is the intellectual attraction of changing and challenging people's perceptions of how software and programming languages are thought of.   Another dimension is the way that technology can give rise to a community of people who enjoy using different tools to create different styles of music.  All of the tools that were mentioned within the final part of the day are free and open source.  Free code, it can be said, can lead to free musical expression.

Reflections

Like other PPIG workshops, this workshop had a great mix of formal presentations and more informal doctoral sessions, mixed with many opportunities for discussion.  I think this was the first time that the workshop was held at London Metropolitan University.  Yanguo Jing, our local conference chair, did a fabulous job of ensuring that everything ran as smoothly as possible.  Yanguo also did a great job of editing the proceedings.  All in all, a very successful event, and one that was expertly and skilfully organised.

There are two 'take home' points that have stuck in my mind.  The first is that programming languages need not only be about programming machines; through their structures, code can also be used as a way to gain inspiration for other endeavours, particularly artistic ones.

The second point is that programming can be a performance, and one that can be fun too.  The music session will certainly stick in my mind for quite some time to come.  Programming performances are not just about music - they can be about education and creation; code can be used to present and share stories.

Christopher Douce

PPIG 2011

Edited by Christopher Douce, Friday, 1 Dec 2017, 14:21

I recently had the pleasure of attending the PPIG 2011 workshop between 6 and 8 September.  As I might have mentioned in an earlier blog post, PPIG is an abbreviation for the Psychology of Programming Interest Group.  There used to be an American equivalent called ESP (Empirical Studies of Programmers), but this community seems to have disappeared.  PPIG, on the other hand, is going strong.

This year it was held in the University of York computer science department.  The department had moved since I was last there, forcing me to circumnavigate the campus and arrive at a time when almost all the tea and sandwiches had disappeared.  Thomas Green gave an opening address, and then it was swiftly on to the first presentation.

Mathematics and Visual Impairments

Alistair Edwards gave a talk entitled 'new approaches for mathematics in blind students'.  Mathematics, of course, relies on visual notations.  These notations, it was argued, are an integral part of working with maths.

Alistair holds the view (or, should I say, I understand that he holds the view) that there is more to it than just using an appropriate notation, or having that notation converted into another form.  We externalise parts of our working memory by using pen and paper.  Also, the idea of cancelling (or balancing) an equation by crossing off similar terms from both sides can be viewed as a visual manipulation.

Alistair mentioned a couple of projects, one of which was called Lambda, an abbreviation for Linear Access to Mathematics for Braille Device and Audio-synthesis.  Here, the challenges began to become clear: I didn't know this, but different countries have different braille notations for mathematics.  There is, of course, the issue that placing an interface between the user and a notation immediately puts cognitive barriers in the way, which has the potential to make things more difficult to understand.  Users, it was argued, need more direct forms of interaction when working with mathematics.

All in all, a very thought provoking introduction, and it made me wonder how Green's cognitive dimensions of notations framework might be used to analyse interactions with notational systems (such as mathematics or programming languages) by users who may have different modality preferences (i.e. auditory over visual).

New Tech

The first main session of the workshop was called New Tech.  I'll do a very quick run through of the papers from each session.  Jon Rimmer presented the shyness project, and introduced the beguiling ambient computing device called the 'subtle stone', whereby class participants can communicate their emotional state to the lecturer by a click of a juggling ball.  Jon deftly directed us to a series of interesting papers about shyness.

The following paper was closely connected.  It was entitled 'self-reporting emotional experiences in a computing lab', by Judith Good (et al).  We programmers go through a whole spectrum of different emotions during a programming session, from delight through to wishing to jump on our laptop (although I don't recommend this).  This is an interesting direction: emotion connects to motivation, and traditionally the PPIG community has focussed more upon the cognitive.

Chris Roast, from Sheffield Hallam University, took us on a tour of a tool that helped to create different internationalised film posters; software localisation through abstraction.

One of the papers that I most enjoyed was by Chris Martin from the University of Dundee, who took dancing robots on a road trip to a number of different schools to further explore whether a robot dance workshop can inspire interest in a technical subject such as computer programming.  Or, does using robots 'sugar coat' a difficult subject, or is there more to it than this?  By robots, imagine small buggies.

In fact, Chris's robots use the same Arduino microcontroller that is central to the TU100 Senseboard. As for dance, it is an activity that has a low threshold to success, is creative, transcends culture, and is one where performance can be valued over competition (I made notes during this bit of Chris's talk).

The two other papers in this section described how to make music with a dry marker pen, a whiteboard and a mobile phone by defining your own musical notation (which was pretty cool), and considered the challenges that users of mobile spreadsheets face.

Human and algorithmic complexity

The next presentation was a 'work in progress' paper by yours truly.  I talked about a project (that has been making very slow progress, due to a myriad of different reasons) to explore whether it is possible to link measurements of program complexity to physiological measurements that are known to indicate human cognitive load. 

The workshop participants gave me a whole number of things to think about, including the names of a number of researchers, as well as a clear message of: 'stop procrastinating, just get on with it'.  I'm certainly going to take that last piece of advice.

The following presentation was about the challenge of working with test data.  A subject that can easily cause terror to many a software developer.

Language Formalities

Wednesday yielded a change of plan.  Thomas Green ran a session entitled 'how to design a notation'.  A programming language is, of course, a form of notation.  Thomas posed interesting questions, like 'how might we invent a different type of musical notation?', which led on to other questions such as its level of abstraction (what each element of a notation can represent), what you might wish to do with a notation, its overall purpose, and how you might offload to external representations.

A theme underpinning all this debate was one that is familiar to many human-computer interaction and interaction design researchers: the idea of trading one thing off against another.

Giora Alexandron then presented his paper entitled 'programming with the user in mind', which led us towards considering something called live sequence charts, which are an extension of UML sequence diagrams.  We were introduced to the Play Engine and the PlayGo IDE.

This was followed by a presentation by Ahmad Taherkhani, who spoke about 'automatic algorithm recognition based on programming schemas'.  I really liked this paper, since it was a new angle on some very early psychology of programming themes.  I particularly liked how Ahmad was attempting to use existing theories to engineer a solution that may have practical use (in terms of providing tools to help educators to understand the programming code that students write); the act of programming a solution also has the potential to allow us to learn more about the theories that are being applied.

The final paper in this session was a work in progress paper by Khuong A Nguyen, who used a human-computer interaction technique called the cognitive walkthrough to learn more about the NXT-G visual programming language that can be used with the Lego Mindstorms hardware.

HCI for the future

Following a tour of a HCI and accessibility lab (which resembles a small apartment), a short panel discussion took place, with Nicholas Merriam and Luke Church taking centre stage.  Nicholas's perspective was especially welcome, since he spoke about the challenge of working with low level embedded software and the role that timing visualisation tools may play when attempting to solve programming problems that seem particularly intractable.  This linked back well to the earlier discussions of notation.

Learners and language design

This session contained two papers.  The first, by Ahmed Alardawi from Sheffield Hallam, aimed to explore the effect that object-oriented programming language class structure has on the comprehension of computer programs.  What was great about this research was that it clearly made use of existing work, and partially replicated earlier studies, thus adding to the evidence.  Babak Khazaei, also from Sheffield Hallam, presented an empirical study on the influence of OCL, an abbreviation for the Object Constraint Language (Wikipedia).

Motivation and affect

This part of the workshop contained a single presentation, by Rein Sach, from the Department of Computing at the Open University. 

Rein's paper was entitled, 'what makes software engineers go that extra mile?'  Rein asked software engineers what it was about their work that they enjoyed.  This gave way to an interesting discussion about the perception of the nature of programming work.  Even though developers might have to battle with misbehaving operating systems and maintain servers from time to time, perhaps these activities need to be considered as work rather than nuisances that get in the way of the real task, which is doing the intrinsically rewarding and creative work of creating code.

Invited paper

Thursday kicked off with a presentation by Gerrit van der Veer, from the Open University in the Netherlands and the Free University, Amsterdam.  Gerrit's presentation was about different aspects of design, and how technology might be used to gather debate surrounding the artefacts that are created during the design process, through a system called CAM (Co-operative Artefact Memory).  A barcode sticker could be attached to different artefacts, such as a sketch or a physical prototype.  Design groups could then use these stickers, with mobile phones, to access a shared Twitter stream that relates to each object, allowing views and ideas to be shared.

Two thoughts came to mind during Gerrit's presentation.  Firstly, I wondered about the extent of the similarities between design practice (in different disciplines) and what occurs within software development practices that use agile methods, such as eXtreme Programming.  Secondly, CAM reminded me of a tool called OpenDesignStudio, used for an Open University design course called U101, Design Thinking.

Another part of Gerrit's presentation was all about service design (i.e. design which yields a product that is not a tangible item).  Gerrit pointed us to a number of resources, such as the Service Design Tools site, and mentioned the importance of culture, by referring to the work of Hofstede (which is studied in the M364 Interaction Design Open University course). 

Cognitive considerations

The final session of the workshop contained two presentations that related to the cognitive dimensions of notations framework: papers by Anna Bobkowska, and by Maria Kutar and John Muirhead.  Anna introduced a visual language for the processing of multimedia streams.  Maria and John's presentation explored how a cognitive dimensions questionnaire might be used by non-experts.

Miguel Monteiro went on to speak about the cognitive foundations of modularity.  Miguel referred to a programming paradigm called Aspect-oriented programming (wikipedia), a subject that I have heard of many times, but one that I have not explored in a great deal of depth.  Learning more about AOP is certainly something to do at some point.

Qualitative and Quantitative

The final presentation of the workshop was by Gordon Fletcher from the University of Salford.  Gordon's presentation was entitled 'methods and practicalities of qualitative research', but it was so much more than this.  Gordon spoke about data collection in different communities, and mentioned the concept of biographical research (which made me wonder if anyone has thought about applying this technique, perhaps with regard to exploring motivation or software-related careers).

I came away with a number of messages: it can be relatively easy to gather qualitative data, but figuring out what to do with it is a whole other issue.  Also, quantitative and qualitative research can both be systematic and rigorous; these different approaches to research have a lot in common.  An interesting quote was, 'the method has to fit the researcher even more than it has to fit the research'.

Gordon's presentation gave way to a memorable debate on the use of terms.  Undoubtedly the use of language will remain a perpetual challenge when carrying out multidisciplinary research.

Themes

A number of diverse themes were evident within the PPIG '11 workshop, representing its broad membership.  There was a strong theme of computing education and pedagogy.  Programming and educational motivation was also apparent, mixed in with the use and design of visual programming languages.  This connected to the important theme of cognitive dimensions, notational systems and notational design. 

Two interesting inclusions were links to the broader subjects of design and accessibility.  Human-computer interaction and interaction design remain important themes too.  A final theme (perhaps one that isn't as strong as in previous years) is the application of ethnographic methods to further understand the activity of programming.  It was great to hear from a broad spread of presenters who are exploring many different research areas.

Christopher Douce

Psychology of Programming

Edited by Christopher Douce, Friday, 10 Aug 2018, 14:39

Ever since July 2001 I have edited (off and on) the Psychology of Programming Interest Group newsletter.  The group, known as PPIG, has been in existence since 1987.  Its purpose is to provide an interdisciplinary academic forum for researchers who are interested in the intersection between computer programming and psychology.

PPIG can be described as a pretty broad church.  On one hand there are those who aim to explore program comprehension and the relationship between notation systems and programming languages, whereas other researchers have been performing ethnographic studies and considering the different types of research methods that could be used.

Some of the questions that the PPIG community have been exploring resonated strongly with my doctoral research which was all about understanding how computer programmers go about maintaining computer software. 

I will probably always remember the moment when I started to be interested in the link between computer programming and psychology, particularly cognitive psychology.  I studied computer science as an undergraduate.  Our lecturers asked us to do a time-limited summative programming assignment.  What I mean by this is that my cohort and I were all corralled into a tired computer lab, given a sheet of program requirements and a Pascal compiler, and told to get on with it (and, no, we couldn't talk to each other).

When we had finished our programs, we had to print them out using a dot matrix printer (which was, of course, situated within its own sound proof room), and give the fruits of our labour to our instructor who exuded a unique mixture of boredom and mild bewilderment at the stress that he had caused.

What struck me was that some students had finished and had left the laboratory to go to the union bar within twenty minutes, whereas others were pulling out their hair four hours later and still didn't have a working program.  This made me ask the questions, 'why was there such a difference between the different programmers?', and 'what exactly do we do when we write computer software?'

I seem to remember that this was in our first year of our degree.  Our computing lecturers had another challenge in store for those of us who made it to our second year: a software maintenance project.

The software maintenance project comprised one third role play and two thirds utter confusion.  Our team of four students was presented with a printout of around forty thousand lines of wickedly obscure FORTRAN code, and given another challenging requirements brief.  We were then introduced to a fabulous little utility called GREP, and again told to get on with it.

This project made me ask further questions of, 'how on earth do we understand a lot of unfamiliar code quickly?', and 'what is the best way to make effective decisions?'  These and other questions stuck with me, and eventually I discovered PPIG.

So, during my week on study leave I compiled the latest edition of the PPIG Newsletter.  The next annual workshop is to take place at the University of York, and I hope to attend.  If I have the time, I'll try to write a short blog post about it and the themes that emerge.

Work-in-Progress Paper

When I was done with the newsletter I turned my attention to a research idea that I have been trying to work on for well over a year with an esteemed collaborator from Royal Holloway, University of London.

As well as studying a number of different programming languages during my undergraduate years I was also introduced to the all-encompassing subject of software engineering. In engineering there is a simple idea that if you can measure something, that something can be controlled. One of the difficulties of managing software projects is that software is intrinsically intangible: it isn't something you can physically touch or see. It's impossible to ascertain, at a glance, how your developers are getting along or whether they are experiencing difficulties. To get round the problem researchers have proposed software complexity metrics.

Having a complexity metric can be useful (to some extent).  If you apply a complexity metric to a program, the bigger the number, the more trouble a developer might be faced with (and the more money spent).  Researchers have proposed a number of different metrics which measure different aspects of a program.  One counts the number of linguistic parts of a program; another counts the number of unique paths of execution that a program might have.
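
To give a flavour of the second kind of metric (a rough Python sketch of my own; real path-counting and cyclomatic complexity tools are considerably more careful), one crude approximation is to count the branching constructs in a piece of source code, since each branch opens up another possible path of execution:

```python
import ast

# Constructs that introduce a branch (and therefore extra paths).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def rough_path_complexity(source):
    """Crudely approximate path-based complexity: start at 1 and add one
    for every branching construct found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

example = """
def classify(score):
    if score > 70:
        return 'distinction'
    elif score > 40:
        return 'pass'
    return 'fail'
"""
print(rough_path_complexity(example))  # 3: the base path plus two branches
```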

Another type of metric, called a spatial complexity metric, has sprung from an understanding that programmers use different types of short-term memory during program comprehension and development. The idea behind this metric, which was first published at a PPIG workshop back in 1999, was to propose a metric that is inspired by the psychology of the programmer.

The work in progress paper describes a number of experiments that aim to explore whether there might be a correlation between different software complexity metrics and empirical measurements of cognitive load taken from an eye-tracking program comprehension study. The research question is: are program complexity metrics psychologically valid?

Of course, writing up a research idea is a lot easier than carrying it out!  This said I do hope to share some of the research materials that may be used within the studies through this blog when they are available.
