Psychology of Programming Interest Group 2012 workshop: London Metropolitan University
Monday, 18 Feb 2013, 19:01
The 24th Psychology of Programming Interest Group workshop was held at London Metropolitan University between 21st and 23rd November 2012. I wasn't able to attend the first day of the workshop due to another commitment, but was able to attend the second and third days (which is a shame, since I've heard from the other delegates that the first day was pretty good and yielded a number of very thought-provoking presentations and discussions). This blog post is a summary of the days I managed to attend. I'm sharing it with the hope that the summary might be useful to someone.
Day 2: Expertise, learning to program, tools and doctoral consortium
Expertise
The first presentation of the day, by Tamara Lopez from the Open University, was entitled 'Thrashing, tolerating and compromising in software development'. I understand thrashing to be the application of problem-solving strategies in an ineffective and unsystematic way, tolerating to be working with temporary solutions with the intention of moving a solution along to another state, and compromising to be solving a problem without being entirely happy with the solution. An interesting note that I made during Tamara's presentation relates to the role of feelings. I have also experienced 'thrashing' in the moments before I recover sufficient metacognitive awareness to realise that a cup of tea and a walk are necessary to regain perspective.
The second presentation of the day was by Rebecca Yates, from LERO based at the University of Limerick. Rebecca's talk was entitled, 'conducting field studies in software engineering: an experience report' and her focus was all about program comprehension, i.e. what happens when programmers start a new job and start to learn an unfamiliar code base. I made a special note of her points about the importance of going out into industry and the importance of addressing ethical issues.
One of the 'take away' points that I got from Rebecca's talk was that getting access to people in industry can be pretty tough - the practical issues of carrying out programming research, such as time, restrictions about access to intellectual property and the importance of persuasion (or making the aim of research clear to those who are going to play a part in it) can all be particularly challenging.
Learning to program
Louis Major, from the University of Keele, started the second session with a paper entitled, 'teaching novices programming using a robot simulator: case study protocol'. Louis told us about his systematic literature review before introducing us to his robot simulator which could be used to create programs to do simple tasks such as line following and line counting. Louis also spoke about his research method, a case study approach which applied multiple methods such as tests and interviews.
Louis also spoke about the value of robots: they were considered to be appealing, enjoyable and exciting, and robotics (as a whole subject) has a strong connection with the STEM disciplines (science, technology, engineering and mathematics). The advantage of using simulations is that there are fewer limitations in terms of space, cost and technical barriers.
A couple of months after the workshop I was reminded about the relevance of Louis's research after having been tangentially involved in an introductory Open University module, TM129 Technologies in Practice, which also makes use of a robot simulator. Students are also given the challenge of solving simple problems, including the challenge of creating line following robots.
The second talk in this part of the workshop was by PPIG regular Richard Bornat. Richard's talk, entitled 'observing mental models in novice programmers', built on earlier work presented at PPIG in which Richard and his colleague Saeed had designed a test that, it was claimed, could (potentially) predict whether students would be able to grasp some of the principles of programming.
An interesting observation was that, when it comes to computer programming, the results sometimes have a bimodal distribution. What this means is that if students pass, they are likely to pass very well. On the other hand, there is also a peak in numbers when it comes to students who struggle. During (and after) his talk, Richard argued that some students find some of the concepts connected to programming (such as the assignment operator) fundamentally difficult.
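To give a flavour of the kind of item such a test might contain, here is a small sketch of my own (in Python; the actual test items use different wording and notation). The question is simply: what are the values of a and b after the third line runs?

    # My own illustrative example, in the spirit of a mental-models test item;
    # the real test items and notation differ.
    a = 10
    b = 20
    a = b         # commonly misread by struggling students as a swap or a comparison
    print(a, b)   # prints: 20 20

Students who hold a consistent (even if wrong) model of what assignment does tend to fare very differently from those whose answers shift from question to question.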
Paul Orlov, who joined us all the way from St. Petersburg, spoke about 'investigating the role of programmers' peripheral vision: a gaze-contingent tool and experimental proposal'. Paul's talk connected with earlier research in which experimental tools, such as 'restricted focus viewers', were used in conjunction with program comprehension experiments. Paul's talk inspired a lot of debate and questions. I remember one discussion about the distinction between attention and seeing (and that we can easily learn not to attend to information should we choose not to).
Ben Du Boulay, formerly of the University of Sussex, was our discussant. Ben mentioned that when it comes to interdisciplinary research, conducting systematic literature reviews can be particularly difficult due to the number of different publication databases that researchers have to consider. Connecting with Richard's paper, Ben asked what the fundamental misunderstandings might be that emerge when it comes to computer programming. Regarding Paul's paper, which connects to the theme of perception and attention, Ben made the point that we can learn how to ignore things and that attention can be focussed depending on the task that we have to complete. Ben also commented on earlier discussions, such as the drive to change the current computing curriculum in schools.
One thing that learning to program can do for us is help to teach us problem-solving skills. There is a school of thought that learning to program can be viewed in the way that Latin once was: as something that is inherently good for you. Related points include the importance of the task and its relationship to motivation.
Tools
Fraser McKay from the University of Kent presented 'evaluation of subject-specific heuristics for initial learning environments: a pilot study'. In human-computer interaction (or interaction design), heuristics are rules of thumb that help you to think about the usability of a system. General heuristics, such as those proposed by Nielsen, are very popular (as well as being powerful), but there is an argument that they may not be best suited to uncovering problems in all situations.
Fraser focused on two environments that were considered helpful in the teaching of programming: Scratch (MIT) and Greenfoot. Although this was very much a 'work in progress' paper, it is interesting to learn about the extent to which different sets of heuristics might be used together, and the way in which a new set of heuristics might be evaluated.
Mark Vinkovits presented work co-authored with Christian Prause and Jan Nonnen, entitled 'a field experiment on gamification of code quality in Agile development'. Initially I found the term 'gamification' quite puzzling, but I quickly understood it in terms of 'how to make software development into a game, where the output can be appreciated and recognised by others'.
The idea was to connect code development with the use of quality metrics, yielding a score that indicates how well developers are doing. This final presentation gave way to a lot of debate about whether developers might be inclined to write code in such a way as to create high rankings. (There is also the question of whether different domains of application would yield different quality scores.) I really like the concept. Gamification exposes different dimensions of software development that have the potential to be connected to motivation. It strikes me that the challenge lies in understanding how one might affect the other whilst at the same time facilitating effective software development practice.
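As a thought experiment (and not the scheme described in the paper), a score of this kind might be computed from a handful of per-commit metrics; the metric names and weights below are entirely my own invention.

    # Purely illustrative sketch: combine a few code-quality metrics into a
    # single score that could feed a leaderboard. The metrics and weights are
    # invented for illustration, not taken from the paper.
    def commit_score(test_coverage, style_violations, cyclomatic_complexity):
        score = 100.0 * test_coverage                        # reward well-tested code
        score -= 2.0 * style_violations                      # penalise style problems
        score -= 1.5 * max(0, cyclomatic_complexity - 10)    # penalise overly complex code
        return score

    print(commit_score(test_coverage=0.85, style_violations=3, cyclomatic_complexity=14))  # prints: 73.0

Even a toy scoring rule like this makes the earlier worry concrete: as soon as a number is attached to a commit, developers can optimise for the number rather than for the code.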
Doctoral consortium presentations
Before the start of the workshop on Wednesday, a doctoral consortium session was held where students could share ideas with each other and discuss their work with more experienced (or seasoned) researchers. This session was all about allowing students to share their key research questions with a wider audience.
Presentation slots were taken by Louis Major, Fraser McKay, Michael Berry, Alistair Stead, Cosmas Fonche and Rebecca Yates (my apologies if I've missed anyone!) Other research students who were a part of the doctoral consortium included Teresa Busjahn, Melanie Coles, Gail Ollis, Mark Vinkovits, Kshitij Sharma, Tamara Lopez, Khurram Majeed and Edgar Cambranes.
Day 3: Tools and their evaluation and keynotes
Tools and their evaluation
The first presentation of the final day was by Thibault Raffaillac, who presented his research 'exploring the design of compiler feedback'. I enjoyed this presentation since the feedback that software tools offer developers is fundamental to enabling them to do the job that they need to do. A couple of questions that I noted from Thibault's presentation were 'who is the user?' (of the feedback) and what their expertise is. Another note is that compilers (and other language tools) tend to give only negative points and information. It strikes me that languages offer an opportunity for programmers to interrogate a code base. Much food for thought!
Luis Marques Afonso gave the next talk, entitled 'evaluating application programming interfaces as communication artefacts'. Understanding API usability has a relatively long history within the PPIG community. The interesting aspect of Luis's work is that three different evaluation techniques were proposed: the semiotic inspection method (which I had never heard of before), cognitive dimensions of notations (Wikipedia) and discourse analysis (Wikipedia). It was interesting to hear of these different methods - the advantage of using multiple approaches is that each method can expose different issues.
The final paper presentation, entitled 'sketching by programming in the choreographic language agent' was given by Luke Church, University of Cambridge. Luke described working amongst a group of choreographers. It was interesting to hear that the tool (or language) that had been created wasn't all about representing choreography, but instead potentially enabling choreographers to become inspired by the representations that were generated by the tool. Luke's presentation created a lot of interest and debate.
Keynote: extreme notation design
A computer programming language is a form of notation. A notation is a system that can be used to represent ideas or actions and can be understood by people (such as music) or machines (as in computer programming), or both. Thomas Green proposed a set of 'dimensions' or characteristics of notation systems which relate to how people can work with them. These dimensions can be traded-off against each other depending upon the nature of the particular problem that is to be solved.
One challenge is: how can we understand the characteristics of these trade-offs? Alan Blackwell gave a keynote talk about a programming language that was controversially described as being a hybrid of Photoshop and Excel.
His language, Palimpsest, used the idea of different layers, each of which could contain different elements that could interact with each other (if I understand things correctly). Methodologically speaking, the idea of creating a tool or a language that aims to explore the extremes of language design is an interesting and potentially very powerful one. My understanding is that it allows the language designer to gain a wealth of experience, but it also provides researchers with an example. Perhaps there is an opportunity for someone to write a paper that compares and collates the different 'extremities' of language design.
Panel: coding and music
The final session of the workshop was all about programming, music and performance. We were introduced to a phenomenon called 'live coding', where programmers 'perform' music by writing software in front of a live audience. The three presentations in this final part of the day were all slightly different, yet closely connected.
Alex McLean
Alex McLean from the University of Leeds presented two demonstrations and talked about the challenges of live coding. These include the fact that working with music through code is a form of indirect manipulation, that syntactic glitches can interrupt the flow of a performance, and that being wrapped up within the code has the potential to detract from the music.
Live coders can also improvise with musicians who play 'non-programming language' (or 'real') instruments. Since the notion of 'live' can have different meanings (and can depend on the abstractions that are contained within a language), challenges include the negotiation of time and harmony. Delays can exist between having a musical idea and realising it.
Alex mentioned Scheme Bricks, a system inspired by Scratch (and Sense) that allows you to drag and drop portions of code together. This also made me realise that if two live coders are performing at the same time, they might use entirely different 'instruments' (or notation systems) to each other.
Thor Magnusson
Thor Magnusson from the University of Brighton introduced us to a language called ixi that has been derived from SuperCollider (Wikipedia). Thor set out to make a language that could be understood by an audience. To demonstrate this, Thor quickly coded a set of changing drum and sound loops in a text editor, using a notation that has clear and direct connections to music notation. Thor spoke of polyrhythms and of code to change amplitude, to create harmonics and to make sound that is musically interesting.
What I really liked was the metaphor of creating agents which 'play' fragments of code (or music). Distortions can be applied to patterns, and patterns can be nested within other patterns. Thor also presented a compelling description of the situations in which the language is used: 'programming in a nightclub, late at night, maybe you've had a few beers; you're performing - you've got to make sure the comma is in the right place'. For those who are interested, you can also see a video recording of Thor giving a live coding performance (YouTube). In my notebook I have written something that Thor must have said: 'I see code as performance; live coding is a link between performance and improvisation'.
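To illustrate the agent-and-pattern idea (sketched in Python rather than ixi lang, whose actual syntax I won't attempt to reproduce here), a pattern can simply be a list whose items are either events or further, nested patterns:

    # Illustrative Python sketch, not ixi lang: an 'agent' loops over a pattern,
    # and a pattern may contain nested patterns that are played in place.
    def play_pattern(pattern):
        for item in pattern:
            if isinstance(item, list):   # a nested pattern
                play_pattern(item)
            else:
                print(item, end=" ")     # stand-in for triggering a sound

    drum_pattern = ["kick", "snare", ["hat", "hat", "hat"], "snare"]
    for _ in range(2):                   # the agent repeats its pattern
        play_pattern(drum_pattern)
    print()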
Sam Aaron
When Sam began his short talk, I couldn't believe my eyes - he was using a text editor called Emacs! (Wikipedia). The last time I used Emacs was when I was a postgraduate student, when it persistently confused me. Emacs, however, can be extended using a language called Lisp, which is particularly useful for live coding since it is a declarative language.
During his talk Sam gave a brief introduction to Overtone. You can see a video of a similar introduction to Overtone on Vimeo. One thing that did strike me was the way in which aspects of music theory could be elegantly represented within code.
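As an example of what 'music theory in code' can look like (sketched in Python rather than Overtone's Clojure, and using standard MIDI note numbers), a scale is just a list of interval offsets applied to a root, and a chord is a selection of scale degrees:

    # Illustrative Python sketch (not Overtone code): deriving a scale and a
    # triad from interval patterns, using MIDI note numbers (60 = middle C).
    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets from the root

    def scale(root, intervals=MAJOR_SCALE):
        return [root + i for i in intervals]

    def triad(notes, degree):
        # 1st, 3rd and 5th notes above a scale degree (simplified: wraps within one octave)
        return [notes[(degree + step) % len(notes)] for step in (0, 2, 4)]

    c_major = scale(60)
    print(c_major)            # [60, 62, 64, 65, 67, 69, 71]
    print(triad(c_major, 0))  # C major triad: [60, 64, 67]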
Discussion
This final part of the workshop gave way to quite a lot of energetic debate. There appeared to be a difference between those who were thinking, 'why on earth would you want to do this stuff?' and, 'I think this stuff is really cool!' When it comes to live coding there is the question of who is the user of the language - is it the performer, or is it the listener, or viewer (especially if a live coding notation is intended to be understandable by a non-musician-coder)?
But what of the motivations of the people who do all this cool stuff? When it comes to performance there is the attraction of 'being in the moment', of using technology in an interesting and exciting way to create something transitory that listeners might like to dance to. It certainly strikes me that to do it well requires skill, time, persistence and musicality; all the qualities that 'traditional' musicians need. Live coders also face the fundamental challenges of keeping things going when the music begins to sound a bit odd, of creating new code structures on the fly, and of moving from one semi-improvised passage (by means of programming and musical abstractions) to another.
Beyond the performance dimension, there is the intellectual attraction of changing and challenging people's perceptions of software and programming languages. Another dimension is the way that technology can give rise to a community of people who enjoy using different tools to create different styles of music. All of the tools mentioned within the final part of the day are free and open source. Free code, it can be said, can lead to free musical expression.
Reflections
Like other PPIG workshops, this one had a great mix of formal presentations and more informal doctoral sessions, with many opportunities for discussion. I think this was the first time that the workshop was held at London Metropolitan University. Yanguo Jing, our local conference chair, did a fabulous job of ensuring that everything ran as smoothly as possible. Yanguo also did a great job of editing the proceedings. All in all, a very successful event and one that was expertly and skilfully organised.
There are two 'take home' points that have stuck in my mind. The first is that programming languages need not only be about programming machines; through their structures, code can also be used as a way to gain inspiration for other endeavours, particularly artistic ones.
The second point is that programming can be a performance, and one that can be fun too. The music session will certainly stick in my mind for quite some time to come. Programming performances are not just about music - they can be about education and creation; code can be used to present and share stories.