
Christopher Douce

UCL: Introducing engineering and computing


On 12 February 2013 I volunteered at a joint Open University and UCL event which aimed to introduce aspects of computing and engineering to school students.  This was the first time I had been involved with this type of event.  I have started to view outreach (in the broadest sense) as something that is increasingly important to do (and this is something that I have written about in an earlier blog).  So, if you're interested in hearing more about the outreach activity that I've recently come across, that earlier post might (or might not!) be of interest.

Structure

I learnt about this event from a colleague who was canvassing for volunteers.  Upon accepting his challenge I quickly discovered that I was to play a tiny part in what was a much bigger event, and soon heard rumours that students were coming to UCL to hear about other subjects such as chemistry and engineering.  My own role was to offer some support and guidance to students who wished to learn a little bit about computing and information technology.

Not only was this, for me, my first ever time being involved in an outreach or engagement event, it was also my first ever time on the UCL campus: it was massive!  I found myself being ushered into a large computer suite in the basement of one of UCL's impressive buildings.  Within moments, our lead facilitator and lecturer, Arosha Bandara, started to outline the plan for the day.

The focus of the day was the programming language Sense, a language that is used with the Open University module TU100 My Digital Life, a first level undergraduate module in computing.  One of the key aspects of Sense is that it works with a bit of electronics that allows different types of measurements to be made.  Arosha talked us through a program that simulated a simple etch-a-sketch game.  Students would be asked to make a change to the program so that it would work properly - they were required to do some software maintenance!  During the second part of the day, students were then required to get together in groups to think of how to use the language and the sensors to do something fun.

The talking bit...

The morning began with Arosha outlining the broad concept of Ubiquitous computing (Wikipedia), namely, that computers can be everywhere, can contain sensors and can be embedded within the environment.  Arosha then introduced a programming problem (in the form of an etch-a-sketch game).  Everyone was taken through different parts of the Sense programming environment.  Key elements such as buttons, instruction palettes and sprites (graphics) were introduced.

Students were then directed to some key parts of the game that accepted inputs from Sense hardware.  Students were then shown, step-by-step, how to make a change to the game to modify the behaviour of an on-screen pen.  They could immediately see the effect of changes to their programs.  Further modifications included adding some conditions that enabled their game programs to respond to noises (such as clapping!)
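To give a flavour of what that kind of condition looks like as code, here is a rough sketch in ordinary Python (Sense itself is a graphical language, so this is only an illustration; the sensor readings, threshold and names are invented):

import random

CLAP_THRESHOLD = 80


def read_sound_level() -> int:
    # Stand-in for reading the Sense board's microphone input.
    return random.randint(0, 100)


pen_x = 0
for tick in range(10):
    level = read_sound_level()
    if level > CLAP_THRESHOLD:  # selection: only react to loud noises
        pen_x += 1
        print(f"tick {tick}: clap detected (level {level}), pen moves to x={pen_x}")
    else:
        print(f"tick {tick}: quiet (level {level})")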

The projects bit...

There were loads of things to take in during the first part of the morning.  There was a whole new programming environment, there was the concept that a computer can receive and work with signals from the outside world, and the idea that a program can be formed out of groups of instructions.

The second part of the day was all about being imaginative, thinking about the different kinds of inputs and outputs that the electronics allow, and trying to think of some kind of application or demonstration.  Students were assigned to small groups and were encouraged to come up with different ideas.

The group that I was assigned to came up with the idea of trying to build some kind of 'human sensor', perhaps creating an infra-red trip wire (the Sense board came with a number of different sensors and outputs - one of them being an infrared transmitter or detector).  We collectively thought about the different cables and sensors that we had at our disposal before beginning to play with what kinds of signals (or numbers) we could detect from the outside world.  We got a fair way with this task before our time was up.

Reflections

It was a fun day!  Although there was limited time to do real stuff, the tiny team that I was allied to wrote some simple program code that allowed a heat sensor to work.  The Sense board represented a connection between the magical world of code and software and the physical world, where measurements could be made.

One of the biggest challenges of the day was to convey such a lot of (often quite difficult) theory in such a short amount of time.  Arosha was charged with telling our students something about the different types of programming constructs, variables and graphics.  Although this was necessary to get to the point where we could all do some fun stuff (modify our program), the way that hardware was used with software certainly facilitated engagement and helped to focus our attention.

I liked the way the idea of ubiquitous computing was used as an introduction, but one addition might have been to emphasise the extent to which we are surrounded by computers.  The moment you receive a telephone call, there is an unknown number of computers all working together to deliver it.  There's the computer in your mobile phone, there's a computer in the base station which speaks to other computers... at the other end, there is a similar situation.  Also, turning on the TV means starting up a pretty powerful computer that performs millions of instructions a second as it converts signals from one format to another.  Their ubiquity and invisibility are astonishing.

What is also astonishing is that the fundamental principles of computer programming that are exposed by the Sense programming language are also shared amongst all these devices and systems.  In the same way that we have ubiquitous computing, we also have ubiquitous code: computer software that can run anywhere.

Being involved in this day took me back in time to the days when I first got my hands on a computer.  Although the form of a computer has changed immeasurably, some things have not changed.  Computers remain very particular and pedantic - they require patience.  It's also important to remember that learning how to work with code can and should be fun.  And when you've created a world out of code and you understand how things work, working with it can be immeasurably rewarding too.


Psychology of Programming Interest Group 2012 workshop: London Metropolitan University


The 24th Psychology of Programming Interest Group workshop was held at London Metropolitan University between 21st and 23rd November 2012.  I wasn't able to attend the first day of the workshop due to another commitment, but was able to attend the second and third days (this is a shame since I've heard from the other delegates that the first day was pretty good and yielded a number of very thought provoking presentations and discussions).  This blog post is a summary of the days I managed to attend.  I'm sharing this post with the hope that this summary might be useful to someone.

Day 2: Expertise, learning to program, tools and doctoral consortium

Expertise

The first presentation of the day was entitled, 'Thrashing, tolerating and compromising in software development', by Tamara Lopez from the Open University.  I understand thrashing to be the application of problem solving strategies in an ineffective and unsystematic way; tolerating to be working with temporary solutions with the intention of moving a solution along to another state; and compromising to be solving a problem without being entirely happy with the solution.  An interesting note that I made during Tamara's presentation relates to the use of feelings.  I have also experienced 'thrashing' in the moments before I recover sufficient metacognitive awareness to understand that a cup of tea and a walk is necessary to regain perspective.

The second presentation of the day was by Rebecca Yates, from LERO based at the University of Limerick.  Rebecca's talk was entitled, 'conducting field studies in software engineering: an experience report' and her focus was all about program comprehension, i.e. what happens when programmers start a new job and start to learn an unfamiliar code base.  I made a special note of her points about the importance of going out into industry and the importance of addressing ethical issues. 

One of the 'take away' points that I got from Rebecca's talk was that getting access to people in industry can be pretty tough - the practical issues of carrying out programming research, such as time, restrictions about access to intellectual property and the importance of persuasion (or making the aim of research clear to those who are going to play a part in it) can all be particularly challenging.

Learning to program

Louis Major, from the University of Keele, started the second session with a paper entitled, 'teaching novices programming using a robot simulator: case study protocol'.  Louis told us about his systematic literature review before introducing us to his robot simulator which could be used to create programs to do simple tasks such as line following and line counting.  Louis also spoke about his research method, a case study approach which applied multiple methods such as tests and interviews.

Louis also spoke about the value of robots: they were considered to be appealing, enjoyable and exciting, and robotics (as a whole subject) has a strong connection with the STEM disciplines (science, technology, engineering and mathematics).  The advantage of using simulations is that there are fewer limitations in terms of space, cost and technical barriers.

A couple of months after the workshop I was reminded about the relevance of Louis's research after having been tangentially involved in an introductory Open University module, TM129 Technologies in Practice, which also makes use of a robot simulator.  Students are also given the challenge of solving simple problems, including the challenge of creating line following robots. 

The second talk in this part of the workshop was by PPIG regular, Richard Bornat.  Richard's talk, entitled 'observing mental models in novice programmers', built on earlier work that was presented at PPIG, where Richard and his colleague Saeed had designed a test that, it was claimed, could (potentially) predict whether students were able to grasp some of the principles of programming.

An interesting observation was that when it comes to computer programming the results sometimes have a bi-modal distribution.  What this means is that if students pass, they are likely to pass very well.  On the other hand, there is also a peak in numbers when it comes to students who struggle.  During (and after) his talk, he suggested that some students find some of the concepts that are connected to programming (such as the assignment operator) fundamentally difficult.

Paul Orlov, who joined us all the way from St. Petersburg, spoke about 'investigating the role of programmers' peripheral vision: a gaze-contingent tool and experimental proposal'.  Paul's talk connected with earlier research where experimental tools, such as 'restricted focus viewers', were used in conjunction with program comprehension experiments.  Paul's talk inspired a lot of debate and questions.  I remember one discussion which was about the distinction between attention and seeing (and that we can easily learn not to attend to information should we choose not to).

Ben Du Boulay, formerly from the University of Sussex, was our discussant.  Ben mentioned that when it comes to interdisciplinary research, conducting systematic literature reviews can be particularly difficult due to the number of different publication databases that researchers have to consider.  Connecting with Richard's paper, Ben asked what might be the fundamental misunderstandings that could emerge when it comes to computer programming.  Regarding Paul's paper, which connects to the theme of perception and attention, Ben made the point that we can learn how to ignore things and that attention can be focussed depending on the task that we have to complete.  Ben also commented on earlier discussions, such as the drive to change the current computing curriculum in schools.

One thing that learning programming can do for us is help to teach us problem solving skills.  There is a school of thought that learning programming can be viewed in the way that Latin was once viewed: that learning to program is inherently good for you.  Related points include the importance of the task and its relationship to motivation.

Tools

Fraser McKay from the University of Kent presented, 'evaluation of subject-specific heuristics for initial learning environments: a pilot study'.  In human-computer interaction (or interaction design), heuristics are a set of rules of thumb that help you to think about the usability of a system.  General heuristics, such as those by Nielsen are very popular (as well as being powerful), but there is the argument that they may not be best suited to uncovering problems in all situations. 

Fraser focused on two environments that were considered helpful in the teaching of programming: Scratch (MIT) and Greenfoot.  Although this was very much a 'work in progress' paper, it is interesting to learn about the extent to which different sets of heuristics might be used together, and the way in which a new set of heuristics might be evaluated.

Mark Vinkovits presented the work of his co-authors, Christian Prause and Jan Nonnen, which was entitled, 'a field experiment on gamification of code quality in Agile development'.  Initially I found the term 'gamification' quite puzzling, but I quickly understood it in terms of, 'how to make software development into a game, where the output can be appreciated and recognised by others'.

The idea was to connect code development with the use of quality metrics to obtain a score that indicates how well developers are doing.  This final presentation gave way to a lot of debate about whether developers might be inclined to write software code in such a way as to create high rankings.  (There is also the question of whether different domains of application will yield different quality scores.)  I really like the concept.  Gamification exposes different dimensions of software development which have the potential to be connected to motivation.  It strikes me that the challenge lies with understanding how one might affect the other whilst at the same time facilitating effective software development practice.
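To make the idea a little more concrete, here is a small, entirely hypothetical Python sketch of how a handful of code quality metrics might be folded into a single score and a leaderboard; the metric names and weights are invented for illustration and are not those used in the study:

from dataclasses import dataclass


@dataclass
class CommitMetrics:
    developer: str
    test_coverage: float           # 0.0 to 1.0
    style_violations: int          # e.g. reported by a linter
    avg_complexity: float          # average cyclomatic complexity per function


def quality_score(m: CommitMetrics) -> float:
    # Combine the metrics into a single number: higher is 'better'.
    return (100 * m.test_coverage
            - 2 * m.style_violations
            - 5 * max(0.0, m.avg_complexity - 10))


def leaderboard(history: list[CommitMetrics]) -> list[tuple[str, float]]:
    totals: dict[str, float] = {}
    for m in history:
        totals[m.developer] = totals.get(m.developer, 0.0) + quality_score(m)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)


sample = [
    CommitMetrics("alice", 0.85, 2, 6.0),
    CommitMetrics("bob", 0.60, 9, 14.0),
]
print(leaderboard(sample))

Even a toy scoring scheme like this makes the debate obvious: as soon as the score is visible, there is an incentive to write code that maximises the score rather than code that is simply good.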

Doctoral consortium presentations

Before the start of the workshop on Wednesday, a doctoral consortium session was held where students could share ideas with each other and discuss their work with more experienced (or seasoned) researchers.  This session was all about allowing students to share their key research questions with a wider audience.

Presentation slots were taken by Louis Major, Fraser McKay, Michael Berry, Alistair Stead, Cosmas Fonche and Rebecca Yates (my apologies if I've missed anyone!)  Other research students who were a part of the doctoral consortium included Teresa Busjahn, Melanie Coles, Gail Ollis, Mark Vinkovits, Kshitij Sharma, Tamara Lopez, Khurram Majeed and Edgar Cambranes.

Day 3: Tools and their evaluation and keynotes

Tools and their evaluation

The first presentation of the final day was by Thibault Raffaillac, who presented his research, 'exploring the design of compiler feedback'.  I enjoyed this presentation since the feedback that software tools offer developers is fundamental to enabling them to do the job that they need to do.  A couple of questions that I noted from Thibault's presentation were 'who is the user?' (of the feedback) and 'what is their expertise?'  Another note is that compilers (and other language tools) tend to give only negative information.  It strikes me that languages offer an opportunity for programmers to interrogate a code-base.  Much food for thought!

Luis Marques Afonso gave the next talk, entitled 'evaluating application programming interfaces as communication artefacts'.  Understanding API usability has a relatively long history within the PPIG community.  The interesting aspect of Luis's work is that three different evaluation techniques were proposed: the semiotic inspection method (which I had never heard of before), cognitive dimensions of notations (Wikipedia) and discourse analysis (Wikipedia).  It was interesting to hear of these different methods - the advantage of using multiple approaches is that each method can expose different issues.

The final paper presentation, entitled 'sketching by programming in the choreographic language agent' was given by Luke Church, University of Cambridge.  Luke described working amongst a group of choreographers.  It was interesting to hear that the tool (or language) that had been created wasn't all about representing choreography, but instead potentially enabling choreographers to become inspired by the representations that were generated by the tool.  Luke's presentation created a lot of interest and debate.   

Keynote: extreme notation design

A computer programming language is a form of notation.  A notation is a system that can be used to represent ideas or actions and can be understood by people (such as music) or machines (as in computer programming), or both.  Thomas Green proposed a set of 'dimensions' or characteristics of notation systems which relate to how people can work with them.  These dimensions can be traded-off against each other depending upon the nature of the particular problem that is to be solved.

One challenge is: how can we understand the characteristics of these trade-offs?  Alan Blackwell gave a keynote talk about a programming language that was controversially described as being a hybrid of Photoshop and Excel.

Palimpsest used the idea of different layers which could then contain different elements which could interact with each other (if I understand things correctly).  Methodologically speaking, the idea of creating a tool or a language that aims to explore the extremes of language design is an interesting and potentially very powerful one.  My understanding is that it allows the language designer to gain a wealth of experience, but also provides researchers with an example.  Perhaps there is an opportunity for someone to write a paper that compares and collates the different 'extremities' of language design.

Panel: coding and music

The final session of the workshop was all about programming, music and performance.  We were introduced to a phenomenon called 'live coding', which is where programmers 'perform' music by writing software in front of a live audience.  The three presentations contained within this final part of the day were all slightly different but all very connected.

Alex Mclean

Alex Mclean from the University of Leeds presented two demonstrations and talked about the challenges of live coding.  These included the fact that manipulating and working with music through code is an indirect form of manipulation.  Syntactic glitches can interrupt the flow of performance, and there is the possibility that being wrapped up within the code can detract from the music.

Live coders can also improvise with musicians who play 'non-programming language' (or 'real') instruments.  Since the notion of 'live' can have different meanings (and can depend on the abstractions that are contained within a language), challenges include the negotiation of time and harmony.  Delays can exist between having a musical idea and realising it.

Alex mentioned Scheme Bricks, which was inspired by Scratch (and Sense) and which allows you to drag and drop portions of code together.  This also made me realise that if there are two live coders performing at the same time, they might use entirely different 'instruments' (or notation systems) to each other.

Thor Magnusson

Thor Magnusson from the University of Brighton introduced us to a language called ixi that has been derived from SuperCollider (Wikipedia).  Thor set out to make a language that could be understood by an audience.  To demonstrate this, Thor quickly coded a changing set of drum and sound loops in a text editor, using a notation that has clear and direct connections to music notation.  Thor spoke of polyrhythms and of code to change amplitude, to create harmonics and to produce sound that is musically interesting.

What I really liked was the metaphor of creating agents which 'play' fragments of code (or music).  Distortions can be applied to patterns, and patterns can be nested within other patterns.  Thor also presented a compelling description of the situations in which the language is used: 'programming in a nightclub, late at night, maybe you've had a few beers; you're performing - you've got to make sure the comma is in the right place'.  For those who are interested, you can also see a video recording of Thor giving a live coding performance (YouTube).  In my notebook I have written something that Thor must have said: 'I see code as performance; live coding is a link between performance and improvisation'.
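The following is not ixi notation; it is just a little Python sketch of the underlying idea of agents that 'play' patterns, with one pattern nested inside another (all of the names and timings are invented):

import itertools
import time

kick = ["boom", "-", "-", "-"]
snare = ["-", "-", "crack", "-"]
fill = ["t", "t", "t", "t"]
# A nested pattern: the snare pattern followed by the fill, as one longer phrase.
phrase = snare + fill


def agent(name, pattern):
    # Yield the pattern's events forever, one per beat.
    for event in itertools.cycle(pattern):
        yield f"{name}:{event}"


drums = agent("drums", kick)
extra = agent("extra", phrase)

for beat in range(8):      # eight beats of two agents playing together
    print(beat, next(drums), next(extra))
    time.sleep(0.1)        # stand-in for a real audio clock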

Sam Aaron

When Sam began his short talk, I couldn't believe my eyes - he was using a text editor called Emacs! (Wikipedia).  The last time I used Emacs was when I was a postgraduate student, where it persistently confused me.  Emacs, however, is built around a dialect of Lisp, a family of languages that lends itself particularly well to live coding.

During his talk Sam gave a brief introduction to Overtone.  You can see a video of a similar introduction to Overtone on Vimeo.  One thing that did strike me was the way in which aspects of music theory could be elegantly represented within code.
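As a small illustration of what I mean (this is plain Python rather than Overtone, and the functions are my own invention), a major scale and the chords built from it can be written down in just a few lines:

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # whole and half steps of a major scale


def major_scale(root_midi: int, octaves: int = 2) -> list[int]:
    notes = [root_midi]
    for step in MAJOR_STEPS * octaves:
        notes.append(notes[-1] + step)
    return notes


def triad(scale: list[int], degree: int) -> list[int]:
    # Build a triad on a scale degree (1-based): root, third and fifth.
    i = degree - 1
    return [scale[i], scale[i + 2], scale[i + 4]]


c_major = major_scale(60)       # MIDI note 60 is middle C
print(triad(c_major, 1))        # C major chord: [60, 64, 67]
print(triad(c_major, 5))        # G major chord: [67, 71, 74]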

Discussion

This final part of the workshop gave way to quite a lot of energetic debate.  There appeared to be a difference between those who were thinking, 'why on earth would you want to do this stuff?' and those thinking, 'this stuff is really cool!'  When it comes to live coding there is the question of who the user of the language is - is it the performer, or is it the listener or viewer (especially if a live coding notation is intended to be understandable by a non-musician-coder)?

But what of the motivations of the people who do all this cool stuff?  When it comes to performance there is the attraction of 'being in the moment', of using technology in an interesting and exciting way to create something transitory that listeners might like to dance to.  It certainly strikes me that to do it well requires skill, time, persistence and musicality: all the qualities that 'traditional' musicians need.  Live coders also face the fundamental challenges of keeping things going when things begin to sound a bit odd, of creating new and creative code structures on the fly, and of moving from one semi-improvised section (by means of programming and musical abstraction) to another.

Other than the performance dimension, there is the intellectual attraction of changing and challenging people's perceptions of software and programming languages.  Another dimension is the way that technology can give rise to a community of people who enjoy using different tools to create different styles of music.  All of the tools that were mentioned within the final part of the day are free and open source.  Free code, it can be said, can lead to free musical expression.

Reflections

Like other PPIG workshops, this one had a great mix of formal presentations and more informal doctoral sessions, mixed with many opportunities for discussion.  I think this was the first time that the workshop was held at London Metropolitan University.  Yanguo Jing, our local conference chair, did a fabulous job at ensuring that everything ran as smoothly as possible.  Yanguo also did a great job at editing the proceedings.  All in all, a very successful event and one that was expertly and skilfully organised.

There are two 'take home' points that have stuck in my mind.  The first is that programming languages need not only be about programming machines; through their structures, code can also be used as a way to gain inspiration for other endeavours, particularly artistic ones.

The second point is that programming can be a performance, and one that can be fun too.  The music session will certainly stick in my mind for quite some time to come.  Programming performances are not just about music - they can be about education and creation; code can be used to present and share stories.


First Open University Sense Programming Workshop


The first Open University Sense Workshop was held at the London School of Economics on Saturday 11 November 2012.

Sense is a computer programming language that has been derived from Scratch, a language that was developed by the Massachusetts Institute of Technology.  The aim of the Sense workshop was to allow TU100 My Digital Life students to become more familiar with the Sense environment, helping them to learn some of the fundamental principles of computer programming.

This blog post is intended as a summary of the first ever Sense workshop.  It has been written for both students and tutors. If you feel that anyone might find this summary useful, please don't hesitate to distribute widely.

Introductions

The phrase 'computer programming' is one that can easily elicit an anxious response.  Programming is sometimes seen as something that is done through a set of mysterious tools.  The good news is that once you have gained some understanding of the fundamental principles of programming (and how to tackle problems and debug programs), the skills that you learn in one language can be transferred to other languages.

Sense is a programming language that uses the same fundamental concepts as languages that are used in industry (such as C++ and Java), but Sense makes the process of writing computer programs (or code) easier by allowing programs to be created from sets of visual building blocks.  In some ways, Sense is a visual programming language that is broadly analogous to many other languages.  The fundamental difference between Sense and other languages is that it helps students to focus on the fundamental bits of programming by shielding new programmers from the difficulty of writing program instructions in a notation that can be quite cryptic and difficult to understand.

The overarching intention of the Sense workshop day (that is described here) was to demystify Sense and encourage everyone to have fun.  The Sense environment allows programming instructions to be manipulated as a series of lego-like blocks.  These snap together to form 'clumps' of instructions which can be attached either to a background (or stage, where things can move about), or to sprites (which are, in essence, graphical objects).  Through Sense it is (relatively) straightforward to create sets of instructions to build simple animations and games.

The workshop was divided up into three different sections.  The first was a broad overview of some of the ideas about programming, followed by a demonstration of how to use the Sense environment.  The second section was a presentation which contained some useful guidance about how to complete an assignment.  The third section was more open... but more on this later.

The lecture bit - stepping towards programming...

The workshop kicked off with a talk by one of our Open University tutors, Tammy.  Tammy made a really good point that 'we can't teach you programming'.  The implication is that only a student can learn how to do it.  The best way to learn how to do it is, of course, to find the time to play with a programming environment and to tackle, head on, the challenge of grappling with a problem.

Tammy asked a couple of people to come up and draw some shapes on the whiteboard.  Different participants drew very different shapes despite being given exactly the same instructions.  The point of the exercise was clear: it is absolutely essential to provide sets of instructions that are both completely clear and unambiguous (as otherwise you may well be surprised by the results that come back).

Tammy talked about the different categories of program instruction, which were: sequence instructions, selection instructions and iteration instructions.  Pretty much all programs are composed of these three different types of operations.  Put simply, a sequence of instructions is where you do one thing after another.  A selection operation is where you choose to do something depending upon the status of a condition (for example, if you are cold, you might turn the heating on).  An iteration operation is where you do something a number of times, or until some condition is met.

These sets of operations can be used to describe everyday actions, such as making a cup of coffee.  This simple activity can be split into a sequence of steps, which can include iterations where we check to see if the kettle is boiling.  (We might also do some parallel processing, such as making some toast whilst the kettle is boiling, but multi-threading is a whole other issue!)
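As a rough illustration (written in ordinary Python rather than Sense, which is graphical, and with an invented pretend kettle), the coffee example might be sketched like this, with comments marking the sequence, selection and iteration:

import itertools

# Pretend kettle: it 'boils' after a few checks.
kettle_checks = itertools.count()


def kettle_is_boiling() -> bool:
    # Stand-in for reading a real temperature sensor.
    return next(kettle_checks) >= 3


def make_coffee(want_milk: bool) -> None:
    # Sequence: one step after another.
    print("Fill the kettle")
    print("Switch the kettle on")

    # Iteration: keep checking until the condition is met.
    while not kettle_is_boiling():
        print("Wait for the kettle...")

    print("Pour water over the coffee")

    # Selection: do something only if a condition holds.
    if want_milk:
        print("Add milk")


make_coffee(want_milk=True)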

The main points were (1) programming cannot be taught, it can only be learnt by those who do it, (2) there are some fundamental building blocks that can be combined together and nested within each other; you can have a sequence of steps within an iteration, for instance, and (3) programming requires things to be defined and described unambiguously.

The demonstration bit - creating an animation...

The second part of the morning was hosted by Leslie.  Building on Tammy's summary of programming Leslie showed us what it meant to actually 'write' a program using the Sense environment.

In some respects, you can create anything within the Sense environment.  It provides a set of tools and it is up to you to come up with an idea and figure out how to combine the pieces together to do what you want to do.  In some respects (and getting slightly philosophical for a moment), you can define a whole universe or a world in software.  You can, in effect, define your own laws of physics.  I can't remember who said it, but I have always remembered the phrase, 'the universe is mathematical'.  Given that computers only understand numbers, the Sense environment allows you to create and represent your own universe (and interact with it in some way).

Leslie's universe was a fishtank.  She began by drawing the tank, including water weeds.  She then went on to draw a set of different fish characters.  Scripts were then added to move the fish around the screen (in the tank), first in one direction (from left to right), and then in both directions (from side to side).  Leslie then added more characters and defined interactions between them using something called the 'broadcast' feature to alert some of the virtual fish that a bigger and more dangerous fish had arrived in the tank.

What was really great was how she demonstrated how to connect different instructions together (to create sequences), to have sequences of instructions operate when certain conditions are met (which represent selections), and introduce repeat loops (which represent iterations; carrying out the same instructions over and over again).
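For anyone wondering what the 'broadcast' idea amounts to, here is a very rough, non-Sense sketch in Python: sprites register an action for a named message, and any sprite can broadcast that message to all the others (all of the names here are invented for illustration):

from collections import defaultdict
from typing import Callable

handlers: dict[str, list[Callable[[], None]]] = defaultdict(list)


def when_i_receive(message: str, action: Callable[[], None]) -> None:
    # Each sprite registers what it will do when a message arrives.
    handlers[message].append(action)


def broadcast(message: str) -> None:
    # Run every action that has been registered for this message.
    for action in handlers[message]:
        action()


# Two small fish hide when they hear 'danger'; the big fish broadcasts it.
when_i_receive("danger", lambda: print("Little fish 1: hide in the weeds!"))
when_i_receive("danger", lambda: print("Little fish 2: swim away!"))

print("Big fish arrives in the tank...")
broadcast("danger")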

The bit about the assignment...

The final 'lecture' part of the day was by Open University tutor Dave, who took everyone through the structure of the forthcoming assignment (without giving any of the answers).  Dave talked about the use of the on-line discussion forums and this gave way to an interesting discussion about the importance of referencing.  Other points that were mentioned included the importance of things such as including word counts (on the TMA), and the learning objectives that are used by the module.

The programming bit...

During the afternoon, we all split into two different groups and got together into small groups of between two and four people.  The intention of the second part of the day was to try to create a small Sense project by huddling around a single laptop on which the Sense environment had been installed. We would then work on something for an hour, and then we would present what we had done to the other groups, describing some of the problems and challenges that we had encountered along the way.

Not having had much experience at using Sense, I was very happy to play an active role within one of the groups.  One of my main intentions at coming along for the day was to learn more about how to use the language and discover more about what it was capable of.  Our group came up with two different ideas: a representation of a car race track and some kind of athletic game or animation.  We settled on the athletic theme and decided we would try to animate a man running around a very simple athletics track.  (Our track became a square as opposed to an oval shape since we decided that re-discovering the mathematics of the circle was probably going to be quite tricky to master in about an hour!)

Within an hour we had drawn some stick figures, got our character doing a really simple 'run' animation and had our figure run around a really simple athletics track.  From memory, one of the challenges was figuring out how to represent program state and have it shared between different scripts that were running within the same sprite (apologies for immediately going into Sense-speak!)  Another challenge was to figure out how to represent state with Boolean variables and have those embedded within a continuous loop (but given enough time, I'm sure that we would have cracked it!)  A final challenge (and surprise) was to understand that the Sense environment automatically 'remembered' how much a character had been rotated between the different times that we 'ran' our scripts.  (We had instances where our running character ran off the side of the screen, much to our surprise!)
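Roughly speaking, what we were trying to express in Sense can be sketched in Python as a shared Boolean flag that one 'script' sets and another checks inside its continuous loop (the names and structure are invented, and Sense scripts are not Python threads, so this is only an analogy):

import threading
import time

running = False          # shared state between the two 'scripts'


def starter_script():
    # For example, triggered by a 'start' key press after a short wait.
    global running
    time.sleep(0.5)
    running = True


def runner_script():
    # The sprite's 'forever' loop: only animate while the flag is set.
    for _ in range(5):
        if running:
            print("runner takes a step")
        else:
            print("waiting at the start line")
        time.sleep(0.2)


threading.Thread(target=starter_script).start()
runner_script()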

After our time was up, we were all asked to demonstrate and talk through our various projects.  I can remember a simple etch-a-sketch game, a demonstration of some bouncing balls (which bounced at different speeds), a space invader game (where the invader was a cat), a Tom and Jerry animation where Tom chased Jerry across a screen, and an animation that involved a balloon and a plane.  It was great to see very different projects since, when we are coding our own, we can easily get into the mindset of just solving our own problem; seeing the work of others is very refreshing.  It was inspiring to see what could be created after an hour or so of programming.

Reflections

The whole day reminded me of the time when I first tried to learn computer programming and I still remember that it was a pretty difficult challenge (in my day!)  I always wanted to rush ahead and solve the bigger more exciting problems but I was often tripped up because I needed to understand the operation of the fundamental instructions and operators (and the way a language worked).  In my own experience the only way to really understand how things work is to find the time to play, to explore the various operators and instructions, but finding both the time and the confidence to do this is perhaps a challenge itself.

All in all, the first Sense Workshop was a fun day.  I certainly got a lot out of it and I hope that everyone did too.  I certainly hope this is going to become a bi-annual event for all our TU100 students.  From my 'I've never really used Sense before to do anything other than to run a demo program' perspective, I certainly came out learning a lot more than I did when I started.  Large parts of Sense were demystified, and I certainly had a lot of fun attending.

Additional resources

After I shared a link to this post, my colleague Arosha (who also came along to the Sense workshop) wrote a short blog post.  Arosha is loads more skilled when it comes to Sense programming and has re-created one of the projects that were demonstrated on the day.  Thanks Arosha!


Exploring Sense


Last weekend I attended an event known as a Sense development session, hosted at the Open University in the South East offices in East Grinstead.  Sense is, of course, the graphical programming language that is used to teach the fundamentals of programming in a new module entitled TU100 My Digital Life.

Whilst TU100 discusses a whole range of issues (such as privacy, mobility and ubiquity) and allows different skills to be developed, programming remains an important subject and one that some students find difficult. 

The main objective of the event was to enable associate lecturers to get together to share their experiences of using the Sense software.  Before the main Sense session, another tool was demonstrated and discussed: Jing.

Jing

The Open University provides and supports a number of different digital tools, such as its Moodle based virtual learning environment, synchronous discussion tools  and image sharing software (such as the kind of software used on TU100, as well as other modules such as U101 Design Thinking and T189 Digital Photography).  Sometimes, however, it is possible to make use of freely available tools that are just 'out there' (on the cloud) to facilitate teaching and learning.

Jing is one of those tools.  At the start of the session, Graham Eaton demonstrated how Jing (Techsmith website) can be used to create simple and effective demonstrations to show students how to make use of different applications.  One of the really nice features of an application such as Jing is that it also allows you to make voice recordings: you can talk through how you use something.  When you are done, you can share your digital recording with others by uploading the results to a shared website.

Graham went further than just saying that 'Jing is a tool that allows you to quickly make screen casts'.  Using MS Paint, a graphics tablet and Jing, Graham demonstrated that it is possible to create customised 'chalk board' animations which can be used to explain simple mathematical principles.

There are, of course, some drawbacks, one of which is cost.  The demo version (which is free to use) doesn't permit editing and has a recording limit of five minutes.  These five minutes, however, may make the difference between understanding a principle and not understanding it.

An important (implicit) point was that we have different tools at our disposal, and it's up to us to find a blend of the different tools that we may feel comfortable using.   Educational practice sessions such as these may inspire us to consider investigating and deciding upon our own blend of tools (and allow us to think differently about new possibilities).

Introducing Sense

The Sense part of the day was facilitated by Diane Brewster and Michelle Dewey.  Diane kicked off the first activity, which tried to answer the question: what are the problems of teaching programming to novices?  From three groups we arrived at a number of answers, which I'll do my best to summarise.

Firstly there were the broad skills, such as thinking algorithmically and being able to 'abstract' the essence of a problem so it can be translated into code.  This was connected to the challenge of looking at (and understanding) the logic of problems.  The issue of syntax was also mentioned, along with acquiring the knowledge (and understanding) of different programming structures and how they might be used.

Knowing how (and where) to look things up was considered to be an important skill, as were techniques (and strategies) for testing and debugging.  A final general point that was discussed was that some students who had learnt how to program using one programming paradigm (Wikipedia) might find it difficult to learn a programming language that uses a different paradigm.

Diane took us through a presentation that aimed to answer the question 'why has Sense been developed and what is its pedigree?'  We were told about the Scratch language (MIT), a programming language called Alice (Alice website), and a microcontroller called the Arduino (Wikipedia).  Sense is, of course, a version of Scratch that the Open University has modified.  The differences are that it has a small number of different programming constructs and that it can be interfaced with Arduino-based physical hardware.

Towards the end of this first session, we were then assigned into mixed groups and asked to consider how to write a small program using different coloured post-it notes.  (Some of us were programmers, others were not!)

Playing with Sense

Before we were allowed into a lab filled with computers, we were introduced to a number of other Sense concepts, such as the notion of 'broadcast', or sending messages from one component of a Sense program to another.  There was some discussion about the stage metaphor, and a presentation of a simple maze game.  In keeping with this metaphor, something new for me was the idea that a sprite (a graphical object on the screen) can have different costumes.

The final part of the day was dedicated to about an hour of 'tinkering'.  It is felt that Sense is one of those things that you can only get to grips with properly if you spend a bit of time 'messing around' with it.  Messing around might mean creating new programs, or changing existing programs.

Not having had much time to tinker before (and being a former software developer), some of the constructs (and graphical palettes that held these constructs) soon became familiar to me.  What was apparent was that I had to do quite a bit of looking and searching, but by the end of the hour, I roughly knew where I needed to look (and what colour of programming construct to look for) to do the things that I wanted to do.

Final points

I took away a number of points from this session.  The first was a reminder that the teaching and learning of programming is not just about programming itself.  It is all very well knowing about different programming constructs and understanding what they do, but it is a whole other challenge to know how to decompose a problem into discrete steps that a computer can execute.

Researchers who have studied the psychology of programming have explored the notion of a programmer's cognitive strategy.  As well as a programming strategy there is also the conception of a programmer's tactic, which can be considered in terms of something that a programmer might do to help them understand or get to grips with a problem, or understand what a computer is doing when faced with a buggy program.

Teaching programming isn't only about teaching the constructs, but also about exposing and sharing (or even 'bootstrapping', to take a computing analogy) these tactics.  I clearly remember a discussion about using something called the Plan Do Check Act, or PDCA cycle (Wikipedia) to help users of Sense understand what needs to be done.

Another important point (and one that I've mentioned before) is the need for both students and tutors alike to find the time to 'tinker', to explore what is possible within a programming language or environment.  Tinkering facilitates the development of strategies and tactics.

My own view is that programming isn't something that is just about making sets of instructions to get a machine to do stuff; it is also about facing up to the sometimes difficult challenge of problem solving.  Programming is an intrinsically creative activity, and this is something that is easily forgotten.  To be creative, we need to find the time to play and tinker.  This is something that is easily forgotten too.


Distance Learning for Computing and ICT Workshop


A Higher Education Academy sponsored distance learning workshop for computing and ICT was held at the Open University on Thursday 20 October 2011.  The workshop addressed a number of different themes.  These included internationalisation and the delivery of modules to different countries, professionalisation and industry, models of distance learning, and the use of technology and its accessibility.

The day was divided up into a number of different sessions, and I'll do my best to summarise them.  I feel that blogging this event is going to be a little bit different from the previous times I have blogged about HEA workshops, since this time I was less of an observer and more of a participant.  That said, I'll do my best!

Introduction and keynote

The event was introduced by Professor Hugh Robinson, head of the Department of Computing at the Open University.  Hugh briefly spoke about the history of the university and mentioned that 'open' means that students who enrol on courses do not necessarily have to have any prior qualifications.  This connects to one of the university's themes: to be open in terms of people, places and ideas.  Distance education enables education to be open in all these respects, but it is apparent that, due to changes in the higher education sector, all institutions will face challenges in the future.

Hugh's opening presentation gave way to Mike Richards' keynote presentation about a new computing module entitled TU100, My Digital Life.  Mike described some of the main topic areas of this new module, which will form a common entry point to a number of degrees.  This module addresses themes that are rather different to those that used to be on the computing curriculum, mostly due to the changes in technology and what is meant by a 'computer'.

Mike mentioned important subjects such as privacy and security, the notion of ubiquitous computing, and what is meant by 'free', connecting to the subject of open source software systems.  Mike went on to say that the TU100 module contains some hardware that might once have been known as a 'home experiment kit'.

In the case of TU100 this takes the form of a programmable microcontroller board which can be configured to work with different types of measurements and to share the results with other people over the internet.  Furthermore, the microcontroller (and connected software) can be programmed using a visual programming language called Sense, which is a version of Scratch, a popular introductory programming environment developed by MIT.

Mike's presentation emphasised that distance education need not begin and end with a virtual learning environment.  A distance education module can contain a rich set of resources, such as video materials and physical equipment, that can be used to facilitate both understanding and debate.  Mike also emphasised that many issues connected to the increasingly broad discipline of computing (broad because of its impact on so many other areas of human activity) give rise to debates that do not have right or wrong answers.

One thing is certain: technology has changed so many different aspects of our lives and will continue to do so in ways that we may not be able to expect.  It's my understanding that one of the aims of TU100 is to highlight and uncover different debates and help students to navigate them.  What was very clear is that computing education is so much more than just technology and getting it to do cool stuff.  It's essential to understand and to consider how technology affects so many different aspects of our lives.

Morning session

The first presentation in the morning session was by Quan Dang from London Metropolitan University.  Quan's presentation was entitled, 'blending virtual support into traditional module delivery to enhance student learning'.  Quan emphasised how synchronous tools, such as on-line text chat, could be used to create virtual 'drop in' sessions outside of core teaching hours to enable students to gain support with subjects such as computer programming.  Quan's presentation was very thought provoking since it made me ask myself the question, 'what different tools and practices might we potentially adopt (at a distance) to help students get to grips with difficult issues such as debugging?'  Debugging is something (in my humble opinion) that you can best learn by seeing how different people make use of the programming tools that are available through development environments.  Getting a feeling for the different strategies that can be applied is something that can only be gained through experience, and technology certainly has the potential to facilitate this.

The following presentation, by Amanda Banks from the University of Manchester, was entitled 'advanced professional education in computer science'.  Amanda spoke at some length about how a tool such as MediaWiki could be used to enable students to create useful materials that could be used by others.  This presentation was also thought provoking: wikis can certainly be used within on-line modules to enable students to generate materials for their own study, but Amanda's presentation made me consider the possibility that wiki-hosted material can be used between different module presentations as a way to facilitate debates about different ideas.

The final presentation was by Philip Scown, from Manchester Metropolitan University Business School.  Philip's thought provoking presentation was entitled, 'the unseen university: full-flexible degrees enabled by technology'. Philip argued that technology can potentially allow different models of studying and learning, such as modules which don't have start dates, for instance.  I can't do justice to Philip's talk within this space, so I do encourage you to have a look on the HEA website where I understand that his presentation slides are hosted.

First afternoon session

The afternoon session was started by Mark Ratcliffe, discipline lead for computing at the Higher Education Academy.  Mark outlined the role of the HEA and then went on to describe funding opportunities and the role of HEA academic associates.  Mark then directed us to the HEA website for more information.

Distance education is one of those terms that can mean different things to different people, and this difference was, in part, highlighted by Mariana Lilley's presentation, the first of the afternoon, which had the title 'online, tutored e-learning and blended: three modalities for the delivery of distance learning programmes in computer science'.  Mariana's presentation also represented a form of case study of a programme that is presented internationally by the University of Hertfordshire.  It was interesting to hear about the application of different tools, such as Elluminate (now Blackboard Collaborate), QuestionMark Perception and VitalSource Bookshelf.  This suggested to me that distance learning is now facilitated by a mix of different tools, and made me question whether we have (collectively) identified the best (or most effective) mix.  Institutions necessarily have to explore technology in combination with pedagogic practice, and sharing case studies is certainly one way to understand something about what is successful.

Mariana's presentation was nicely complemented by one from Paul Sant (in collaboration with his colleague Malcolm Sant) from the University of Bedfordshire.  Paul's presentation was entitled, 'distance learning in higher education - an international case study'.  Paul identified a number of challenges, which included 'how can we ensure that distance students remain engaged?', 'how can we offer support in a way that meets their schedule and requirements?', and 'how can we ensure that the work performed by students meets their potential?'  Paul mentioned tools such as the Blackboard VLE and synchronous tools by Horizon Wimba.  Paul's presentation also helped to expose the subject of partnerships with international institutions.

Second afternoon session

The final session of the day was broadly intended to focus upon the needs of the student from two different perspectives.  Steve Green from the Accessibility Research Centre at Teesside University kicked off this session by describing 'studying accessibility and adaptive technologies using blended learning and widgets'.  Accessibility is an important subject since it enables students to make use of learning resources irrespective of how or where they may be studying (both in terms of their physical and technical environment), but it also widens the way in which resources may be consumed, taking into account learners with additional requirements.  Steve described how students create accessible widgets and how these are evaluated.

Steve's talk reminded me of a question that I was asked not so long ago: given that distance education is now an international endeavour and the development of accessibility is supported by equality legislation, where do the boundaries lie in terms of offering support to students?  The answer may depend on how partnerships are developed and how they function.

The final presentation of the day, entitled 'finding a foundation for flexibility: learner centred design', was by Andrew Pyper from the University of Hertfordshire.  The underlying theme was that institutions need to understand the needs of their learners in order to best support them.  Approaches such as learner centred design, which is well known to the interaction design and human-computer interaction communities, have the potential to create rich pictures which can then guide the development of both learning experiences and technology alike.

Plenary

Towards the end of the day there was a bit of time to hold an open discussion about some of the different themes that the presentations had exposed.  Many thanks to Amanda, Philip and Andrew for taking part.  Some of the themes that came to my mind were the issues of tools and technology, internationalisation, industry and employability, and student skills.  Points included that we need to be careful about our assumptions about the technology that students might have.  Another important point is that one way to differentiate between different institutions might be in terms of the technologies that they use (and also how they use them).

We were also reminded about something called the Stanford Machine Learning course, which provoked some debate about 'free' (which relates back to Mike Richards' earlier TU100 presentation), and we were all directed towards the QAA Distance Learning precepts (many thanks to Richard Howley for bringing this to our attention).

Summary

All in all, it was a fun day!  There were loads of questions asked following each of the sessions and much opportunity for talk and debate in between.  I have to confess I was very relieved when the tea, coffees and sandwiches arrived on time, so thanks are extended to the Open University catering group.

It's tough, for me, to say what the highlight of the day was, due to the number of very interesting, thought-provoking presentations.  I certainly feel that there is always an opportunity to learn lessons from each other; it is clearly apparent that there are many different ways to approach distance education.  Whilst there are many differences between institutions, similar issues are often grappled with, such as how best to make use of technology and how to ensure that students are offered the best possible level of support.


