Christopher Douce

South East of England Associate lecturer conference: Kent College

Edited by Christopher Douce, Monday, 24 Mar 2014, 14:14

Twice a year, Open University associate lecturers have an opportunity to attend regional development events.  These conferences offer tutors training sessions on a range of topics, from changes in university policy through to the best ways to use technology.

Each event has its own slightly different character.  This blog post is a simple overview of an event that I recently attended at Kent College.  In fact, I think I remember visiting Kent College to attend my first ever tutorial, which was run by my then mentor, not long after I started as an associate lecturer.  I remember getting quite lost amongst a number of different buildings and being in quite a gloomy room.  Things have changed: Kent College was unrecognisable.  Old buildings had been demolished to make way for new modern ones.  This, however, wasn't the only surprise.

Teaching through drama

Not long after arriving, we were all gently ushered into a large theatre.  We could see a number of tables set out at the front and I immediately expected to endure a series of formal presentations about changes to the structure of the university, or an update about student registrations, for example.   Thankfully, I was disappointed. 

From stage left and right, actors suddenly appeared and started to scream and shout.  It immediately became apparent that we were all in the middle of a theatre production which was all about teaching and learning.  We all watched a short twenty-minute play depicting a tutorial, in which we were presented with some fundamentally challenging situations.  The tutorial, needless to say, was a disaster.  Things didn't go at all well, and everyone seemed to be very unhappy.  Our hapless tutor was left in tears!

When the play had finished and we were collectively shocked by the trauma of it all, we were told that it would be restarted.  We were then told that we should 'jump in' and intervene to help correct the pedagogic disaster that we were all confronted with.  Every five or so minutes, colleagues put up their hands to indicate that they would like to take control of the wayward situation.  It was astonishing to watch for two reasons: firstly, the willingness with which people took on the situation, and secondly, the extensive discussions that emerged from each of the interventions.

Towards the end of the modified (and much more measured) play, I could resist no longer.  I too put up my hand to take on the role of the hapless tutor 'Rosie'.  My role, in that instant, was about communicating the details surrounding an important part of university policy and ensuring that the student (played by an actor) had sufficient information to make a decision about what to do.   It was an experience that felt strangely empowering, and the debates that emerged from the intervention were very useful; you could backtrack and run through a tricky situation time and time again.  The audience, sitting just a few metres away, were there to offer friendly suggestions.

If an outsider peered around the door and saw what was going on, it might be tempting to view all this activity as some form of strange self-reflective light entertainment.  My own view is very different: there is a big distance between talking about educational practice in the third person, i.e. discussing between ourselves what we might do, and actually going ahead and doing the things that could immediately make a difference.   A really nice aspect of the play was that the students (as played by actors) were all very different.  I'm personally very happy that I'm not tutoring on the fictional module 'comparative studies'!  This first session of the AL development conference was entertaining, enjoyable, difficult and insightful all at the same time.

Sessions

After the theatre production, we (meaning: conference delegates) went to various parallel sessions.  I had opted for a session that was partly about the students and partly about gaining more familiarity with the various information systems that tutors have access to (through a page called TutorHome).  I've heard it said time and again that the only constant in technology is change.  Since the OU makes extensive use of technology, the on-line portal that tutors use on a day to day basis is occasionally updated.  A face to face training session is an opportunity to get to know parts of our on-line world that we might not have otherwise discovered, and to chat with other tutors to understand more about the challenges that each of us face.

The second session that I attended was also very different.  Three research students from the University of Surrey presented some of their research on the subject of motivation in higher education.  There is, of course, quite a difference between the face to face study context and the Open University study context.  A presentation on methods and conclusions gave way to an extended (and quite useful) discussion on the notion of motivation.

One memory of this session is the question of how students might move from being strategic learners (completing assignments just to gain credit for a module or degree) towards a motivation that is connected with a deep fascination and enthusiasm for a subject.  There are a number of factors at play: the importance of materials, the way in which support is given and the role that a tutor can play in terms of inspiring learners.

I made a note about the importance of feedback (in response to assessments that had been completed and returned).  A really important point was that negative feedback can be difficult to act upon, especially if there is no guidance about what could be done to improve.  (This whole subject of feedback represents the tip of a much larger discussion, which I'm not going to write about in this blog).

In terms of inspiration, one useful tip that I took away from this final session was that the relevance and importance of a module can be highlighted if it can be connected to debates, stories and discussions that can be found in the media.  Although this is something that is really simple (and obvious), it sometimes takes conferences such as these to remind us of the really important and useful things that we can do.

Final points

All in all, a fun day!  From my own personal perspective, I enjoyed all the sessions but I found the theatre session particularly thought provoking - not just in terms of the points that were covered, but also in terms of the approach that was used.

Since I have no idea who is going to be reading this particular blog post (not to mention all the others I've written!), I guess I'm primarily writing for other OU tutors who might accidentally discover these words.  If you are a tutor, my overriding message would be: 'do go along to your regional conferences if you can make it - they are really good fun!'

If you're a student with the university, I guess my message is that there are many of us working behind the scenes.  We're always trying to do the best that we can to make sure that you're given the best possible learning experience.  Another point that I must emphasise is that the instances of interaction with tutors are really important and precious (for student and tutor alike).  So, if you're a student, my message is: 'do go along to any face to face tutorials or day schools that might be available as a part of your module - there is always going to be something that you'll be able to take away'.

Christopher Douce

Animal-Computer Interaction: Seminar

Edited by Christopher Douce, Sunday, 4 Nov 2018, 11:09

As a part of my job I regularly visit the Open University campus in Milton Keynes.  On 5 June, I managed to find some time to attend a seminar by my colleague Clara Mancini.  Over the last couple of years, I had heard that Clara had been doing some research into the subject of Animal-Computer Interaction but we had never really had the opportunity to chat about her work.  Her seminar was the perfect opportunity to learn more about the various ideas and projects she was working on.

After a short introduction, Clara mentioned a number of topics from human-computer interaction (or 'interaction design').  These included the use of ambient technology, such as smart sensors that can be embedded into the fabric of buildings so that their environmental conditions and properties can change dynamically.  Other topics included the use of augmented reality, where additional information is presented on top of a 'real' scene; you might say that Google Glass is one product that makes good use of augmented reality.

Clara also spoke of the interaction design process (or cycle), where there is a loop of requirements gathering, designing and prototyping, followed by evaluation.  A key part of the process is that users are always involved.  Animal-Computer Interaction (ACI) is very similar to HCI; the biggest difference is the users.

History and context

It goes without saying that technology is being used, and continues to be used, to understand our natural world.  One area which is particularly interesting is that of conservation research, i.e. understanding how animals behave in their natural environment.  One approach to developing such an understanding is to 'tag' animals with tracking devices.  This, of course, raises some fundamental challenges.  If a device is too obtrusive, it might disrupt how an animal interacts within its natural environment.

Another example of the application of technology is the use of computer driven lexigraphic applications (or tools) with great apes.  The aim of such research is to understand the ways that primates may understand language.  In conducting such research, we might then be able to gain an insight into how our own language has evolved or developed.

Products and systems could be designed that could potentially increase the quality of life for an animal.  Clara mentioned the development of automated milking machines.  Rather than herding cows to a single milking facility at a particular time, cows might instead go to robotic milking machines at times when it suits them.  An interesting effect of this is that such developments have the potential to upset the complex social hierarchies of herds.  Technology has consequences.

One important aspect of HCI or interaction design is the notion of user experience.  Usability is about whether a product allows users to achieve their fundamental goals.  User experience, on the other hand, is about how people feel about a product or a design.  A number of different user experience goals have emerged from HCI, such as whether a design is considered to be emotionally fulfilling or satisfying.  Interaction designers are able to directly ask users their opinions about a particular design.  When it comes to designing systems and devices for animals, asking opinions isn't an option.  Clara also made the point that, in some cases, it's difficult even for us humans to give an opinion.  In some senses, by considering ACI, we force ourselves to take a careful look at our own view of interaction design.

Aims of ACI

Clara presented three objectives of ACI.   Firstly, ACI is about understanding the interaction and the relationship between animals and technology.  The second is that ACI is about designing computer technology to give animals a better life, to support them in their tasks and to facilitate or foster intra and inter species relationships.  The third is to inform development of a user-centred approach that can be used to best design technology intended for animals. 

Clara made the very clear point that ACI is not about conducting experiments with animals.  One important aspect of HCI is that researchers need to clearly consider the issues of ethics.  Participants in HCI research are required to give informed consent.  When it comes to ACI, gaining consent is not possible.  Instead, there is an understanding that the interests of participants should take precedence over the interests of science and society.

Projects

Clara described a system called Retriva (company website), where dogs can be fitted with collars that contain a GPS tracking device.  Essentially, such a product answers the simple wish: 'if only I could find out where my dog is using my iPhone'.  Interestingly, such a device has the potential to change the relational dynamics between dog owner and dog.  Clara gave an example where an owner might continually call the name of the dog whilst out walking; the dog would then use the voice to locate where the owner was.  If a tracker device is used, an owner might be less tempted to call out (since he or she can see where the dog is on their tracking app).  Instead of the owner looking for the dog, the dog ends up looking for the owner (since it can no longer rely on hearing the owner's voice).

Dogs are, of course, used in extreme situations, such as searching for survivors following a natural disaster.  Technology might be used to monitor the vital signs of a dog that enters potentially dangerous areas.  Different parameters might give handlers an indication of how stressed the dog might be.

As well as humanitarian uses, dogs can be used in medicine as 'medical detection dogs'.  I understand that some dogs can be trained to detect the presence of certain types of cancers.  From Clara's presentation I understand that the fundamental challenges include training dogs and attempting to understand the responses of dogs after samples have been given to them (since there is a risk of humans not understanding what the dog is communicating when their behavioural response to a sample is not as expected).

Another interesting strand concerns the ways in which technology might be used to improve animal welfare.  One project, funded by the Dogs Trust, will investigate the use of ambient computing and interactive design to improve the welfare of kennelled dogs.  Some ideas might include ways in which the animals might be able to control aspects of their own environment.  A more contented dog may give way to a more positive rehoming outcome.

Final points

Clara presented a question: 'why should we care about all this stuff?'  Studying ACI has the potential to act as a mirror to our own HCI challenges.  It allows us to think outside of the human box and potentially consider different ways of thinking about (and solving) problems.

A second reason connects back to an earlier example and relates to questions of sustainability.  Food production has significant costs in terms of energy, pollution and welfare.  By considering and applying technology, there is an opportunity to reconceptualise and rethink aspects of agricultural systems.  A further reason relates to understanding how to go about making environments more accessible for people who share their lives with companion animals, i.e. dogs that may offer help with some everyday activities.

What I liked about Clara's seminar was its breadth and pace.  She delved into some recent history, connected with contemporary interaction design practice and then broadened the subject outwards to areas of increasing prominence (welfare) and importance (sustainability).  There was a good mix of the practical (the challenges of creating devices that will not substantially affect how an animal interacts within their environment) and the philosophical.  The most important 'take away' point for me was that there is a potential to learn more by looking at things in a slightly different way.

It was also interesting to learn about collaborations with people working in different universities and disciplines.  This, to me, underlined that the boundaries of what is considered to be 'computing' are continually changing as we understand the different ways in which technology can be used.

Acknowledgements:  Many thanks to Clara for commenting on an earlier part of this blog.  More information about Clara's work on Animal-Computer Interaction can be seen by viewing an Open University video clip (YouTube).

Christopher Douce

BCS Lecture: The Power of Abstraction

Edited by Christopher Douce, Friday, 10 Aug 2018, 14:41

When I was a graduate student at the University of Manchester (or the bit of it that was once known as UMIST) I was once asked to show some potential computer science students around the campus.  At the end of the tour I ushered them to a lecture which was intended to give the students a feel for what things would be like if they came to the university.

The lecture, given by one of the faculty, was all about the notion of abstraction.  We were told that this was a fundamental concept in computing.  In some respects, it felt less like a lecture about computing and more like a lecture about philosophy.  I had never been to a lecture quite like it and it was one that really stuck in my mind.  When I left the lecture, I thought, 'why didn't I have this kind of lecture when I was an undergraduate?'  As an undergrad I had spent many an hour creating various kinds of computer programs without really being told that there was an essential and fundamental idea that underpinned what I was doing.

When I saw the British Computer Society (BCS) advertising a lecture that was about the 'power of abstraction', I knew that I had to try to make time to come along. The lecture, by Professor Barbara Liskov, was an annual BCS lecture (the Karen Spärck Jones lecture) that honours women in computing research.

All this sounds great, right?  But what, fundamentally, is abstraction?  An 'abstract' at the top of a formal research paper says, in essence, what it contains.  Abstraction, therefore, can be thought of as a process of creating a representation of something, and that something might well be a problem of some kind.  Admittedly, this sounds both confusing and vague...

Barbara began her lecture by stating that abstraction is the basis of how we implement computer software.  The real world is, fundamentally, a messy place.   Since computers are ultimately mathematical machines, we need a way to represent problems (using, ultimately, numbers) so that a computer can work with them.  As a part of her lecture, Barbara said that she was going to talk through some developments in the way that people (or computer programmers) could create and work with abstractions.  I was intrigued; this talk wasn't just about a history of programming languages, it was also a history of thought.

So, what history was covered?  We were immediately taken back to the 1970s.  This was a period in computing history where the term 'software crisis' gained currency. One of the reasons was that it was becoming increasingly apparent that creating complex software systems was a fundamentally difficult thing to do.  It was also apparent that projects were started, became excruciatingly late and then abandoned, costing astronomical amounts of money. (It might be argued that this still happens today, but that's a whole other debate which goes beyond this pretty short blog post).

One of the reasons why software is so fundamentally hard to create is that it is 'mind stuff'.  Software isn't like a physical artefact or product that we can see. The relationships between components can easily become incredibly complicated which can, in turn, make things unfeasibly difficult.  Humans, after all, have limited brain capacity to deal with complexity (so, it's important that we create tools and techniques that help us to manage this).

We were introduced to a number of important papers. The first was by Dijkstra, who wrote a letter to the Communications of the ACM entitled 'Go to statement considered harmful'.  'Goto' is an instruction that can help to create very complicated (and unfathomable) software very quickly.  Barbara described the difficulty very clearly. One of the reasons why software is so hard is that there is a fundamental disconnect between how the program text might be read by programmers and how it might be processed or executed by a machine.  If we can create program representations that try to bridge the difference between the static (what the text says should happen) and the dynamic (what actually happens when software does its stuff), then things would be a whole lot easier.

Another paper that was mentioned was Wirth's 'program development by stepwise refinement'. Wirth is famous for the design of two closely related languages: Pascal and Modula-2. It certainly is the case that it's possible to write software without the 'goto' instruction, but Barbara made the interesting point that it's also possible to write good, well-structured software in bad languages (providing that you're disciplined enough). The challenge is that we're always thinking about trade-offs (in terms of program performance and code economy), so we can easily be lured into doing clever things in incomprehensible ways.

Barbara spoke about the importance of modules whilst mentioning a paper by Parnas entitled 'Information distribution aspects of design methodology'. One of the great things about modules, other than that they can be used to group bits of code together, is that they enable the separation of the implementation and the interface.   This reminded me of some stuff from my undergrad days and time spent in industry: modules are connected to the term 'cohesion'.  Cohesion is, simply, the idea that something should do only one thing.  A function that has one name and does two or more things (that are not suggested in its name) is a recipe for confusion and disaster.  But I fear I'm beginning to digress from the lecture and onto one of my 'coding hobbyhorses'.
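To make the idea of separating an interface from its implementation a little more concrete, here is a minimal sketch in Python (my own illustration, not something from the lecture).  Callers rely only on a small, cohesive set of operations; the fact that a list is used internally is a hidden detail that could be changed without affecting any caller.

    class Stack:
        """A last-in, first-out collection; the interface hides the representation."""

        def __init__(self):
            self._items = []          # hidden implementation detail

        def push(self, item):
            self._items.append(item)  # one job: add an item

        def pop(self):
            return self._items.pop()  # one job: remove and return the newest item

        def peek(self):
            return self._items[-1]    # one job: look at the newest item without changing anything

    if __name__ == "__main__":
        s = Stack()
        s.push(1)
        s.push(2)
        print(s.pop())                # prints 2; callers never touch the underlying list

Each method does exactly one thing that its name suggests, which is really all that 'cohesion' asks for.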

Through a short mention of a language called Simula-67 (Wikipedia) we were then introduced to a paper by Liskov and Zilles entitled, 'programming with abstract data types'.  We were told that this paper represented a sketch of a programming language which eventually led to the creation of a language called CLU (Wikipedia), CLU being short for Clusters.

There is one question Barbara clearly answered, which is: why go to all the trouble of writing a programming language?  It's to understand whether an idea works in practice and to understand some of the barriers to performance.  Also, whenever a language designer describes a language in natural language there are always going to be some assumptions that the compiler writer must make. Only by going through the process of creating a working language are language designers able to 'smoke out' any potential problems.

Just diverting into programming language speak for a moment: CLU implemented static type checking, used a heap, and didn't support concurrency, the goto statement or inheritance.  What it did implement was polymorphism (through generics), iterators and exception handling.
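For readers who haven't bumped into these terms before, the following sketch shows roughly what those three features look like in a modern language.  This is Python rather than CLU, and the names are my own, so treat it as an illustration of the ideas rather than a sample of the language Barbara described.

    from typing import Generic, Iterator, List, TypeVar

    T = TypeVar("T")

    class Queue(Generic[T]):
        """A generic (parameterised) abstract data type: the element type T is a parameter."""

        def __init__(self) -> None:
            self._items: List[T] = []

        def enqueue(self, item: T) -> None:
            self._items.append(item)

        def dequeue(self) -> T:
            if not self._items:
                # An error condition is signalled as an exception rather than a special value.
                raise IndexError("dequeue from an empty queue")
            return self._items.pop(0)

        def __iter__(self) -> Iterator[T]:
            # An iterator lets callers walk the elements without knowing the representation.
            yield from self._items

    if __name__ == "__main__":
        q: Queue[str] = Queue()
        q.enqueue("a")
        q.enqueue("b")
        for item in q:              # iteration
            print(item)
        try:
            Queue[int]().dequeue()
        except IndexError as err:   # exception handling
            print("caught:", err)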

Barbara also mentioned a very famous language called Smalltalk, developed by Alan Kay and his colleagues at Xerox PARC.  Different developments at different times and in different places have all influenced the current generation of programming languages.  Our current object-oriented languages enable programmers to define abstractions, or representations of a problem, in a way that wasn't possible during the earlier days of software.

Research directions

Barbara mentioned two research topics that continue to be of interest.  The first was the question of what might be the most appropriate design of a programming language for novices.  At various times, these have been BASIC (which introduced the dreaded goto statement), Pascal, and more recently Java.  The challenges of creating a language that helps learners develop computational thinking skills (Wikipedia) include taking account of programming language design trade-offs, such as ease of use vs. expressive power and readability vs. writeability, and how best to deal with modularity and encapsulation.

Another research subject is languages for massively parallel computers.  These days, PCs and tablets, more often than not, contain multiple processor cores (which means that they can, quite literally, be doing more than one calculation at once).  You might have up to four cores, but how might you best design a programming language that more efficiently allows you to define and solve problems when you might have hundreds of processors working at the same time?  This immediately took me back to my undergrad days when I had an opportunity to play with a language called Occam (Wikipedia).

There was one quote from Barbara's lecture that stood out (for me), and this was when she said, 'you don't get ideas by not working on things'. 

Reflections

I should say at this point that I haven't done Barbara's lecture justice.  There were a whole lot of other issues and points that were mentioned which I haven't touched on.  I really enjoyed being taken on a journey that described how programming languages have changed.  I liked the way that the challenges of coding (and the challenge of using particular instructions) led to discussions about modules, abstract data types and then, finally, to object-oriented programming languages.

It's also possible to take a broader perspective to the notion of abstraction, one that has been facilitated by language design.  During Barbara's lecture, I was mindful of two related subjects that can be strongly connected to the notion of abstraction.  The first of these is the idea of design patterns.

Design patterns (Wikipedia) take their inspiration from architecture. Rather than design a new building from scratch every time you need to make one, why not reuse a pre-existing design that has already solved some of the problems that you might come up against?  There is a strong parallel with software: developers often have to solve very similar problems time and time again.  If we have a template to work from, we might arguably get things done more quickly and cheaply.

Developers can use patterns to gain inspiration about how to go about solving common problems.  By using well understood and defined patterns, communication between developers can be enhanced, since abstract concepts can be readily named; patterns permit short-cuts to understanding.
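As a small (and entirely hypothetical) illustration of how a named pattern acts as a short-cut, here is a minimal sketch of the Observer pattern in Python; saying 'the logger and the alarm observe the sensor' conveys all of this wiring in a single phrase.  The names are my own and aren't taken from any particular library.

    class Subject:
        """Something that can be observed; observers register callbacks and are told about events."""

        def __init__(self):
            self._observers = []

        def attach(self, callback):
            self._observers.append(callback)

        def notify(self, event):
            for callback in self._observers:
                callback(event)      # every registered observer hears about the event

    if __name__ == "__main__":
        sensor = Subject()
        sensor.attach(lambda event: print("logged:", event))
        sensor.attach(lambda event: print("alarm raised:", event))
        sensor.notify("temperature high")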

In some cases, patterns can be embedded into pre-existing code that can be used by developers to kick-start a development.  This can take the form of a framework, software code that solves well known problems that ultimately enables developers to get on and solve the problems that they really need to solve (as opposed to dealing with stuff such as reading and writing to databases).

Abstraction has come a long way in my own very short career as a developer. One of the biggest challenges that developers face is how to best break down a problem into structures that can be represented in a language that a machine can understand.  Another challenge lies with understanding the various tools that developers now have at their disposal to achieve this.

Note: The logo at the top of the blog is used to indicate that this blog relates to a BCS event and this post is not connected with the BCS in any other way. All mistakes and opinions are my own, rather than that of the OU or the BCS.

Christopher Douce

Journey: London to Lincoln

Edited by Christopher Douce, Sunday, 14 May 2017, 10:01


Riding from London to Lincoln on a motorbike is a blast.  I decided to be sensible and set off after the rush hour but I just couldn't wait.  I edged out into the London traffic at nine in the morning and quickly realised that I had made a mistake.  After about half an hour of wrestling with traffic, I was on an A-road heading towards the London orbital motorway.  Fifteen minutes later, I was circumnavigating a large chunk of London and heading towards the M11; a route that I hadn't done before.

The reason for my trip up to Lincolnshire was to visit my parents.  It was the third time I had done this trip by motorbike, and on this occasion I decided that I wanted to go on a journey that I had promised myself I would take ever since I started to learn more about the history of computing.

Lincoln is a city that I know well.  I spent quite a lot of time there, staying at my parents' house whilst I got my head down to spend many hours doing some computer programming for a research project I worked on a couple of years ago.  During this time I also gained my motorbike licence.  I used to spend hours riding to and from Lincoln, gaining some kind of perverse pleasure if I became snarled up in a traffic jam (since it gave me the opportunity to practise clutch control and feathering the back brake).  Gradually, some of the city's secrets revealed themselves to me; the links between the old and the new - the contrast between the imposing medieval cathedral and ancient castle juxtaposed against modern industrial units and trading estates.

The M11 was a dull but quick road.  Within a couple of hours I had skirted past Cambridge, a city that I've been to a number of times but barely know.  As I rode I made a mental note that I need to return.  When it comes to the history (and the future) of the computer, Cambridge is a fundamentally important place.  My objective, at that moment, was to get to Lincoln and leave Cambridge for another day.

The M11 soon became the A1 and within hardly any time at all, I discovered the exit I was looking for: Stamford.  A gentle ride through this pretty market town soon gave way to quieter roads, the kinds of roads that motorcyclists love; roads that are gently undulating and sweep from left to right.  Not only were they undulating, they were also fairly empty, there was no rain and very little wind: perfect.  Small towns and villages came and went, my destination becoming ever nearer.  All in all, the journey took about five hours, including two stops (one for fuel, another for coffee).

After two days of catching up with my parents, the time had come; I was going to take a short trip to explore some places I had read about, had ridden past and had never properly seen.  I donned my protective 'gubbins' and set off across the fens.  There is this glorious road between the village where I was staying and Lincoln.  It's dead straight, with wide distant fields on either side - you can see for miles.

My objective was to get to the heart of the city and park in a place where I had seen bikers parking.  When I got to the city, I blundered my way through the one way system twice before I bagged a space, vacated by a departing Ducati.  My first objective was to figure out where Silver Street was.  I looked up at a nearby street sign.  I had accidentally (or unconsciously) parked on Silver Street!  My next objective was to find number 34, the birthplace of George Boole (1815-1864).

If you're a computer scientist, or just a casual user of a spreadsheet or database, you will quite likely have stumbled across his name.  The terms 'boolean expressions' or 'boolean conditions' have been, quite obviously, derived from his name (in the same way that the word algorithm can be traced back to the name of a Persian mathematician).  I have to admit that I've only just started to scratch the surface of the history of Boole.  George's father, John, was a cobbler.  Apparently he was somewhat distracted by other pursuits, particularly mathematics and science.

I walked the entire length of Silver Street to try to find number thirty-four but quickly became confused; the street numbers were few and far between.  There seemed to be no discernible pattern.  I adopted the age old tactic of 'appearing to be confused' and barrelled into the entrance of an estate agent.  'Excuse me, mate, is this number thirty-four?' I asked a smart looking man who was wearing a shirt and sporting a tie.  'This is number thirty-two... I have no idea where number thirty-four is, might be next door?'  I offered a smiley thank you and returned to the street.

'Hello... erm, is this number thirty-four?', 'Yes!' came the delighted reply from a nice lady who was sat at a computer.  Number thirty-four, like number thirty-two was an estate agency.  'I've found it!' I exclaimed.  I took a step back and cast my eyes around the office-like interior, as if I was looking for some kind of shrine to the great man.  Instead, I saw a photocopier. 

The nice lady was bewildered.  I explained that where she worked just happened to be the birthplace of a famous mathematician (which appeared to bewilder her even more).  I was tempted to explain my enthusiasm by starting to talk about the importance of Boole and the history of the computer, but I felt that it was neither the time nor the place since I obviously wasn't interested in buying a house.  Realising that my first quest was coming to an end, I began to feel that I was making a bit of a nuisance of myself.  Before I went, I asked for their business card (to gain proof that their estate agency really was number thirty-four).  Sure enough, I had found number thirty-four Silver Street.

Boole invented something called Boolean algebra; I know his work in terms of Boolean logic, having studied it at college during my vocational course in computing.  He proposed a form of algebra that works with two states: one or zero, or true or false.  The reason why Boole's work became so important is that computers represent everything using numbers which are made from these two states.  Sound, music, images, video, computer software, documents, instructions to turn on burglar alarms - pretty much anything you can imagine can, ultimately, be represented using just 'on' and 'off'.  Strings of these states form numbers: the bigger the number of 'bits' (which are, in essence, Boolean on-off states), the larger the numbers that can be stored and moved around in a computer.

But why use those two states?  The answer is pretty simple: it makes the electronics simple.  By going with the simplest possible representation it's then possible to do increasingly complicated stuff with a high degree of reliability.  One day, I hope to write something about electronic machines that worked with the kinds of numbers that humans work with - but that would require a much longer journey than the one I'm writing about.

I'm simplifying things terribly here (since I'm not a mathematician and I'm writing about subjects that are slightly outside of my area of expertise), but I think it's safe to say that Boole's work on logic is so fundamental that without it we wouldn't have computer processors or logic circuits.  Boole, ultimately, created the tools of thought that allowed us to work with logic states.  In software terms, an on-off state can be considered akin to an atom in the physical world.
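For anyone who fancies playing with these ideas, here is a small worked example in Python (purely illustrative, and mine rather than Boole's own notation): two-state logic, and a string of on-off states read as an ordinary number.

    on, off = True, False

    # Boole's algebra: combining two-state values with AND, OR and NOT.
    print(on and off)     # False
    print(on or off)      # True
    print(not on)         # False

    # A string of eight on-off states ('bits') read as a binary number...
    bits = "01100001"
    print(int(bits, 2))   # 97 - which happens to be the code many computers use for the letter 'a'

    # ...and more bits means more representable values: n bits give 2**n of them.
    print(2 ** 8)         # 256 values from eight bits
    print(2 ** 16)        # 65536 values from sixteen bits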

Boole's birthplace wasn't the only place I wanted to visit.  After saying my goodbyes to the nice estate agent people, I had another quest: to go and visit the school that Boole founded.  I walked to the end of Silver Street, crossed a road, walked a bit, then got confused... and only then consulted my GPS-enabled mobile phone.  Within minutes, I was walking up a steep flight of stairs towards Lincoln's medieval cathedral.  It struck me that I had probably found a path that hadn't changed for a couple of hundred years; some of the steps had been visibly worn down over time.  Looking upwards, I could see the cathedral through a small archway in the distance.


When I was at the top, standing in the shadow of the cathedral, I consulted my phone again and figured out where I needed to go.  I knew where I was: I was on a road that I had ridden along many times during my bike training.  It runs from the bottom of the hill (where the industrial and retail part of the city lies) to the ancient part of the city.  The top bit can get a bit exciting, since it's quite a fast road and two lanes merge into one before taking a route past the cathedral.   Within moments, I had arrived at my second destination.  I peered through the railings at a lovely looking house and I soon found a plaque on the wall that indicated that I was in the right place.  Here's what it said: 'George Boole, father of modern algebra. Author of the laws of thought and first professor of mathematics at university college, Cork, was born in Lincoln and established an academy in this house c. 1840'.  Satisfied, I turned around, retraced my steps and returned to my bike.

Five days later it was time to return to London.  I set off ridiculously early, hoping to avoid as much traffic as I could.  The ride through Lincolnshire was beautiful.  There were moments when I could see where dew had touched the undulating roads in the distance; roads that appeared as ribbons of silver.  I was touched not only by the physicality of negotiating them, but also awestruck by the light and the experience that the roads were presenting to me.  By the time I had got to London, everyone was fully awake and the motorways that took me back to South East London were pretty solid.

I've now got some more work to do to answer a number of different questions: what was the time in which Boole lived really like?  Who else did Boole know?  What kind of work did he do after he left Lincolnshire?  How exactly did he influence other mathematicians, and did he make other contributions to mathematics (with a view to understanding its connection with computing) other than the ones that I've already touched on?  Time, of course, is the challenge: there are so many other questions out there that are interesting!

I've also got some plans for the next journey. I'm going to stick around in South East London for a bit and then cross the river for another Babbage related adventure.  I'm going to be spending quite a lot of time in London before venturing further afield. 

Christopher Douce

ESRC seminar: inclusion, usability and difference


On 22 April 2013 I managed to find a bit of time to attend a seminar that touched upon some of the themes that I recently blogged about, namely, the way in which technology can be made available to (and can be used to help) different groups of users.

During the day there were a total of five presentations, each of which touched upon themes that continue to be a strong interest of mine: accessibility, usability, and the way in which technology can potentially help people.  Like so many of these blogs, I'm going to do a bit of a write-up of each presentation, and then conclude with a set of thoughts and points which emerged from the closing discussion.

Older people and on-line social interactions

The first talk of the day was by Shailey Minocha who talked about a project called OCQL (project website) that has been exploring how technology may be able to be used to help and support older people.  If you're interested, I've written a brief blog summary of an earlier workshop that Shailey and her colleagues ran.

Some of the issues that the project aims to explore are the different motivations for being on-line, understanding various advantages and disadvantages and corresponding potential risks and obstacles. Another aspect of the project was to explore whether we might be able to offer advice to designers to allow them to create more usable systems.

Shailey touched upon challenges and dilemmas that users may face.  One challenge is how we might help to create formal and informal support networks to enable users to not only get online in the first place, but also help users to develop their technology skills.  One comment that I noted was that 'buying a [internet] connected computer is easy, it's continuing to use it that is difficult'.

Shailey gave us a flavour of some preliminary findings.  A simple motivation for getting connected is a desire to keep in touch with people, which is connected with the advantage that certain aspects of technology have the potential to reduce social isolation.  Some of the obstacles included the need to gain technical support and the challenges that lie with understanding certain concepts and metaphors that are a necessary part of being on-line.  The perceived risks included fears about a loss of privacy and concerns about knowing which organisations or products to trust.  The perceived disadvantages included the fear that technology might take over users' lives, taking them away from other events and activities that are important to them.

I remember a really interesting anecdote of a user who started to use an iPad.  The device was used so much (to keep in contact with distant friends and family), that this took away from time socialising with other people who lived nearby.

Shailey also left us some recommendations.  Training, it was suggested, should be personalised to the needs of individuals.  One-off training sessions are not sufficient.  Instead, training should take place over a longer period of time. 

For those who are interested, here are two links to some related resources.  The first is a link to a paper entitled, Conducting empirical research with older people (ORO repository), to be presented at a human-computer interaction (HCI) conference.  The second is a set of web resources (Delicious) that have been acquired during the project.

Towards the end of the presentation I noted two really interesting questions.  The first was, 'to what extent is the familiarity of technology a temporary problem?', and the second question (which is related to the first) is: 'putting age as an issue to one side, how can we all prepare ourselves to become familiar with and work with the next big technological innovation that may be on the horizon?'

The haptic bracelets

Simon Holland, from the department of Computing and Communication introduced us to devices known as the Haptic Bracelet (Music Computer Laboratory).  In essence, a haptic bracelet is a wearable device that you can put on your wrist or ankle.  The word haptic, of course, relates to your sense of touch.  The devices can be controlled so that they can vibrate at different frequencies or produce rhythms.  They also contain accelerometers which can be used to detect movement and gestures. 

My first question was, 'okay, so all this stuff is pretty cool, but what on earth can it be used for?'  Simon had clearly anticipated this thought and provided some very compelling answers.  Fundamentally, it can be used with the teaching of music, specifically with the teaching of rhythm, or drumming.  Drum kits have pedals; drummers use both their hands and their feet.  Simon told us that he imagined a device that was akin to an iPod: a form of music player that could help musicians to more directly (and immediately) learn and feel rhythms.  When I started to think about this, I really wanted one - I could imagine that a haptic iPod could add a whole new dimension to the music which I listen to as I travel across London on the tube.

It's one thing listening to a piece of music through headphones; it's something totally different if you're feeling beats and vibrations through the same limbs that could be creating exactly the same rhythm if you were sitting at a drum kit.  I've noted the following quote that pretty much sums it up:  'at best, it goes through your two ears... [but] how do you know what limb is doing what?!'  All this can be linked to a music education approach called Dalcroze Eurhythmics (Wikipedia), which was something totally new to me.  Something else that I hadn't heard of before is sensorimotor contingency theory (which I don't know anything about, but whatever it is, it sounds very cool!)

Early on in his talk, Simon suggested that these devices have the potential to be an assistive technology.  One area in which these devices might be useful is with gait rehabilitation, i.e. by providing additional feedback to people who are trying to re-learn how to walk following a brain injury or stroke.  Apparently a metronome is used to help people to move in time with a rhythm, which is a useful technique to regain (and guide) rhythmic motor control.  One of the advantages of using haptic bracelets is that the responses or feedback they could provide could be more dynamic.  Plus, due to the presence of an accelerometer, different feedback might be presented in real-time - but this is mostly conjecture on my part; this is something that is a part of on-going research.

During the final part of Simon's slot, we were given an opportunity to play with some of the bracelets.  Pairs were configured in such a way that we were able to 'send' real-time rhythms wirelessly to another user.  When we 'tapped' on a table, the same 'tap' was picked up by someone else who was wearing another bracelet.

We were introduced to other (potential) uses.  These included sport, gaming, and helping with group synchronisation (or learning) in dance.  Fascinating stuff!

Digital inclusion in the era of the smartphone

Becky Faith is a doctoral student at the Open University who spoke about some of her research interests, and it was all pretty interesting stuff.  One of her areas of interest is how technology (particularly the smartphone) can be used as a means of support for vulnerable people (and how it might be used to gain support from others). 

During Becky's talk I was introduced to a range of new terms, phrases and frameworks that I hadn't heard of before, such as capability theory (which might relate to what rights people may have but are not aware of) and technofeminist theory.   I also noted questions that related to the roles of the private sector versus the state in facilitating access to technology.  This reminded me of one of the drivers for good interaction design and usability: that it can lead to higher levels of productivity, more effective sales and lower costs.  Since goods and services are now on-line, facilitating digital inclusion also, fundamentally, means good business sense.

Becky's session was also very interactive.  We were given a challenge: we had to find out a very specific piece of information using our smartphone (if we had one).  This was to find the name of our MEP.  We were also asked how we might feel if this was our only device.  I, for one, wouldn't be very happy.  I (personally) feel more comfortable with a keyboard that moves than one that is only visible on a screen.

The activity gave way to a debate.  Some users will be faced with fundamental access challenges.  These could be thought of in terms of the availability of devices or of signal coverage.  Ultimately, there is the necessity of understanding the needs of the users, their situations and the kinds of devices and equipment they may have access to.  A thought provoking session.

Careware

Andrew Stuart from Careware (company website) started his presentation by describing a question that he had asked himself, or he had been asked by someone else (I didn't note down the exact wording!).  The question was, 'why can't I find my dog using my iPhone?'.  Dogs go missing all the time.  The company that Andrew established created a GPS dog collar, which allowed dogs to be found using iPhones.  A great idea!

Andrew's company later expanded to create devices, such as a tracking belt, which could be used with vulnerable people.  Tracking dogs is one thing, but tracking people is a whole other issue.  The idea of people wearing tracking devices obviously raises serious ethical issues, but the necessity for privacy needs to be balanced against the desire to ensure that vulnerable people (who are sometimes family members) are cared for and looked after.  It is argued that personal tracking devices can help some people to maintain their independence whilst giving family members not only peace of mind but also new ways to offer personal support.  Users of a personal tracker can, for instance, press a button to alert other people of difficulties or problems.  A GPS belt (instead of a collar) is a device that is very different from a mobile phone (which, arguably, with its in-built GPS facilities, can almost do a very similar task).

Andrew's presentation touched on a number of different issues, i.e. centralised telemedicine through call centres versus the use of individual devices for families, and the roles that local authorities may be able to play.  There were also hints of future developments, such as the use of accelerometers to potentially detect falls.

Open University modules such as Fundamentals of Interaction Design touch upon subjects such as wearable computing or wearable interfaces.  It was interesting to see that two presentations demonstrated two very different types of wearable devices - and both presentations were about how they can be used to help people, but in very different ways.

Exploring new technologies through playful peer-to-peer engagement in informal learning

The final presentation of the day was by Josie Tetley, from the Health and Social Care faculty.  Josie spoke of an EU funded project called Opt-In which 'aims to explore if and how new technologies can improve the quality of life of older people' and investigates 'whether existing pedagogic approaches are the best way of enabling older people to learn new technologies'.

Getting people to play with technology was one of the topics that was mentioned, both in a research lab and as a part of informal social settings.  Josie also spoke about the different research methods that were used, such as questionnaires, diaries and semi-structured interviews.  One point that I noted was that some technologies can lead to obvious instances of deskilling, such as an over-reliance on satellite navigation systems.

One preliminary finding was that some users are particularly interested in certain applications, notably video telephony applications such as Skype or FaceTime (Wikipedia).  Technology, it was also said, can be readily accepted.  I also noted a really good phrase: good technology transcends all age groups.

Summary

All in all, a very interesting event.  I have to say that I wasn't quite sure what I was letting myself in for.  I didn't really know too much about what was on the agenda before the morning of the seminar.  I was more guided by the words of the title that sparked an interest.

The most significant point that I took away from the day was that my conception of what an assistive technology was had been fundamentally broadened.  Another take away point related to the importance of considering the types of learning that are appropriate to different user groups. 

It was also great fun to hear about different research projects and gain an awareness of new ideas and frameworks.  Learning about subjects that are slightly outside our own discipline has the potential to be both rewarding and refreshing.

Christopher Douce

Journey: Westminster to Walworth

Edited by Christopher Douce, Monday, 28 Oct 2013, 13:39

A number of months ago I wrote a blog post about buying a smartphone (I know what you're thinking: this sounds pretty boring!)  The post ended with a question: 'where did this device come from?'  The device I'm referring to is, of course, a computer.  Such a simple question can be answered in very different ways, and one way to answer it is to think about the people who played an important role in either thinking about or creating one.

This is a follow-up post about a trip to a part of London that I had never visited before, but one that I have known about for quite some time.  My quest was simple: to seek out the birthplace of someone who is known as the 'great uncle' of computing.  There are, of course, many other stories and journeys that can be connected to the one that follows, and I hope that this is going to be one more in a very long series of blog posts.

A journey in reverse

April has been a month of contrasts.  The first few months of the year were absolutely freezing, but this day was enticing.  It was a day on which I couldn't resist exploring a bit of my own city; taking a journey that I had been threatening to make ever since winter had descended with certainty.  I exited Westminster underground station and looked skyward, through glorious morning sunshine, quickly finding Big Ben and the Houses of Parliament.  In some respects, it seemed like an appropriate starting point, since government had played an important role in the life of Charles Babbage, a Victorian gentleman, mathematician, engineer and (if we can stretch it this far) raconteur.  Babbage is famous for proposing and partially designing mechanical calculating engines that echo aspects of the inner workings of today's modern computers.

The purpose of this blog post isn't so much to talk about Babbage (although he is the reason why I am writing in the first place), but more to record the trip.  When it comes to Babbage, I've got numerous books and notes that I've read and re-read, and I think it'll take time to understand the fine detail and significance of his inventions.  In some respects, this is a journey of contextualising, or understanding.

'Excuse me, sir... we want to take a photo...', said a voice behind me.  I peered into my smartphone, thumbing at a Google map, trying to figure out where I was.  A few paces away, the tourist had gained her view of the London Eye, and I was off, gingerly taking my first steps towards a new (albeit modest) adventure.

Within five or six minutes of walking, I had pieced another part of London together in my head.  My knowledge of the city is fragmented across three dimensions: distant childhood memories, an improving knowledge of the underground map, and a misunderstood knowledge of the Monopoly board.  I recognised streets that I had previously travelled through whilst riding my motorbike towards my office, traversing them in a different direction.  I soon knew where I was heading: I was going towards the Elephant.

Within ten minutes, I found myself at the Elephant and Castle, a bustling inner city area served by the Bakerloo and Northern underground lines, a train station with services heading north to Kentish Town, and bus routes I had never heard of.  Remembering a series of photographs that had featured in the London Evening Standard newspaper a couple of days before, I decided to try to find a scene that I remembered.  I dived into some walkways and emerged at a train platform that overlooked one of the most notorious housing estates of the 1960s: the Heygate estate.  I know next to nothing about architecture but I do know that the Heygate was one of a number of brutalist housing estates built during the 1960s and 70s.  Whilst on one hand there is a certain elegance and simplicity in its structures, on the other hand the structures are inhuman, stark and impersonal.  The impersonal nature was amplified since all the windows I could see were boarded up with steel shutters.  These, I thought, looking from the outside, were places to live in.  These flats didn't look like homes, and I'm sure I would have felt the same if I had visited when they were fully occupied.

I accessed the rail platform through the shopping centre.  Built in the 1960s, the shopping centre was showing its age.  In comparison to bright and airy modern malls, the Elephant's shopping centre was slightly claustrophobic.  Chain stores were the exception rather than the rule, which was something I liked.  On the second floor, I decided that a well-deserved cup of tea was overdue, so I popped into a relatively new Polish café I had visited once before.  Its functional manner, i.e. you had to clean your own table, seemed to be entirely in keeping with the Elephant's very functional shopping centre.  I approved.

After a few false starts, I walked past the Strata (Wikipedia) tower block, around a gentle curve in the road and onto the Walworth Road.  Within five or so minutes I had found what I had been looking for: a simple blue plaque commemorating the birth of Babbage, the 'grandfather of the computer', situated on the corner of Larcom Street.  Walking down Larcom Street I discovered another blue plaque, this time commemorating the birth of Michael Faraday and his work on electromagnetism.  Both plaques were on the side of what is now a clinic.

I took a couple of minutes to do some more exploring.  I really liked Larcom Street.  It offered a slight bend, and then revealed a quiet tree-lined road, filled with bay-fronted, three-storey Victorian terraced houses.  The hustle and bustle of Walworth Road disappeared into the background.  Parked cars aside, it felt as if I had stumbled into an oasis of history; a time warp.  Modernity came into view again when I arrived at the end of the street.  I saw modern flats on my right, recently constructed, and there was some building work going on, diggers gouging the ground in preparation for foundations.

Ten minutes later, I was back on the Walworth Road, astonished by its busyness and the single row of shops that seemed to go on and on and on.  With Larcom Street behind me, I caught sight of fast food establishments and the wonderfully eclectic East Street market which dates back, in one form or another, to the 1880s (as another blue plaque testified).  Stall holders had just about got everything ready for the day's trading by the time I had arrived.  I also accidentally found another blue plaque which celebrated the birth of another famous resident; Charlie Chaplin.

My journey home took a bit of time.  Walking back to the Elephant, I passed by a fire damaged museum, and then found a bus stop on the New Kent Road - the direction of home.  This wasn't a big or exciting adventure, but it was one that was fun and has made me slightly more aware of my own city.  Moving forwards, what I've got to do is continue with my reading about Babbage and take at least three more journeys.

The next one (about Babbage) will be to the town house where he not only dreamt of mechanical computers, but also built parts of them.  Then there's a trip to Greenwich, which relates to a key vector of inspiration that caused Babbage to start his lifelong quest to make a mechanical computer, and then a visit to South Kensington, where the remnants of his computing devices are currently housed.

Permalink Add your comment
Share post
Christopher Douce

Academic conduct symposium – Towards good academic practice (day 2)

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 23 Feb 2021, 18:57

This is the second post in a series of two about an academic conduct symposium that I attended at the Open University between 20 March and 21 March 2013.

The difference between the first day of the conference and the second was that the first day was more focussed towards the student and the essential role of the associate lecturer.  The second day (in my opinion!) seemed to be more focussed towards those who have the role of dealing with and working with academic conduct issues. Below is a brief summary of the three workshop sessions, followed by some final reflections on the whole symposium.

Student perspectives on good academic practice

Pete Smith from the Faculty of Education and Languages was the facilitator for my first workshop of the day.  This session addressed a different perspective to all the previous workshops.  It aimed to ask the question: 'what does the published literature say about the student perspective [or 'views'] on academic conduct?'  Pete presented what was, in essence, a short literature review of the subject.  I was really struck by the wealth of information that Pete presented (which means that I'm only going to pick out a number of points that jumped out at me).  If you're interested in the detail of the research that Pete has uncovered (which is almost akin to a master's thesis), it might be a good idea to contact him directly.

Some key notes that I've made from the session include the point that learners can perceive themselves in different roles in terms of how they relate to issues of academic conduct.  There are also differences in perceived seriousness and attitudinal differences.  Factors such as topic knowledge, cultural influences, demographic variables, new technology and conflicting advice are all considered to play a part.

There are multiple reasons for academic misconduct: a genuine lack of understanding, attempts to gain greater levels of efficiency, temptation, and cultural differences and beliefs.

When looking more deeply at the research it was commented that there was a lack of robust evidence about the success of interventions.  We don't know what works, and also we don't have consistent guidance about how to begin to tackle this issue.  One important perspective is that everyone is different and knowledge and understanding of a learner is needed to make the best judgement about the most appropriate approach to take.

What resources are available?

This session was facilitated by Jenny Alderman from the Open University Business School and another colleague who works in the Academic Conduct Office.

One of the reasons why academic conduct is considered to be so important is that there is an important principle of ensuring that all students are given fair and equitable treatment.  Jenny reminded us that there are considerable costs in staffing the academic conduct office, running the central disciplinary and appeal committees and supporting the academic conduct officers.

An interesting debate that emerged from this session related to the efficacy of tools.  Whilst tools such as TurnItIn can be useful, it is necessary to take time to scrutinise the output.  There will be some clear differences between submissions for different faculties.  Some more technical subjects (such as mathematics) may lead to the production of assignments that are necessarily similar to one another.  This has the potential to generate false positives within plagiarism detection systems.

Key resources: code of practice for student assessment, university policy on plagiarism, developing good academic practice website (which was linked to earlier), and the skills for study website which contains a section entitled developing academic English (Skills for Study).

Other resources that could be useful include Time Management Skills (Skills for Study), Writing in your own Words (Skills for Study), Use of source Materials (Skills for Study) and Gathering Materials for preparing for your assignments (Skills for Study).

The library have also produced some resources that can be useful.  These include a video about avoiding plagiarism (which features 'Bob').  The library have some resources about digital literacy entitled 'being digital'.  There is also a plagiarism pathway (Being Digital, Open University Library), which contains a number of activities.  (At the time of writing, I hadn't seen these before - many of these resources were pretty new).

As an aside, I had some discussions with colleagues about the need to more fully embed academic English into either individual modules or programmes of study, and I was directed to a module entitled L185 English for Academic Purposes.  Two fundamental challenges that need to be overcome are those of will and resource.  This said, there are three sections of the L185 module that are available freely on-line through OpenLearn.  These are: Paraphrasing Text, Summarising Text and How to be a Critical Reader.

Since the workshop, I've also been directed towards a resource entitled, Is my English good enough?  This page contains a link to the English for OU study pages.

What works?

The final session, facilitated by Jonathan Hughes, was all about what interventions might successfully nurture good academic practice (and what we might be able to learn from student casework).

Connecting back to earlier debates surrounding the use of technology to detect plagiarism, the issue of spurious reports was discussed.  In instances where we are unsure of the situation, we were reminded that the right thing to do is to refer cases to the faculty academic conduct officer. 

I've noted that academic conduct is an issue of education and an important part of this is sharing the university view of what plagiarism is.  It is also connected with the judicious application of technology in combination with human judgement and the adoption of the necessary processes to ensure appropriate checks and balances.  (Again, all this is from the notes that I made during the event).

During this session I remember a debate about whether it was possible to create something called a 'plagiarism proof assignment'.  One contributor said, 'if you write a question and a quick internet search can find the answer, then it is a poor question'.  The point being that there is an intrinsic connection between academic conduct and good instructional design.

One question that arose was whether the university should be telling our students more about tools such as TurnItIn and Copycatch.  Another approach is, of course, to have students submit their own work through these detection tools and also permit them to see their reports (which is an approach that other institutions adopt). 

Final thoughts

This conference or symposium was very different to other conferences I've been to before.  It seemed to have two (if not more) main objectives.  The first was to inform other people within the university about the current thinking on the subject and to share more information about the various policies and procedures that the university employs.  The second was to find a space to debate the different conceptions, approaches and challenges which come with the difficult balancing act of supporting students and policing academic conduct.

In terms of offering a space that informs and facilitates debate, I felt the conference did a good job, and I certainly feel a bit more equipped to cope with some of the challenges that I occasionally face.  Moving forward, my own objective is to try my best to share information about the debates, policies and resources with my immediate colleagues. 

I came away with three take away points.  The first relates to the definition of what 'plagiarism' is.  It now strikes me that there are almost two different definitions.  One definition is the internal definition which acknowledges that students can both deliberately and inadvertently fail to acknowledge the work of others.  The other more common definition is where plagiarism is interpreted (almost immediately) as maliciously and deliberately copying, with the clear intention of passing someone else's work off as your own.  Although the difference is a subtle one, the second definition is, of course, much more loaded.

The second take away point lies with the policies and procedures.  I now have a greater understanding of what they are and the role of the academic conduct office.  I can clearly see that there are robust processes that ensure fairness in academic conduct cases.  These processes, in turn, help to maintain the integrity and validity of the qualifications.

The final take away point is that I am now a lot clearer in understanding what I need to do, from my perspective, to help both students and tutors deal with different types of academic conduct.

Copies of slides and videos are now available on the Academic Conduct Site (Open University staff only)

Permalink 1 comment (latest comment by Jonathan Vernon, Thursday, 18 Apr 2013, 21:20)
Share post
Christopher Douce

Academic conduct symposium – Towards good academic practice (day 1)

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 23 Feb 2021, 18:58

This is the first of two posts about an academic conduct symposium that I attended at the Open University between 20 March and 21 March 2013.  I'm mainly writing this as a broad 'note for self', a reminder of some of the issues that emerged from the event, but I hope it will be useful for my OU colleagues and others too.

The symposium was kicked off by Peter Taylor who spoke briefly about an academic practice project that ran in 2007 which led to the last conference (which coincided with the launch of policies) in 2009.  Peter emphasised the point that the issue of academic conduct (and dealing with plagiarism cases) is fundamental to the academic integrity of the university and the qualifications that it offers.

Each day of the symposium had three parallel sessions comprising three different workshops.  Each workshop covered a slightly different aspect of academic conduct.  I'll do my best to present a quick summary of each one.

Keynote: Carol Bailey, EFL Senior Lecturer

Carol Bailey, who works as an English as a Second Language lecturer at the University of Wolverhampton, gave a keynote that clearly connected with many of the challenges that the symposium aimed to address. 

One of Carol's quotes that I particularly remember is a student saying, 'I never wrote such a long essay before'.  This is a quote that I can directly relate to.  It also relates to the truth that academic writing is a fundamentally challenging endeavour; it is one that requires time and experience.  To some, the process of writing can be one that is both confusing and stressful.  Students might come to study having experienced very different academic approaches to the one that they face either within the Open University or within other UK institutions - for example, situations where the teachers provide all the resources necessary to complete study, or where access to information technology may be profoundly limited.

When it comes to study, particularly in distance education, writing is a high level fundamental skill that is tested from the very start of a module.  Students need to quickly grasp the idiolect of a discipline and its sets of subject words to begin to appreciate what it means to become part of a 'discourse community'.  It takes time to develop an understanding of what is meant by the 'casual elegance' of academic writing.

There is also the tension between accuracy and personal expression.  When faced with new study challenges where students are still grappling with the nuances and rules of expression, misunderstandings of what is required can potentially lead to accidental academic misconduct.  The challenge of presenting your ideas in your own voice is one that is fundamental to study within the Open University.

Hide and Seek: Academic Integrity

Liz McCrystal and Encarna Trinidad-Barnes ran what was my first workshop of the symposium.  The premise of this workshop was that 'Information is hidden and we need to seek it out'.  Encarna opened with a question, which was, 'what do you understand by academic integrity?'  Some answers included: honesty, doing it right, following academic conventions, crediting other people - all these answers resonated with all the participants.

We were then directed to some group work.  We were asked a second question, which was, 'how do you find information [about academic integrity]?'  Our group came up with a range of different answers.  Some of them were: official notes offered to tutors by module teams, the developing good academic practice site (OpenLearn version), assessment guides (also provided by the module team), helpful colleagues and representatives of module teams.

Another question was, 'when would you expect students to look at or be directed to the information?'  Answers included: ideally part of the induction process, before the first assignment, feedback from an assignment, tutorials (and associated connections with the on-line forums).  One perspective was that issues surrounding good academic practice should be an integral part of the teaching (and learning) that is carried out within a module.

A final question that I noted down was, 'is it clear what academic integrity is?'  The answer that we arrived at was that the information is there, but we have to actively seek it out - and there is also a responsibility on the university, and on those who work for it, to offer proactive guidance to students too.

A useful resource that was mentioned a couple of times was Writing in your own words (OpenLearn), which contains a very useful podcast.

Plagiarism: Issues, Policy and Practice

The second workshop I attended was facilitated by Anne Martin from the Faculty of Health and Social Care.  In comparison to the first workshop, this workshop had a somewhat different focus.  Rather than focussing on how to find stuff, the focus was on the importance of policies and practice.  Key phrases that I noted included: university and policy context, definitions of terms and the importance of study skills.

On the subject of process, there was some discussion about the role of a university body called the academic conduct office.  The office accepts evidence such as reports (from plagiarism detection tools), explanations from students, script feedback, and information about whether additional support has been arranged for a student.  An important point was made that students always have the right to appeal.

One of the (very obvious) points that I've noted is that there is no one 'gold standard' in terms of detecting academic conduct issues (there are also different ways of dealing with the issue).  The role of the associate lecturer (AL) or tutor is just as important as automated tools such as TurnItIn (website) and Copycatch. 

Technology, of course, isn't perfect, but technology can be used to highlight issues before they may become significant.

Fuzzy Lines: Determining between good and bad academic practice

The third and final workshop of the day was facilitated by Arlene Hunter and Lynda Cook.  When faced with a report from a plagiarism detection system (such as TurnitIn) it's important to ask the question of 'what has happened here?'  Very often, things are not at all clear cut.  The reports that we are presented with can be, without a doubt, very ambiguous.

During this session I was introduced to some different ways to characterise or to think about evidence that relates to academic practice.  Examples include poor paraphrasing and shadow writing, excessive use of quotations, and the use of homework sites and social networking tools.  (I now understand shadow writing to be where a writer might use different words but uses almost the same structure as another document or source).  I also remember that there was some discussion that related to the university social networking policy.

In many (if not most) situations there is no distinct line between poor study skills and plagiarism.  A point that was made was: if in doubt, pass it on to the academic conduct office.  On the other hand, it is imperative to help tutors to help students focus on developing academic writing and literacy skills.

Plenary

The final session of the day was a short plenary session which highlighted many of the issues that were brought to the fore.  These included the tension between policing academic standards whilst at the same time helping students to develop good academic practices.  There was also some debate that related to the use of tools.  The university makes use of plagiarism detection tools at the module team level and there was some debate as to whether it might also be useful to provide access to detection software to associate lecturers, since they are arguably closer to the students. 

Another challenge is that of transparency, i.e. how easy it is to get information about the policies and procedures that are used by the university.  It was also mentioned that it is important to embed the values of good academic practice within modules and that the university should continue, and ideally do more, to support its associate lecturers when it comes to instilling good academic practice amongst its students.  An unresolved question that I had, about supporting students for whom English is a second language, was touched on during the second day.

All in all, it was a useful day.  Of the two days, this first day was the one that was more closely aligned to the challenges that are faced by tutors.  What I took away from it was a more rigorous understanding and appreciation of the processes that have been created both to support students and to maintain academic integrity.

Permalink
Share post
Christopher Douce

UCL: Introducing engineering and computing

Visible to anyone in the world

On 12 February 2013 I volunteered at a joint Open University and UCL event which aimed to introduce aspects of computing and engineering to school students.  This was the first time I had been involved with this type of event.  I have started to view outreach (in the broadest sense) as something that is increasingly important to do (and this is something that I have written about in an earlier blog).  So, if you're interested in hearing more about the outreach stuff that I've recently heard about, the previous blog I've posted might (or might not!) be of interest.

Structure

I learnt about this event from a colleague who was canvassing for volunteers.  Upon accepting his challenge I quickly discovered that I was to play a tiny part in what was a much bigger event, and soon heard rumours that students were coming to UCL to hear about other subjects such as chemistry and engineering.  My own role was to offer some support and guidance to students who wished to learn a little bit about computing and information technology.

Not only was this my first ever time being involved in an outreach or engagement event, it was also my first ever time on the UCL campus: it was massive!  I found myself being ushered into a large computer suite in the basement of one of UCL's impressive buildings.  Within moments, our lead facilitator and lecturer, Arosha Bandara, started to outline the plan for the day.

The focus of the day was the programming language Sense, a language that is used with the Open University module TU100 My Digital Life, which is a first level undergraduate module in computing.  One of the key aspects of Sense is that it works with a bit of electronics that allows different types of measurements to be made.  Arosha talked us through a program that simulated a simple etch-a-sketch game.  Students would be asked to make a change to the program so that it would work properly - they were required to do some software maintenance!  During the second part of the day, students were required to get together in groups and think about how to use the language and the sensors to do something fun.

The talking bit...

The morning began with Arosha outlining the broad concept of Ubiquitous computing (Wikipedia), namely, that computers can be everywhere, can contain sensors and can be embedded within the environment.  Arosha then introduced a programming problem (in the form of an etch-a-sketch game).  Everyone was taken through different parts of the Sense programming environment.  Key elements such as buttons, instruction palettes and sprites (graphics) were introduced.

Students were then directed to some key parts of the game that accepted inputs from Sense hardware.  Students were then shown, step-by-step, how to make a change to the game to modify the behaviour of an on-screen pen.  They could immediately see the effect of changes to their programs.  Further modifications included adding some conditions that enabled their game programs to respond to noises (such as clapping!)
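
To give a flavour of what the students were doing (and I should emphasise that Sense is a graphical, Scratch-like language, so what follows is only my own rough translation into Python, with made-up functions standing in for readings from the board), the logic of the exercise looked a little like this:

    # A minimal sketch of the etch-a-sketch style exercise, assuming
    # hypothetical read_dial() and read_microphone() functions in place of
    # the real Sense board inputs.  This is illustrative, not TU100 code.

    import random

    def read_dial():
        """Stand-in for reading a rotation/position sensor on the board."""
        return random.uniform(0.0, 1.0)

    def read_microphone():
        """Stand-in for reading the board's sound level."""
        return random.uniform(0.0, 1.0)

    pen_x, pen_y = 0, 0
    trail = []                         # points visited by the on-screen pen

    for _ in range(100):               # the real program loops continuously
        # map sensor readings onto pen movement
        pen_x += int(read_dial() * 10) - 5
        pen_y += int(read_dial() * 10) - 5
        trail.append((pen_x, pen_y))

        # a condition like the one the students added: respond to a loud
        # noise (such as a clap) by clearing the drawing
        if read_microphone() > 0.9:
            trail = []

    print("pen finished at", (pen_x, pen_y), "with", len(trail), "points drawn")

The appeal of the real thing, of course, was that the effect of each change was immediately visible on screen.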

The projects bit...

There were loads of things to take in during the first part of the morning.  There was a whole new programming environment, there was the concept that a computer can receive and work with signals from the outside world, and the idea that a program can be formed out of groups of instructions.

The second part of the day was all about being imaginative, thinking about the different kinds of inputs and outputs that the electronics allow, and trying to think of some kind of application or demonstration.  Students were assigned to small groups and were encouraged to come up with different ideas.

The group that I was assigned to came up with the idea of trying to build some kind of 'human sensor', perhaps creating an infra-red trip wire (the Sense board came with a number of different sensors and outputs - one of them being an infrared transmitter or detector).  We collectively thought about the different cables and sensors that we had at our disposal before beginning to play with what kinds of signals (or numbers) we could detect from the outside world.  We got a fair way with this task before our time was up.

Reflections

It was a fun day!  Although there was limited time to do real stuff, the tiny team that I was allied to wrote some simple program code that allowed a heat sensor to work.  The Sense board represented a connection from the magical world of code and software to the physical world, where measurements could be made.

One of the biggest challenges of the day was to convey such a lot of (often quite difficult) theory in such a short amount of time.  Arosha was charged with telling our students something about the different types of programming constructs, variables and graphics.  Although this was necessary to get to the point where we could all do some fun stuff (modify our program), the way that hardware was used with software certainly facilitated engagement and helped to focus our attention.

I liked the way the idea of ubiquitous computing was used as an introduction, but one additional point might have been to emphasise the extent to which we are surrounded by computers.  The moment you receive a telephone call, there is an unknown number of computers all working together to deliver your call.  There's the computer in your mobile phone, there's a computer in the base station which speaks to other computers... at the other end, there is a similar situation.  Also, turning on the TV means starting up a pretty powerful computer that is performing millions of instructions a second as it converts signals from one format to another.  Their ubiquity and invisibility are astonishing.

What is also astonishing is that the fundamental principles of computer programming that are exposed by the Sense programming language are also shared amongst all these devices and systems.  In the same way we have ubiquitous computing, we also have ubiquitous code; computer software that can run anywhere.

Being involved in this day took me back in time to the days when I first got my hands on a computer.   Although the form of a computer has changed immeasurably, some things have not changed.  Computers remain very particular and pedantic - they require patience.  It's also important to remember that learning how to work with code can and should be fun.  But when you've created a world out of code and you understand how things work, working with it can be immeasurably rewarding too.

Permalink Add your comment
Share post
Christopher Douce

NESTA Crucible Alumni - Google UK

Visible to anyone in the world


A couple of years ago I managed to find myself involved with something called NESTA Crucible (NESTA website).  Amongst other things, Crucible was a programme that was all about getting people from different disciplines together and offering some useful and practical guidance to researchers and academics.  A couple of years after the final Crucible event, NESTA funded a day which was broadly entitled 'Crucible Alumni' to enable past participants to reflect on what had happened after the programme came to an end.

It was both a fun and useful day, and I'm summarising bits of it as a blog for a few reasons.  The first is to remember what happened (!), the second is for the other people who were able to come along, and the third is to share something about the useful points that were discussed with a wider audience (which seems appropriate, given that engagement with different people was one of the themes of the day).

Introductions and Presentations

The event was hosted at Google's London offices.  I was interested to discover that I had been to the area in which the offices were situated, but I had no idea that this was where Google's offices were.  The day began with a brief introduction by a Googler, followed by a further brief talk by NESTA's chief exec.  We were soon into the first key part of the day where former members of the programme were able to give some short presentations using the Pecha Kucha (Wikipedia) format.

I had never witnessed the use of this technique before but, in essence, presenters were asked to give talks that contained 20 PowerPoint slides which changed every 20 seconds - a tough format, and one that forces presenters to avoid waffle!

The notes I've made accompanying the first presentations are: 'linguistic map of Glasgow', 'lego' and 'genome sequencing'.  The next presentation described some science outreach activities to schools.  The words I've used in relation to this presentation were 'chromosome carnival', 'Edinburgh festival' and 'radio programme'.  If anyone is interested in learning more, do let me know so I can put you in contact with the presenter.

The next presentation was by a Crucible contemporary called Howard Falcon-Long, who is an expert in fossil plants.  Howard talked about getting involved with some media fellowships, and has had an opportunity to write for the BBC - Howard certainly has managed to do a lot since our time on the programme. Other phrases that I've noted from other presenters include 'research in neurodisability', 'lab automation', 'life at high altitudes' and 'viruses'.  The words 'fun', 'science' and 'outreach' were also found together on the same page of my notebook. 

Collaboration and adventures in research

There were two formal(ish) presentations during the day, followed by a short group activity.  The first presentation was by Professor Kate Jones (ZSI website) from the Centre for Biodiversity and Environmental Research, UCL.  Kate holds a chair in Ecology and Biodiversity and spoke about two things: Bats and Citizen Science.

I remember seeing quite a few bats when I lived in my previous house in Sussex; they would swoop down by the side of the house, almost doing circuits of my garden before they mysteriously disappeared as quickly as they came.  Having noticed them flying around, and having been told that there are so many different types of bats out there, how might we be able to understand how many there are and, importantly, how the bat population is getting along?  Determining change requires us to take measurements, but how on earth can we measure how many bats there are?!

Kate introduced us to something called Citizen Science (Wikipedia).  This is where an interested member of the public can play a small but important part in a wider research endeavour.  The advantages are that participants can make a contribution, it permits the exposure of different issues to a wider population, and it can also play an important role in informing members of the public about science.  Plus, it can be pretty fun too.

One way to count bats (I have to admit, I had never ever thought to ask this question before!) is to record the noises that they make.  Different bats make different noises.  Easy, right?  Well, you've got to capture the noises, which means driving around at certain times of the day using special recording devices.  When you've got the noises, there's then the problem of categorising or classifying them.  There are a few bits of technology being used: some kind of vehicle, a recording gadget, a GPS receiver, a clock and a computer - and you can find quite a few of these in your mobile phone.
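
Purely to illustrate the shape of the problem (this is my own sketch in Python, not how iBats or Bat Detective actually work; the record structure, the field names and the frequency thresholds are all rough assumptions), a geo-tagged recording and a crude classification might look something like this:

    # An illustrative sketch of the kind of record a citizen scientist's kit
    # might produce: each recording is tagged with a position and a time, and
    # classification happens afterwards.  Thresholds here are assumptions.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class BatRecording:
        latitude: float
        longitude: float
        timestamp: datetime
        peak_frequency_khz: float      # a feature extracted from the audio

    def crude_classify(recording):
        """A toy classifier; real projects rely on expert or crowd labelling."""
        if recording.peak_frequency_khz > 100:
            return "possibly a horseshoe bat"
        if recording.peak_frequency_khz > 40:
            return "possibly a pipistrelle"
        return "unknown - one for Bat Detective"

    sample = BatRecording(51.49, -0.10, datetime(2012, 7, 14, 22, 30), 47.0)
    print(crude_classify(sample))

The interesting part, of course, is that the classification step is exactly where the crowd comes in.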

Kate introduced us to a couple of websites, iBats which is a programme about collecting bat sounds and calls from the environment, and Bat Detective which allows members of the public to start to classify recordings, thus providing useful data for the 'bat scientists'.

This kind of approach to science, the crowd sourcing of either data or analysis, isn't new, but the availability of powerful computers in the form of mobile telephones, along with the increased availability of fast internet connections, is facilitating new types of experiment.  One of the first citizen science projects (as far as I'm aware) is called Galaxy Zoo.  After a period of training, you are able to classify different types of galaxy that, perhaps, no one has ever studied properly before.

Whilst Galaxy Zoo can be used on a desktop PC, I also remember having heard of something called Mappiness.  This is a mobile phone application which asks you to respond to how happy you are at a particular point in time (I remember this featuring in a TED Talk I saw not so long ago, but I can't find its link).

Kate also mentioned another website called Zooniverse.  This site collates different crowd sourcing or citizen science projects together in one place.  I'm certainly struck by the breadth and diversity of the different projects. There is also, of course, an Open University biodiversity observatory project called iSpot, which has over eighteen thousand registered users.

Towards the end of last year there was a lot of press coverage about ash dieback (Wikipedia) and increased awareness of the extent to which this fungal infection is attacking ash trees in Great Britain.  The increased awareness of this problem quickly led to the development of an app called Ashtag (along with other similar projects).  Kate mentioned a website called Naturelocator which links to other projects.

Kate mentioned a project that I had heard of about six months ago through a geek news site called Slashdot.  This was a crowd sourced radiation map.  In the wake of the Japanese tsunami and resulting nuclear accident, software developers and hardware designers created personal low-cost Geiger counters.  Citizens were then able to take their own geo-tagged radioactivity readings that were in competition with the official measurements produced by the authorities.

An interesting (and rather obvious) thought that was inspired by Kate's presentation is that science can lead to the creation of technology which, in turn, can then lead to further science.  Technologies such as the mobile phone can (in part) democratise science (and the taking of measurements), but there is also the challenge of ensuring the quality, integrity and reliability of results.  This said (taking an open source software analogy) just as many eyes looking at the same software can potentially lead to fewer programming bugs, many data collection points can lead to more accurate and comprehensive results. All in all, a very thought provoking talk.

Soapbox Science

Seirian Sumner (Bristol University) is a scientist who is interested in bees, wasps and ants (if my notes serve me well).  Seirian also has an interest in popular science writing and sharing her enthusiasm for science with the general public.

Seirian introduced us to the idea of 'soapbox science' and presented us with a challenge - we were asked to imagine that we were at Speakers' Corner, Hyde Park.  Let's say we were given a soapbox to stand on - what would you say (about science) that would draw listeners to you?  We were then asked, 'is anyone going to volunteer?'  Within minutes, around six scientists were balancing on tables trying to entice us bystanders (I didn't volunteer) to listen to what they had to say about their subject.  It was a compelling demonstration.  Any Googlers who were passing by must have wondered what was happening, and it was a miracle that the police were not called given how much shouting and impassioned speaking was going on!

Over the course of about an hour or so, we were introduced to Soapbox Science and heard what Seirian and her colleagues had been doing.  It began with a summary of a pilot, followed by a summary of an event (ZSL) that took place on the London Southbank in July 2012.

So, why do all this?  A number of reasons were put forward.  Seirian opened her presentation with what, to me, was perhaps one of the most compelling arguments.  Since scientists are primarily funded through government research grants and teach at publicly funded universities, there is the argument that scientists should be giving something back, and Soapbox Science is one of many ways to do this.  Other reasons include enjoyment, understanding and making connections between science and art, public dissemination of work and raising awareness of research and subjects, inspiring others and gaining new ideas.

Towards the end of the day there was quite a debate about gender and science, and an open question of, 'if people leave science, where do they go to?'  Another thought is that although women in science was a very prominent and important theme in Seirian's work, diversity in its broadest sense (gender, socio-economic background, ethnicity and disability) is equally important.

This final presentation of the day made me reflect on whether there might be other ways to inspire people.  I started to wonder whether there was any mileage in trying to connect stand-up comedy and computer science.  I haven't got anywhere with this idea yet; it's something I'm continuing to mull over (!)

Reflections

There are two key things that I gained from this day and both of them are loosely connected to each other.  The first relates to our own discipline.  We can sometimes get so locked into our own subject and trying to solve our own little problems, whether it is creating something or taking part in a larger debate, that we can easily become entrenched in our own way of thinking about things.  Speaking to other people outside our own discipline, whatever those disciplines may be, can be very refreshing.  We're exposed to different scientific (or artistic) language, different types of problems and different types of methods.  In doing so we may then become more critical of our own way of solving problems.  There are days when we become so familiar with our own subjects that they don't seem as exciting as the work that other people are carrying out.  When we begin to talk to people outside our discipline we realise (again) that the subjects we find interesting are, actually, very cool.

The other point is how much more we could each be doing.  Research, teaching and administration represent very important, necessary and all-consuming aspects of an academic role.  So much so that it is easy to forget, as Seirian pointed out, that perhaps we need to consider our role in terms of a wider responsibility too.  Science and research is very much carried out and facilitated by universities and research institutions.  I guess an important thought is that sharing can represent an opportunity for everyone who becomes involved.

Permalink Add your comment
Share post
Christopher Douce

Psychology of Programming Interest Group 2012 workshop: London Metropolitan University

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 14 Oct 2020, 11:41

The 24th Psychology of Programming Interest Group workshop was held at London Metropolitan University between 21st and 23rd November 2012.  I wasn't able to attend the first day of the workshop due to another commitment, but was able to attend the second and third days (this is a shame since I've heard from the other delegates that the first day was pretty good and yielded a number of very thought provoking presentations and discussions).  This blog post is a summary of the days I managed to attend.  I'm sharing this post with the hope that this summary might be useful to someone.

Day 2: Expertise, learning to program, tools and doctoral consortium

Expertise

The first presentation of the day was entitled, 'Thrashing, tolerating and compromising in software development' by Tamara Lopez from the Open University.  I understand thrashing to be the application of problem solving strategies in an ineffective and unsystematic way, and tolerating to be working with temporary solutions with the intention of moving a solution along to another state, and compromising: solving a problem but not being entirely happy with its solution.  An interesting note that I've made during Tamara's presentation relates to the use of feelings.  I have also experienced 'thrashing' in the moments before I recover sufficient metacognitive awareness to understand that a cup of tea and a walk is necessary to regain perspective.

The second presentation of the day was by Rebecca Yates, from LERO based at the University of Limerick.  Rebecca's talk was entitled, 'conducting field studies in software engineering: an experience report' and her focus was all about program comprehension, i.e. what happens when programmers start a new job and start to learn an unfamiliar code base.  I made a special note of her points about the importance of going out into industry and the importance of addressing ethical issues. 

One of the 'take away' points that I got from Rebecca's talk was that getting access to people in industry can be pretty tough - the practical issues of carrying out programming research, such as time, restrictions about access to intellectual property and the importance of persuasion (or making the aim of research clear to those who are going to play a part in it) can all be particularly challenging.

Learning to program

Louis Major, from the University of Keele, started the second session with a paper entitled, 'teaching novices programming using a robot simulator: case study protocol'.  Louis told us about his systematic literature review before introducing us to his robot simulator which could be used to create programs to do simple tasks such as line following and line counting.  Louis also spoke about his research method, a case study approach which applied multiple methods such as tests and interviews.

Louis also spoke about the value of robots: they were considered to be appealing, enjoyable and exciting, and robotics (as a whole subject) has a strong connection with STEM disciplines (science, technology, engineering and mathematics).  The advantage of using simulations is that there are fewer limitations in terms of space, cost and technical barriers.

A couple of months after the workshop I was reminded about the relevance of Louis's research after having been tangentially involved in an introductory Open University module, TM129 Technologies in Practice, which also makes use of a robot simulator.  Students are also given the challenge of solving simple problems, including the challenge of creating line following robots. 

The second talk in this part of the workshop was by PPIG regular, Richard Bornat.  Richard's talk, entitled 'observing mental models in novice programmers', built on earlier work presented at PPIG where Richard and his colleague Saeed had designed a test that, it was claimed, could (potentially) predict whether students were able to grasp some of the principles of programming. 

An interesting observation was that when it comes to computer programming the results sometimes have a bi-modal distribution.  What this means is that if students pass, they are likely to pass very well.  On the other hand, there is also a peak in numbers when it comes to students who struggle.  During (and after) his talk, he suggested that some students find some of the concepts that are connected to programming (such as the assignment operator) fundamentally difficult.
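
To give a sense of the kind of misunderstanding at stake (this is my own paraphrase in Python, not a question from the actual test), consider what a novice might predict here:

    # A question in the spirit of the test described above: after these
    # assignments, what are the values of a and b?

    a = 10
    b = 20
    a = b

    print(a, b)   # prints "20 20" - the assignment copies the value of b
                  # into a; it does not swap the variables or link them

Novices who have not yet settled on a consistent mental model sometimes predict a swap, or that the two variables become permanently tied together.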

Paul Orlov, who joined us all the way from St. Petersburg, spoke about 'investigating the role of programmers' peripheral vision: a gaze-contingent tool and experimental proposal'.  Paul's talk connected with earlier research where experimental tools, such as 'restricted focus viewers', were used in conjunction with program comprehension experiments. Paul's talk inspired a lot of debate and questions.  I remember one discussion which was about the distinction between attention and seeing (and that we can easily learn not to attend to information should we choose not to).

Ben Du Boulay, formerly from the University of Sussex, was our discussant.  Ben mentioned that when it came to interdisciplinary research, conducting systematic literature reviews can be particularly difficult due to the number of different publication databases that researchers have to consider.  Connecting with Richard's paper, Ben asked the question about what might be the fundamental misunderstandings that could emerge when it comes to computer programming.  Regarding Paul's paper which connects to the theme of perception and attention, Ben made the point that we can learn how to ignore things and that attention can be focussed depending on the task that we have to complete.  Ben also commented on earlier discussions, such as the drive to change the current computing curriculum in schools.

One thing that learning programming can do for us is help to teach us problem solving skills.  There is a school of thought that learning programming can be viewed as how Latin was once viewed; that learning to program is inherently good for you. Related points include the importance of task and the relationship to motivation.

Tools

Fraser McKay from the University of Kent presented, 'evaluation of subject-specific heuristics for initial learning environments: a pilot study'.  In human-computer interaction (or interaction design), heuristics are a set of rules of thumb that help you to think about the usability of a system.  General heuristics, such as those by Nielsen, are very popular (as well as being powerful), but there is the argument that they may not be best suited to uncovering problems in all situations. 

Fraser focused on two environments that were considered helpful in the teaching of programming: Scratch (MIT) and Greenfoot.  Although this was very much a 'work in progress' paper, it is interesting to learn about the extent to which different sets of heuristics might be used together, and the way in which a new set of heuristics might be evaluated.

Mark Vinkovitis presented the work of his co-authors, Christian Prause and Jan Nonnen, which was entitled, 'a field experiment on gamification of code quality in Agile development'.  Initially I found the term 'gamification' quite puzzling, but I quickly understood it in terms of, 'how to make software development into a game, where the output can be appreciated and recognised by others'.

The idea was to connect code development with the use of quality metrics to obtain a score that indicates how well developers are doing.  This final presentation gave way to a lot of debate about whether developers might be inclined to develop software code in such a way as to create high rankings.  (There is also the question of whether different domains of application will yield different quality scores).  I really like the concept.  Gamification exposes different dimensions of software development which have the potential to be connected to motivation.  It strikes me that the challenge lies with understanding how one might affect the other whilst at the same time facilitating effective software development practice.
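
As a purely hypothetical illustration of the idea (my own invention, not the system described in the paper; the metrics, function and weights are all assumptions), a quality 'score' might be stitched together from metrics like this - and it is exactly this kind of formula that developers might be tempted to game:

    # A toy illustration of folding code-quality metrics into a single score
    # that developers could compare or compete on; the weights are arbitrary.

    def quality_score(test_coverage, comment_ratio, open_warnings):
        """Higher is 'better' under this (entirely made-up) weighting."""
        return 50 * test_coverage + 20 * comment_ratio - 2 * open_warnings

    leaderboard = {
        "developer_a": quality_score(0.85, 0.20, 3),
        "developer_b": quality_score(0.60, 0.05, 12),
    }

    for name, score in sorted(leaderboard.items(), key=lambda kv: -kv[1]):
        print(name, round(score, 1))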

Doctoral consortium presentations

Before the start of the workshop on Wednesday, a doctoral consortium session was held where students could share ideas with each other and discuss their work with more experienced (or seasoned) researchers.  This session was all about allowing students to share their key research questions with a wider audience.

Presentation slots were taken by Louis Major, Fraser McKay, Michael Berry, Alistair Stead, Cosmas Fonche and Rebecca Yates (my apologies if I've missed anyone!)  Other research students who were a part of the doctoral consortium included Teresa Busjahn, Melanie Coles, Gail Ollis, Mark Vinkovits, Kshitij Sharma, Tamara Lopez, Khurram Majeed and Edgar Cambranes.

Day 3: Tools and their evaluation and keynotes

Tools and their evaluation

The first presentation of the final day was by Thibault Raffaillac who presented his research, 'exploring the design of compiler feedback'.  I enjoyed this presentation since the feedback that software tools offer developers is fundamental to enabling them to do the job that they need to do.  A couple of questions that I've noted from Thibault's presentation included the question of 'who is the user?' (of the feedback), and what is their expertise.  Another note is that compilers (and other languages) always tend to give negative points and information.  It strikes me that languages offer an opportunity for programmers to interrogate a code-base.  Much food for thought!

Luis Marques Afonso gave the next talk, entitled 'evaluating application programming interfaces as communication artefacts'.  Understanding API usability has a relatively long history within the PPIG community.  The interesting aspect of Luis's work is that three different evaluation techniques were proposed:  semiotic inspection method (which I had never heard of before), cognitive dimensions of notations (Wikipedia) and discourse analysis (Wikipedia).  It was interesting to hear of these different methods - the advantage of using multiple approaches is that each method can expose different issues.

The final paper presentation, entitled 'sketching by programming in the choreographic language agent' was given by Luke Church, University of Cambridge.  Luke described working amongst a group of choreographers.  It was interesting to hear that the tool (or language) that had been created wasn't all about representing choreography, but instead potentially enabling choreographers to become inspired by the representations that were generated by the tool.  Luke's presentation created a lot of interest and debate.   

Keynote: extreme notation design

A computer programming language is a form of notation.  A notation is a system that can be used to represent ideas or actions and can be understood by people (such as music) or machines (as in computer programming), or both.  Thomas Green proposed a set of 'dimensions' or characteristics of notation systems which relate to how people can work with them.  These dimensions can be traded-off against each other depending upon the nature of the particular problem that is to be solved.

One challenge is: how can we understand the characteristic of trade-offs?  Alan Blackwell gave a keynote talk about a programming language that was controversially described as being a hybrid of Photoshop and Excel.

Palimpsest used the idea of different layers which could then contain different elements which could interact with each other (if I understand things correctly).  Methodologically speaking, the idea of creating a tool or a language that aims to explore the extremes of language design is an interesting and potentially very powerful one.  My understanding is that it allows the language designer to gain a wealth of experience, but also provides researchers with an example.  Perhaps there is an opportunity for someone to write a paper that compares and collates the different 'extremities' of language design.

Panel: coding and music

The final session of the workshop was all about programming, music and performance.  We were introduced to a phenomenon called 'live coding', which is where programmers 'perform' music by writing software in front of a live audience. The three presentations which were contained within this final part of the day were all slightly different but all very connected.

Alex Mclean

Alex Mclean from the University of Leeds presented two demonstrations and talked about the challenges of live coding.  These included that manipulating and working with music through code is an indirect manipulation.  Syntactic glitches can interrupt the flow of performance and there is the possibility that being wrapped up within the code has the potential to detract from the music.

Live coders can also improvise with musicians who play 'non-programming language' (or 'real') instruments.  Since the notion of 'live' can have different meanings (and can depend on the abstractions that are contained within a language), challenges include the negotiation of time and harmony.  Delays can exist between having a musical idea and realising it.

Alex mentioned Scheme Bricks, which has been inspired by Scratch (and Sense) which allows you to drag and drop portions of code together.  This also made me realise that if there are two live coders performing at the same time they might use entirely different 'instruments' (or notation systems) to each other. 

Thor Magnusson

Thor Magnusson from the University of Brighton introduced us to a language called ixi that has been derived from SuperCollider (Wikipedia).  Thor set out to make a language that could be understood by an audience.  To demonstrate this, Thor quickly coded a changing set of drum and sound loops in a text editor, using a notation that has some clear and direct connections to music notation.  Thor spoke of polyrhythms and code to change amplitude, to create harmonics and sound that is musically interesting. 

What I really liked was the metaphor of creating agents which 'play' fragments of code (or music).  Distortions can be applied to patterns and patterns can be nested within other patterns.  Thor also presented some compelling description of the situations in which the language is used; 'programming in a nightclub, late at night, maybe you've had a few beers; you're performing - you've got to make sure the comma is in the right place'.  For those who are interested, you can also see a video recording of Thor giving a live coding performance (YouTube).  In my notebook I have written something that Thor must have said: 'I see code as performance; live coding is a link between performance and improvisation'.

Sam Aaron

When Sam began his short talk, I couldn't believe my eyes - he was using a text editor called Emacs! (Wikipedia).  The last time I used Emacs was when I was a postgraduate student where it persistently confused me.  Emacs, however, uses a language called Lisp which is particularly useful for live coding, since it is a declarative language. 

During his talk Sam gave a brief introduction to Overtone.  You can see a video of a similar introduction to Overtone through Vimeo.  One thing that did strike me was the way in which aspects of music theory could be elegantly represented within code.
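
Overtone itself is built on a Lisp dialect, so the following is only my own Python sketch of that general point rather than Overtone code: a scale or a chord reduces to simple arithmetic over semitone offsets.

    # A small sketch of music theory as arithmetic: offsets in semitones
    # from a root note give a scale or a chord (MIDI note numbers are used
    # here purely for illustration).

    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets from the root
    MINOR_TRIAD = [0, 3, 7]

    def scale(root_midi_note):
        return [root_midi_note + step for step in MAJOR_SCALE]

    def chord(root_midi_note, shape=MINOR_TRIAD):
        return [root_midi_note + step for step in shape]

    print(scale(60))    # C major scale starting at middle C (MIDI note 60)
    print(chord(57))    # A minor triad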

Discussion

This final part of the workshop gave way to quite a lot of energetic debate.  There appeared to be a difference between those who were thinking, 'why on earth would you want to do this stuff?' and, 'I think this stuff is really cool!'  When it comes to live coding there is the question of who is the user of the language - is it the performer, or is it the listener, or viewer (especially if a live coding notation is intended to be understandable by a non-musician-coder)?

But what of the motivations of the people who do all this cool stuff?  When it comes to performance there is the attraction of 'being in the moment', of using technology in an interesting and exciting way to create something transitory that listeners might like to dance to.  It certainly strikes me that to do it well requires skill, time, persistence and musicality; all the qualities that 'traditional' musicians need.  Live coders also face the fundamental challenge of keeping things going when things begin to sound a bit odd, of creating new and creative code structures on the fly, and of moving from one semi-improvised section (by means of programming and musical abstraction) to another.

Other than the performance dimension, there is the intellectual attraction of changing and challenging people's perceptions of how software and programming languages are thought of.   Another dimension is the way that technology can give rise to a community of people who enjoy using different tools to create different styles of music.  All of the tools that were mentioned within the final part of the day are free and open source.  Free code, it can be said, can lead to free musical expression.

Reflections

Like other PPIG workshops this workshop had a great mix of formal presentations and more informal doctoral sessions, mixed with many opportunities for discussion.  I think this was the first time that the workshop was held at London Metropolitan University.  Yanguo Jing, our local conference chair, did a fabulous job at ensuring that everything ran as smoothly as possible.  Yanguo also did a great job at editing the proceedings.  All in all, a very successful event and one that was expertly and skilfully organised.

There are two 'take home' points that have stuck in my mind.  The first is that programming languages need not be only about programming machines; through their structures, code can also be used as a way to gain inspiration for other endeavours, particularly artistic ones.

The second point is that programming can be a performance, and one that can be fun too.  The music session will certainly stick in my mind for quite some time to come.  Programming performances are not just about music - they can be about education and creation; code can be used to present and share stories. 

Christopher Douce

Open University Disability Conference 2012

Visible to anyone in the world
Edited by Christopher Douce, Monday, 19 Nov 2012, 18:27

On 14 November 2012 I attended the Open University Disability Conference held at a conference centre close to the university.  The last time I attended this event was back in 2010.   I wrote a summary of the 2010 conference which might be useful to some (I should add that I've had to mess around a bit to get a link to this earlier summary and there is a possibility that this link might go to different posts since I can't quite figure out how to get a permalink, but that's a side issue...)

The conference was a two day event but due to other things I had to be getting on with I could only attend one of the days.  From my experience of the first conference, the second day tends to be quite dramatic (and this year proved to be no exception).

The legacy of the Paralympics

Julie Young from Disabled Student Services kicked off the day by introducing Tony O'Shea-Poon, head of equality and diversity.  Tony gave a presentation entitled 'A lot can change in 64 years' which described the history of the Paralympic games whilst at the same time putting the games into the context of disability equality.

During the Paralympics I remember a television drama that presented the origins of the games.  Tony reminded us that it all began in 1948 at the Stoke Mandeville Hospital.  The first ever Paralympic games (with the 'para' meaning 'alongside') took place in Rome in 1960.

One of the striking aspects of Tony's presentation was that it was framed in terms of 'forces'; forces which have increased the awareness of issues that impact upon the lives of people with disabilities.  Relating back to the origins of the games, one force is the allies of people with disabilities.  There is also the role that role models can play, particularly in popular media.

Two other forces are disabled people's involvement and the disability rights movement.  Tony spoke about something that I had not known of before.  During the late 1980s I remember a number of public 'telethon' events - extended TV shows that aimed to raise money for charitable causes.  In 1992 there was a campaign to 'block telethon'.  Its message was that people with disabilities should have rights, not charity.  This connects with a movement away from a more historic medical and charity model of disability towards a social model where people with disabilities have equal rights and opportunities within society.  Tony also mentioned the importance of legislation, particularly the Disability Rights Commission, explicitly mentioning the role of Sir Bert Massie.

Tony brought us to the present day, emphasising not only recent successes (such as the Paralympic games), but also current challenges; Tony drew our attention to protests in August of this year by disabled people against government cuts.   Legitimate protest is considered to be another force that can facilitate change.

Deb Criddle: Paralympian

Jane Swindells from the university disability advisory service introduced Deb Criddle (Wikipedia), paralympian gold and silver medallist.  Deb gained one gold medal and two silver medals in London 2012, as well as gaining gold medals in Athens.

This part of the day took the form of a question and answer session, with Jane asking the first questions.  Deb reflected on the recent Paralympic games and described her personal experiences.  One of the key points that Deb made was that it was great that the games focussed people's attention on abilities and not disabilities.  It also had the effect of making disability more normalised.

One thing that I remember from living in London at the time of the Olympics and Paralympics is that people were more open to talking to each other.  Deb gave us an anecdote that the games created opportunities for conversations (about and with people with disabilities) which wouldn't have otherwise happened.  

Deb said that she 'wasn't expecting the support we had'.  On the subject of support she also made an important point that the facilities and support services that are available within the UK are very different to the facilities that are available in other countries.  At the time of the Paralympics I remember reading stories in the London Metro (the free newspaper that is available every weekday morning) about campaigners who were trying to obtain equipment and resources for some of the competitors.

Deb also shared with us aspects of her personal story.  She said that accident and circumstance led to opportunities, journeys, growth and amazing experiences.  What was once a passing interest (in equestrianism) became a central interest.  Deb also spoke about the challenge of confronting a disability.  One of Deb's phrases strongly resonated with me (as someone who has an unseen disability), which was, 'I hadn't learnt to laugh at myself'.

Deb is also an OU student.  She studied at the same time as training.  Deb said, 'study gives you something else to focus on... trying too hard prevents you to achieving what you need to [achieve], it is a distraction in a sense'.  She also emphasised the point that study can often be hard work.

I've made a note of a final phrase of Deb's (which probably isn't word for word) which is certainly worth repeating; its message is very clear: 'please don't be overwhelmed by people with disability; people coming together [in partnership] can achieve', and also, 'take time to engage with people, you can learn from their stories, everyone is different'.

Workshops

Throughout the conference there were a couple of workshops, a number of which were happening in parallel.  I was only able to attend one of them.  The one I chose was entitled 'Asperger's syndrome: supporting students through timely interventions', facilitated by Martina Carroll.  The emphasis of this workshop was on providing information to delegates, and I've done my best to summarise the key points that I picked up.

The first point was that people who may have been diagnosed with Asperger's syndrome can be very different; you can't (and shouldn't) generalise about the abilities of someone who may have a diagnosis.   

The workshop touched upon the history of the syndrome.  Martina mentioned Leo Kanner (Wikipedia) who translated some work by Hans Asperger.  Asperger's is understood as a developmental disorder that has a genetic basis (i.e. highly heritable).  Martina mentioned a triad of impairments: communication difficulties (both expressive and receptive), potential difficulties with social interaction, and restricted and repetitive behaviours.  A diagnosis is considered when two out of the three potential impairments are present.

Martina also touched upon the fact that some people can have exceptional skills, such as skills in memory and mathematics, but again, it is important to remember that everyone is different.  Due to the nature of the triad of impairments, co-existing conditions need to be considered, such as stress, anxiety and depression.

A final question is what accommodations can be made for people who have autism? TEACCH (Wikipedia) was mentioned, which is an educational model for schools which has the potential to offer some useful guidance.  One key point is that providing learning materials that have a clearly defined structure (such as the module calendar) can certainly help everyone.

Towards the end of the session, there was some time for group discussions.  The group that I was (randomly) assigned to discussed the challenges of group work, how important it was to try to facilitate constant communication between different people (which include mentors and advocates) and challenges surrounding examinations and assessment. 

A number of resources were mentioned that may be useful.  I didn't know this, but the Open University runs a module entitled Understanding the autism spectrum (OU website).  The module is centred around a book by Ilona Roth called Autism in the 21st Century (publisher's website).  Another resource is Francesca Happe's lecture at the Royal Society, entitled When will we understand Autistic Spectrum Disorders? (Royal Society website)  I really recommend this lecture - it is very easy to follow and connects very strongly with the themes of the workshop.  There is also the National Autistic Society website, which might also be useful.

Performance

The final part of the day was very different.  We were introduced to three stand-up comics.  These were not 'disabled comics'; they were comics who just happened to have a disability.  Comedy has the ability to challenge; it allows others to see and understand instances of people's lives in a warm and undeniably human way.  The 'something' that we all have in common with each other is an ability to laugh.  When you laugh at a situation that is tough and challenging and begin to appreciate the absurdity and richness of life, tough situations don't seem as difficult anymore; laughter gives you the power to rise above a situation.  In a way, the conference reflected this, since it was all about sharing experience with a view to empowering and helping people.

The comics were Steve Day, Liam O'Caroll and Lawrence Clark.  All were fabulous, but I especially enjoyed Lawrence's set which I understand was a show that he took to the Edinburgh Festival.  His set had a theme based on the word 'inspiring'; he successfully sent himself up, along with others who may be inclined to use that word.

Reflections

Julie Young closed the conference by emphasising some of the themes that were explored through the conference.  Julie emphasised the importance of working together to deliver a service for our students and how this is connected with equality and rights.  A key point is that the abilities of our students are what really matter.  Julie went on to emphasise the continued need to listen attentively to those who we serve.

With conferences that have multiple parallel sessions you can sometimes feel that you're missing out on something, which is always a shame.  During the lunch break, I heard how other delegates had appreciated hearing from students talking about their experiences of studying at the Open University.  Personal stories allow people to directly connect with the challenges and difficulties that people face, and whilst on one hand there may be successes, there are other situations in which we don't do the best that we can or support for people doesn't arrive on time.  Conferences such as these emphasise the importance of keeping our attention on students with disability whilst at the same time emphasising that different departments of the university need to talk to each other to ensure that we can offer the best possible support.  Talking also permits us to learn more about what we can do to change things, so meetings such as these are invaluable.

I also have a recollection from the previous conference I attended.  I remember talking to someone (I'm not sure who this was) who seemed to express surprise that I was from a 'faculty' (i.e. an academic) as opposed to a part of the university that was directly involved in support of students (I tend to conflate the two roles together).  I was surprised that my presence caused surprise.  Although this year I felt that there were more faculty representatives coming along than perhaps there were before, I do (personally) feel that there should be a broader spectrum of delegates attending.

All in all, I felt that I benefitted from the day.  I met people who I had never met before and the objectives of facilitating communication, sharing practice and re-energising delegates had clearly been met. 

Christopher Douce

First Open University Sense Programming Workshop

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 8 Oct 2013, 12:23

The first Open University Sense Workshop was held at the London School of Economics on Saturday 11 November 2012.

Sense is a computer programming language that has been derived from Scratch, a language that was developed by the Massachusetts Institute of Technology.  The aim of the Sense workshop was to allow TU100 My Digital Life students to become more familiar with the Sense environment, helping them to learn some of the fundamental principles of computer programming.

This blog post is intended as a summary of the first ever Sense workshop.  It has been written for both students and tutors. If you feel that anyone might find this summary useful, please don't hesitate to distribute widely.

Introductions

The phrase 'computer programming' is one that can easily elicit an anxious response.  Programming is sometimes seen as something that is done through a set of mysterious tools.  The good news is that once you have gained some understanding of the fundamental principles of programming (and how to tackle problems and debug programs), the skills that you learn in one language can be transferred to other languages.

Sense is a programming language that uses the same fundamental concepts as languages used in industry (such as C++ and Java), but it makes the process of writing computer programs (or code) easier by allowing programs to be created from sets of visual building blocks.  In some ways, Sense is a visual programming language that is broadly analogous to many other languages.  The key difference is that Sense helps students to focus on the fundamentals of programming by shielding new programmers from the difficulty of writing program instructions in textual notation that can be quite cryptic and difficult to understand.

The overarching intention of the Sense workshop day (that is described here) was to demystify Sense and encourage everyone to have fun.  The Sense environment allows programming instructions to be manipulated as a series of lego-like blocks.  These snap together to form 'clumps' of instructions which can be attached to either a background (or stage, where things can move about), or sprites (which are, in essence, graphical objects).  Through Sense it is (relatively) straightforward to create sets of instructions to build simple animations and games.

The workshop was divided up into three different sections.  The first was a broad overview of some of the ideas about programming, followed by a demonstration of how to use the Sense environment.  The second section was a presentation which contained some useful guidance about how to complete an assignment.  The third section was more open... but more of this later.

The lecture bit - stepping towards programming...

The workshop kicked off with a talk by one of our Open University tutors, Tammy.  Tammy made a really good point that 'we can't teach you programming'.  The implication is that only a student can learn how to do it.  The best way to learn how to do it is, of course, to find the time to play with a programming environment and to tackle, head on, the challenge of grappling with a problem.

Tammy asked a couple of people to come up and draw some shapes on the whiteboard.  Different participants drew very different shapes despite being given exactly the same instructions.  The point of the exercise was clear: that it is absolutely essential to provide sets of instructions that are both completely clear and unambiguous (as otherwise you may well be surprised with the results that you come back with).

Tammy talked about the different categories of program instruction, which were: sequence instructions, selection instructions and iteration instructions.  Pretty much all programs are composed of these three different types of operations.  Put simply, a sequence of instructions is where you do one thing after another.  A selection operation is where you make a choice to do something depending upon the status of a condition (for example, if you are cold, you might turn the heating on).  An iteration operation is where you do something a number of times, or until some condition is met.

These sets of operations can be used to describe every day actions, such as making a cup of coffee, for instance.  This simple activity can be split into a sequence of steps, which can include iterations where we check to see if the kettle is boiling.  (We might also do some parallel processing, such as making some toast whilst the kettle is boiling, but multi-threading is a whole other issue!)

The main points were (1) programming cannot be taught, it can only be learnt by those who do it, (2) there are some fundamental building blocks that can be combined together and nested within each other; you can have a sequence of steps within an iteration, for instance, and (3) programming requires things to be defined and described unambiguously.
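
To make this concrete, here is a minimal sketch of the coffee-making example in ordinary Python.  Sense itself is block-based, so this is only an illustration of the three categories of instruction and how they can nest inside one another.

import time
import random

# A minimal textual sketch of the coffee example (Sense uses visual blocks,
# so this is only an illustration of the three categories and of nesting).

def kettle_is_boiling():
    return random.random() < 0.3          # pretend kettle sensor

mug_is_dirty = True

# Sequence: instructions carried out one after another
print("switch the kettle on")

# Iteration: repeat a check until a condition is met
while not kettle_is_boiling():
    print("waiting for the kettle...")
    time.sleep(0.1)

# Selection (nested within the overall sequence): act only if a condition holds
if mug_is_dirty:
    print("wash the mug")

print("add coffee and pour in the boiling water")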

The demonstration bit - creating an animation...

The second part of the morning was hosted by Leslie.  Building on Tammy's summary of programming Leslie showed us what it meant to actually 'write' a program using the Sense environment.

In some respects, you can create anything within the Sense environment.  It provides a set of tools and it is up to you to come up with an idea and figure out how to combine the pieces together to do what you want to do.  In some respects (and getting slightly philosophical for a moment), you can define a whole universe or a world in software.  You can, in effect, define your own laws of physics.  I can't remember who said it, but I have always remembered the phrase, 'the universe is mathematical'.  Given that computers only understand numbers, the Sense environment allows you to create and represent your own universe (and interact with it in some way).

Leslie's universe was a fishtank.  She began by drawing the tank, including water weeds.  She then went on to draw a set of different fish characters.  Scripts were then added to move the fish around the screen (in the tank), first in one direction (from left to right), and then in both directions (from side to side).  Leslie then added more characters and defined interactions between them using something called the 'broadcast' feature to alert some of the virtual fish that a bigger and more dangerous fish had arrived in the tank.
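
For readers who haven't met the broadcast idea before, here is a rough Python analogue of what (as I understand it) the feature does: one sprite sends out a named message, and every script listening for that message reacts.  The names below are invented for illustration and are not Sense's actual implementation.

# A rough analogue of Sense's 'broadcast' idea (names invented for illustration):
# one sprite broadcasts a named message; listening sprites react to it.
listeners = {}

def when_i_receive(message, action):
    listeners.setdefault(message, []).append(action)

def broadcast(message):
    for action in listeners.get(message, []):
        action()

# Two small fish register what to do when a dangerous fish appears
when_i_receive("big fish!", lambda: print("small fish 1 hides in the weeds"))
when_i_receive("big fish!", lambda: print("small fish 2 darts off the screen"))

# The big fish sprite broadcasts the message to everyone
broadcast("big fish!")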

What was really great was how she demonstrated how to connect different instructions together (to create sequences), to have sequences of instructions operate when certain conditions are met (which represent selections), and introduce repeat loops (which represent iterations; carrying out the same instructions over and over again).

The bit about the assignment...

The final 'lecture' part of the day was by Open University tutor Dave, who took everyone through the structure of the forthcoming assignment (without giving any of the answers).  Dave talked about the use of the on-line discussion forums and this gave way to an interesting discussion about the importance of referencing.  Other points that were mentioned included the importance of things such as including word counts (on the TMA), and the learning objectives that are used by the module.

The programming bit...

During the afternoon, we all split into two different groups and got together into small groups of between two and four people.  The intention of the second part of the day was to try to create a small Sense project by huddling around a single laptop on which the Sense environment had been installed. We would then work on something for an hour, and then we would present what we had done to the other groups, describing some of the problems and challenges that we had encountered along the way.

Not having had much experience at using Sense, I was very happy to play an active role within one of the groups.  One of my main intentions at coming along for the day was to learn more about how to use the language and discover more about what it was capable of.  Our group came up with two different ideas: a representation of a car race track and some kind of athletic game or animation.  We settled on the athletic theme and decided we would try to animate a man running around a very simple athletics track.  (Our track became a square as opposed to an oval shape since we decided that re-discovering the mathematics of the circle was probably going to be quite tricky to master in about an hour!)

Within an hour we had drawn some stick figures, got our character doing a really simple 'run' animation and had our figure run around a really simple athletics track.  From memory, one of the challenges was figuring out how to represent program state and have it shared between different scripts that were running within the same sprite (apologies for immediately going into Sense-speak!)  Another challenge was to figure out how to represent state with Boolean variables and have those embedded within a continuous loop (but given enough time, I'm sure that we would have cracked it!)  A final challenge (and surprise) was to understand that the Sense environment automatically 'remembered' how much a character had been rotated between the different times that we 'ran' our scripts.  (We had instances where our running character ran off the side of the screen, much to our surprise!)
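
The shared-state problem we ran into is easier to see in text than in blocks.  Here is a sketch in Python, using threads to stand in for Sense's concurrently running scripts (an analogy of my own, not how Sense works internally): a Boolean flag that one 'script' flips and another checks inside a continuous loop.

import threading
import time

# A sketch of the shared-state idea: a Boolean flag that one concurrently
# running 'script' flips while another checks it inside a continuous loop.
# (Threads stand in for Sense scripts here; this is an analogy, not Sense.)
running = True

def animate_runner():
    step = 0
    while running:                       # loop guarded by the shared flag
        print("runner takes step", step)
        step += 1
        time.sleep(0.1)

def stop_after_a_while():
    global running
    time.sleep(0.5)
    running = False                      # the other script flips the flag

threading.Thread(target=animate_runner).start()
threading.Thread(target=stop_after_a_while).start()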

After our time was up, we were all asked to demonstrate and talk through our various projects.  I can remember a simple etch-a-sketch game, a demonstration of some bouncing balls (which bounced at different speeds), a space invader game (where the invader was a cat), a Tom and Jerry animation where Tom chased Jerry across a screen, and an animation that involved a balloon and a plane.  It was great to see very different projects, since when we are coding our own we can easily get into the mindset of just solving our own problem; seeing the work of others is something that is very refreshing.  It was inspiring to see what could be created after an hour of programming.

Reflections

The whole day reminded me of the time when I first tried to learn computer programming and I still remember that it was a pretty difficult challenge (in my day!)  I always wanted to rush ahead and solve the bigger more exciting problems but I was often tripped up because I needed to understand the operation of the fundamental instructions and operators (and the way a language worked).  In my own experience the only way to really understand how things work is to find the time to play, to explore the various operators and instructions, but finding both the time and the confidence to do this is perhaps a challenge itself.

All in all, the first Sense Workshop was a fun day.  I certainly got a lot out of it and I hope that everyone did too.  I certainly hope this is going to become a bi-annual event for all our TU100 students.  From my 'I've never really used Sense before to do anything other than to run a demo program' perspective, I certainly came out knowing a lot more than I did when I started.  Large parts of Sense were demystified, and I certainly had a lot of fun attending.

Additional resources

After sharing a link to this post my colleague Arosha (who also came along to the Sense workshop) has written a short blog post.  Arosha is loads more skilled when it comes to Sense programming and has re-created one of the projects that were demonstrated on the day.  Thanks Arosha!

Christopher Douce

Mathematics, Breaking Tunny and the First Computers

Visible to anyone in the world
Edited by Christopher Douce, Monday, 15 May 2017, 12:11

Picture of the Colossus computer

One of my interests is the history of computing.  This blog post aims to summarise a seminar that was given by Malcolm MacCallum, University of London, held at the Open University on 30 October 2012.  Malcolm used to be the director of the Heilbronn Institute for Mathematical Research, Bristol.  Malcolm began by saying something about the institute, its history and its research.

This blog complements an earlier blog that I wrote to summarise a lecture given at City University.  This earlier lecture was entitled Breaking Enigma and the legacy of Alan Turing in Code Breaking; it took place back in April 2012 and was one of a series of events to celebrate the centenary of Alan Turing's birth.  Malcolm's talk was similar in some respects but had a different focus: there was more of an emphasis on the story that led to the development of what could arguably be one of the world's first computers.

I'm not going to say much about the historical background that is obviously connected with this post, since a lot of this can be uncovered by visiting the various links that I've given (if you're interested).  Instead, I'm going to rush ahead and introduce a swathe of names, terms and concepts all of which connect with the aim of Malcolm's seminar.

Codes, Cyphers and People

In some respects the story of the Enigma code, which took place at the Government Code and Cypher School, Bletchley Park, is one that gains a lot of the historical limelight.  It is easy to conflate the breaking of the Enigma code (Wikipedia), the Tunny code (Wikipedia) and the work of Alan Turing (Wikipedia).  When it comes to the creating of 'the first computer' (quotes intentional), the story of the breaking of the Tunny code is arguably more important. 

The Tunny code is a code generated by a device called the Lorenz cypher machine.  The machine combined transmission, encryption and decryption.  The Enigma code was very different.  Messages encrypted using Enigma were transmitted by hand in morse code.

I'm not going to describe much about the machines since I've never seen a real one, and cryptography isn't my specialism.  Malcolm informed us that each machine had 12 wheels (or rotors).  Each wheel had a set of cams that were set to either 1 or 0.  These wheel settings were changed every week or month (just to make things difficult).  As each character is transmitted, the wheels rotate (as far as I know) and an electrical circuit is created through each rotor to create an encrypted character.  The opposite happens when you decrypt: you put an encrypted character in one side and a plain text (decrypted) character magically comes out the other side.

For everything to work, the rotors for both the encrypting and decrypting machines have to have the same starting point (as otherwise everything will be gibberish).  These starting points were transmitted in unencrypted plain text at the start of a transmission.
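
To see why matching start positions matter, here is a toy sketch in Python.  It is nothing like the real Lorenz arrangement (which had twelve wheels acting on 5-bit teleprinter code); it just combines each character with a repeating pattern of key bits, so applying the same pattern from the same starting point undoes the encryption, while the wrong starting point does not.

# A toy wheel cipher, nothing like the real twelve-wheel Lorenz machine (which
# worked on 5-bit teleprinter code); it only shows why both ends need the same
# cam pattern and the same starting position.
WHEEL = [1, 0, 1, 1, 0, 1, 0]          # cam settings: a short repeating key

def crypt(text, wheel, start):
    out = []
    for i, ch in enumerate(text):
        key_bit = wheel[(start + i) % len(wheel)]
        out.append(chr(ord(ch) ^ key_bit))   # XOR twice with the same key undoes it
    return "".join(out)

cipher = crypt("ATTACK AT DAWN", WHEEL, start=3)
print(crypt(cipher, WHEEL, start=3))   # same start position: the message returns
print(crypt(cipher, WHEEL, start=4))   # wrong start position: the message does not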

Through wireless intercept stations it was possible to capture the signals that the Lorenz cypher machines were transmitting.  The codebreakers at Bletchley Park were then faced with the challenge of figuring out the structure and design of a machine that they had never seen.  It sounds like an impossible challenge to figure out how many rotors and wheels it used, how many states the rotors had, and what these states were.

I'll be the first to admit that the fine detail of how this was done pretty much escapes me (and, besides, I understand that some of the activities performed at Bletchley Park remain classified).  What I'm really interested in is the people who played an important role in designing the physical hardware that helped with the decryption of the Tunny codes.

Depths and machines

Malcolm hinted at how the codebreakers managed to begin to gain an insight into how the Lorenz machine (and code) worked.  He mentioned (and I noted) the use of depths (Wikipedia), which is where two or more messages were sent using the same key (or machine setting).  Another note that I made was of something called a Saltman break, which is mentioned in a book I'll reference below.
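
Why a depth is so valuable is worth spelling out (this is my own gloss, not part of Malcolm's talk): with an additive cipher, combining two ciphertexts that were sent on the same key settings cancels the key out entirely, leaving only the two plaintexts mixed with each other.  A quick sketch in Python:

import random

# Why a 'depth' helps (my own gloss): XOR-ing two ciphertexts produced with the
# same key cancels the key completely, leaving p1 XOR p2 for the codebreaker.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = bytes(random.randrange(256) for _ in range(20))    # the same key, used twice
p1 = b"REPORT ENEMY POSITION"[:20]
p2 = b"WEATHER FINE TONIGHT"[:20]

c1, c2 = xor_bytes(p1, key), xor_bytes(p2, key)
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)            # no key material remains
print("the key has vanished; only the two plaintexts, combined, are left")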

Malcolm mentioned two different sections of Bletchley Park: the Testery (named after Ralph Tester), and the Newmanry (named after Max Newman).  Another character that was mentioned was Bill Tutte who applied statistical methods (again, the detail of which is totally beyond me and this presentation) to the problem of wheel setting discovery.

It was realised that key aspects of code breaking could be mechanised.  Whilst Turing helped to devise the Bombe (Wikipedia), equipment that was used in the decryption of the Enigma code, another machine, called the Heath Robinson (Wikipedia), was built to help attack Tunny.

One of the difficulties with the Heath Robinson was its speed. It made use of electromechanical relays which were slow, restricting the code breaking effort. A new approach was considered: the creation of a calculating machine that made use of thermionic valves (a precursor to the transistor).  Valves were perceived to be unreliable but it was realised that if they were continually powered up they were not stressed.

Colossus

Tommy Flowers (Wikipedia) engineered and designed a computer called Colossus (Wikipedia), drawing on experience gained working at the Dollis Hill Post Office research station in North London.

Although Colossus has elements of a modern computer, it could perhaps best be described as a 'special purpose cryptographic device'.  It was not programmable in the same way that a modern computer is (this is a development that came later), but its programs could be altered, perhaps by changing its circuitry (I don't yet know how this would work).  It did, however, make use of familiar concepts such as interrupts; it synchronised its operation by a clock-pulse, stored data in memory, used shift registers and did some parallel processing.  Flowers also apparently introduced the term 'arithmetic and logic unit'.

Colossus was first used to break a message on 5 February 1944.  A rather different valve based calculator, the ENIAC (Wikipedia), built by the Moore School of Electrical Engineering, University of Pennsylvania, was used two years later.

Final points

Malcolm told us that ten Collosi were built (I might have spelt that wrong, but what I do know is that Collosus-es certainly isn't the right spelling!), with the last one being dismantled in 1960.  A total of twenty seven thousand messages were collected, of which thirteen thousand messages were decrypted.  Malcolm also said that Flowers was 'grossly under rewarded' for his imaginative and innovative work on Colossus.  I totally agree.

Research into the Colossus was carried out by Brian Randell from the University of Newcastle in the 1970s.  A general report on the Tunny code was only released as recently as 2000.  Other sources of information that Malcolm mentioned were a book about the Colossus by Jack Copeland (Wikipedia) (which is certainly on my 'to read' list), and a biography of Alan Turing by Andrew Hodges (Wikipedia).

Malcolm's talk reminded me of how much computing history is, quite literally, on our doorstep.  I regularly pass Bletchley on the way to the Open University campus at Milton Keynes.  There are, of course, so many other places close by that have played an important role in the history of computing.  Although I've already been to Bletchley Park twice, I'm definitely going to go again and take a longer look at the various exhibits.

(Picture: Wikipedia)

Christopher Douce

Accessibility workshop: modules and module team representatives

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 2 May 2021, 12:46

For reasons that currently escape me, I seem to have found myself on three different module teams where I have some responsibility for accessibility.  The first two are design modules (design and innovation qualification) that are currently being developed by the university.  The third is M364 Fundamentals of Interaction Design, a module that I have tutored since its launch in 2006. 

I've been asked to write what is called an accessibility guide for the design modules.  For M364, I was asked to attend an accessibility workshop that was held on 17 October 2012 at the university in Milton Keynes.  This blog post is a rough set of notes that relate to this event (which was intended to inform and help those who are charged with writing an accessibility guide).  As well as being an aide-memoire for ongoing work, I hope that it might be useful for my H810 Accessible online learning: supporting disabled students groups, who may be confronted with similar challenges.  Furthermore, I hope that the summary may be of use to some of my colleagues.

Setting the scene

The workshop began with a bit of scene setting.  Accessibility and support for students with disabilities is provided by a number of different parts of the university.  These include Disabled Student Services, the Institute of Educational Technology (IET) who offer internal consultancy and advice, and the Library.  Responsibility also lies with faculties, such as the Faculty of Mathematics Computing and Technology in which I am primarily based.  Accessibility, it is said, is closely connected with one of the key objectives of the university: to be open to people.

We were all reminded of the fundamental need to anticipate the needs of students during the module production process.  This is especially important at the moment since there are a significant number of modules that are currently in production.  We were also reminded that a tension between content and accessibility can sometimes arise.  Academics may wish to present materials and suggest activities that may be difficult for some learners to engage with, for example.  There is the need to consider the implications of module design choices.

The types of anticipatory adjustments that could be made include figure descriptions, transcripts for videos, subtitling, alternative learning activities and the provision of alternative formats.  It should always be remembered that alternative formats, such as documents supplied in Word, PDF and ePub formats, have the potential to help all students.  Alternative formats (as well as standard provision of materials, such as those offered through the university virtual learning environment) can be consumed and manipulated by assistive technologies, such as screen readers and screen magnifiers.  Other relevant assistive technologies include voice recognition software and mobile devices.

Further scene setting consisted of painting a rough picture of the different types of disabilities that are declared by students.  I was interested to learn that only a relatively small number of broad categories make up the majority of declarations.  Although putting people in boxes or categories can be useful in terms of understanding the bigger picture, it's always important to remember that the challenges and conditions that people face can be very varied.  By way of additional information (and guidelines) I also remember a reference to a document by the Quality Assurance Agency (QAA) entitled Code of practice for the assurance of academic quality and standards in higher education, Section 3: Disabled students (QAA website).  This might be worth a look if you are especially interested in these kinds of policy documents and guidance that relate to higher education.

It was also stated that it is important to consider accessibility as early as possible in the module design process.  The reason for this should be obvious: it is far easier to include accessibility during the early stages of the design of a new module than it is to retrofit accessibility into an existing structure.  This takes us on to one of the aims of the workshop: to explore the role of a dedicated accessibility co-ordinator who sits on a module team.  One of the responsibilities of a co-ordinator is to write an accessibility guide for a module.

Responsibilities of a module team accessibility co-ordinator

Our first main activity of the day was to consider and discuss the different responsibilities of an accessibility co-ordinator.  Working in a small group, we quickly got stuck in.  We soon discovered that we had pretty different roles and responsibilities within the university.

The responsibilities that we considered important were: supporting module authors and liaising with colleagues, keeping track of what learning materials are being produced within a module, and actively obtaining support and guidance from different departments where necessary.  A fundamental responsibility was, of course, to produce an accessibility guide (which is now an important part of the module production process).

A co-ordinator must have an understanding of different sources of information, know how modules are produced, know something about the module material and have some facilitating and project management skills.  An ability to write clearly and succinctly is also important too!

Looking at some guides

After a period of discussion about the role of the co-ordinator, we then went onto have a look at a set of different accessibility guides with a view to trying to summarise what works well and what could be done better. 

Accessibility guides for individual modules are now being written for every new module.  The first module that had an accessibility guide was U116 Environment: journeys through a changing world. This was followed by TU100 My digital life.  A very detailed accessibility guide is also available for H810.

A fundamental question is: what is the purpose of the guide and who is it aimed at?  My understanding is that it can be used by a number of different people, ranging from learning support advisors who help students to choose modules, through to tutors and students.  It is a document for different audiences.

One thing that struck me was that we don't yet have the perfect document, structure or system to provide all the information that everyone needs.  This very much reflects my own understanding that accessibility isn't about producing a document, a standard or a set of instructions.  Instead, it is more of a process, where the artefacts can mediate and reflect interaction between people who work together to provide effective support.

One of the key difficulties that we uncovered was that there is an obvious tension between generic and specific advice.  There is a clear risk of offering too much information which has the potential to overwhelm the reader, but in some instances potential students may have very specific questions about the accessibility of certain aspects of a module.

I've made a note of some of the shared conclusions and assumptions about the purpose of a module accessibility guide.  Firstly, the guide is there to highlight accessibility challenges.  It should also say something about what alternative resources are available and also offer information and guidance about how to support students.

One really important question that was asked was: at what point in module production should we create this?  The answer is that the guide should be written during the module production process.  This allows the co-ordinator to be involved with the module development and allows potential accessibility problems to be addressed early.

Moving forwards

I found the workshop useful.  One of the main conclusions was that there needed to be more clarity about the role of an accessibility co-ordinator.  I understand that the results from the discussions have been noted and there may well be follow up meetings.

Accessibility (as well as support for individual students) is something that needs to be owned by individuals.  Reflecting my understanding that it is a process, the guide needs to be something that is refreshed as a module team gains more experience over the years in which a module is delivered.

One thing is very clear for me.  Given my role as co-ordinator on a couple of modules, I clearly need to get more of an appreciation as to what is going on so I can then consider the kinds of potential challenges that students may face. 

A key challenge is to understand the (sometimes implicit) assumptions that module teams make about the extent of adjustments that can be made and present them in a way that can be understood to different audiences.  This strikes me as a pretty tough challenge, but one that is very important.

Christopher Douce

Xerte Project AGM

Visible to anyone in the world
Edited by Christopher Douce, Monday, 18 Feb 2013, 19:13

Xerte is an open source tool that can be used to create e-learning content that can be delivered through virtual learning environments such as Moodle.  This blog post is a summary of a meeting entitled Xerte Project AGM that was held at the East Midlands Conference Centre at the University of Nottingham on 10 October 2012.  The purpose of the day was to share information about the current release about the Xerte tool, to offer an opportunity to different users to talk to each other and also to allow delegates to gain some understanding about where the development of the tool is heading.

One of my main motivations for posting a summary of the event is to share some information about the project with my H810 Accessible online learning: supporting disabled students tutor group.  Xerte is considered to be a tool that can create accessible learning material - this means that the materials that are presented through (or using) Xerte may be consumable by people who have different impairments.  One of the activities that H810 students have to do is to create digital educational materials with a view to understanding what accessibility means and what challenges students may face when they begin to interact with digital materials.  Xerte is one tool that could be used to create digital materials for some audiences.

This blog will contain some further description of accessibility (what it is and what it isn't); a subject that was mentioned during the day.  I'll also say something about other approaches that can be used to create digital materials.  Xerte isn't the beginning and end of accessibility - no single tool can solve the challenge of creating educational materials that are functionally and practically accessible to learners.  Xerte is one of many tools that can be used to contribute towards the creation of accessible resources, which is something different and separate to accessible pedagogy.

Introductions

The day was introduced by Wyn Morgan, director of teaching and learning at Nottingham.  Wyn immediately touched upon some of the objectives of the tool and the project - to allow the simple creation of attractive e-learning materials.

Wyn's introduction was followed by a brief presentation by Amber Thomas, who I understand is the manager for the JISC Rapid Innovation programme.  Amber mentioned the importance of a connected project called Xenith, but more of this later.

Project Overview

Julian Tenney presented an overview of the Xerte project and also described its history.  As a computer scientist, Julian's unexpected but very relevant introduction resonated strongly with me.  He mentioned two important and interesting books: Hackers, by Steven Levy, and The Cathedral and the Bazaar by Eric S Raymond.  Julian introduced us to the importance of open source software and described the benefit and strength of having a community of interested developers who work together to create something (in this case, a software tool) for the common good.

I made a note of a number of interesting quotes that can be connected to both books.  These are: 'always yield to the hands on' (which means, just get on and build stuff), 'hackers should be judged by their hacking', 'the world is full of interesting problems to be solved', and 'boredom and drudgery are evil'.  When it comes to the challenge of creating digital educational resources that can be delivered on-line, developers can be quickly faced with similar challenges time and time again.  The interesting and difficult problems lie with how best to overcome the drudgery of familiar problems.

I learnt that the first version of Xerte was released in 2006.  Julian mentioned other tools that can be used to create materials and touched upon the issue of both their cost and their complexity.  Continued development moved from a desktop based application to a set of on-line tools that can be hosted on an institutional web server (as far as I understand things).

An important point from Julian's introductory presentation that I paraphrase is that one of the constants of working with technology is continual change.  During the time between the launch of the original version of Xerte and the date of this blog post, we have seen the emergence of tablet based devices and the increased use of mobile devices, such as smartphones.  The standalone version of Xerte currently delivers content using a technology called Flash (wikipedia), which is a product by Adobe.  According to the Wikipedia article that was just referenced, Adobe has no intention to support Flash for mobile devices.  Instead, Adobe has announced that they wish to develop products for more open standards such as HTML 5. 

This brief excursion into the domain of software technology deftly took us onto the point of the day where the delegates were encouraged to celebrate the release of the new versions of the Xerte software and toolkits.

New Features and Page Types

Ron Mitchell introduced a number of new features and touched upon some topics that were addressed later during the day.  Topics that were mentioned included internationalisation, accessibility and the subject of Flash support.  Other subjects that were less familiar to me included how to support authentication through LDAP (lightweight directory access protocol) when using the Xerte Online Toolkit (as opposed to the standalone version), some hints about how to integrate some aspects of the Xerte software with the Moodle VLE, and how a tool such as Jmol (a Java viewer for molecular structures) could be added to content that is authored through Xerte.

One of the challenges with authoring tools is how to embed either non-standard material or materials that were derived from third party sources.  I seem to remember being told about something called an Embed code which (as far as I understand things) enables HTML code to be embedded directly within content authored through Xerte.  The advantage of this is that you can potentially make use of rich third party websites to create interactive activities.

Internationalisation

I understand internationalisation to be one of those words that is very similar to the term 'software localisation'; it's all about making sure that your software system can be used by people in other countries.  One of the challenges with any software localisation initiative is to create (or harness) a mechanism to replace hardcoded phrases and terms with labels, and have them dynamically changed depending on the locale in which a system is deployed.  Luckily, this is exactly the kind of thing that the developers have been working on: a part of the project called XerteTrans.
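
I don't know how XerteTrans is implemented internally, but the general pattern of swapping hardcoded phrases for keyed lookups in per-locale tables can be sketched like this (the keys, strings and fallback rule below are all made up for illustration):

# A generic illustration of the localisation pattern (not XerteTrans itself):
# hardcoded phrases become keys that are looked up in per-locale string tables.
STRINGS = {
    "en": {"next_page": "Next page", "check_answer": "Check your answer"},
    "nl": {"next_page": "Volgende pagina", "check_answer": "Controleer je antwoord"},
}

def t(key, locale="en"):
    # fall back to English if a locale or translation is missing
    return STRINGS.get(locale, {}).get(key) or STRINGS["en"][key]

print(t("next_page", locale="nl"))     # Volgende pagina
print(t("check_answer"))               # Check your answer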

Connector Templates

When I found myself working in industry I created a number of e-learning objects that were simply 'page turners'.  What I mean is that you had a learning object that had a pretty boring (but simple) structure - a learning object that was just one page after another.  At the time there wasn't any (easy) way to create a network of pages to take a user through a series of different paths.  It turns out that the new connector templates (which contains something called a connector page), allows you to do just this. 

The way that things work is through a page ID.  Pages can have IDs if you want to add links between them. Apparently there are a couple of different types of connector pages: linear, non-linear and some others (I can't quite make out my handwriting at this point!) The principle of a connector template may well be something that is very useful.  It is a concept that seems significantly easier to understand than other e-learning standards and tools that have tried to offer similar functionality.
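
As I understand the idea (and this is a sketch of the general concept rather than Xerte's actual data model), giving pages IDs and links turns the route through a learning object into a graph rather than a fixed page-turning sequence:

# A hypothetical sketch of the connector-page idea (not Xerte's real data model):
# pages have IDs and link to other IDs, so the learner's route becomes a graph
# rather than a fixed page-turning sequence.
pages = {
    "intro":  {"text": "Welcome",         "links": ["theory", "quiz"]},
    "theory": {"text": "Some background", "links": ["quiz"]},
    "quiz":   {"text": "Test yourself",   "links": ["intro"]},   # loop back if needed
}

def show(page_id):
    page = pages[page_id]
    print(page["text"], "-> you can go to:", ", ".join(page["links"]))

for page_id in ("intro", "theory", "quiz"):
    show(page_id)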

A final reflection on this subject is that it is possible to connect sets of pages (or slides) together using PowerPoint, a very different tool that has been designed for a very different audience and purpose.

Xenith and HTML 5

Returning to earlier subjects, Julian Tenney and Fay Cross described a JISC funded project called Xenith.  The aim of Xenith is to create a system that allows content authored using Xerte to be presented using HTML 5 (Wikipedia).  The motivation behind this work is to ensure that e-learning materials can be delivered on a wide variety of platforms.  When HTML 5 is used with toolkits such as jQuery, there is less of an argument for making use of Adobe Flash.  There are two problems with continuing to use Flash.  The first is that, due to a historic falling-out between Apple and Adobe, Flash cannot be used on iOS (iPhone, iPad and iPod) devices.  Secondly, Flash has not historically been considered a very accessible technology.

Apparently, a Flash interface will remain in the client version of Xerte for the foreseeable future, but to help uncover accessibility challenges the Xenith developers have been working with JISC TechDis.  It was during this final part of the presentation that the NVDA screen reader was mentioned (which is freely available for download).

Accessibility

Alistair McNaught from TechDis gave a very interesting presentation about some of the general principles of technical and pedagogic accessibility.  Alistair emphasised the point that accessibility isn't just about whether or not something is generally accessible; the term 'accessibility' can be viewed as a label.  I also remember the point that the application of different types of accessibility standards and guidelines don't necessarily guarantee a good or accessible learning experience.

I made a note of the following words.  Accessibility is about: forethought, respect, pragmatism, testing and communication.  Forethought relates to the simple fact that people can become disabled.  There is also the point that higher educational institutions should be anticipatory.  Respect is about admitting that something may be accessible for some people but not for others.  A description of a diagram prepared for a learner who has a visual impairment may not be appropriate if it contains an inordinate amount of description, some of which may be superfluous to an underlying learning objective or pedagogic aim.  Pragmatism relates to making decisions that work for the individual and for the institution.  Testing of both content and services is necessary to understand the challenges that learners face.  Even though educational content may be accessible in a legislative sense, learners may face their own practical challenges.  My understanding is that all these points can be addressed through communication and negotiation.

It was mentioned that Xerte is accessible, but there are some important caveats.  Firstly, it makes use of Flash; secondly, the templates impose some restrictions; and thirdly, access depends on differences between screen readers and browsers.  It is the issue of the browser that reminds us that technical accessibility is a complex issue.  It is also dependent upon the design of the learning materials that we create.

To conclude, Alistair mentioned a couple of links that may be useful.  The first is the TechDis Xerte page.  The second is the Voices page, which relates to a funded project to create an 'English' synthetic voice that can be used with screen reading software.

For those students who are studying H810, I especially recommend Alistair's presentation which can be viewed on-line by visiting the AGM website.  Alistair's presentation starts at about the 88 minute mark.

Closing Discussions and Comments

The final part of the day gave way to discussions, facilitated by Inge Donkervoort, about how to develop the Xerte community site. Delegates were then asked whether they would like an opportunity to attend a similar event next year.

Reflections

One of the things I helped to develop when I worked in industry was a standards compliant (I use this term with a degree of hand waving) 'mini-VLE'.  It didn't take off for a whole host of reasons, but I thought it was pretty cool!  It had a simple navigation facility and users could create a repository of learning objects.  During my time on the project (which predated the release of Xerte), I kept a relatively close eye on which tools I could use to author learning materials.  Two tools that I used were a Microsoft Word-based add-in (originally called CourseGenie), which allowed authors to create a series of separate pages that were then packaged together into a single zip file, and an old tool called Reload.  I also had a look at some commercial tools too.

One of the challenges that I came across was that, in some cases, it wasn't easy to determine what content should be created and managed by the VLE and what content was created and managed by an authoring tool.  An administrator of a VLE can define titles and make available on-line communication tools such as forums and wikis and then choose to provide learners with sets of pages (which may or may not be interactive) that have been created using tools like Xerte.  Relating back to accessibility, even though content may be notionally accessible it is also important to consider the route in which end users gain access to content.  Accessible content is pointless if the environment which is used to deliver the content is either inaccessible or is too difficult to navigate.

Reflecting on this issue, there is a 'line' that exists between the internal world of the VLE and the external world of a tool that generates material that can be delivered through (or by) a VLE.  In some respects, I feel that this notional line is never going to be pinned down due to differences between the ways in which systems operate and the environments in which they are used.  Standards can play an important role in trying to define such issues and helping to make things work together, but different standards will undoubtedly place the line at different points.

During my time as a developer I also asked myself the obvious question, 'why don't we make available other digital resources, such as documents and PowerPoint files, to learners?'  Or, to take the opposite view of this question, 'why should I have to use authoring tools at all?'  I have no (personal) objections to using authoring tools to create digital materials.  The benefit of tools such as Xerte is that the output can be simple, direct and clear to understand.  The choice of the mechanisms used to create materials for delivery to students should be dictated primarily by the pedagogic objectives of a module or course of study.

And finally...

One thought did plague me towards the end of the day, and it was this: the emphasis of the day was primarily on technology; there was very little (if anything) about learning and pedagogy.  This can be viewed from two sides - understanding more about the situations in which a particular tool (in combination with other tools) can best be used, and secondly how users (educators or learning technologists) can best begin to learn about the tool and how it can be applied.  Some e-learning tools work better in some situations than others.  Also, educators need to know how to help learners work with the tools (and the results that they generate).

All in all, I had an enjoyable day.  I even recognised a fellow Open University tutor!  It was a good opportunity to chat about the challenges of using and working with technology and to become informed about what interesting developments were on the horizon and how the Xerte tool was being used.  It was also great to learn that a community of users was being established. 

Finally, it was great to see how the developers were directly tackling the challenge of constant changes in technology, such as the emergence of tablet computers and new HTML standards.  Tackling such an issue head on whilst at the same time trying to establish a community of active open source developers can certainly help to establish a sustainable long-term project.

Christopher Douce

Journey: Introduction

Visible to anyone in the world
Edited by Christopher Douce, Monday, 28 Oct 2013, 13:41

It was a glorious September day; a day that echoed many of the best summer days that made the London Olympics so special for Londoners.  It was a day that I knew was going to change my life in a small but significant way - it was the day that I finally got around to changing my old fashioned (or 'classic') mobile telephone into one of those new fangled Smartphones.

'Why did it take you so long?  You work in technology?!', I could hear some of my friends and colleagues exclaiming. 'I was expecting you to be one of those who would jump at a chance to play with new stuff...'  The most obvious reason I can give as to why it took me so long is also the most cynical: I've been around long enough to appreciate that early stuff doesn't always work as intended.  I decided to 'hang back' to see how the technology environment changed.  Plus, I was perfectly happy to muddle through with my simple yet elegant mobile phone, which efficiently supported its primary purpose: making and receiving telephone calls.

I jumped on a red London bus and checked my text messages on my classic phone for the last time (there were none), and settled down to enjoy the ride of around four stops to Lewisham town centre, a bustling part of South East London.  I knew exactly where I was going - to a shop called 'The Carphone Warehouse' (which sounds a bit anomalous, since it isn't a warehouse, and I don't know anyone who has a dedicated car phone any more).

Stepping off the bus, I immediately found myself amidst a busy crowd.  One of the things that I love about Lewisham is its fabulous market.  I made my way past the fishmongers and hardware stall, and then past the numerous fruit and veg stalls, all of which seemed to be doing a roaring trade.  I then stepped into an air conditioned shopping centre and into the side entrance of the phone shop.  It was like I had entered another world.

After looking at a couple of 'device exhibits', I decided I needed to chat to someone.  It suddenly struck me how busy the shop was.  I joined an orderly queue that had formed in front of the cash desk.  I could see that employees were deep in conversation with customers whose expressions conveyed concentration.  In the background I could hear a woman speaking in what I understood to be a Nigerian accent, expressing unhappiness.  'You can ring the shop...', said the shop assistant.  'But I don't have a phone!' came the flabbergasted reply. 'I want to speak to your manager!'

After about ten or fifteen minutes, it was my turn.  I explained to the harassed shop assistant what model of phone I wanted (I had done a bit of research), told her something about my current contract and mobile telecoms provider, and asked a couple of questions.  These were about the costs, whether I could keep my telephone number and how long it would take to move from my old phone to the new phone.  I was told that my phone could come in a choice of colours, that the sky was (approximately) the limit in terms of how much I wanted to spend on the contract, and that they couldn't help me that day because the 'genius bar' guy who migrates telephone numbers from one phone system to another had fainted and had to go home.

It was at that point that I decided to leave the shop and, in theory, return another day when the 'genius man' was around.  As I was about to go, I was given a really useful nugget of information: 'just go around the corner to that other shop - they can change contracts for you, you don't even have to call up, which you would have to do if you came into the shop later'.

The second telephone shop I went into was a lot quieter and less frantic.  I asked my same questions about model, price and time and was given impeccably clear answers.  Everything was straightforward (if slightly more expensive).  The helpful assistant cancelled my existing phone contract by pressing a few buttons, seemed unperturbed that my contract address was about two years out of date, and gave me a new contract to sign.  Plus, there were no (visibly) angry customers.

Within twenty minutes, I was in possession of one of the most powerful computing devices I have ever possessed.  I was sent on my merry way whilst carrying my new mobile friend in a branded bag.  It was as if I had just bought a very expensive shirt from an upmarket fashion boutique - this was a world away from the time when I bought my first ever mobile phone in the mid 1990s.

Heading home, I passed three different mobile telephone shops.  Each shop represented a different mobile phone provider.  I always knew that competition between mobile providers was fierce, but the act of walking past so many very similar shops (which can be found pretty much in every big high street) emphasised the vibrancy and visibility of the mobile telecommunications industry.

As I caught the bus back home, I started to think about the device I had just bought.  The short journey to and from Lewisham made me consider the different forces that all contributed towards making a tiny computing device through which you can almost live your entire life.  Through your phone, you can discover your current location and learn about your onward journey, search for businesses that are close by and explore the depths of human knowledge whilst you stand in the street.  You can even hold up your smartphone and see the sights in front of you annotated with information.  Your smartphone can become (or, so I've heard!) an extension of yourself; like an additional limb or sense.  The smartphone is, fundamentally, a technological miracle.  These devices make the internet pervasive and information phenomenally accessible.

Whilst considering the magic that has emerged from decades of development and continual technological creativity, I asked myself a fundamental question.  This was, 'where has all this come from?'  We can consider a smartphone to be an emergent application of physics, chemistry, electronics, industrial design, engineering and computing, and a whole host of other disciplines and subjects too!  My question, however, was a bit more specific.  Since a smartphone is ultimately a very portable and powerful computer, my question was, 'where does the computer come from?'

Such a question doesn't have an easy answer.  In fact, there are many stories which are closely intertwined and interconnected.  The story of networking is intrinsically connected with the history of computing and computer science.  Just as today's smartphones carry out many different tasks (or threads of operation) at the same time, there are many different threads of innovation that have happened at different times and in different places throughout the world. 

The development of a technology and its application is situated.  By this, I mean, physically situated within a particular place, but also within a particular societal context or environment.  Devices and technologies don't just magically spring into existence.  There is always a rich and complex back story, and this is often one that is fascinating.

Like so many Londoners, I consider myself to be an immigrant to the city.  Whilst wandering its streets I can easily become aware of a richness and a depth of history that can be connected to the simplest and smallest of streets and intersections.  Just scratching the surface of a geographical location can reveal a rich tapestry of stories and characters.  Some of those stories can be connected to the seemingly simple question of, 'where does the computer come from?'

If I consider my new fangled smartphone, I can immediately ask myself a number of corollary questions.  These are: where do the chips that power it come from?  Where are they designed?  Where do they get manufactured?  Where does the software come from?   But before we begin to answer these questions there is a higher level, almost philosophical question which needs to be answered.  This is: 'where does the idea for the modern computer come from?'

This blog post is hopefully the first of many which will try to unpick this precise question.  I hope to (gradually) take a series of journeys in space and time, asking seemingly obvious questions which may not have obvious answers.  This may well take me to different parts of the United Kingdom, but there is also an adventurous part of me that wishes to make a number of journeys to different parts of the world.

But before I even consider travelling anywhere outside of London, there are places in London that are really important in the history of the development of the computer, and a good number of them are only a few miles from my house.  Although the next journey will only be a short distance geographically, we will also go back in time to the nineteenth century.  This is a time when computers were people and machines were powered by steam.

My first journey (whilst carrying my smartphone) will be to an ancient part of London called Elephant and Castle.  It's a part of London that is not known for its glamour or culture of innovation, and it seems a long way from the conception of the modern computer.  Instead, it is a part of the city known for large concrete tower blocks that came to be seen as a symbol of modern urban decay.  In fact, the only time I've spent there has been riding through the district on my motorbike on the way to somewhere else.

'What has this area got to do with the development of the computer?', I hear you ask.  I'm going to explain all in my next blog post.  And when I've been to Elephant and Castle, we're going to begin to travel further afield.

Permalink Add your comment
Share post
Christopher Douce

Raspberry Pi : suited and booted

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 5 Jun 2018, 09:34

I received delivery of my Raspberry Pi computer from RS components about two and a half months ago.  It's taken a bit of time to finally 'get it together' to create a setup that enables me to learn more about what it can do and what I could potentially use it for.  This blog is all about the steps that I took to arrive at a working setup.

When I made my original order I decided on the lazy option - I chose to buy a number of key components at the same time.  Along with my Raspberry Pi board I bought a power supply (which connects to the micro USB port of the device), a HDMI cable and a memory card which contains an operating system.  When you're starting with something new, there's something to be said for going with a standard distribution or setup.  There's the fundamental question of 'will it do stuff when I turn the power on?'  Going with a default or standard setup is a way to get going quickly.

There were, of course, three other things I needed: a mouse, a keyboard and a screen.  For the screen I figured out that I might be able to test my Pi out using my TV (since it had a HDMI port). For a keyboard and mouse, I visited a popular on-line hardware retailer and bought a cheap mouse and keyboard.  (To get an idea of how cheap they were, both items together cost less than a single pint of beer; it's astonishing how the price of hardware continues to drop).

I wanted something else, though.  A quick search on eBay using the term 'Raspberry Pi' revealed a number of small companies that had started to make cases for the Pi.  After about ten minutes of searching I found a company called ModMyPi.  Although I didn't strictly need a case, I thought it would be a sensible thing to do.  I could easily imagine myself putting my Pi on the floor and haplessly treading on it whilst carrying a hot cup of tea. 

After ten minutes of agonising decision making I had finally decided that my Pi needed a red case.  Why red?  Well, for two reasons: firstly, to signify that this little box is important (i.e. the red box is where number crunching takes place), and secondly, to make it pretty visible when it's sitting on my beige carpet (so I don't tread on it).

The trouble with buying something new is that things don't always arrive on time, and this was the situation with my tiny Pi case.  Although I soon had my keyboard and mouse, the case took quite a few weeks to arrive (apparently because I didn't read the small print which said that I was making a pre-order - note to self: read the small print!)

Boot day 1 : Trouble

I had everything: my newly suited (or encased) Raspberry Pi, a power supply, a USB keyboard, a USB mouse, a HDMI cable and an operating system (a version of Linux) on a memory card.  I attached the USB devices, connected the Pi to my temporary display (my living room telly) and powered everything up.  Through the case I could see that a LED came on and my TV changed display mode - things were happening!  The screen started to fill with boot messages and then suddenly... everything stopped.  I squinted, looked at the screen and could see that there had been something called a kernel panic.

When faced with weird technical stuff going wrong, what I tend to do is check all the connections and try again.  Exactly the same thing happened, so I powered down and scratched my head.  Then, I unplugged the USB mouse and the USB keyboard and powered up; this time I got a lot further.  I was eventually presented with a Linux login prompt but did not have any way of entering a user id.  This told me that (perhaps) there might have been something wrong with either the mouse or the keyboard.  I plugged both devices, one at a time, into my Windows laptop to see if they were recognised.  The mouse was recognised straight away, but Windows had to search for an eternity to find a device driver before the keyboard was recognised, suggesting that there was something unusual about its design.

Every techie knows that Google is their friend, especially when it comes to weird error messages. I searched for the terms, 'raspberry pi', 'panic' (or dump) and 'keyboard' and quickly found a site called elinux.org that contained a Wiki page which listed keyboards that were known to cause mischief.  I soon figured out that I had ordered the Xenta HK-6106 which was known to cause a kernel panic on a Debian distribution (I obviously had either the same one or a distribution very similar to it).  Mystery solved!

Ordering more stuff

I ordered a new keyboard.  This time I bought one (which cost the price of a half  pint of beer) that was on the 'working peripherals' list.

One of my biggest worries (if you could call it that) is that the screens that I use for my desktop PC are both pretty old (I have a dual screen setup).  One of them only has a VGA input, which is useless for the Pi.  The other screen has a DVI input.  A quick search revealed that it was possible to get HDMI to DVI cables.  I didn't know you could do this, and I have to confess that I don't know much about DVI other than my main PC has got one of these as a video output (in addition to a VGA port).  Still, I decided to buy a cable from eBay and hope for the best.

Boot day 2 : Success

After rummaging in a box that contained an indeterminate number of cables (hasn't every geek got one of these boxes?), I found a network cable.  I took every bit of my Pi setup upstairs to my study area and connected everything together; keyboard, mouse, power supply, screen and network cable (which I physically connected to the back of my router, after dragging it half way across the room since my network cable wasn't quite long enough, and still isn't quite long enough).

I powered up.  A kernel panic didn't occur.  I was presented with a login prompt.  I typed the user id: pi, followed by the password: raspberry.  I then entered 'startx' at the shell prompt.  The screen changed and I was presented with a GUI.  My aged screen was working!  I soon discovered an internet browser (accessed through the menu located at the bottom left of the screen).  Within a minute or so I was able to navigate to my favourite news site and open Wikipedia.  Success!

Now that I'd got everything working, I asked myself the question, 'what can I do with it?'  I guess this question has two key answers: you can use it to learn about computing, or you could use it to do stuff.  If I find the time I hope to do both!

Learning with the Pi

Considering the learning aspect, it's obvious that there are loads of things going on from the moment that you turn on the Pi.  There are a couple of screens of mysterious messages which currently don't make much sense to me (it's been a while since I've had a Linux distribution on one of my computers).  When you log in to the Pi environment there are loads of menu items, applications and tools that I've never heard of before.  There's also a version of a windowing system that I've never heard of before, and a weird sounding browser which seems to render things pretty well, judging from a brief ten minute play. 

There are also a set of programming tools and utilities.  The learning can go from the low levels of computing (from the level of the operating system) through to higher level applications (that can help to teach fundamentals of programming).  Being a bit of a geek, the most interesting question for me is 'what exactly does the Pi Linux distribution contain?'  This, I think, is going to be my first learning task.

Another geeky question is: how do you build software for the Pi?  My main computers are Intel based desktops or laptops.  The Pi is based around an ARM processor.  How do I take existing open source software and compile it so that it works on that ARM chip?  Going down a level even further, how do you get USB peripherals to work with the Pi?  Do I have to write a device driver?  Is the world of ARM device drivers different to Intel device drivers?  I have so many questions!

One thing that I have heard of (in passing, through a quick Google search) is that you can use what is known as a cross compiler.  This means that you can compile software using one processor architecture for another.   Of course, this is getting impossibly deeply technical for a first blog about the Pi so I'm going to stop asking myself difficult questions and wondering (for now) what is and what is not possible!

On another note there are a couple of Open University modules that are tangentially connected to (or might be useful with regards to) the Pi.  The Pi Linux distribution contains an environment called Scratch.  This is a graphical programming language developed by MIT that introduces the fundamentals of computer programming.  The Open University makes use of a derivative of Scratch called Sense, which is used with the TU100 My Digital Life module.  The other module that could be useful is T155 Linux: an introduction.   

Doing stuff with the Pi

So, it boots up.  That's pretty cool.  But what might I practically be able to do with it?  I've heard one of my colleagues talking about potentially using a Pi to create a digital video recorder, which sounds like a fun project.  You can also use it as an embedded system to control other hardware. In fact, looking at the Raspberry Pi blog presents a veritable array of different projects and ideas.

About six or so years ago, perhaps even longer, when I worked in industry, in a company that made educational products that could be used to help teach engineering subjects, I suggested creating a device that could (potentially) be used to help teach the fundamentals of computer networking.  The idea was to make use of an inexpensive embedded microcontroller to create something called a 'computer cube'.  Each cube would have simple input and output (perhaps a couple of switches and a LCD display), as well as a network connection (either a standard network connection, or a proprietary interface that could be easily accessed through software).   The idea was that you could connect a set of computer cubes together on a desk; you could create your own mini internet and also have the ability to look at the signals transmitted between devices and begin to understand the principles of protocols.

Of course, such an idea was hopelessly ambitious, plus there were increasing numbers of network simulators that did a pretty good job of helping learners to explore the principles of networking.  Fundamentally, at the time, it was a bad idea.

But then the Pi arrived.  The Pi is cheap, small, has its own peripherals and is open.  You can run whatever software you want on it.  A Pi is a web client, but there is no reason why it can't also become a web server.  A Pi could also (potentially) become everything in between too.  You could connect them together using relatively cheap switches and hubs, and explore (in a practical sense) computer networking and how the software that supports networking actually works.  You could set one to transmit data, and perhaps use the general purpose IO ports to indicate output of some kind.
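
To make the 'web server' idea a little more concrete, here is a minimal sketch in Python (nothing Pi-specific is assumed, other than a Python 3 installation; the port number is an arbitrary choice): run it on the Pi, point a browser on another machine at port 8080 of the Pi's address, and the Pi will happily serve up whatever files sit in the current directory.

# Minimal sketch: serve the current directory over HTTP.
# Assumes Python 3; the port number 8080 is an arbitrary choice.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(('0.0.0.0', 8080), SimpleHTTPRequestHandler)
print('Serving on port 8080...')
server.serve_forever()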

Would it be possible to have a network of Pi devices on a desk?  Possibly.  What software would be useful to learn more about the fundamentals of networking?  I'm not sure.  Could we create some useful curriculum or pedagogic materials to go with this?  I've no idea.  All this sounds like a project that is a bit too big for just one person.  If you accidentally discover this blog post and you think this may be a useful idea (or hold the view that it remains a bad idea), then please do get in touch!
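
By way of a very rough sketch of the kind of desk-top experiment I have in mind (the port number and address below are invented purely for illustration), one Pi could run a tiny Python program that listens on a TCP port and echoes back whatever it receives, whilst a second Pi connects to it and sends a message.  Run the listener first, then the sender (with the listener Pi's real address substituted in).

# listener.py - run on the first Pi: accept one connection and echo the data back.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 5000))    # 5000 is an arbitrary choice of port
server.listen(1)
conn, addr = server.accept()
data = conn.recv(1024)
print('Received from', addr, ':', data)
conn.sendall(data)                # echo the bytes straight back
conn.close()
server.close()

# sender.py - run on the second Pi (replace the address with the first Pi's IP).
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('192.168.1.10', 5000))    # hypothetical address of the first Pi
client.sendall(b'hello from the other Pi')
print('Echoed back:', client.recv(1024))
client.close()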

Final notes

There is one clear certainty in computing.  It isn't Moore's Law.  It's that there is always an opportunity to learn new stuff.  As well as looking at the Pi operating system and learning about what the various bits are, I've also heard it mentioned that the language of the Pi is Python (Wikipedia).  This isn't a language that I've used before.  It's certainly about time that I knew something about it!
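
As a first, hedged taster (I haven't tried this on my own board yet), the archetypal beginner's program for the Pi appears to be blinking an LED from Python using the RPi.GPIO library that comes with the standard distribution; the sketch below assumes an LED and resistor are wired to GPIO pin 18.

# Blink an LED wired to GPIO pin 18.
# Assumes the RPi.GPIO library (shipped with the standard Pi distribution).
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)         # use Broadcom pin numbering
GPIO.setup(18, GPIO.OUT)

for _ in range(10):
    GPIO.output(18, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(18, GPIO.LOW)
    time.sleep(0.5)

GPIO.cleanup()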

If you scratch the surface of anything technical you find a set of subjects and technologies that are both interesting and challenging.  Not only is the Raspberry Pi device interesting and challenging in its own right, but I'm sure that the situations in which it can be used and applied will be interesting and challenging too.

Permalink 1 comment (latest comment by Alex Little, Tuesday, 21 Aug 2012, 20:35)
Share post
Christopher Douce

e-Learning Community: Portfolios and Corpora

Visible to anyone in the world

The Open University has something called an e-learning community.  This is a loose group of people who share a common interest in e-learning and the application of information technology for teaching and learning.  Since I was visiting the head office at Milton Keynes for a meeting, I thought I would drop into a seminar that took place on 11 July 2012.

This meeting of the e-learning community comprised two different talks, each very different from the other.  The first presentation, by Thomas Strasser, was about e-portfolio systems and how they can be used with teacher training.  In some ways, this first talk connected to an earlier HEA event at Birmingham City University which focused on helping people to create an on-line professional presence.  The second talk, by Alannah Fitzgerald, was about how corpora can be used to help with language learning.  I hope that through these notes I've done justice to both presentations.

The role of self-organised learning using Mahara

The full title of Thomas Strasser's presentation is, Mighty Mahara: the role of self-organised learning within the context of Mahara ePortfolio at Vienna University of Teacher Education.   Mahara (Mahara website) is an open source ePortfolio system which appears to be increasingly used in combination with the Moodle virtual learning environment.  One of the reasons for this is likely to be that both systems are built using the same underlying technology, PHP. 

An ePortfolio system can be described as an on-line tool that can be used as a repository to store documents of work performed and reflections to gain further understandings of a particular subject or topic.  I understood that ePortfolios can have different faces: on one hand they can be private (to facilitate personal reflection), or they can be public (to enable the sharing of documents and ideas between different groups).  The public dimension can also allow the user to share information about competencies with other people, and this may include potential employers.

During Thomas's presentation, I was introduced to a slightly different (and more nuanced) view of ePortfolios.  Apparently three authors called Baumgartner, Himpsl and Zauchner proposed three different types: systems that can be used to facilitate reflection (thoughts on work that has been done), development (thoughts about future directions and plans), and presentation (information about what the user or student can do, or has achieved).

One of the most important points that I've noted is that Thomas argued that teachers need to be digitally literate and be able to appreciate the different situations in which digital media might be used. 

One term that was new to me was 'self-organised learning'.  Whilst I had not heard of this term before, its intention feels immediately comprehensible.  Thomas mentioned that it is connected to recent debates surrounding life-long learning.  Four components of self-organised learning were mentioned: a focus on individual strengths and weaknesses, self-reflection (I'm assuming this means on work that has been performed and problems that have been tackled), differentiated systematic reflection (I'm not sure what this means), and documentation (which I understand relates to creating evidence of what has been done).

Why use an ePortfolio?  I understand that teacher training is a field where it is necessary to collect a significant amount of documentation and evidence.  The one thing that an ePortfolio can do is to replace paper based reports and portfolios, thus helping to unburden the lecturer.  The lecturer, however, is not the focus.  Instead, the student or learner should be at the centre.

For any on-line tool to be successful its users need to either see or discover its worth. One way to achieve this is to have a lecturer being a 'role model', i.e. using the same tools as the student.  An important point was that the popularity of a tool can depend on the enthusiasm of the tutors that are using it; acceptance is something that can take time and institutions may have a role to play in terms of making certain tools obligatory.

Through their ePortfolio system, Thomas's students are encouraged to share a lot of their work and activities with others.  The system can store contact information, students can communicate with each other through a reflective blog and can provide peer feedback through task-based reflection. (It was at this point that I thought of the Open University tool, Open Design Studio that is used as a part of the U101 Design Thinking module). 

The question and answer session at the end of Thomas's talk raised a number of familiar questions.  These include what may happen to an ePortfolio when a student leaves their institution, the extent of difference between an ePortfolio and a website, and the issues of privacy and security.

A copy of Thomas's presentation can be found by visiting the presentation section of his Learning Reloaded website.  Further information and research can be found on Thomas's home page.

Addressing academic literacies: corpus-based open educational resources

Alannah Fitzgerald's presentation had a strong connection with the subject of computational linguistics, a subject which I took as a master's module.  I understand a corpus to be a set of texts that can be used by researchers to gain an understanding about how language is used.  I first learnt of the term when I heard of something called the British National Corpus, or BNC, which is a set of carefully sampled texts which can be used by linguists.

One of the themes of Alannah's presentation was teaching of academic English, particularly to people who know English as their second language.  I had never heard of this before, but apparently there is something called an 'academic word list'.  This word list has been published by academic publishers with the intention of helping language learners.  The word list has apparently been produced by the analysis of a corpus of academic articles.
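
Out of curiosity, here is a very rough sketch (in Python, and purely my own illustration rather than how the published list was actually built) of the first step of such an analysis: counting how often each word appears across a folder of plain text files.  The folder name and the cut-off of fifty words are made up for the example.

# Illustrative only: count word frequencies across a folder of text files.
import glob
import re
from collections import Counter

counts = Counter()
for filename in glob.glob('articles/*.txt'):
    with open(filename, encoding='utf-8') as f:
        counts.update(re.findall(r"[a-z]+", f.read().lower()))

for word, count in counts.most_common(50):
    print(word, count)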

One of the challenges of creating a corpus is to ensure that it is representative.  This means that samples of language use are chosen from different disciplines.  Just as in the social sciences, research that presents conclusions from poorly sampled data can be subjected to challenge.  Such challenges, of course, can lead to new experiments (or new corpora), which may lead to different results.

Another theme of Alannah's talk was open educational resources (Wikipedia), or OER.  OERs are educational resources that anyone can use, free of charge.  Over recent years a number of on-line corpora and linguistic tools have become available.  Such tools can be used by teachers and student alike, potentially to either augment the use of textbooks, or even to gain different or alternative perspectives.

We were introduced to FLAX, or Flexible Language Acquisition Project, Wordandphrase and Lextutor.  Wordandphrase draws upon a corpus called COCA, an abbreviation for Corpus of Contemporary American English.  Apparently, one of its really interesting features is that whilst the BNC is a snapshot of language at a particular period of time, COCA is continually being added to, so it represents 'current' language usage. Another interesting corpus is BAWE, an abbreviation for British Academic Written English.

The main point of Alannah's talk was that teachers of English need not be constrained only by the resources that publishers provide.  Challenges lie in understanding how the different tools work and how they can fit in and be used within classes (a thought which has been drawn from Thomas's earlier comment that the tutor needs to show how tools could and should be used).

Other resources include BALEAP, which is an organisation dedicated to the professional development of those involved in learning, teaching, scholarship and research in English for Academic Purposes (EAP).  Another site that was mentioned was Teacher Training Videos.

Reflections

I enjoyed both presentations.  Regarding the presentation about ePortfolios I do sense that their success in an institution or a course of study will heavily depend on how the advantages of such tools are conveyed to students.  People only use tools if they are perceived to yield some kind of benefit or have a clear purpose (or if you have to use them to gain scores that contribute towards an assessment).  One issue that remains is the unknown consequences of sharing or whether what we write will be 'googleable' and come back to haunt us.

I particularly enjoyed Alannah's talk since the subjects that she spoke about were very different to my current research interest (which is becoming more about the history of computing).  What was great (for me) was that it brought back memories of old studies and reminders of tools I had looked at many years ago (such as WordNet).

Alannah's talk also made me wonder about whether it might be possible, or in fact useful, to create a corpus of computer programs, which may have the potential to help us to learn more about the ways that software developers perceive and understand different types of software.  Much food for thought.

Permalink 1 comment (latest comment by Jonathan Vernon, Wednesday, 18 Jul 2012, 17:59)
Share post
Christopher Douce

Teaching, Learning and Assessment of Databases

Visible to anyone in the world
Edited by Christopher Douce, Wednesday, 30 Jan 2019, 12:30

The 10th Teaching, Learning and Assessment of Databases (TLAD) workshop was held at the University of Hertfordshire on 9 July 2012. The University of Hertfordshire is one of those places that I have heard quite a lot about (from some friends and colleagues who have both visited and worked there), but until 9 July, I had never had the opportunity to visit. 

Although databases aren't my core subject, it is an area that I do have an interest in, having been a software developer for quite a few years before joining the university.  Plus, the subject of databases (and their development) certainly crosses over with another big interest of mine, which is the psychology of computer programming.  Enough about me and my interests, and on to a summary of the event.

An effective higher education academy

Karen Fraser, who works in academic development within the HEA, kicked off the day.  Karen once worked as a lecturer in computer science at the University of Ulster, before joining the HEA.  Karen talked about the objectives of the HEA and its current areas of focus.  These include the issue of employability amongst computing graduates and also supporting, promoting and developing teaching (and teacher) excellence. 

Other areas of interest include flexible learning, understanding mobility centred learning (a term that I had not come across before), and sustainable development in the sector.  Another area of focus includes supporting institutional strategy and change.

There were two other key parts to Karen's introduction: funding opportunities that the HEA can offer both individuals and academics, and mechanisms to accredit the teaching and skills of individuals.  In terms of funding, there are the teaching development grants, individual grants, departmental bids, collaborative bids and strategic development bids.  Anyone who is interested in finding out more should, of course, visit their website.

In terms of accrediting or recognising individuals the HEA runs what is called a fellowship scheme, where individuals can apply and submit evidence regarding their skills and practice.  I didn't know this (or, I had forgotten), but there is also something called a senior fellow scheme too.  Karen also mentioned the National Teaching Fellowship Scheme (NTFS) and the UK Professional Standards Framework (UKPSF).

On the subject of teaching quality, Karen drew our attention to a report entitled Dimensions of Quality by Professor Graham Gibbs. Apparently one of the main conclusions was that who performs the teaching was considered to be a more important measure of quality than the number of contact hours.

Towards the end of her talk, Karen briefly mentioned something called the HEA's 2016 strategic plan. The key points I noted were the aims to provide effective support to teachers and those involved in teaching and learning, to increase capacity and reward excellence, and offer influence to national policy.

Analyzing the influence of SQL teaching and learning methods and approaches

The first paper presentation of the day was by Huda Al-Shuaily from Glasgow University.  Huda presented what was a small section of her doctoral research. Huda drew our attention to earlier research by Ogden who presented a three stage cognitive model of working with SQL.  These included query formulation, query translation and query writing.  Huda considered that an additional category named query comprehension was perhaps necessary.

For each of these stages, Huda considered different issues.  For successful query formulation, an understandable context (or set of appropriate examples or situations that are used to teach the concepts of databases) is necessary to help learners.  For query translation, where students convert queries between English and SQL, the ambiguity of English can be a particular difficulty.  For query writing, knowing something about the strategies that novices may adopt may be useful too; it was recommended that teachers emphasise the 'what' and the 'how'.  An important point was that it is perhaps a good idea to teach students to read SQL before teaching them how to write it.

One of the most interesting parts of her presentation was when she began to talk about patterns and SQL.  I have used generic programming patterns and had heard that they have been applied to other related areas such as usability, but never before to databases.  Huda mentioned something called a 'self-join' pattern, which is one of a number of patterns that could be taught to students.
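
To make the idea of a self-join concrete (this is my own invented example rather than one from Huda's presentation), the sketch below uses Python's built-in sqlite3 module to join an employee table to itself, so that each employee is listed alongside the name of their manager.  The essence of the pattern is simply that the same table appears twice in the query under two different aliases.

# Illustrative self-join: list each employee next to their manager.
# The table and data are invented purely for this example.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employee (id INTEGER, name TEXT, manager_id INTEGER)')
conn.executemany('INSERT INTO employee VALUES (?, ?, ?)',
                 [(1, 'Ann', None), (2, 'Bob', 1), (3, 'Cas', 1)])

query = '''
    SELECT e.name, m.name AS manager
    FROM employee AS e
    JOIN employee AS m ON e.manager_id = m.id
'''
for name, manager in conn.execute(query):
    print(name, 'reports to', manager)
conn.close()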

The question and answer section immediately opened up a number of interesting debates.  Regarding the subject of patterns, there was some debate as to whether we ought to be teaching students general problem solving approaches rather than higher level abstractions such as patterns.  Another debate related to the type of data that we have within our datasets that are used to teach the underlying concepts.  Should we use real data (or, at least, real data that has been manipulated to avoid disclosure of sensitive records), or artificial, made-up data?

Temporal support in relational databases

Bernadette Marie Byrne from the University of Hertfordshire spoke about temporal support in relational databases whilst at the same time giving us some useful background information and presenting a case study.  Temporal databases were described as databases that are capable of recording what data has changed and when.  Apparently, debates were occurring in the SQL standards bodies about extending SQL to cater for temporal data when the focus of discussions changed due to the arrival of XML.  Some database vendors, such as Oracle, have nevertheless implemented certain temporal extensions.

A case study that Bernadette described centres on a motorcycle and cycle hire business.  It is necessary to record when items are hired (and when they are returned), as well as knowing when items are available for hire.  An added complication is that 'partial hires' can be performed: some bicycles can be hired for, say, two days, and then swapped for another to ensure that an original customer hire request is satisfied.
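
As a hedged sketch of what a simple 'time aware' query can look like without any special temporal extensions (again, my own invented example rather than Bernadette's), each hire record below carries a start and end date, and the query asks which bicycles were out on hire on a particular day.

# Invented example: which bicycles were on hire on a given date?
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE hire (bike TEXT, hired_from TEXT, hired_to TEXT)')
conn.executemany('INSERT INTO hire VALUES (?, ?, ?)',
                 [('bike-1', '2012-07-01', '2012-07-03'),
                  ('bike-2', '2012-07-02', '2012-07-09'),
                  ('bike-1', '2012-07-05', '2012-07-06')])

on_date = '2012-07-05'
query = 'SELECT bike FROM hire WHERE ? BETWEEN hired_from AND hired_to'
for (bike,) in conn.execute(query, (on_date,)):
    print(bike, 'was on hire on', on_date)
conn.close()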

It was clear to me that such a scenario (which I understand was drawn from a real-life situation) was one that was pretty tough to implement and would clearly show the challenges of working with time-centric data.  Another interesting consideration that sprung to my mind is the question of 'where do we write the code?'  In some cases we should rely on the functions of the database to solve our problem, whereas on other occasions we might want to write more program logic to cater for all the different situations that we come across.  Knowing where (and how) to write code is, of course, a part of the artistry of computer programming.

Roadmap for modernising database curricula

Jagdev Bhogal and Kathleen Maitland from Birmingham City University gave a very thought provoking presentation about what we need to do, or could do, to enhance the current database curricula.  Kathleen argued that databases are ubiquitous. On one hand, you might be accessing a server hosted database through a call from a mobile app.  On the other hand, your mobile app may contain its own database or data store of some kind.

One of the perceived problems is that databases are taught in bite size chunks in isolation from other modules.  Kathleen also argued that ideally modules should be connected together in some way and emphasised the need for different members of faculty to talk to each other.  Getting staff to work together has the potential to help students create a portfolio of work (perhaps even functioning applications) that can be demonstrated to employers.

Employability is, of course, very important and curriculum design should directly address employability skills.  One such skill is professional writing and communication.  One approach to developing professional skills is to teach using substantial case studies, such as those relating to the retail, banking and government sectors.  Using case studies opens up the possibility of making use of very large databases and understanding the contexts in which they are situated.

Topics that could be covered in modules include data modelling, data acquisition, approaches for data storage (including different ways of using mass storage devices, as well as saving data to the cloud), data searching (of both structured and unstructured data), processing, performance and security (which can include addressing subjects such as authentication and defence in depth).

The final conclusions that I've noted are that employability skills matter and that it is important to get employers involved.  It is also important to consider how to improve the student experience by creating realistic scenarios, and it helps if students create assessment portfolios which can be used to demonstrate technical skills and abilities.

Research-informed and enterprise-motivated: Data mining and warehousing for final year undergraduates

Jung Lu from Southampton Solent University gave a presentation that focused on the teaching of data mining.  Jung highlighted that students had to consider a number of advanced research topics, including XQuery, Weka (data mining), databases in the cloud, Oracle Apex, distributing and replicating data, accessing and manipulating data programmatically, and PL/SQL (Wikipedia) (stored procedures).

I made a note of a key point that related to the importance of practice.  It is necessary to ensure that students have sufficient time and resources to engage with practice activities and tasks before moving onto formally assessed activities.  'Screen time', as I call it, can give students confidence as well as experience that can stand them in good stead when it comes to the work place.

Subjects such as data warehousing and OLAP (Wikipedia) were said to be taught using a case study and a guest lecture (the importance of case studies being an issue that featured later on within the workshop). Towards the end of the presentation, professional certifications were also mentioned.  Finally, a connection to employability skills, particularly SFIA, the Skills Framework for the Information Age, was mentioned.  This framework may be able to offer some guidance about which skills may be particularly relevant or useful.

The teaching of relational on-line analytical processing (ROLAP) in advanced database courses

Bernadette Marie Byrne and Iftikhar Ansari both from the University of Hertfordshire talked about how to teach ROLAP, which is a database extension that I had never heard of before.   They began by referring to a very large dataset which had just under a million rows.  Other important considerations included that of performance.

As well as ROLAP being a new term to me, I was also introduced to a second one, which was 'star schema design'.  I think my unfamiliarity with these terms relates more to my background of using small to medium sized databases, rather than large and extensive data sets. One point was very clear: having hands-on practical experience was something that was considered to be both important and necessary for students.

Introducing NoSQL into the database curriculum

The first ever database systems I used were based around the XBase language; early PC based databases such as dBase, Clipper and FoxPro (this was back in the very early nineties).  From there I was introduced to the rigours of SQL, which is one of those languages that I've used off and on throughout my programming career. 

Clare Stanier from Staffordshire University introduced what was to me a set of new database developments and innovations that had passed me by, namely NoSQL (or, perhaps, post-SQL) databases: systems that enable users to more readily store unstructured data, perhaps in the form of documents.  Clare reminded us that in the early days of databases there were many different types. Over time the SQL-based relational model became dominant.  Clare argued that we're now living in a database environment which is increasingly diverse.

The relational approach requires us to clearly structure our data.  Whilst this can allow us to carry out complex queries, it can be difficult to create databases which can readily accommodate changing types of data.  NoSQL databases (NoSQL.org) permit weaker concurrency models and (I guess) you might also argue that some of them are more weakly typed.

Clare introduced us to a number of different databases.  Two notable ones include MongoDB which is apparently used to drive Craigslist, and CouchDB.  Apparently these two database projects have similar underlying objectives but there is a healthy rivalry between the two groups (which is no bad thing).
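
As a rough illustration of what 'schema-less' storage feels like (a hedged sketch: it assumes a MongoDB server running locally and the pymongo driver, neither of which were shown at the workshop), documents with quite different shapes can be dropped into the same collection and queried without declaring any structure up front.

# Store and query loosely structured documents in MongoDB.
# Assumes a local MongoDB server and the pymongo package; the names are invented.
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
adverts = client.demo.adverts             # database 'demo', collection 'adverts'

adverts.insert_one({'title': 'Bike for sale', 'price': 80})
adverts.insert_one({'title': 'Flat to rent', 'bedrooms': 2, 'area': 'Lewisham'})

for doc in adverts.find({'title': {'$regex': 'Bike'}}):
    print(doc)
client.close()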

Another database (again, one that I had not heard of before) is Cassandra.  NoSQL databases have clearly made it into the mainstream.  Amazon have developed a database called SimpleDB, which can be used as a part of their cloud services.  Of course, cloud based databases have their own advantages and disadvantages, and developers always need to be mindful of these. Another aspect of NoSQL databases is that they have the potential to more readily (and perhaps easily) integrate with internet applications.  With some systems it might be easy to issue queries over REST (Wikipedia), for instance.
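
To illustrate the REST point (another hedged sketch: the database name and document id are invented, and it assumes a CouchDB instance listening on its default port), a stored document can be fetched with nothing more than an HTTP GET.

# Fetch a document from CouchDB over plain HTTP.
# Assumes CouchDB running locally on its default port 5984;
# the database 'adverts' and document id 'bike-1' are invented.
import json
import urllib.request

url = 'http://localhost:5984/adverts/bike-1'
with urllib.request.urlopen(url) as response:
    document = json.loads(response.read().decode('utf-8'))
print(document)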

Clare made a very good point, which was that the TLAD community and lecturers who are involved in teaching databases and related subjects need to have a debate about what is taught in the database curriculum and the extent to which NoSQL databases need to feature. 

The distinctions between NoSQL and SQL databases remind me of a simplistic distinction between programming languages.  On one hand there are strictly typed languages, such as Java, which require you to define everything.  On the other there are languages such as Perl which are weakly typed and allow developers to get into all kinds of muddles (whilst at the same time permitting certain categories of problems to be solved quickly and effectively, when such tools are placed in skilled hands).  There are, of course, other languages (and language mechanisms) in between.  I have little doubt that SQL and NoSQL databases may influence each other, but it remains a programmer's and designer's challenge to choose the most appropriate tool for the task in hand.

A ten-year review of workshops

David Nelson from University of Sunderland and Renuga Jayakumar from University of Wales Trinity Saint David presented an analysis of papers presented at TLAD over the last ten years.  David also attempted to present his view of what we might have to teach in the future (whilst also accepting that predicting the future is always a dangerous thing to try and do!)

Some of the broad themes that are covered in the workshop have included database design methods, e-learning tools, curriculum research, student diversity and assessment methods.  Some of the very early papers presented techniques for the automated assessment of database designs.  Over the years, technologies such as OpenMark (Open University) have matured.

Since the inception of TLAD, a range of new technologies have emerged and have been increasingly applied in different situations, such as XML.  With XML it is necessary to understand the fundamentals before fully appreciating its significance within the world of databases.  Papers regarding e-learning have included presentations about games, class participation, recording of lectures and how to best facilitate 'out of hours' learning.

Looking towards the future, we might see curriculum changes to take further account of transaction processing, system and data recovery, security, cloud computing and physical aspects of system design.  Mobility and non-relational databases, as well as subjects such as data warehousing, are also considered to be significant.

During the closing discussion, I also noted down the name of a resource that was new to me, namely, the Database Disciplinary Commons which is hosted by the University of Kent.

Reflections

I think this is my second TLAD workshop; the previous one that I attended was held at the University of Greenwich.  I enjoyed my first one and I enjoyed this one too.  I remain of the opinion that databases are a tough subject to teach, but one that is fundamentally very important to computer science education.  Lecturers need to convey fundamental concepts which, to some, may be significantly difficult to grasp.  The challenge becomes even more acute when we move to more advanced subjects where issues such as software and hardware architecture need to be considered.  Security, of course, is another topic that is very important, and there is a necessary connection between databases and the teaching of programming.

One point that I remember from my own database education (much of it acquired 'on the job' whilst working in industry) was that it became apparent that there were so many different ways to solve a problem.  I remember being presented with different techniques and having to make a decision about how to apply them.  Should I create a database abstraction layer for my application or use stored procedures, for example?  In my programming career I've even seen the horror of SQL intertwined with HTML tags!  Thankfully, the prevalence of design patterns, particularly MVC, has gone a long way towards emphasising the importance of separating out different aspects of an application.

All these ruminations suggest an important subject, which is how to most effectively convey best practice to our students.  Understanding the most appropriate ways to design systems and databases comes after acquiring fundamental skills.  This again connects to the view that teaching databases is a tough thing to do.

For me, there were two highlights of this TLAD.  The first relates to being aware of more on-line resources relating to learning and teaching (and being introduced to new technical terms), and secondly, being introduced to the concept of NoSQL.  My next challenge is to try to find some time to explore these new software technologies.  I hope I will be able to find the time and opportunity to do this.

Addendum

A few years after publishing this post, I was contacted by a reader, who mentioned that they had a website about the teaching of PL/SQL that contained a number of useful tutorials. If anyone is interested, here's a link to Ben Brumm's PL/SQL tutorials (Databasestar website).

Permalink Add your comment
Share post
Christopher Douce

Teaching and learning programming for mobile and tablet devices

Visible to anyone in the world
Edited by Christopher Douce, Monday, 3 Mar 2014, 18:45

I attended a HEA workshop about the teaching and learning of programming for mobile and tablet devices at London Metropolitan University on 15 June 2012.  This is a quick summary of my own take on what happened on the day, combined with a set of personal reflections, some of which I've added in the body of this summary.  I'm writing this with the hope that this summary might be useful for some of the attendees, and for others who were unable to attend.

In some ways, this was the second of a 'mini series' of two workshops about mobile technologies, the first being held at the University of Buckingham back in May 2012.  A quick write up of this earlier workshop, which had more of a focus on employability skills, can be viewed by visiting an earlier blog post.

The day began with an introduction by Dominic Palmer-Brown who clearly emphasised the importance of mobile technologies.  Dominic commented that the subject is particularly important 'to ourselves and our students', going on to emphasise that skills working with and developing mobile technologies are in demand by industry.  A number of presentations appeared to confirm that this was the case, particularly the final presentation.

The potential of social media and mobile devices in informal, professional and work-based learning

Professor John Cook, from London Metropolitan University gave an opening keynote about how mobile devices could be used to help facilitate teaching and learning.  John introduced us to a number of different ideas and projects, enabling us to appreciate the variety of ways in which mobile devices may be used.  Mobile devices can be used to 'add information' to physical space, for instance, reminding me of research into wearable computing and the development of Google Goggles, for instance.

Connecting to the themes of location, history and learning, John introduced us to a project that enabled students, through the use of mobile devices, to learn more about the ruins of a Cistercian Abbey (Fountains Abbey, Wikipedia).  Mobile devices facilitate the delivery of different types of media, which can be chosen depending upon the location of the user.

Whilst technology on its own is always interesting, its use and application can be enhanced through the understanding and application of pedagogic theories.  John made reference to Vygotsky (Wikipedia), who coined the term Zone of Proximal Development (Wikipedia).  Another important point that I noted is that peers play a very important role in learning, and John emphasised the importance of scaffolding learning activities (the subject of pedagogy, particularly inquiry based learning, was the focus of an earlier HEA event).  On a related note, I personally feel I have a fair way to go in terms of understanding how to make the best use of the technologies I have at my disposal.  The pedagogy of technology is something that I am sure I'll continue to mention in these blogs.

John also introduced an abbreviation that I was not familiar with: BYOD, meaning Bring Your Own Device.  Perhaps it has already got to a point where it may be surprising if a student doesn't bring some kind of mobile technology to their lectures. 

It was interesting to hear the view that social media used in the work place was considered to be an area that is under researched.  This thought reminded me of an earlier presentation by Vanessa Gough, from IBM at a previous HEA workshop about professional on-line identities where she showed how employees were making use of social media to share information with each other.  Perhaps it is an area that is under researched, but I do sense that social media within the work place is certainly being used and applied.

John also mentioned a new EU funded project called Learning Layers.  Like many EU projects, Learning Layers has a number of collaborators from different countries. Finally, some slides that connect to the ideas and the projects that John spoke of can be found on SlideShare.

Teaching Mobile App Development at Postgraduate level at London Metropolitan

Yanguo Jin gave the first 'main' presentation of the day, in which he shared with us some of the experience that had been gained at London Met over the past five or six years.  Yanguo made reference to an industry report which predicted that mobile internet would overtake fixed internet by 2014.  It was also viewed that mobile technology skills, such as HTML 5, iOS and Android, are considered to be increasingly important.

Knowing about a particular skill is one thing; being able to demonstrate mastery of something is a different (but related) issue.  To address this challenge Yanguo holds the view that students should ideally create a portfolio of apps (perhaps in combination with other students) to demonstrate their skills and abilities to a prospective employer.

Teaching of mobile technologies at London Met is through an industry-oriented practical approach that emphasises depth (in terms of making use of a single platform) as opposed to breadth (covering a number of different platforms).  I think this is important, since whatever platforms developers end up using, they always have got to 'get into the detail' of the environments and tools that they have to use. 

Key subjects that are covered in the module include the model-view-controller (MVC) design pattern, the use of an integrated development environment (IDE), aspects of visual design, issues relating to power and memory management, web services, development methods and object-oriented programming.

One particular aspect of the teaching that was said to work well is the facilitation of peer-to-peer support (a point which connected to John's keynote).  Another great technique was to encourage students to teach each other through their own seminars, and allowing them to choose their own projects (thus helping to keep students motivated).

Approaches to teaching programming of mobile devices

Gordon Eccleston from Robert Gordon University shared with us some of the experience he gained whilst teaching students to develop iPod and iPhone apps. Gordon began by asking an interesting question.  He said, 'is programming mobile devices different to other kinds of programming, such as programming using Java or .NET?'  His answer is 'not really'.  As with other aspects of programming, the only real way to learn is to get on and do it.  Gordon also made an argument that we might get to a point where we may not distinguish between different types of device, such as a phone, a tablet or a laptop - we may end up calling them all 'computers' (especially as some mobile phones are now as powerful, computationally speaking, as laptops).  At some point in time, mobility may be an attribute that we automatically assume.

Gordon echoed John's earlier comments about BYOD (bring your own device).  Whilst at the moment Gordon provides his students with a set of iPod Touch devices which they can use (separately from any other device that they may own), one important consideration when teaching mobility may be the availability of effective WiFi in the classroom.

Increasingly, students may wish to work from home or work part time (which connects to John's earlier keynote on the subject of mobile learning).  To facilitate different ways of learning, institutions can make use of technology to allow students to gain access to learning materials, which can, of course, be delivered through an institutional VLE.

Gordon concluded his presentation by speaking about interactive books, which I remember reading were going to be Steve Jobs's 'next big thing'.  Gordon mentioned a company named Giglets which produces interactive multimedia 'books' for either PCs or eBook readers.  There is also the increasing possibility (or, even, likelihood) that students in primary schools may begin to make use of tablet devices.

This broader discussion about tablet devices in schools made me begin to wonder about the extent to which digital books and institutional services or systems (such as VLEs) can be connected together, and how institutions can support the use of mobile technology through organisational structures.  Whilst technology may sometimes help, organisational structures and support must facilitate its use, and understanding how best to achieve this can be a whole different challenge.

Teaching Android Programming at Oxford Brookes

Ian Bayley and Faye Mitchell gave a joint presentation about their experience of teaching Android programming at Oxford Brookes.  I remember hearing them clearly emphasise that mobility is a whole lot more than just the phone.  I completely agree.  One interesting observation is that programming is an activity that remains stubbornly difficult.  When it comes to learning how to program, high levels of motivation are really important.  An interesting point is that students who may be strong at mathematics can still find programming difficult.  Whilst mathematical skills may be useful, 'algorithmic thinking' may be something quite different.

Students are introduced to programming through the use of other tools and languages, such as Alice (which has been mentioned at a number of other HEA events), and Processing (which is a Java-based language that can be used to create graphics and data visualisations, for example).
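
For anyone who hasn't come across Processing before, the tiny sketch below gives a flavour of its simplified Java dialect.  It is my own toy example rather than anything taken from the Oxford Brookes teaching: it opens a window and draws a circle that follows the mouse, which hints at why the language is attractive for introducing graphics and visualisation.

void setup() {
  size(400, 400);                    // create a 400 by 400 pixel window
}

void draw() {
  background(255);                   // clear to white on every frame
  ellipse(mouseX, mouseY, 40, 40);   // circle centred on the current mouse position
}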

I also remember hearing about the creation of screencasts to allow students to get a more direct understanding of some of the applications that are used.  Towards the end of the presentation there was time to discuss assessments.  Students are given the opportunity to create their own app.  Examinations, it was argued, are an inappropriate way to assess knowledge and understanding.  This is especially pertinent given the practical nature of mobile programming.

Bedfordshire's Experiences teaching app development with Lua and Corona SDK

Ian Masters' presentation was very different from the others.  Ian's talk was more of a demonstration of two different (and related) developments: a programming language called Lua (which I had never heard of), and a corresponding SDK called Corona (which I had also never heard of).  In combination they represent a 2D game development environment for different mobile devices.  Interestingly, Lua and Corona are multi-platform, which means that code is transferable between different mobile operating systems and devices, making the combination a really attractive tool.

Ian began his presentation by defining a simple environment in which a game may be played.  This involved defining screen elements, such as a floor and some blocks.  Another interesting aspect of the environment is that Corona comes with its own physics engine: items that are defined on the screen can bounce on and fall off one another.  It looks to be really good fun!

Mobile Teaching Experience from University of Buckingham

Harin Sellahewa told us about a new module that is being taught at the University of Buckingham from September onwards.  The aims of the module are to introduce students to mobile application development, to help them create a realistic app and to enable them to understand the wider commercial opportunities and issues that surround mobile app development.

Some of the learning objectives include understanding the components of a smartphone (such as its various peripherals), critically understanding the differences between mobile devices and PCs, and being able to design, develop and test applications.  Interestingly, the module uses a Windows development platform.  One reason for this different focus is familiarity with the Xbox development environment that Buckingham already uses.  I look forward to hearing about how the first presentation went and what challenges were overcome.

Our experience of teaching mobile programming on different platforms at Staffordshire University

Catherine French and Dave Gillibrand presented some of their experiences of teaching mobile programming at Staffordshire University.  It was great to see that mobility has been a subject that has been taught at Staffordshire for quite some time, beginning with Java ME and Windows CE (PDAs) before moving onto Android and iOS.

One of the tasks (or assignments) that students are presented with is the challenge of creating a 2D game, which sounds tough.  To address this, a very useful and helpful teaching paradigm has been adopted in which students are given code examples that they are then encouraged to change.  This was considered to be particularly useful for some aspects of programming, such as multi-threading, which students can find difficult.
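
The kind of worked example being described might look something like the plain Java sketch below.  This is my own illustration rather than the Staffordshire material: a worker thread stands in for a slow operation, such as loading game assets, while the main thread carries on; students could then be asked to modify it, for instance by adding a second worker or reporting progress.

public class BackgroundTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        // The worker thread stands in for a slow task, e.g. loading game assets
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(1000);                       // pretend to do some slow work
                System.out.println("Background task finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        System.out.println("Main thread remains responsive while the worker runs");
        worker.join();                                    // wait for the worker before exiting
    }
}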

I hold the view that using examples is a really good idea; I very often used this strategy when I was working in industry.  Examples give students a combination of relatively immediate results (which can be rewarding) whilst also providing the materials that allow learners to gain an understanding of how things work, which may be only acquired over time.

An important point that was made is that using a real mobile device is so much better than an emulator.  Whilst an emulator can simulate the operation of some mobile peripherals, such as the GPS sensor, other aspects of a mobile device, such as the behaviour of the touch screen, are best experienced (and tested) with a real device.

I was impressed by the breadth of subjects that students may be introduced to as a part of their studies.  These include consuming public web services, developing an application using agile techniques (which can include the use of test driven development, or TDD) and using tools that are used in industry, such as Subversion.
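
As a rough illustration of the test-driven style, a student might write a failing test such as the one below first, and only then write the code that makes it pass.  This is my own sketch using JUnit 4 rather than anything shown at the event, and the ScoreCalculator class and its scoring rule are entirely hypothetical.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ScoreCalculatorTest {
    // Written before the production code exists: it fails until ScoreCalculator is implemented
    @Test
    public void bonusIsAddedWhenAllTargetsAreHit() {
        ScoreCalculator calculator = new ScoreCalculator();
        assertEquals(150, calculator.score(100, true));
    }
}

// A minimal implementation, written after the test, that makes it pass
class ScoreCalculator {
    int score(int baseScore, boolean allTargetsHit) {
        return allTargetsHit ? baseScore + 50 : baseScore;
    }
}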

A final point is that some students may begin a module with the view that developing apps is easy.  Programming certainly isn't easy.  I guess a personal reflection is that educators not only need to convey difficult technical concepts and expose students to problem-solving challenges; they also need to work to manage expectations.  Programming, irrespective of whatever form it takes, is a craft, and it takes time to acquire craft knowledge (and experience).

From the desktop to devices: teaching interaction design

I have to confess that I was responsible for the penultimate presentation of the day.  Tempting though it is, I'm not going to write in the third person for this part of this blog.  Instead, I'll refer to myself as 'I' as opposed to 'Chris'.

My own presentation was slightly different from all the others since it wasn't about mobile technology or even about programming.  Instead it focused upon the process of designing interactive products and experiences (in which programming will eventually play an important part).  My presentation was based on experience gained as an Open University associate lecturer over the past six or so years, during which I have tutored a module entitled Fundamentals of Interaction Design (which I'll call M364).

M364 is a great module.  It introduces students to key concepts such as usability goals, user experience goals and design principles.  It then helps students to appreciate the power of sketching.  Students are introduced to the concepts of evaluation where they are then encouraged to understand the advantages and disadvantages of different approaches.

During my presentation I described a scenario in which a mobile device to guide a visitor around a historical location needed to be designed.  I quickly outlined different types of sketches.  The first was a storyboard, which enables designers to think about the broader context in which a product is used.  The second was a card-based prototype, which allows designers to consider the sequence of interactions (and even simulate them).  The final sketch was a more detailed interface sketch which contained more detail about icons and how information is presented to a user.

The title of my brief presentation reflects the notion that the design process can be applied to many different kinds of platforms and devices.  Not only can the interaction design process be applied to mobile or desktop applications, but also to static devices, such as ticket machines, for example.

Why teach mobile? An industry perspective

The final presentation of the day was by Abdul Hamid.  One of the striking aspects of Abdul's presentation was where he shared with us some graphs from an on-line job site (Indeed) which emphasised the demand for certain mobile skills.  Some older skills, it was argued, were waning in popularity whilst others (particularly those that were mobile related) were becoming increasingly popular.

Reflections

I felt that this was a very cohesive event, in the sense that the presentations were dedicated not only to sharing teaching practice (and insights about what works and what doesn't), but also showed a lot of commonality in terms of technologies and tools.  Although there were many high points of the day, the highlight for me was finding out about Lua and Corona.  I had never heard of these tools before, which reminded me of how difficult it sometimes is to keep up to date in a fast moving field, such as mobile technology and software development.

As mentioned earlier, technology is a part of a bigger picture.  John's presentation touched upon the importance of theory and history, particularly with regards to the domain of mobile learning.  Mobile has an important role to play within business, commerce and our wider social environment.  Other disciplines will undoubtedly play an increasing part when it comes to understanding the role that mobile technology plays in our everyday lives.  Just to echo words from John's keynote, pedagogy, usability and content are all important areas.

At the end of the workshop there was a short opportunity to discuss how the participants could potentially work together, collaborate and continue to share practice.  There was also some debate about having a follow up meeting next year: a really positive outcome - congratulations to the organisers at London Met!

Christopher Douce

Enhancing the employability of computing students through an online professional presence

Visible to anyone in the world
Edited by Christopher Douce, Monday, 13 Jul 2020, 12:17

A HEA workshop held on Friday 9 June at Birmingham City University set out to answer the following questions, 'how important is our online presence to prospective employers?' and, 'what can students do to increase their online visibility?'  Of course, there are many other related questions that connect to the broader subject of online identity, and a number of these were explored and debated during this workshop.

This blog post is a summary of the workshop.  It is, of course, a personal one, and there's a strong possibility that I might not have picked up on all the debates that occurred throughout the day.  If there are other themes and subjects that some of the delegates think I've missed, then please do feel free to post comments below.

Pushing employability for computing graduates

Mark Ratcliffe from the HEA kicked off the day by talking about the employability challenges that computing graduates face, connecting with his experience as head of subject at Aberystwyth University and his work at eSkills.  An interesting observation was that demand for computing skills has increased over the last ten years but the number of computing graduates has been falling.  There is also a gap between computing graduates and graduates from some other disciplines in terms of gaining full time employment six months after graduation.

Technical skills are fundamental and necessary skills, but so are interpersonal and business skills.  Placements were cited as an important way to enable students to develop and to gain first-hand experience.  Technical skills are important, and evidence of them can be gained through application forms and interviews, but also through approaches such as portfolios of evidence.   Evidence of our work and interests is increasingly available to be seen by others through online sources.

The second introductory presentation was by Mak Sharma, Head of School at Birmingham City University.  Mak spoke about some of the changes that were occurring to the institution, and also mentioned a number of familiar (and unfamiliar) technologies, all of which can play an important role in computing and technology education: Alice, Greenfoot, Scratch and Gadgeteer.  An interesting point was the connections between industry training providers and the university.  I sense that collaborations between the two sectors are going to become increasingly important.

ePortfolios in the big bad world

Andy Hollyhead, from Birmingham City Business School, started his presentation (a Prezi) by sharing a video entitled Stories of ePortfolio integration, produced by JISC and BCU (YouTube). The video features a demonstration of an ePortfolio system called Mahara which has been linked to the university's Moodle virtual learning environment.

An ePortfolio is, in essence, a tool which can be used to store data, usually documents.  It is also a tool that can have different uses.  On one hand it can be used to help students to reflect on their own studies.  On the other it can be used to share information with a wider community of people, and this may potentially include potential employers. An ePortfolio can also be used to demonstrate evidence of continuing professional development (CPD) within an organisation.

An important question is 'how long can I have access to my data for?'  This question is particularly relevant if a university implements an ePortfolio that can be used to create a professional presence, and it suggests that institutions need to consider policy as well as technical issues.  To address this challenge, standards bodies have proposed standards to allow the sharing of ePortfolios between different systems.  Andy mentioned other systems such as VisualCV and PebblePad.  One of the greatest challenges is, of course, to understand the variety of different ways in which ePortfolio systems can be used.

Using code repositories in programming modules

Whilst ePortfolios can be used to share information and documents, John Moore from the University of West London spoke about the notion of source code repositories and considered how their use may enhance the employability profile of students.

Version control systems are an essential part of the software development process.  They facilitate collaboration and sharing.  They also enable developers to learn how software has changed over time.

There are, of course, a wide range of different systems, such as CVS and Subversion (Wikipedia).  John focussed on Git (Wikipedia), which is a distributed version control system that was originally created for Linux kernel development.  John also shared with us a number of different public repositories that may be used, such as Bitbucket, Gitorious and GitHub (none of which I had heard of before).

John said that 'logs define you as a programmer' (logs, of course, being commit or change logs: records of when a programmer has made an addition or change to a repository).  To boost a 'programmer profile', students are encouraged to participate in open source software development.  Not only can this present evidence of technical abilities and understanding, participation also demonstrates team skills.
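
By way of illustration (my own example rather than John's, and assuming the usual Git command-line tools), the short sequence below shows how everyday commands build up exactly the kind of history that a potential employer could browse on a public host.  The file name and commit message are hypothetical.

git init                                   # start a new repository for a project
git add HangmanActivity.java               # stage a piece of work
git commit -m "Add first playable screen"  # record it, with a message, in the history
git log --oneline                          # the commit log that 'defines you as a programmer'
git push origin master                     # publish to a public host such as GitHub or Bitbucket (assumes a remote called 'origin' has been set up)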

John's presentation gave way to a really interesting debate about how experience and understanding of version control systems represents an important employability skill.  I also remember hearing that students from different backgrounds (and perhaps different undergraduate degrees) have different levels of expertise.  What is without question is that industry makes extensive use of such tools, and it is the challenge of educators to encourage their use.

Student professional online branding

Thomas Lancaster from Birmingham City University introduced us to the notion of a 'personal brand', before describing what we might be able to do to create an online version.  One thing that students could do is create a LinkedIn profile.  Thomas then went on to mention tools such as Facebook and Twitter, which can yield potentially more immediate information about a candidate.  Thomas argued that computing students should ideally have their own professional website which presents an identity whilst also practically demonstrating their technical skills to prospective employers.

Sharing information online is, of course, not without its risks, and everyone needs to be mindful of this.  One thought is that no-one can say who is going to be doing the next internet search against your name.  Since the web has now become the 'read-write' web, we need to be careful about what we share: a balancing act between information availability and information privacy, a point that was returned to time and again throughout the day.

Building professional web presences

Building on some of the points that Thomas made, Shovan Sargunam gave us a practical demonstration of how to create an online professional presence through the creation of a WordPress (Wikipedia) based website.  The steps included registering your own domain (if it's not too late), choosing a hosting provider, and then installing and configuring WordPress.

WordPress isn't the only way to go.  In some ways, it very much depends on the tools that you are familiar with.  Shovan also mentioned some other useful sites (in addition to LinkedIn) that enable users to create online profiles, such as About.Me and CreativePool.

Students' online profiles for employability and community

Information about ourselves that we share online can have a number of different uses.  One other use lies with the way in which information can be useful in the development of an online community.  Karen Kear and Frances Chetwynd from the Open University described a research project that is aiming to uncover more about how online profiles are used by students who make use of online discussion forums.  Research is carried out through questionnaires and online synchronous focus groups.  There is, of course, a spectrum of different opinions (and practices).  Some students are happy to share information and photographs of themselves, whereas others have concerns about privacy.

Exploring the employer use of professional presences

Vanessa Gough, from IBM, presented a rather different perspective, and one that was very welcome.  Vanessa is responsible for industrial trainees, and she made the point that, given the number of applications that are made to IBM, she (and perhaps some of her colleagues) just don't have the time to go rummaging around on the internet for information about candidates.  This said (and these are my own words here, rather than Vanessa's), it doesn't mean that this doesn't happen.

Vanessa described how new recruits can make use of social media to communicate with each other to become increasingly familiar with the organisation in which they work.  Twitter and Facebook can be used to share information about what it is to work and live in certain locations.

A really good point was that social media offers candidates a way to 'get to know' an organisation and begin to understand a bit about its culture.  Engaging with an organisation's social media streams and learning from them has the potential to enable candidates to stand out from the crowd.

How social media can enhance your employability

The final presentation of the day was by Vanessa Jackson, from Birmingham City University.  Her presentation had the interesting subtitle, 'can you tweet your way into a job?' (which follows on nicely from the earlier presentation).  Vanessa introduced us to a site called SocialMediaTutorials, a set of Open Educational Resources which are available under a Creative Commons licence.  One of the videos describes a case where a student was able to gain a work placement or internship by directly contacting people who worked within a local radio station.

Reflections

One term that I had not heard of before was DPQ, or 'drunken post quotient' (as introduced by Andy Hollyhead).  The higher the metric, the more trouble we might (potentially) cause ourselves.  It was a concept that was immediately understandable, for a number of reasons that I'm not going to go into.

My own personal opinion is that having an online professional presence is a 'good thing', especially if we work within a technical discipline such as information technology or computing.  This said, there are certainly differences of opinion.  Some of us simply don't want to share aspects of ourselves online, and there are good reasons for this, which we should respect.

These thoughts made me consider online presence in terms of a number of different dimensions.  Firstly, there is the dimension of security and privacy, and the tension that exists between the two.  Then there is the dimension of the personal and the public (or the personal and the professional).  Coming back to ePortfolios, there's also the dimension of demonstration (of achievement) and reflection (in order to achieve).  Finally, there is the dimension of the audience - the difference between the general and the specific.

Towards the end of the day there were a number of interesting debates.  Two questions that I've noted are, 'how might we embed the notion of professional presence into the computing (and wider) curriculum?' and 'what is the perception of others if one doesn't have an online professional presence?'

An interesting thought is that it's not always what you share on the internet that is a concern - the people who you know may also cause some difficulties.  The canonical example of this is where a friend or colleague shares pictures of a 'night out' somewhere, the details of which should have remained personal.  A point here is that we all need to be vigilant.  Performing internet searches against our own names (or 'ego-googling') is no longer an activity that is merely mildly interesting or titillating.  Instead it could now be a necessity, to ensure that correct and appropriate information is available to be shared with others.

For me, one of the outcomes of the day is a reminder that different tools can be connected together.  For a while I used to be an avid Twitter user until I discovered that it was gradually taking over my life and felt that I had to 'reclaim back' some of my privacy.  I've now reassessed my own online professional presence, and what I want to do is use Twitter more as a feed for other social platforms, such as LinkedIn and Facebook.  So, in time, I hope to increase my online visibility - but I am also very aware that I'm unlikely to have a complete understanding of the implications of doing this.  I guess what I'm going to do is to always be careful about what I share and when.

The workshop slides are available (BCU website). Many thanks to Birmingham City University for organising an interesting and thought provoking event!

Christopher Douce

Life in the fast lane? Towards a sociology of technology and time

Visible to anyone in the world

On a recent trip to Milton Keynes on 29 May 2012 I had the opportunity to attend a Society and Information Research Group (SIRG) seminar by Judy Wajcman, Professor of Sociology at the London School of Economics (LSE).  Judy's presentation, very broadly speaking, was about technology and time, and whether one affects the other.  Her seminar was related to research that may feed into a book that she is currently working on.  This post is a personal reflection on some of the themes that struck me as being significant and important in my own work.  Others who attended the seminar are very likely to have picked up on other issues (and I encourage them to add comments below).

Timing

For me, the timing of her seminar couldn't have been better.  My last blog post was about an event that shared practice about how lecturers and institutions could most effectively help students to develop software for mobile devices.  During this event mobility was portrayed as an opportunity, but there was also an implicit assertion that mobile technology will change how we work.  In doing so, mobile technology can affect how we spend our time.

Productive work may not cease the moment we leave the office, but can now continue for the duration of our commute home.  Work may encroach on our personal time too, since we can easily take our devices away on holiday with us.  Important messages that are concluded with a succinct 'sent from my iPhone' clearly suggest that we are working whilst we are on the move.

Judy mentioned that perhaps some of these concerns mainly relate to 'management or professional types', and this might be the case.  But one way to really understand the issue (of time, and how it is affected by technology) is to carry out studies, particularly ethnographic studies, to observe how people really use technology.

Research methods

Such methods are briefly discussed within a module such as M364 Fundamentals of Interaction Design, which is concerned with how to make devices and systems that are usable to people.  Two approaches used to evaluate the success of products include ethnographic studies (observing users) and asking users to complete diary studies.  Judy's presentation emphasised the point that interdisciplinary research is a necessity if we are to understand the way in which technology impacts our lives.

Judy managed to connect my immediate concerns about mobile technology and its impact on our time with earlier debates.  Devices such as washing machines and other labour-saving appliances were once touted as ways to 'save time'.  This raises the questions: 'what happens when we get that time back?  How might we spend it?'  Unpicking these questions leads us into further interesting debates, which relate to the different ways in which men and women use the time that they have available, and towards broader concerns about capitalism.

One point that Judy mentioned in passing (which I've remembered reading or hearing before) is that perhaps we have been 'cheated by capitalism'.  Perhaps the extra time we have gained hasn't been spent on leisure, but instead has been spent on doing even more work, which allows us to buy more stuff (since, perhaps, everyone else is doing the same).  A personal reflection is that mobile devices also act as devices of consumption.  Not only do they facilitate the extension of work into our 'dead time', but also permit us to browse eBay and on-line stores whilst travelling on a train, for instance.

Technology and speed

Returning to the main debate, does technology cause us to work 'faster' or more?  Is the pace of our lives accelerating because we can access so much more information than ever before? Judy urges caution and asks us to consider causality.  On one hand there is technological determinism (wikipedia), but on the other there is social determinism (wikipedia).  Mobility can facilitate new ways of interacting with people, which may then, in turn, give rise to new technologies.  It could be argued that one helps to shape the other mutually.

Judy cautions against having the individual as the focus of our attention.  People live and work with each other.  Perhaps the household should be the focus of our attention when it comes to understanding the influence of technology on our lives.

What was clear from Judy's seminar was that there were many different areas of literature that could be brought to bear on understanding technology, time and how we spend it.  During her talk I made a note of a number of references that might be interesting to some.  The first was an edited book entitled High-speed society: social acceleration, power, and modernity, edited by Hartmut Rosa and William E Scheuerman.  The second was entitled, Shock of the old: technology and global history since 1900, by David Edgerton.  The final book that I have extracted from my notes is that of, Alone together: why we expect more from technology and less from each other, by Sherry Turkle (MIT, homepage).

Reflections

This was an enjoyable and thought provoking seminar which highlighted an important point: when you begin to scratch the surface of a question, you open up a broader set of connected and related issues.  These include the wider context in which technology is used and the tools and approaches we might use to understand our environment.  I was reminded of the obvious truth that, given technology firmly exists within the human context, learning from disciplines such as history and sociology is as important as drawing upon lessons from science and engineering.

 

Christopher Douce

Mobile Application Development: from curriculum design to graduate employability

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 19 Oct 2021, 11:26

I had never visited the University of Buckingham before.  It was on the morning of Tuesday 15 May 2012 that I found myself travelling to Milton Keynes railway station to meet a pre-booked taxi that would whisk me through the unknowns of the Buckinghamshire countryside towards an event that was intended to share practice about the teaching of mobile technology.  Although I had never visited Buckingham, I had heard it spoken of many times before: a radical institution which was founded at approximately the same time as another radical institution, the Open University.

As well as sharing practice about the teaching of mobile application development, another really important theme was the subject of employability and the open question of whether universities are 'teaching the right stuff' to enable graduates to immediately make a contribution in the workplace.

This blog post is a summary of a visit to a HEA event entitled 'Mobile Application Development: from curriculum design to graduate employability'.  If I've missed any key points, I encourage the fellow participants and delegates to add comments below.

Industry keynote

Lee Stott, an academic evangelist from Microsoft, kicked off the day with a really interesting keynote.  Lee is from a part of Microsoft that works with university departments (Microsoft Faculty pages).

Lee emphasised the point that users expect connectivity.  I made a note of an interesting quote: 'mobility plus cloud equals opportunity'.  It's easy to imagine (or even remember) situations where one gained access to information whilst travelling to solve a problem, such as finding the address of a location or retrieving some urgently needed information.

Lee also made the point that mobile devices are our predominant work tool (or tools).  A tool, of course, might be a phone or a laptop.  This is certainly true in my case; I often haul my laptop between the OU's headquarters in Milton Keynes and my home, sometimes using the dead time on a train to do some marking.  Another thought that comes to mind is whether mobility is causing work time to encroach on our personal time, but this is a whole other debate (and one that I hope to connect with by writing another blog post about a recent seminar).

The usefulness of an app depends on a combination of its functionality, the functionality of a device and the availability of a network.  Apps need to be not only useful but also graphically appealing.  Lee emphasised the importance of designers - not just software designers, but graphic designers.  This connects to an important point, which is that creating good apps is an interdisciplinary activity - a combination of technology, business and art.  Writing commercial apps isn't just about writing software that works - apps need to be 'hardened': tested thoroughly and checked for vulnerabilities.

Microsoft, along with other mobile platform vendors (such as Google and Apple) have their own ecosystem of tools, technologies and platforms.  Microsoft is but one of many platforms that educators can choose from.

I have to confess (for my sins) that I used to be a software developer who mostly specialised in Microsoft technologies.  I used to use .NET, MS SQL and a bunch of other stuff.  It has been, however, a few years since I've done this.  Lee introduced new technologies that were entirely new to me, such as Microsoft Azure (wikipedia) and Microsoft XNA (wikipedia) for Xbox.  Lee also mentioned other software that was on the near horizon, such as Windows 8 (wikipedia) which can be used on 'slate' (or tablet) devices.

Lee also touched upon the important subject of recruitment.  Lee emphasised that it is important to encourage students to build apps and sell them through apps market places to create a portfolio which can be shown to potential employers.

The question and answer session was interesting.  There was some discussion about cross platform approaches to development and the fact that when you go cross platform, developers lose some functionality from the original host operating system of a mobile device (or phone).  The subject of native code versus multi-platform code was a debate that arose on a number of occasions throughout the day.  HTML 5 (wikipedia) was regularly mentioned, along with a platform such as PhoneGap (PhoneGap website).

Another tension that exists, particularly when industry representatives and university representatives debate curriculum, is the difference between education and training.  Industry wants people who are fully trained (and ideally wants universities to do this), but the real role of universities when it comes to technology (in my opinion) is to enable students to know how best to learn and adapt to new tools and situations.  Lee made the point that the teaching of fundamentals is essential.  I agree.  Conveying principles through the use of vendor specific tools, whilst presenting concepts in a general way so that other technologies can be understood, is a difficult thing to achieve.

Mobile application development: a journey thus far

Harin Sellahewa from the University of Buckingham gave a presentation that described how mobile technology came to be taught, in its current form, at Buckingham.  Harin described how some of the curriculum had changed and outlined the introduction of new modules.  The use of mobile technology had been explored through a number of projects, including some that were funded by the EU.

Some of the key learning objectives of a module on mobile software were mentioned: how to design applications (or apps), understanding different components, and learning about various guidelines and specifications.  All these learning objectives could then contribute to making an application that could be sold on the free market.

Harin also gave us a number of useful tips: any new module must (of course) satisfactorily complement existing modules; aim to get people involved; speak to different vendors; start with student projects; attend training events that are run by industry; and take the time to network.

A number of different topics were exposed through the question and answer session.  As well as a discussion about different technologies, an industry representative mentioned the importance of candidates having a portfolio of work to demonstrate to prospective employers.  One point that stuck in my mind was that an unfinished application has the potential to work against an applicant; showing something polished and complete is necessary.

Developing Apps in Schools

Aaron Peck teaches computing and ICT at the Royal Latin School, Buckingham, a school just around the corner from the university.  Aaron began by speaking about wider discussions about the GCSE computing curriculum, mentioning the OCR GCSE which was said to contain three key components: programming, a research project and an examination.

Aaron emphasised fun, and mentioned the use of the MIT Scratch (Scratch website) environment.  He also went on to speak about mobile devices, a technology that the pupils are invariably likely to be familiar with.  Here lies an obvious collision of ideas: why not teach programming through the use of mobile devices?

Scratch has, of course, some distinct advantages - it is immediate, and gets around the tyranny of fiddly syntax by providing students with a graphical environment in which they can play.  Another programming environment that has a graphical world is the MIT App Inventor (App Inventor website), which allows users to create apps for Android phones.

Students are encouraged to create small projects, which may include a simple calculator, a recipe book or a hangman game.  The creation of apps has the potential to open up further discussion of wider issues, such as how such developments might be commercialised.  I remember an anecdote from Aaron, where he was asked by a student about how much an app programmer might earn; a testament to his ability to instil enthusiasm and to make engaging choices of technology.

There are some advantages to using App Inventor: it can be used on multiple development platforms, it is relatively simple to install and, given that students may have used Scratch during earlier studies, the graphical nature of the programming environment is (potentially) more easily grasped.

Aaron isn't stopping at creating apps with App Inventor.  He mentioned his intention to try to work with Lego Mindstorms robots through the Android SDK, where it might be possible to create a 'remote control' app using the Bluetooth radio.  Aaron also mentioned that there was an opportunity to share the workings of HTML and Javascript with his students.  If my memory isn't playing tricks on me, I also seem to recall that one of his students was inspired enough to use C++.

The question and answer session led us to subjects and technologies such as Microsoft Kodu and Microsoft Gadgeteer.  Other important issues include addressing the gender imbalance, and how to motivate all student groups, including those who may not have a strong technical bias.

I really enjoyed this talk.  Two big parts of the tech were familiar to me: Scratch (or as I know it, Sense), and App Inventor.  Both products are used as a part of different Open University computing modules, TU100 My Digital Life and TT284 Web Technologies.  It was an eye opener, for me, to see how these products could be used as a way to inspire students at GCSE level.

Mobile Assessment

The use of mobile technology to help teaching and learning seems to be a hot topic at the moment.  Joan Lu gave a presentation about the use of mobile technology for assessment and also mentioned the use of student response systems, making reference to an EU funded project entitled Do-IT.  Joan is from the XDIR research group at the University of Huddersfield, which has carried out research projects related to mobile technology.

Designing the mobile syllabus to enhance student employability

Yanguo Jing from London Metropolitan University gave a presentation about his first hand experiences of teaching about mobile technology to his postgraduate students.  It was a really interesting presentation that was packed with useful tips, not just about teaching but also about industrial engagement too.

Returning to the subject of multiple platforms and environments, Yanguo said that initially he tried to teach a little bit about all the major toolsets.  He came to the conclusion that this was less than ideal.  Although students might be given breadth, getting to the 'depth' is always a challenge.  It was decided, therefore, to focus on one particular platform and use the experience with the platform to make points that are important in other platforms too.  This is a very sensible practical decision; there is only so much detail that a lecturer can hold in his or her head at any one time.

Understanding mobile isn't just about understanding technology and the fundamentals of creating executable code that runs on a device; it is also about understanding the surrounding business and economic landscape.  Connecting back to the idea of creating marketable apps that Harin touched upon in his earlier presentation, Yanguo described how he encourages his students to enter application competitions, or 'appathons'.  He also mentioned that students were encouraged to attend an industry conference, DroidCon, to gain first hand experience of what is happening within industry.  It was interesting to hear that Yanguo is a part of an industry liaison group.  Not only does this facilitate a connection between academics and industry, it can also act as a connection between industry and students.

Finally, it is also perhaps worth mentioning that Yanguo is helping to organise a related HEA event on mobile technology on 15 June 2012, entitled Workshop on Teaching and Learning Programming for Mobile and Tablet Devices.  It sounds like it's going to be a great event!

Programming with iOS

Gordon Eccleston from Robert Gordon University, Aberdeen shared some of his experiences of teaching using Apple's iOS.  This platform enabled students to learn something about HCI principles and also about object-oriented programming (through the use of Objective-C).

Gordon offered a key tip which echoed earlier discussions in the event.  He said, 'keep your modules as generic as possible'.  Inspiration and information that informed the creation of his module included looking at different text books and short courses that were designed for industry.  Studying the documentation provided by the vendor can be a very useful source of materials that can help to guide or inform the creation of aspects of a module.

Gordon spoke about lab based teaching (in a lab containing lots of Apple kit) and student course work.  Gordon then went on to present a brief overview of a number of different student projects.  The value of projects cannot be overstated.  A good project connects the technology with broader issues of business and also helps to give the student some good materials that can be immediately demonstrated to a potential employer (I have this image of an interviewee handing their phone to an interviewer whilst saying, 'this is what I've done').  One project that stuck in my mind was an app that illustrated a fashion portfolio, which demonstrates a connection between apps and marketing.

Gordon's session inspired a really interesting question and answer session.  One point was that PC (or Mac) based simulators are all very well, but it's also important (as well as rewarding) to allow students to run their software on actual devices (such as an iPod touch).  For one thing, it allows the developers to gain access to device only peripherals, such as accelerometers and other sensors that they wouldn't otherwise have access to.

Reflection of curriculum design and delivery in mobile computing

Khawar Hamed from the University of Staffordshire spoke about his experiences of curriculum design.  Khawar's presentation reminded me that an app sits at the top of a technology pyramid.  Along with the operating system of a device, apps are perhaps the most visible software artefact that users interact with.  Underneath the app and beyond the phone there is a sophisticated digital infrastructure that enables devices to work.  Some of the modules that Khawar mentioned allow students to begin to study these underlying technologies.  Another point is that mobility isn't just about technology; it's also about enabling organisations to achieve their objectives.

Khawar touched upon other issues such as the importance of getting the right name for a course or programme.  Since the names and phrases used to describe technology can change relatively quickly, perhaps the names of modules and programmes should be prepared to change too?  An important point was to always seek industrial involvement wherever possible.  Connecting to this point, Khawar mentioned an organisation called The Wireless University Forum.

One really interesting debate that emerged from this presentation centred upon whether an institution should provide devices that students can transfer code to.  The answer was a resounding 'yes'.  Not everyone will have an Android phone, or an iPhone (or even a smartphone, although this is something that is changing).  Plus, providing a device delineates between what is a 'learning' device and what is a 'personal' device.

Mobile app development - creativity, skills and evidence

The final talk of the day was a second keynote.  Andrew Lapham, from Yell Labs, gave an enthusiastic presentation about the work that his team carries out and the characteristics he looks for in potential employees.  Key points include the ability to be creative and generate new and interesting ideas, strong communication skills (the ability to communicate those ideas and to persuade others of their merit), and an underlying enthusiasm for technology and what it might be able to achieve.

The notion of having a portfolio of evidence was also touched upon.  Whilst demonstration of apps or talking through a pet project is impressive, what is more impressive is having evidence that your own product or code has been marketed.  This might include having a blog about a product, and also gathering some evidence about how your customers view your product.

Reflections

One thing that surprised me about this day was the exceptionally strong focus on apps.  In retrospect, it shouldn't have been a surprise at all.  Apps are the way to consume software on mobile devices.

I certainly sense that teaching programming for mobile devices isn't easy.  Each platform comes attached to an ecology of tools (and a whole set of accompanying vocabulary) and techniques.  Teaching everything just isn't an option, but teaching in depth is surely the right way to go.  Educators will therefore have to choose a platform and figure out how to connect that technology choice to wider principles, to enable graduates to more readily get to grips with the new environments they will inevitably face.

One really interesting question is whether mobility, and the technology that goes with it, is changing software engineering.  It's not a question that seems to have an easy answer, but perhaps user-facing apps require different design methods from the lower-level software that supports the networking infrastructure; perhaps those who have stronger connections with the industry would be able to comment.

A final reflection relates to the creation of a portfolio that can help during the recruitment process.  The importance of a personal portfolio was emphasised in a recent HEA event at the University of Greenwich about gaming and animation.  Employers like to see what applicants have done.  Furthermore, a portfolio offers opportunities for employers to find out about the difficulties that applicants faced and how they were overcome.

When it comes to being an app developer, the message was clear: a portfolio of well-crafted working apps was clearly something that employers would like to see.

Congratulations to Buckingham for running a fun and thought provoking event!

Christopher Douce

Visit to University of Abertay, Dundee

Visible to anyone in the world
Edited by Christopher Douce, Friday, 24 Feb 2017, 18:05

During a motorcycle touring holiday at the beginning of May, I found a bit of time to pop into the University of Abertay, Dundee, for a couple of hours.  This was the first time I had ever been to Dundee.  One of the reasons for the visit was to find out more about the university's Dare to Be Digital video game competition, which has been running since 2000 (becoming an international event in 2005).

Dare, as it is known, is a tough event to enter; students and teams have to apply competitively.  When students have been accepted, they work together within interdisciplinary teams to create a whole computer game over the duration of the event.  I sense that Dare is an unusual and powerful vehicle since representatives from industry play an important role.  Industrial contributors are said to be involved for a number of reasons: to offer support and guidance to student teams, to gain new ideas and inspiration, and also to be introduced to participants who may be looking for a foothold within the industry.

In an earlier HEA gaming and animation event I attended I heard it said that the best way to demonstrate one's own technical abilities is to provide a demonstration of a completed game.  I've always felt that a CV and interview is a thoroughly inadequate selection approach, especially for software roles which are, in my opinion, intrinsically creative.  I've always wanted to show an employer what I've coded but have, on occasions, been scuppered by convention and copyright.  In a way, creating something to add to a 'digital portfolio' takes a leaf straight out of the creative arts book.  Showing a development (which is what the Dare participants produce) allows not only a demonstration of technical skill, but also facilitates opportunities for further discussion about some of the challenges that had to be overcome during the production of a game.

I was interested to learn that Dundee has what is known as a Games Festival (BBC News), an event that I hadn't heard of before.  There are film festivals, music festivals and book festivals, and games connect with all these different types of media.  I would even go as far as writing that there are some games which strike me as works of art, combining breathtaking animation, complex characterisation and awesome sound, all of which have the potential to create strong emotional responses.  The thought of a games festival reminds me of a suggestion at the earlier HEA event that students should try to make the time to visit such events.

During my visit to Abertay I remember having a chat about the challenges of working within the games industry.  I remember once hearing that commercial software developers have what is becoming known as a half-life.  This means that after a number of years of being really technical and cutting code, the challenges of learning 'yet another tool' and of juggling code in the developer's short-term memory become tiring rather than exciting.  It is felt that some roles within the games industry, perhaps the more technically focussed ones, can also have a career (or role) half-life.

This said, being involved in the games industry isn't just about cutting code (games engines can be utilised and harnessed); there are also roles which relate to the production of a game or product.  Understanding the bigger picture and being able to work with other disciplines (such as graphical design, music and business) are skills that are arguably more important than pure technical talent.

One comment that I remember from the visit was that some students choose to study games because they enjoy playing them.  It strikes me that there is a huge chasm between the attractiveness of the end product and the intense and detailed development activities that must take place to create a game.  It is akin to the difference between watching a film and thoroughly understanding the technical and artistic dimensions of film production.  I came away having confirmed my sense that working in the industry is hard work, and it was encouraging that the staff I met were able to convey first-hand industrial experience to their students.

I'll close this blog with three different thoughts: an observation, a personal reflection and some thoughts about research.  The observation is of a mural that could be seen in the building I visited.  The mural depicted a graphical history of three different things (I hope I'm remembering this correctly!)  The first is a timeline of gaming hardware, the second (I think) is a timeline of important games, and the third seemed to be a timeline of important companies or publishers.  Such a mural offered a visible reminder of the context in which students were working and that we are a part of an emerging history which continually changes as technology changes.

The second thought, the one that is personal, is closely connected to the mural.   Those of us who have grown up with technology have our own unique relationship with games.  In some ways the industry may play a formative role in the way that we interact with technology.  My own history with gaming began with home computers of the 1980s particularly the Sinclair ZX Spectrum (which was apparently built in the now closed Timex factory in Dundee; a fact that had passed me by!)  I remember buying cassette games from specialist computer shops and, later, budget games at my local newsagent (a reflection on how the marketing of games operated at the time).

More powerful technology led to better (and more exciting) games, particularly Elite (Wikipedia), which was played (during my school lunch hours) on a BBC Model B equipped with an exotic piece of technology known as a disk drive.  Elite was astonishing.  It made use of three dimensional wire frame graphics - a player could explore an entire galaxy and cause no end of trouble by shooting at space stations. 

My games history also includes ownership of two different generations of Sony Playstation but concludes with some meddling with on-line worlds and games hosted on mobile devices.  This movement to different platforms and then onto the internet reflects how gaming (and the games industry) has changed with developments in technology.

Finally, onto the subject of research.  I have thoughts that reflect two rather different questions.  The first relates to understanding the career stability and demands placed on those who work within high technology industries, and the ways in which career trajectories can change and develop.  Understanding the quality and diversity of careers within an industry has the potential to offer useful and practical guidance to programmes of study that aim to equip students for work within an industry.  I don't know if the games industry has been subject to any form of systematic study, but perhaps this is an interesting question to ask.

The second question relates to an increasingly strong research interest, namely the effect of geography and which other influences may affect the development of a particular technology or industry.  Perhaps there is something special about Dundee that has affected how the city has emerged as centre for games education.

A few final words: many thanks to those at the University who were able to spare some of their valuable time to talk to me; I felt very welcome.  I was mindful of the fact that scratching the surface of gaming reveals a complex creative industry, one that relies on the creativity and talent of people from many disciplines.  My visit reminded me of the exciting (and challenging) nature of digital media and emphasised that continual change and evolution, in both the industry and technology, is a constant.


