Christopher Douce

Introduction to the REF

Edited by Christopher Douce, Wednesday, 24 Oct 2018, 10:42

In November 2018, I had an opportunity to attend something that was called a ‘writing retreat’. The idea behind the event was simple: it was an opportunity to take a bit of time out from day-to-day activities and focus on writing up various bits of research that colleagues within the school had been working on. There was another reason for running the retreat: there would be a particular emphasis on writing papers that could be submitted to the 2021 REF (the Research Excellence Framework).

What follows is a brief summary of some of the points that were made during the introductory workshop which introduced the REF. Full acknowledgements are extended to Professor Jane Seale who facilitated this workshop. Many of the words here are directly from Jane’s presentation.

Introducing the REF

A key phrase that I’ve heard ever since I’ve been working at a university is: REF submission. A submission relates to particular subjects. Some universities may focus on some submission areas over others to play to their own strengths.

The OU makes a number of submissions, and one of the submission areas that I am connected with is education. This means that ‘education-like’ papers will be grouped together, submitted, and then assessed by an expert panel. The outcome will be a ‘research rating’, and this is directly linked to income that is received by the university. Simply put, the higher the rating, the more research income an institution receives. 

I asked the question: “what does ‘education-like’ actually mean?” These are papers that do more than just describe something, like a system or a tool that has been designed (which is what some computing papers can be). Education research papers need to be firmly linked to an education context. They should also present a critical perspective on the literature, the work that was carried out, or both.

When?

The 2021 REF assesses research that has been published in the public domain and carried out between 1 January 2014 and 31 December 2020. Papers that are to be included are typically deposited within a university repository. (The OU has a repository called ORO.)

What?

Every academic whose contract has a significant research component should submit at least one paper, and there should be a REF average of 2.5 papers per academic (as far as I understand things!)

Publications should be of a 3* or a 4* quality. A three star rating means that a piece of research is of ‘international significance’. The education REF panel will accept different kinds of submissions, including journal papers, conference proceedings and book chapters, but there has been a historic bias towards journal papers for two reasons: they are peer reviewed, and they are more readily cited. Books or book chapters should present summaries of research; they shouldn’t be student focussed or simply summarise what is within a field.

In education, some journals contain practice papers or case studies. I noted down that education papers don’t have to be empirical to be considered for the REF. A paper might present a new practice, or an innovation or development of an existing practice. A suggestion is to preface the paper with what you’re doing and why you’re doing it, and to offer a thorough criticism of how and why it fits in with existing work. This means that it’s necessary to consider comparisons and contrasts. Descriptions are also necessary to contextualise the research, but the balance needs to be right: practice papers need to be generalisable.

Where?

A question to ask is: where should you publish? It was interesting to hear that the REF panel for education isn’t particularly concerned with the impact of the journal where research is published; what matters is the research itself. Given that education research tends to be descriptive, a suggestion is to choose journals that have a generous word count. One such journal is Open Learning, which is thoroughly excellent.

How?

How are submissions assessed? The submission chair will look at the title and abstract of each paper and match a set of papers to an expert reviewer. Every paper will be read twice, and there will be some kind of process of random sampling. An interesting thought is: ‘don’t make the reviewers work too hard’, which is advice that I also give my students who are writing their end of module assessments or project dissertations. Institutions (including the OU) may have something called a ‘mock REF’, where they try to replicate the official REF submission to get a feel for the direction in which the institution is heading.

REF criteria

Papers are judged on three key criteria: significance, originality and rigour; each of these criteria is equally important.

Significance: A paper or piece of research provides a valuable contribution to the field (this relates to the ‘so what?’ question about the purpose of the research). Also, how does the research move the field forward? The contribution can be theoretical or empirical. A point to note is that a 3* paper contributes ideas that have a lasting influence. A 4* paper has a major influence on a field.

Originality: Is the research engaging with new or complex problems? Perhaps a paper might be using existing methods but in a new way, or challenging accepted wisdom. As a point of reference, a 2* paper contributes a small or incremental development, whereas a 4* piece of research is something that is outstandingly novel in the development of concepts, techniques or outcomes.

Rigour: Papers and research should offer intellectual precision and robustness of argument. Some important questions are: is there rigour in the argument, the methods and the analysis? Also, can readers trust the claims that are made when they ask the question, ‘how have you analysed your data?’ It is also important to include researcher reflexivity (questioning the role that the researcher had in the analysis of the research), and to critique the work, offering a summary of strengths and weaknesses. A 4* paper is one that is exceptionally rigorous, with the highest standards of intellectual precision.

Reflections

I only have a small amount of time in my staff tutor contract to carry out research; I try to do what I can when I can, but I very often get entangled in tutor and student support issues which take me away from exploring really interesting questions and topics. Plus, increasingly, I’ve been playing a role in AL professional development. Carving out time for research is a difficult balancing act due to all these competing demands.

When I started the retreat, I planned to write about some of the tutorial observation research that I’ve been doing and reacquaint myself with some of the papers that I had uncovered whilst doing this research. The introduction to the event gave me some much needed context, in terms of what a very good paper actually looks like. It also gave me a bit of direction in terms of what I actually needed to do.

One of the things that I did during the retreat was to consolidate different bits of research into a single document that forms the basis of a paper. I now have a clear and distinct structure. What I need to do now is some further close reading of the references to position the paper more directly within the literature, add a more critical twist to the analysis to broaden its appeal, and do quite a bit more editing.


Raspberry Pi : suited and booted

Edited by Christopher Douce, Tuesday, 5 Jun 2018, 09:34

I received delivery of my Raspberry Pi computer from RS components about two and a half months ago.  It's taken a bit of time to finally 'get it together' to create a setup that enables me to learn more about what it can do and what I could potentially use it for.  This blog is all about the steps that I took to arrive at a working setup.

When I made my original order I decided on the lazy option - I chose to buy a number of key components at the same time.  Along with my Raspberry Pi board I bought a power supply (which connects to the micro USB port of the device), an HDMI cable and a memory card which contains an operating system.  When you're starting with something new, there's something to be said for going with a standard distribution or setup.  There's the fundamental question of 'will it do stuff when I turn the power on?'  Going with a default or standard setup is a way to get going quickly.

There were, of course, three other things I needed: a mouse, a keyboard and a screen.  For the screen I figured out that I might be able to test my Pi out using my TV (since it had an HDMI port). For a keyboard and mouse, I visited a popular on-line hardware retailer and bought a cheap mouse and keyboard.  (To get an idea of how cheap they were, both items together were cheaper than a single pint of beer; it's astonishing how the price of hardware continues to drop).

I wanted something else, though.  A quick search on eBay using the term 'Raspberry Pi' revealed a number of small companies that had started to make cases for the Pi.  After about ten minutes of searching I found a company called ModMyPi.  Although I didn't strictly need a case, I thought it would be a sensible thing to do.  I could easily imagine myself putting my Pi on the floor and haplessly treading on it whilst carrying a hot cup of tea. 

After ten minutes of agonising decision making I had finally decided that my Pi needed a red case.  Why red?  Well, for two reasons: firstly, to signify that this little box is important (i.e. the red box is where number crunching takes place), and secondly, to make it pretty visible when it's sitting on my beige carpet (so I don't tread on it).

The trouble with buying something new is that things don't always arrive on time, and this was the situation with my tiny Pi case.  Although I soon had my keyboard and mouse, the case took quite a few weeks to arrive (apparently because I didn't read the small print which said that I was making a pre-order - note to self: read the small print!)

Boot day 1 : Trouble

I had everything: my newly suited (or encased) Raspberry Pi, a power supply, a USB keyboard, a USB mouse, an HDMI cable and an operating system (a version of Linux) on a memory card.  I attached the USB devices, connected the Pi to my temporary display (my living room telly) and powered everything up.  Through the case I could see that an LED came on and my TV changed display mode - things were happening!  The screen started to fill with boot messages and then suddenly... everything stopped.  I squinted, looked at the screen and I could see that there had been something called a kernel panic.

When faced with weird technical stuff going wrong what I tend to do is check all the connections and try again.  Exactly the same thing happened, so I powered down and scratched my head.  Then, I unplugged the USB mouse and the USB keyboard and powered up; this time I got a lot further.  I was eventually presented with a Linux login prompt but did not have any way of entering a user id.  This told me that (perhaps) there might have been something wrong with either the mouse or the keyboard.  I plugged both devices, one at a time, into my Windows laptop to see if they were recognised.  The mouse was recognised straight away, but Windows had to search for an eternity to find a device driver before the keyboard was recognised, suggesting that there was something special about its design.

Every techie knows that Google is their friend, especially when it comes to weird error messages. I searched for the terms, 'raspberry pi', 'panic' (or dump) and 'keyboard' and quickly found a site called elinux.org that contained a Wiki page which listed keyboards that were known to cause mischief.  I soon figured out that I had ordered the Xenta HK-6106 which was known to cause a kernel panic on a Debian distribution (I obviously had either the same one or a distribution very similar to it).  Mystery solved!

Ordering more stuff

I ordered a new keyboard.  This time I bought one (which cost the price of a half pint of beer) that was on the 'working peripherals' list.

One of my biggest worries (if you could call it that) is that the screens that I use for my desktop PC are both pretty old (I have a dual screen setup).  One of them only has a VGA input, which is useless for the Pi.  The other screen has a DVI input.  A quick search revealed that it was possible to get HDMI to DVI cables.  I didn't know you could do this, and I have to confess that I don't know much about DVI other than my main PC has got one of these as a video output (in addition to a VGA port).  Still, I decided to buy a cable from eBay and hope for the best.

Boot day 2 : Success

After rummaging in a box that contained an indeterminate number of cables (hasn't every geek got one of these boxes?), I found a network cable.  I took every bit of my Pi setup upstairs to my study area and connected everything together: keyboard, mouse, power supply, screen and network cable (which I physically connected to the back of my router, after dragging the router half way across the room, since my network cable wasn't - and still isn't - quite long enough).

I powered up.  A kernel panic didn't occur.  I was presented with a login prompt.  I typed the user id: pi, followed by the password: raspberry.  I then entered 'startx' at the shell prompt.  The screen changed and I was presented with a GUI.  My aged screen was working!  I soon discovered an internet browser (accessed through the menu located at the bottom left of the screen).  Within a minute or so I was able to navigate to my favourite news site and open Wikipedia.  Success!

Now that I had everything working, I asked the question, 'what can I do with it?'  I guess this question has two key answers: you can use it to learn about computing, or you can use it to do stuff.  If I find the time I hope to do both!

Learning with the Pi

Considering the learning aspect, it's obvious that there are loads of things going on from the moment that you turn on the Pi.  There are a couple of screens' worth of mysterious messages which currently don't make much sense to me (it's been a while since I've had a Linux distribution on one of my computers).  When you login to the Pi environment there are loads of menu items, applications and tools that I've never heard of before, including a version of a windowing system that is new to me.  There's also a weird sounding browser which seems to render things pretty well, judging from a brief ten minute play.

There is also a set of programming tools and utilities.  The learning can go from the low levels of computing (the level of the operating system) through to higher level applications (that can help to teach the fundamentals of programming).  Being a bit of a geek, the most interesting question for me is 'what exactly does the Pi Linux distribution contain?'  This, I think, is going to be my first learning task.
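
As a first, tiny experiment in this direction, it's possible to ask the package manager what is installed.  Here is a minimal sketch in Python (an assumption on my part: this presumes a Debian-derived image where the dpkg tool is available):

import subprocess

# Ask dpkg for every installed package; the default output is one
# 'name<tab>version' pair per line.
output = subprocess.check_output(['dpkg-query', '-W'], text=True)
packages = output.splitlines()

print(len(packages), 'packages installed; the first ten are:')
for line in packages[:10]:
    print(' ', line)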

Another geeky question is: how do you build software for the Pi?  My main computers are Intel based desktops or laptops.  The Pi is based around an ARM processor.  How do I take existing Open Source software and compile it so that it works on that ARM chip?  Going down a level even further, how do you get USB peripherals to work with the Pi?  Do I have to write a device driver?  Is the world of ARM device drivers different to Intel device drivers?  I have so many questions!

One thing that I have heard of (in passing, through a quick Google search) is that you can use what is known as a cross compiler.  This means that you can compile software on one processor architecture so that it runs on another.  Of course, this is getting impossibly deeply technical for a first blog about the Pi so I'm going to stop asking myself difficult questions and wondering (for now) what is and what is not possible!
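
Before going anywhere near a cross compiler, a simpler first step is to ask a machine what it actually is.  A minimal sketch using only the Python standard library (I assume the exact strings reported will vary between models and distributions):

import platform

# On an Intel desktop this typically prints something like 'x86_64';
# on a Raspberry Pi it reports an ARM identifier such as 'armv6l'.
print('machine:', platform.machine())
print('system: ', platform.system())
print('python: ', platform.python_version())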

On another note there are a couple of Open University modules that are tangentially connected to (or might be useful with regards to) the Pi.  The Pi Linux distribution contains an environment called Scratch.  This is a graphical programming language developed by MIT that introduces the fundamentals of computer programming.  The Open University makes use of a derivative of Scratch called Sense, which is used with the TU100 My Digital Life module.  The other module that could be useful is T155 Linux: an introduction.   

Doing stuff with the Pi

So, it boots up.  That's pretty cool.  But what might I practically be able to do with it?  I've heard one of my colleagues talking about potentially using a Pi to create a digital video recorder, which sounds like a fun project.  You can also use it as an embedded system to control other hardware. In fact, a look at the Raspberry Pi blog reveals a veritable array of different projects and ideas.

About six or so years ago, perhaps even longer, when I worked in industry for a company that made educational products used to help teach engineering subjects, I suggested creating a device that could (potentially) be used to help teach the fundamentals of computer networking.  The idea was to make use of an inexpensive embedded microcontroller to create something called a 'computer cube'.  Each cube would have simple input and output (perhaps a couple of switches and an LCD display), as well as a network connection (either a standard network connection, or a proprietary interface that can be easily accessed through software).  The idea was that you could connect a set of computer cubes together on a desk; you could create your own mini internet and also have the ability to look at the signals transmitted between devices and begin to understand the principles of protocols.

Of course, such an idea was hopelessly ambitious, plus there were increasing numbers of network simulators that did a pretty good job of helping learners to explore the principles of networking.  Fundamentally, at the time, it was a bad idea.

But then the Pi arrived.  The Pi is cheap, small, has its own peripherals and is open.  You can run whatever software you want on it.  A Pi is a web client, but there is no reason why it can't also become a web server.  A Pi could also (potentially) become everything in between too.  You could connect them together using relatively cheap switches and hubs, and explore (in a practical sense) computer networking and how the software that supports networking works.  You could set one to transmit data, and perhaps use the general purpose IO ports to indicate output of some kind.
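
To give a feel for how little is needed to get started, the Python standard library includes a basic web server, so a single Pi could serve files to other machines on the network with just a few lines.  This is only a sketch, assuming a Python 3 environment; the port number is an arbitrary choice:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the files in the current directory to any machine on the network;
# another Pi (acting as a web client) can then fetch them from port 8000.
server = HTTPServer(('0.0.0.0', 8000), SimpleHTTPRequestHandler)
print('Serving on port 8000; press Ctrl+C to stop')
server.serve_forever()

Two Pi devices, a cheap switch and a packet sniffer would then be enough to watch the requests and responses travelling between them.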

Would it be possible to have a network of Pi devices on a desk?  Possibly.  What software would be useful to learn more about the fundamentals of networking?  I'm not sure.  Could we create some useful curriculum or pedagogic materials to go with this?  I've no idea.  All this sounds like a project that is a bit too big for just one person.  If you accidentally discover this blog post and you think this may be a useful idea (or hold the view that it remains a bad idea), then please do get in touch!

Final notes

There is one clear certainty in computing.  It isn't Moore's Law.  It's that there is always an opportunity to learn new stuff.  As well as looking at the Pi operating system and learning about what the various bits are, I've also heard it mentioned that the language of the Pi is Python (Wikipedia).  This isn't a language that I've used before.  It's certainly about time that I knew something about it!
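
To give myself a flavour of the language, here is what seems to be the 'hello world' of physical computing on the Pi: blinking an LED.  This is only a sketch based on my reading so far; it assumes the RPi.GPIO library and an LED (plus resistor) wired to GPIO pin 18:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)    # use the Broadcom pin numbering scheme
GPIO.setup(18, GPIO.OUT)  # configure pin 18 as an output

try:
    while True:
        GPIO.output(18, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pin when the program exits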

If you scratch the surface of anything technical you find a set of subjects and technologies that are both interesting and challenging.  Not only is the Raspberry Pi device interesting and challenging in its own right, but I'm sure that the situations in which it can be used and applied will be interesting and challenging too.


Enhancing Employability of Computing Students

Edited by Christopher Douce, Monday, 3 Mar 2014, 18:49

I was recently able to attend the first Higher Education Academy (HEA) event that explicitly aimed to discuss how universities might enhance the employability of computing students.  The intention of this blog post is to present a brief summary of the event (HEA website) and to highlight some of the themes (and issues) that I took away from it.

The day was held at the University of Derby enterprise centre and was organised on behalf of the HEA Information and Computer Sciences subject group.  I had only ever been to one HEA event before, so I wasn't quite sure what to expect.  This said, the title of the workshop (or mini-conference) really interested me, especially after having returned to the higher education sector from industry.

The day was divided into two sets of paper presentations punctuated by two keynote speeches.  The afternoon paper session was separated into two streams: a placements workshop and a computing forensics stream.  Feeling that the placements workshop wasn't really appropriate, I decided to sit in on the computing and forensics stream.

Opening Keynote

The opening address was given by Debbie Law, an account management director at Hewlett Packard.  As well as outlining the HP recruitment process (which sounds pretty tough!) Debbie mentioned that, through various acquisitions, there had been a gradual movement beyond technology (such as PCs and servers) towards the application of services.  Businesses, it was argued, don't particularly care about IT, but they do care about what IT gives them.

So, what makes an employable graduate?  They should be able to do a lot!  They should be able to learn and to apply knowledge (completing a degree should go some way to demonstrating this).  Candidates should demonstrate their willingness to consider (and understand) customer requirements.  They should also demonstrate problem solving and analytical skills and be able to show a good awareness of the organisations in which they work.  They should be performance driven, show good attention to detail (a necessity if you have ever written a computer program!), be able to lead a team and be committed to continuous improvement and developing personal effectiveness. Phew!

I learnt something during this session (something that perhaps I should have already known about).  I was introduced to something called ITIL (Information Technology Infrastructure Library) (wikipedia).  ITIL was later spoken about in the same breath as PRINCE (something I had heard about after taking M865, the Open University Project Management course).

First paper session

There were a few changes to the published programme.  The first paper was by McCrae and McKinnon: Preparing students for employment through embedding work-related learning.  It was at this point that the notion of employability was defined as 'a set of attributes, skills and knowledge that all labour market participants should possess to ensure they have the capability of being effective in the workplace - to the benefit of themselves, their employer and the wider economy'.  A useful reference is the Confederation of British Industry's Fit for the Future: preparing graduates for the world of work report (CBI, 2009).

The presentation went on to explore how employability skills (such as team working, business skills and communication skills) may be embedded within the curriculum using an approach called Work Related Learning (WRL).  The underpinning ideas relate to linking theory and practice, using relevant learning outcomes, widening horizons, carrying out active learning and taking account of cultural diversity.  A mixed methodology was used to determine the effectiveness of embedding WRL within a course.

The second paper was by Jing and Chalk and was entitled: An initiative for developing student employability through student enterprise workshops.  The paper outlined one approach to bridging the gap between university education and industry through a series of seminars, given over a twelve week period by people who currently work within industry.  A problem was described of lower employment rates amongst computing graduates (despite alleged skills shortages), low enrolment to work placement years (sandwich years), and a lack of employability awareness (which also includes job application and interview skills).

The third presentation was by our very own Kevin Streater and Simon Rae from the Open University Business School.  Their paper was entitled 'Developing professionalism in New IT Graduates? Who Needs It?'  Their paper addressed the notion of what it may mean to be an IT professional, encouraging us to look at the British Computer Society Chartered IT Professional status (CITP) (in addition to ITIL and PRINCE), and something called the Professional Maturity Model (which I had never heard of before).

Something else that I had never heard of before is the Skills Framework for the Information Age (SFIA).  By using this framework it was possible to uncover whether new subjects or modules may contribute to enhancing the degrees of undergraduates who may be studying to work within a particular profession.  Two Open University courses were mentioned: T122 Career Development and Employability, and T227 Change, Strategy and Projects at Work.

This final presentation of the morning was interesting since it asked us to question the notion of professionalism, and presented the viewpoint that the IT profession has a long way to go before it could be considered akin to some of the other more established professions (such as law, engineering and accountancy).

During the morning presentations I also remember a reference to E-Skills, which is the Sector Skills Council for Business and Information Technology, a government organisation that aims to help to ensure that the UK has the IT skills it needs.

Computing and Forensics Stream

This stream especially piqued my interest since I had studied a postgraduate computing forensics course, M886, through the Open University a couple of years earlier.

The first paper was entitled Teaching Legal and Courtroom Issues in Digital Forensics by Anderson, Esen and Conniss.  As with so many different subjects, both academic and professional skills need to be applied and considered.  Academic education considers the communication of theories and dissemination of knowledge, and learning how to think about problems in a critical way by analysing and evaluating different types and sources of information.

The second paper was about Syllabus Development with an emphasis on practical aspects of digital investigation, by Sukhvinder Hara, who drew upon her extensive experience of working as a forensic investigator.

The third paper was about how a virtualised forensics lab might be established through the application of cloud computing.  I found this presentation interesting for two reasons: firstly, the interesting application of virtualisation, and secondly, a resonance with how parts of the T216 Cisco networking course are taught, where students are able to gain access to physical hardware located within a laboratory just by 'logging on' to their personal computer or laptop.

The final paper of the day was an enthusiastic presentation by David Chadwick who shared with us his approach of using problem-based learning and how it could be applied to computing forensics.

This final session of the day brought two questions to my mind.  The first related to the relationship between teaching the principles of computing forensics and the challenge of providing graduates who know the tools that are used within industry.  The second related to the general question of, 'so, how many computing forensics jobs are there?'

It struck me that a number of the forensics courses around the UK demonstrate the use of similar technologies.  I've heard two products mentioned on a number of occasions: EnCase (Wikipedia) and FTK (Wikipedia), both of which are featured within the Open University M889 course.  If industry requires trained users of these tools, is it the remit of universities to offer explicit 'training' in commercial products such as EnCase?  Interestingly, the University of Greenwich, like the Open University (in the T216 course), enables students to study for industrial certification whilst at the same time acquiring credit points that can count towards a degree.

So, are there enough forensics jobs for forensics graduates?  You might ask a very similar question which also begs an answer: are there enough psychology jobs for the number of psychology graduates?  I've heard it said that studying psychology introduces students to the notion of evidence, different research methodologies and research designs.  It is a demanding subject that requires you to write in a very clear way.  Studying psychology teaches and develops advanced numeracy and literacy as much as it introduces scientific method and the often confusing and complex nature of academic debate.

Returning to computing forensics, I sensed that there might not be as many jobs in the field as there are graduates, but it very much depends on what kind of job you might be thinking of.  Those graduates who took digital forensics courses might find themselves working as IT managers, network infrastructure designers or software developers, as opposed to working purely within law enforcement.  Understanding the notion of digital evidence and how to capture it is an incredibly important skill irrespective of whether or not a student becomes a fully fledged digital investigator.

Concluding Discussions

One of the best parts of the day was the discussion section.  A number of tensions became apparent.  One of the tensions relates to what a university should be and the role it should play within wider society.  Another tension is the differences that exist between the notions of training and education (and the role that universities play to support these two different aims).

Each organisation and area of industry will have a unique set of training and educational requirements.  There are, of course, more organisations than there are universities.  A particular industry may have a very specific training problem that necessitates the development of educational materials that are particular to its own context.  Universities, it can be argued, can only go so far in meeting very particular needs.

A related question is, of course, the difference between training and education.  When I worked in industry there were some problems that could only be solved by gaining task specific skills.  Within the field of software development this may be learning how to use a certain compiler or software tool set.  Learning a very particular skill (whilst building upon existing knowledge) can be viewed as training.  An engineer can either sit with a user manual and a set of notes and figure things out over a period of a month or two, or alternatively go on an accelerated training course and learn what to do in a matter of days.

Education, of course, goes much deeper.  Education is not just about knowing how to use a particular set of tools; it's about knowing how to think about your tools (and their limits) and understanding how they may fit within the 'big scheme of things'.  Education is also about learning a vocabulary that enables you to begin to understand how to communicate with others who work within your discipline (so you can talk about your tools).

Within the ICT sector the pace of change continues to astonish me.  There was a time when universities in conjunction with research organisations led the development of computing and computer science.  Meanwhile, industry has voraciously adopted ICT in such a way that it pretty much pervades all our lives.

So, where does this leave degree level education when 'general' industry may be asking for effective IT professionals?  It would be naive to believe that the university sector can fully satisfy the needs of industry since the nature of industry is so diverse.  Instead, we may need to consider how to offer education and learning (which the university sector is good at) which leads towards the efficient consumption of training (which satisfies the needs of industry).  This argument implies that the university sector is for the 'common good' as opposed to being a mechanism that allows individuals to gain specialist topic specific knowledge that can immediately lead to a lucrative career.  Becoming an ICT professional requires an ability to continually learn due to perpetual innovation.  A university level education can provide a fabulous basis for an introduction to this rapidly changing world.


Learning from TV

Edited by Christopher Douce, Saturday, 1 Nov 2008, 09:05
I have to admit I get more information, and dare I say it, learning from the television than I should do. I’m a bit of a sucker for factual documentaries and the odd bit of reality television, but for two episodes I have been totally entranced by Can’t Read, Can’t Write, which has recently appeared on Channel 4.

The programme traces the learning journeys of a number of adults who are learning to read for the first time. My initial reaction to hearing about this programme was one of astonishment: words, to me, are like air. They are something that I barely notice because they surround me. As the programme started, I wondered what would unfold before my eyes, and the people who I was presented with astonished me with their determination, intelligence and love of language.

Phil Beadle’s performance was also astonishing. I’ve done nothing more than read about learning styles and the skepticism that surrounds them, but he was using them in anger. Jumping out from the screen was the realization that reading (and writing) is an activity that is ultimately synesthetic. To write, you have to integrate the shape of the words with the feeling of the pen. Writing this now, it seems so obvious. Beadle mentioned something interesting: all his learners had different needs and requirements, and no single teaching approach would work for everyone at the same time.

I connected this need for personalization of education with a project I'm working on that is trying to figure out how to present learning materials suited to the needs and preferences of individual learners. A talented teacher will have the skill (and the reflective ability) to uncover what works for which student. Getting this information into a magical software program that provides learners with what they need to learn is a really tough problem to solve.

I’ve been idly wondering for a while about how much can be done to support the learning of phonics (and writing) using touch screen laptops. I remember from a keynote that learning technologists should also be thinking about what can be cost effective from a learning and teaching perspective. I simplify this terribly: teacher time is expensive, but tools that can support learning have the potential to be cheap. The challenge is tuning devices and technologies in a way that is efficient for the educator and effective for the learner.

