OU blog

Anna Greathead

Resistance is (not) futile?

What will all of this mean for me?

Human nature is essentially self-centred. Any new project, innovation, change or progress will be assessed by individuals according to how it affects them. However much learning analytics promises in terms of creating better environments and activities to foster better learning, the individuals who have to change their practices to accommodate that change will, at least initially, think about what will change for them. Educators may have concerns about increased workload. They may doubt their own ability to manage the newly complex world of blended learning, and fear that an inability to grasp and engage with it will have consequences for their own careers. They may not really understand what is being asked of them, or why change is being implemented, which will compromise their engagement.

Why are we doing this?

Changes to the LMS or VLE in any institution are likely to be made at a high level, but working out the nuts and bolts of the new technology and processes falls to the educators at the 'coal face'. The coal face workers may have less understanding of the 'big picture' or the long-term aims and objectives, but will have to make significant, time-consuming and difficult changes to their own daily practice. Without a good understanding of the strategic aims it is hard to participate enthusiastically in the strategy.

Our ancient traditions must endure!

Universities have a 'model' which has, in some cases, endured for many centuries (and even in new universities the 'model' is often older than the institution). The accepted model determines the selection of students, the learning activities, the curriculum and the assessment methods. Any effort to radically change any part of the model meets resistance. University leaders are expected to inspire but not actually to change anything!


Citation Networks

This paper attempts to consider 'learning analytics' from a variety of academic perspectives rather than concentrating solely on education.

The aim of the authors was to identify trends and also to assess the most influential voices within the field of learning analytics. As well as individual voices, the authors noted that multiple disciplines were writing about learning analytics and that their relative contributions to the overall conversation were not equal in quantity or influence. Their method was to analyse citations and map them in a structured network. The assumption was that the papers most regularly cited, and by the widest range of contributors, could be considered more significant and more likely to be moving the discipline forward.

The observation was that the discipline of education – with its easy access to vast quantities of data – was not being as innovative in using that data as one might expect. Education was using simple demographic data alongside easy checkpoints such as student retention and outcomes. The suggestion was made that the data being collected could be used to contribute to better learning and teaching but, at the time of writing, it was not being used that way.

Education may seem the obvious discipline to both discuss and utilise learning analytics, but the paper makes clear that other disciplines are also taking the field forward, including psychology, philosophy, sociology, linguistics, learning sciences, statistics, machine learning/artificial intelligence and computer science.

The authors found that the major disciplines – computer science and education – were diverging and that learning analytics was thus going in more than one direction.

They also found that the most commonly cited papers were not empirical research but more conceptual in nature.

The use of ‘low hanging fruit’ (readily available data) is also discussed, with the hope that better and more useful learning analytics will develop.

The use of citation networks enables the authors to see where concentrations of papers are being published and how they link to one another. They can assess where ‘crossover’ papers develop which feed into the discussion in more than one academic discipline.
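As a toy illustration of the method (my own sketch, not the authors' code or data), a citation network is just a directed graph; once built, the most-cited and most central papers can be read straight off it - here in Python with networkx and invented papers:

```python
import networkx as nx

# Toy citation network: an edge A -> B means "paper A cites paper B".
# All papers and links are invented for illustration.
G = nx.DiGraph([
    ("p1", "p3"), ("p2", "p3"), ("p4", "p3"),
    ("p4", "p2"), ("p5", "p1"), ("p5", "p3"),
])

# In-degree = number of times cited; PageRank additionally weights
# citations from well-cited papers more heavily.
print(sorted(G.in_degree(), key=lambda pair: -pair[1]))
print(nx.pagerank(G))
```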

It would be easy to assume that the most regularly cited papers are the most insightful, methodologically consistent and ground-breaking. That would be, I think, an oversimplification. Certain journals are more widely read within certain disciplines, and the specific place a paper is published will determine, to a great extent, its audience.

I can see the value in this kind of analysis. Where many different researchers from different academic backgrounds are all looking at the same subject – albeit from different angles and with different motives – the potential for a ‘big picture’ (and overarching theory) to emerge is an engaging prospect. I also can see how the varied angles and motives can enable each different discipline to consider new ideas and take their own understanding of, and use of, learning analytics forward.  


An introduction to Social Learning Analytics

Checkpoints and Processes

I got stuck this weekend.

I grasped the concept that Lockyer et al. were communicating. The checkpoint vs process comparison is simple, but beautifully so. A major criticism of learning analytics, as we have studied it to date, is that it necessarily uses the available data (which has not been collected with pedagogical advances in mind) rather than data collected specifically with analytics in mind. Checkpoint data is the main data we have, and it shows us the easy-to-measure metrics: who clicked on the link? When and where? On what device? How long were they logged in to the VLE? How often did they post in the forum? When did they submit their assignment? How good was their assignment? How does any of this correlate with the socio-demographic data we hold about them?

Process data is more difficult to measure, being more nuanced and essentially qualitative. Questions which might generate process data could include the following (a code sketch contrasting this with checkpoint data follows the list):

  • What was the quality of the forum postings? Did they demonstrate real understanding and engagement with the materials?
  • Did the individual make thoughtful comments about other people's work? Did those comments lead to a fruitful discussion about differences in perspective?
  • Has the learner managed to apply any of their learning into a new context such as their own workplace? What were the results of this?
  • Is there evidence of meaningful collaboration, knowledge sharing, discussion and debate between learners?
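To make the contrast concrete, here is a minimal sketch (Python with pandas; all column names and events are invented, not any real VLE schema) of how checkpoint metrics simply fall out of a click log. None of the process questions above could be answered this way:

```python
import pandas as pd

# Hypothetical VLE click log - the columns and events are invented
# for illustration, not the OU's actual schema.
log = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2019-03-01 09:00", "2019-03-01 09:40", "2019-03-01 10:00",
        "2019-03-02 08:30", "2019-03-02 09:10",
    ]),
    "action": ["login", "forum_post", "login", "link_click", "forum_post"],
})

# Checkpoint analytics fall out of simple aggregation:
checkpoints = log.groupby("student_id").agg(
    total_events=("action", "size"),
    forum_posts=("action", lambda a: (a == "forum_post").sum()),
    first_seen=("timestamp", "min"),
    last_seen=("timestamp", "max"),
)
print(checkpoints)
```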
As I assessed Block 4 for this (as per the activity) I found I was simply engaged in a copy-and-paste exercise, whereby I copied weekly activities into a table and then pasted the same checkpoint and process data points into the relevant columns. I didn't finish it - there didn't seem to be any point!

Am I fundamentally misunderstanding something?!

[Image: table about learning analytics]

Pedagogy.... what is it again?

There have been a number of activities I have got stuck on this week. The material is interesting and accessible but the questions we are supposed to consider as we reflect on it are not!

The activity about the paper by Dyckhoff et al. was really interesting and especially got me ruminating on how learning analytics makes use of data which is incidentally collected - the key word being incidental. The data sets created in learning (and everywhere) are huge and contain a lot of detail about various aspects of life, but the data is not collected in order to be analysed. The analysis happens because the data is available; the data is not collected for the purposes of analysis. The prospect is that 'easy' research is done using available data to drive pedagogical change, rather than pedagogically useful data being collected in order to drive pedagogy.

This is not to say that learning analytics based on big data are not useful. They might not answer the exact questions which learners, educators and institutions would choose to ask, but they do answer questions. As with any big data set - extracting the useful data from the background noise requires finesse and insight.

This blog about library usage is rich with data-driven analysis. Libraries generate data by monitoring access (typically by swipe card, PIN code or login), engagement and activity. Modern libraries - often buildings which could house nothing but internet access to digital books and journals - generate even more specific data. Libraries do still have collections of physical books and journals, but as archives are digitised and new material is exclusively published digitally, these will eventually start to shrink. People seem to have an emotional attachment to 'books' (try any conversation comparing a Kindle e-reader to a 'real book' to see!) but researchers are hopefully more pragmatic and appreciate the convenience of being able not only to search across literally millions of publications in seconds but also to search within them for particular chapters, references and sentences. This access to more and more information must impact on the pedagogy of those who teach learners who use libraries. The blog makes the point that data can show correlation but not necessarily causation. However - correlation may be enough to prompt an intervention when a student may be struggling, or a redesign when a learning activity fails to inspire engagement.
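That correlation point is easy to demonstrate with entirely synthetic numbers: if a hidden factor (call it motivation) drives both library visits and grades, the two correlate strongly without either causing the other. A sketch, with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative: motivated students both visit the library more
# AND score higher, so visits and grades correlate with no causal link.
motivation = rng.normal(size=500)
library_visits = np.clip(10 + 5 * motivation + rng.normal(size=500) * 3, 0, None)
final_grade = np.clip(60 + 10 * motivation + rng.normal(size=500) * 8, 0, 100)

r = np.corrcoef(library_visits, final_grade)[0, 1]
print(f"correlation between visits and grade: {r:.2f}")  # high, yet not causal
```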

The final article by Lockyer et al. describes the difference between checkpoint and process analytics. I like these distinctions. There are echoes of summative and formative assessment within them, and I feel confident I can grasp their meaning! Within my OU journey the institution can easily assess me using checkpoint analytics - they can see details of my socio-demographic status; they know when, where and for how long I log into the VLE; they know how often I post in the forums (and in my blog); they know what I search for in the library; and they know my assignment scores. What they don't know (because the data cannot be automatically mined) is the quality of my forum and blog posts, the level at which I engage with activities and assignments, or how many of the library resources I click on are actually read in any meaningful sense. My tutor may be able to make a valid guess at these factors. The area in which process analytics could generate data would be evidence of inter-student collaboration and communication, but as our group work (and study-buddy friendships) operates outside the VLE, there is no way for the OU to monitor it. (If they did, there could be privacy concerns as well.)

 

Google Analytics

[Image: table about Google Analytics]

I know that we use Google Analytics at work, but not for e-learning. I have never been involved in the discussions about Google Analytics, nor have I seen the data. However - as a data junkie I now carry a strong urge to demand access to all the data sets, because I can see how interesting this could be.

With regard to applying Google Analytics to learning, I am initially struck by how this model is clearly designed with commercial applications in mind. The services which might be of benefit to a learner or an educator are only useful insomuch as they may enable better, more targeted and more appropriate learning opportunities to be developed.

Google Analytics is, I believe, a commercial enterprise. As with all such enterprises, the principal aim is profit and other gains are incidental. Gains associated with learning are, therefore, mainly for the aspect of learning most associated with profit - the institution.

However - this data *could* be used to help the learner and the educator. The issues with using it that way are due not only to the profit motive of a private company but also to the way big data may have to become much more specific to be useful in individual circumstances. I would, for example, be very interested to see how well I am doing in H817 compared to other students on the course, and compared to students in previous cohorts. I'd be fascinated to know what expectations the OU may have had for me based on my simple socio-demographic information. I would like to see where I rank in terms of engagement with the material, and I would be interested to learn of any aspects of H817 I had simply failed to grasp properly. I would love it if the curriculum I followed in H817 would shift and sway according to my interests, the pace I was most comfortable with and even my personal schedule. If I were to know all of this, though, it might come at the expense of the privacy of other students, the integrity of the course and my own work ethic!
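A toy version of the cohort comparison I would like is trivial to compute once the data exists - this sketch uses invented numbers and a made-up 'logins' metric, not anything the OU actually exposes:

```python
import numpy as np

# Invented engagement metric for the current cohort.
cohort_logins = np.array([12, 30, 25, 8, 40, 22, 18, 35])
my_logins = 28

# Percentile rank: the share of the cohort I am more active than.
percentile = (cohort_logins < my_logins).mean() * 100
print(f"more active than {percentile:.0f}% of the cohort")
```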



Ferguson 2012

This paper from Rebecca Ferguson gives a concise and ordered review of a burgeoning and almost chaotic field! It's ironic that something which sounds as definitive as 'learning analytics' can contain so much nuance and so many different opinions. It seems that the term came into use in several contexts simultaneously and was used differently each time.

I feel that the three bullet points on page 9 crystallise the current issues best:

  • Educational data mining focused on the technical challenge: How can we extract value from these big sets of learning-related data?
  • Learning analytics focused on the educational challenge: How can we optimise opportunities for online learning?
  • Academic analytics focused on the political/economic challenge: How can we substantially improve learning opportunities and educational results at national or international levels?
In short - we are now generating huge amounts of data - shouldn't we use it? Maybe we could help individuals learn better, and learn more, by using the data to create and refine excellent opportunities, and maybe this data could be applied at a national (and international) level to improve learning for entire populations.

National Analysis of National Learning Trends

Learning analytics is clearly a term used to describe a particular phenomenon which can happen on a wide range of scales.

Personally I am interested in how my individual socio-demographic circumstances can predict my success (or otherwise) in any aspect of learning and how interventions can be targeted to assist me. I am not entirely selfish - I am also interested in how this can be applied to other people!

Professionally, part of my role is to tutor aspiring medics and dentists for the UCAT exam (formerly UKCAT), an aptitude test which must be taken as part of an application to most medical and dental schools. Upon registration the UCAT consortium asks each applicant to complete a series of socio-demographic questions - racial group, gender, level of education reached by parents (the vast majority of applicants are 17 years old) and the job title of their parents (presumably used, alongside education, as a proxy for social class). They already have the applicant's postcode and know if the applicant lives in a household with a low enough income to qualify for a bursary. This information is not supplied to universities but is reportedly used to improve the UCAT for future cohorts. Pilot questions are trialled with a view to ensuring they are as equitable as possible. I imagine that groups of statisticians gather around data sets and assess why various questions performed better with people whose parents had higher educational qualifications, or who shared a similar ethnic group. (Incidentally - a teacher friend of mine got very frustrated one year when the disadvantaged children who attended her inner-city primary school were faced with SATs questions about going to the theatre and using a honey dipper - two activities which were entirely unknown to most of the children she taught. A huge failing in learning analytics or a cynical attempt to maintain existing privilege? Who knows!)

The articles we read in this activity looked at early learning analytics on a national scale. I also find this very interesting. The variation in educational styles and ethos between countries is driven by many factors, including culture, politics and resources, and comparisons can be meaningful or meaningless depending on how well the complexities of the issues are understood.

The first paper compares the USA to other countries - noting not only a lack of progress compared to nations of similar levels of development and wealth, but also an up-and-coming threat from developing nations with very large populations.

The second paper introduces the idea of using analytics to shape future US educational policy at the national level with a coherent and unified plan. Affordability and value are key concerns, but a need to match education to likely economic requirements is also significant.

The basic premise of these discussions assumes a direct correlation between graduate-level education and national prosperity. There is, at least in these papers at this time, little discussion about what is being studied and to what end - merely that high levels of education are necessary for continued improvement.

As is pointed out, both papers were written before 'learning analytics' was a phrase, and though they are clearly referring to a similar process, the absence of 'big data' on quite the scale available today means it is not exactly the same thing being discussed. However - the idea of analytics is clearly here, with a vision of using data to improve policy.



Big Data and my favourite companies

Costa Coffee is my favourite of the high street chains and I buy coffee from there about once a week on average. This article announces the company's intention to use 'big data'. The article is short on specific details but gives a few broad motivations behind the initiative. These are:

  • "to rapidly generate insights that create value for our business"
  • "provided more accurate decisions"
  • "significantly decreased the time required to understand the impact of each new idea"
  • "technology that can pinpoint cause and effect, allowing management to examine how their decisions alter the performance of their companies"
The detail is commercially sensitive but the big picture is that the behaviour of customers, branches, products and initiatives will be tracked, analysed and the results of the analysis used to make decisions.

Tesco is my main supermarket, and the way it uses artificial intelligence and big data is described in this 2017 article. The article interestingly takes the angle that the way big data allows a company to anticipate, or even predict, the buying preferences of its customers is to be applauded and is appreciated by customers. It also describes how 'big data' is being used by supermarkets to regain control lost in price wars, which left them less able to differentiate between the ways customers interact with different brands on factors other than price. As you would expect with any commercial enterprise, the motivation is entirely commercial. Providing the customer with a better experience is only useful in so much as it may generate further spending and therefore greater revenue for the business.

There are 134,000,000 results for the Google search "Big data" Facebook. That's not surprising given the amount of data Facebook holds about its users - and the fact that it has 2 billion of them. This 2018 article lists impressive figures about how much data is amassed and how quickly the data held is increasing. It makes the more obvious points about tracking the activity of users, but then adds these four less obvious ways in which the use of 'big data' can be observed:

  1. Tracking cookies: Facebook tracks its users across the web by using tracking cookies. If a user is logged into Facebook and simultaneously browses other websites, Facebook can track the sites they are visiting.
  2. Facial recognition: One of Facebook’s latest investments has been in facial recognition and image processing capabilities. Facebook can track its users across the internet and other Facebook profiles with image data provided through user sharing.
  3. Tag suggestions: Facebook suggests who to tag in user photos through image processing and facial recognition.
  4. Analyzing the Likes: A recent study showed that it is viable to accurately predict a range of highly sensitive personal attributes just by analyzing a user's Facebook Likes. Work conducted by researchers at Cambridge University and Microsoft Research shows how patterns of Facebook Likes can very accurately predict your sexual orientation, satisfaction with life, intelligence, emotional stability, religion, alcohol use and drug use, relationship status, age, gender, race, and political views - among many others.
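Point 4 is less mysterious than it sounds. As a toy-scale sketch (nothing like the real study's data, features or pipeline), predicting a binary attribute from a user-by-Like matrix is a standard classification problem:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: a binary user-by-Like matrix (1 = user liked the
# page) and a binary attribute to predict. The real work used millions
# of Likes with dimensionality reduction; this is invented data.
rng = np.random.default_rng(42)
n_users, n_likes = 1000, 50
X = rng.integers(0, 2, size=(n_users, n_likes))
# Make the attribute depend weakly on a handful of Likes:
y = (X[:, :5].sum(axis=1) + rng.normal(size=n_users) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```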
It then lists some features of Facebook which are only possible because of 'big data', such as the flashback feature, the 'I voted' feature (which may be encouraging more people to vote) and services such as profile photo overlays to show support for various causes or events.

Many of the ways in which Facebook uses big data seem benign and even fun. The platform uses the data it holds to remain engaging and keep the attention of its users. This ultimately makes advertising on the platform more lucrative and drives Facebook's profits.

A useful rundown of how big data is used in other industries can be read here. Analytics have already changed our world. It seems likely that, as technology improves, this process will accelerate.

Learning Analytics according to Wikipedia

As is often the case with Wikipedia, the article on Learning Analytics began as a quick summary and rapidly mushroomed into a far more extensive treatise on the subject. However, the opening definition has had few versions. It changed on the first day, then again three years later, but the sentence written in 2013 is the same sentence which opens the article today. The difference is that today this opening sentence is followed by over 4,000 words of further information.

Learning analytics is the use of data and models to predict student progress and performance, and the ability to act on that information - 23rd August 2010

Learning analytics is the use of intelligent data, learner-produced data, and analysis models to discover information and social connections, and to predict and advise on learning. - 24th August 2010

Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs. - 1st September 2013

I usually like to begin my investigations of an unfamiliar subject by reading the associated Wikipedia article. I realize that it's not a peer-reviewed, 'reliable' source, but it is often succinct, accessible (especially to the non-expert) and well written with good clarity. The learning analytics article is none of these things and reads as an article written by committee (which is, of course, exactly what it is!).

The impression the whole article gives me is that the subject of 'Learning Analytics' is as vast, as nebulous, as complex and as multifaceted as the two words would imply. H800 challenged every internal impression and idea I had about the concept of 'learning', so I am keenly aware of how 'simple' ideas can become mosaic when investigated, and the word 'analytics' gives us no expectation of a simple and easily defined concept! Put two big concepts together and the creation of a gargantuan concept seems inevitable!

The simple sentences above describe aspects of learning analytics. My impression is not that those who changed the definition claimed that what was stated was incorrect, but that it was incomplete and inadequate. The extra information, text, ideas and paragraphs don't detract from what was previously written so much as add to, augment and complement it. There are a multitude of associated concepts which overlap with Learning Analytics, but the edges of each concept are blurry and undefined.

I suspect a concise definition which will satisfy everyone is impossible to develop but by looking at the areas everyone agrees with we can draw some conclusions. Such commonalities include:

  • Data - the data is described as 'intelligent' and processes related to collecting, collating and analysing this data are all part of the definitions. Data is an inescapable part of Learning Analytics. It's the key ingredient and without data there can be no analytics.
  • Discover, understand - data can enable a great deal of knowledge to be amassed and that knowledge can lead to understanding of crucial patterns within learning and teaching
  • Prediction, modelling, advising and optimising - four different but overlapping ideas in this context. The way in which the data is used is part of what makes Learning Analytics what it is. The purpose of LA is, at least in part, the improvement of the learning journey for the individual and the cohort.


TMA01 and innovations I would like to see in my context

TMA01 is not flowing! This is a shame as it's very well suited to me. My context is one in which innovation in eLearning, and innovation generally, is encouraged. Our ideas are often more innovative than software, technology, person hours and staff capacity can manage! I think I could write the whole document based on my ideas and experience, but I know I've got to reference other experiences, ideas and attempts. This is what's killing me! The OU library can be like an Aladdin's cave of wonder.... or it can be otherwise!

So here's the bones of my TMA without the academic underpinnings - just purely a list of which innovations I think could work for us and why.

1. Adaptive learning - this is a grandchild of learning analytics. Learning analytics tracks the engagement of individual learners - the number of times they log in, the length of time they stay logged in, the number of clicks, the results etc. - and this is used as a proxy for how hard the student is working. In fairness, it's probably reasonably accurate for being so blunt. The use of cohort data - both current and historical - can enable some predictive analysis: patterns identified in former learners can be recognized in current learners, giving the learning institution some statistical insight into likely outcomes. If hundreds of students have been studied in the past, and all of those who passed gained over 50% in a particular exercise, then students scoring 49% can be identified and an intervention can be offered. Adaptive learning is the third generation, where data from a large number of students is analyzed and an individualized learning program is then automatically generated. Learners who struggle with one subject area, one type of exercise or one question format can not only be identified; their specific needs can be met with refresher sessions in their areas of struggle and extension lessons in their areas of strength, and where subjects have been fully mastered the system can know not to revisit them.
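The 50%/49% example amounts to deriving a cut-off from historical cohorts and flagging current students who fall beneath it. A minimal sketch, with all numbers invented:

```python
import numpy as np

# Toy illustration of the cohort-history idea described above.
past_scores = np.array([55, 62, 48, 71, 80, 44, 58, 66])
past_passed = np.array([1, 1, 0, 1, 1, 0, 1, 1])

# Lowest exercise score among students who went on to pass.
cutoff = past_scores[past_passed == 1].min()

current = {"A": 72, "B": 49, "C": 51}
flagged = [name for name, score in current.items() if score < cutoff]
print(f"cut-off {cutoff}; flag for intervention: {flagged}")
```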

In my context this would be a HUGE amount of work. To begin with we would need to move to an alternative software provider which could collect tracking data. We would then need to retrospectively code our questions for subject area, skills being tested, question format and any other qualities which may make a difference (length of question, sophistication of language etc). It would take some years to be fully adaptive but we could, I believe, quite quickly be able to identify weaker subject areas and ensure learners were tested in these more regularly.

2. Bring your own device - this is exactly what it sounds like! The courses my company runs are mostly to assist learners in preparing for an exam - and all of the exams are onscreen. Currently we either show each question on a presentation screen for the number of seconds an average question takes in the exam while the students fill in an answer sheet, or we give paper handouts. Both options have weaknesses, as neither prepares the learner for their actual exam. Marking can be tricky and the company has no means of tracking what the learners are doing. Under this idea we would require learners to bring a laptop (fully charged) to the course and use some means to enable them all to sit an assessment online. This would replicate the exam experience better and also potentially give the company useful learning analysis data.

There are problems with this idea. Many venues simply would not have good enough internet access for over 100 learners to all login at once. Even fewer venues would have 100 plug sockets to allow 100 laptops to be plugged in at once. Even the instruction to ensure the device was fully charged would be inadequate if a long exam were to be taken. Arranging access to the exam could be done in a number of ways but from the company's perspective the concerns would be a) ensuring access could only happen at the course for copyright reasons, to protect the integrity of the experience for future delegates and to keep the online and course services separate; b) ensuring access was easily arranged so that there are no complex steps which may delay the start of the mock exam; c) having a few 'back up' laptops for genuine cases of 'laptop death' on the day or laptops being unexpectedly incompatible with our service. 

3. QR codes - it is surprising how popular physical things are in this digital age, and in recent months our physical products (revision cards, books, resources) have been very popular. It is good to be able to use printed matter to direct learners to places for further study, and the QR code is an excellent way of doing this. Acting like a barcode, the code can be scanned with a mobile phone to open links which lead to vast amounts of further information.

Generating a QR code is easy, and adding it to printed matter is little more than a copy-and-paste job. All of our QR codes will lead to pages we own. If we linked to external pages there would be every chance the page would eventually be edited or retired and the code rendered useless. Pages we own can link onwards to external sources, but control of that remains with us. We are planning for QR codes to lead to mini question banks, video material, written resources and audio files.
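Generating one really does take only a couple of lines - for instance with the Python qrcode package (the URL below is a placeholder, not one of our real pages):

```python
import qrcode  # pip install "qrcode[pil]"

# Encode a resource page we control; the URL is a placeholder.
img = qrcode.make("https://example.com/revision-cards/extras")
img.save("revision_card_qr.png")  # drop the PNG into the print layout
```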



Technology in E-learning in my Context.

In my professional context the remaining three technologies are inextricably linked - we don't currently have software which enables us to track our learners, so we don't have any data to drive assessment, or to analyse learning, or even to offer prizes! I would love to do this, but I suspect that the cost of tracking software (with the associated legal ramifications and headaches) would pale into insignificance against the cost of effective analysis of the data it would produce, let alone the effective use of that analysis to drive assessment and target future learning.

If we were to change the software we use to provide good tracking data the other things we would have to do would include:

1. Retrospective 'coding' of existing questions to classify what skill, subject, knowledge and application they are testing (a sketch of what one coded record might look like follows this list)

2. Retrospective and future 'coding' of our customers to enable two-sided analysis

3. The development of an analysis methodology and the adoption of software to support it
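To make step 1 concrete, a coded question record might look something like this - the field names are my own invention, not an existing schema:

```python
from dataclasses import dataclass

# One possible shape for a retrospectively coded question.
# All field names and values are invented for illustration.
@dataclass
class QuestionMetadata:
    question_id: str
    subject: str          # e.g. "verbal reasoning"
    skill: str            # e.g. "inference"
    question_format: str  # e.g. "multiple choice"
    word_count: int       # proxy for question length

q = QuestionMetadata("VR-0042", "verbal reasoning", "inference",
                     "multiple choice", 180)
print(q)
```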

All of this could be very worthwhile - providing our learners with targeted resources and personalized analysis of their strengths and areas for improvement, and identifying any globally weak or strong areas so we can target future developments - BUT pragmatically I don't think our particular market is large enough to justify the associated costs.

Personally - I'm self obsessed enough to want to see analysis on my learning done by some formally recognized software! I am mercenary enough to want as many virtual badges and points as you can shovel in my direction! 

