I have got stuck on a number of the activities this week. The material is interesting and accessible, but the questions we are supposed to consider as we reflect on it are not!
The activity about the paper by Dyckhoff et al. was really interesting, and especially got me ruminating on how learning analytics makes use of data which is collected incidentally - the key word being incidental. The data sets created in learning (and everywhere) are huge and contain a lot of detail about many aspects of life, but none of it is gathered in order to be analysed. The analysis happens because the data is available; the data is not collected for the purposes of analysis. The risk is that 'easy' research is done using whatever data happens to be available to drive pedagogical change, rather than pedagogically useful data being collected in order to drive pedagogy.
This is not to say that learning analytics based on big data are not useful. They might not answer the exact questions which learners, educators and institutions would choose to ask, but they do answer questions. As with any big data set, extracting the useful signal from the background noise requires finesse and insight.
This blog about library usage is rich with data-driven analysis. Libraries generate data by monitoring access (typically by swipe card, PIN code or login), engagement and activity. Modern libraries - often buildings which could house nothing but internet access to digital books and journals - generate even more specific data. Libraries do still hold collections of physical books and journals, but as archives are digitised and new material is published exclusively in digital form, these collections will eventually start to shrink. People seem to have an emotional attachment to 'books' (try any conversation comparing a Kindle e-reader to a 'real book' to see!), but researchers are hopefully more pragmatic, appreciating the convenience of not only being able to search across literally millions of publications in seconds but also to search within them for particular chapters, references and sentences. This access to ever more information must have an impact on the pedagogy of those who teach the learners who use libraries.

The blog makes the point that data can show correlation but not necessarily causation. However, correlation may be enough to prompt an intervention when a student seems to be struggling, or a redesign when a learning activity fails to inspire engagement.
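Out of curiosity, here is a minimal sketch of what acting on a correlation (without claiming causation) might look like in practice. The figures, the pearson helper and the at-risk threshold are all entirely made up for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-student figures: library logins per month vs. assignment score.
library_logins = [2, 15, 7, 30, 1, 12, 22, 5]
assignment_scores = [48, 72, 60, 85, 40, 68, 80, 55]

r = pearson(library_logins, assignment_scores)
print(f"correlation between library use and scores: r = {r:.2f}")

# A strong r can justify a trigger for intervention without proving causation.
AT_RISK_LOGINS = 5  # illustrative threshold, not an empirically derived one
at_risk = [i for i, n in enumerate(library_logins) if n <= AT_RISK_LOGINS]
print("students flagged for a supportive nudge:", at_risk)
```

The point being that the flag is only a prompt for a human (a tutor, a librarian) to take a closer look - the correlation justifies the nudge, not a causal claim.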
The final article, by Lockyer et al., describes the difference between checkpoint and process analytics. I like these distinctions. There are echoes of summative and formative assessment within them, and I feel confident I can grasp their meaning! Within my OU journey the institution can easily assess me using checkpoint analytics: they can see details of my socio-demographic status, they know when, where and for how long I log into the VLE, they know how often I post in the forums (and in my blog), they know what I search for in the library and they know my assignment scores. What they don't know (because the data cannot be automatically mined) is the quality of my forum and blog posts, the level at which I engage with activities and assignments, or how many of the library resources I click on are actually read in any meaningful sense. My tutor may be able to make a valid guess at these factors. The area in which process analytics could generate data would be evidence of inter-student collaboration and communication, but as our group work (and study-buddy friendships) operates outside of the VLE, there is no way for the OU to monitor it. (If they did, there would be privacy concerns as well.)
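To test my own understanding of the checkpoint idea, I sketched how trivially this kind of data can be mined from an event log. The log format and events are hypothetical, but the limitation noted at the end is the real point:

```python
from collections import Counter

# Hypothetical VLE event log: (timestamp, student_id, event_type).
events = [
    ("2024-03-01T09:15", "s001", "login"),
    ("2024-03-01T09:40", "s001", "forum_post"),
    ("2024-03-02T20:05", "s001", "library_click"),
    ("2024-03-03T08:00", "s002", "login"),
    ("2024-03-03T08:10", "s002", "library_click"),
]

# Checkpoint analytics: counting that an event happened is easy to mine.
checkpoints = Counter((student, kind) for _, student, kind in events)
for (student, kind), n in sorted(checkpoints.items()):
    print(f"{student}: {kind} x{n}")

# What the log cannot tell us is quality - whether the forum post was
# thoughtful, or whether the clicked library resource was actually read
# in any meaningful sense. That would need process analytics, and data
# from collaboration which (in my case) happens outside the VLE.
```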