
Christopher Douce

Generative AI and assessment in Computing and Communications

Edited by Christopher Douce, Wednesday, 27 Nov 2024, 14:06

On 13 November 2024 I attended a workshop about Generative AI and assessment for OU computing modules, organised by colleagues Michel Wermelinger (academic co-lead for GenAI in LTA), Mark Slaymaker (C&C Director of Teaching) and Anton Dil (Assessment Lead of the C&C Board of Studies). What follows is a set of notes and reflections relating to the event, along with some useful links and articles that I need to find the time to follow up on. This summary is, of course, very incomplete; I give only a very broad sketch of some of the discussions that took place, since there is such a lot of figuring out to do. Also, for brevity, Generative AI is abbreviated to GenAI.

Introduction and some resources

The workshop opened with a useful introductory session, where we were directed to various resources that we could follow up after the event. I’ll pick out a few:

To get a handle on what public research has been published by colleagues within the OU, it is worth reviewing ORO papers about Generative AI.

The following notable book was highlighted:

There were also a few papers about how GenAI could be used with the teaching of programming:

To conclude this very brief summary, there is also an AL Development Resource which has been produced by Kevin Waugh, available under a Creative Commons Licence.

Short presentations

There were ten short presentations which covered a range of issues, from how GenAI might be used within assessments to facilitate meaningful learning, through to the threats it may pose to academic integrity. What follows are some points that I found particularly interesting.

One of the school’s modules, TM352 Web, Mobile and Cloud Technologies, has a new block that is dedicated to tools that can be used with software development. Over the last couple of years, it has changed quite a bit in terms of the technologies it introduces to learners. Since software development practices are clearly evolving, I need to find the time to see what has found its way into that module. It will, of course, have an influence on what any rewrite of TM354 Software Engineering might contain.

I learnt about a new software project called Llama. From what I understand, Llama is a family of open source large language models (LLMs) that can be installed locally on a desktop computer, from where they can be fed your own documents and resources. The thing is: LLMs can make things up and get things wrong. Another challenge is that LLMs need a lot of computing resources to do anything useful. If students are ever required to play with their own LLMs as part of a series of learning activities, this raises the subject of computing equity: some students will have access to powerful laptops, whereas others might not. Maybe there will be a point when the school may have to deploy AI tools within the OpenSTEM Labs?
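As an aside, local model runners (Ollama is one example) typically expose a locally installed model through a small HTTP API. The sketch below is a hypothetical illustration of how a request that grounds a model in your own documents might be assembled; the model name, question and documents are all invented, and the code only builds the JSON payload rather than assuming a server is running.

```python
import json

# Hypothetical sketch: assembling a request for a locally hosted LLM,
# grounding it in a handful of local documents. The field names follow
# an Ollama-style API, but are assumptions for illustration only.

def build_request(model, question, documents):
    """Build a JSON payload that asks the model to answer from the documents."""
    context = "\n\n".join(documents)
    prompt = (
        "Answer the question using only the documents below.\n\n"
        f"Documents:\n{context}\n\n"
        f"Question: {question}"
    )
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

payload = build_request(
    "llama3",  # assumed model name
    "What does the module cover?",
    ["The module covers web, mobile and cloud technologies."],
)
```

In a real setup the payload would be POSTed to the runner’s local endpoint; the equity point above still applies, since even modest models need a capable machine to run at a usable speed.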

Whether you like them or not, a point was made that these tools may begin to find their way closer to the user. Tony Hirst made the point that when models start to operate on your own data (or device), this may open up the possibility of semantically searching sets of documents. Similarly, digital assistants may begin to offer more aggressive help and suggestions about how to write bits of documents. Will the new generation of AI tools and digital assistants be more annoying than the memorable Microsoft Clippy?
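To make the semantic search idea a little more concrete, here is a toy sketch. Real systems use learned embeddings produced by a language model; plain word-count vectors and cosine similarity stand in here, and the documents and query are invented, purely to show the shape of the approach.

```python
import math
import re
from collections import Counter

# Toy semantic search over a small set of local documents: each text
# becomes a word-count vector, and cosine similarity ranks documents
# against the query. Learned embeddings would replace vectorise() in
# a real system.

def vectorise(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, documents):
    """Return the document most similar to the query."""
    q = vectorise(query)
    return max(documents, key=lambda d: cosine(q, vectorise(d)))

docs = [
    "Notes on assessment design and academic integrity.",
    "A tutorial on cloud deployment and containers.",
]
best = search("assessment design", docs)  # → the first document
```

The same ranking idea underpins the document search scenario mentioned above, just with far richer representations of meaning.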

GenAI appears to be quite good at helping to solve well-defined and well-known programming problems. This leads us to consider a number of related issues and tensions. Knowing how to work with programming languages is currently an important graduate skill; programming also develops problem decomposition and algorithmic thinking skills. An interesting reflection is that GenAI may well help certain groups of students more than others.

Perhaps the nature of programming is changing as development environments draw upon the coding solutions of others. Put another way, in the same way that almost nobody (except for low-level software engineers) knows assembly language these days, perhaps the task of software development is moving to a higher level of abstraction. Perhaps developing software will mean less coding, and more knowing how to combine bits of code together. This said, it is still going to be important to understand what those raw materials look like.

An interesting research question was highlighted by Mike Richards: can tutors distinguish between what has been generated by AI and what has been written by students? For more information about this research, do refer to the article Bob or Bot: Exploring ChatGPT’s answers to University Computer Science Assessment (ORO). A follow-on question is, of course: what do we do about this?

One possible answer to this question may lie in a presentation which shared practice about conducting oral assessments, something that is already done on the postgraduate M812 digital forensics module. Whilst oral assessments can be a useful way to assess whether learning has taken place, it is important to consider the necessity of reasonable adjustments, to take account of students who may not be able to complete an oral assessment due to communication difficulties or mental health difficulties.

The next presentation, given by Zoe Tompkins, helped us to consider another approach to assessment: asking students to create study logs (which evidence their engagement), accompanied by pieces of reflective writing. On this point, I’m reminded of my current experience as an A334 English literature student, where I’m required to make regular forum postings to demonstrate regular independent study (which I feel that I’m a bit behind with). In addition to this, I also have an A334 reflective blog. A further reflection is that undergraduate apprentices have to evidence a lot of their learning by uploading digital evidence into an ePortfolio tool, which is then confirmed by a practice tutor. Regular conversations strengthen academic integrity.

This leads onto an important question which relates to the next presentation: what can be done to ensure that written assessments are ‘GenAI proof’? Is this something that can be built in? A metaphor was shared: we’re trying to ‘beat the machine’, whilst at the same time teaching everyone about the machine. One way to try to beat the machine is to use processes, and to refer to contexts that ‘the machine’ doesn’t know about. The context of questions is important. 

The final presentation was by one of our school’s academic conduct officers. Two interesting numbers were mentioned. There are 6 points that students need to bear in mind when considering GenAI. If I’ve understood this correctly, there are 19 different points of guidance available for module teams to help them to design effective assessments. There’s another point within all this, which is: tutors are likely to know whether a bit of text has been generated by an LLM.

Reflections

This event reminded me that I have quite an extensive TODO list: I need to familiarise myself with Visual Studio Code, have a good look at Copilot, get up to speed with GitHub for education, look at the TM352 materials (in addition to M813 and M814, which I have been meaning to do for quite a while), and review the recently released Software Engineering Body of Knowledge (SWEBOK 4.0). This is in addition to learning more about the architecture of LLMs, upskilling myself on the ethics, and figuring out more about the different dimensions of cloud computing. Computing has moved on since I was last a developer and software engineer. With my TM470 tutor hat on, we need to understand how and where LLMs might be useful, and more about the risks they pose to academic integrity.

At the time of writing, there is such a lot of talk about GenAI (and AI in general). I do wonder where we are in the Gartner hype cycle (Wikipedia). As I might have mentioned in other blogs, I’ve been around in computing for long enough to know that AI hype has happened before. I suspect we’re currently climbing up the ‘peak of inflated expectations’. With each AI hype cycle, we always learn new things. I’m of the school of thought that the current developments represent yet another evolutionary change, rather than one that offers revolutionary change.

Whilst I was studying A334, my tutor talked a little about GenAI in an introductory tutorial. In doing so, he shared something about expectations, in terms of what was expected in a good assessment submission. If I remember rightly, he mentioned the importance of writing that answered the question (a context that was specific, not general), demonstrated familiarity with the module materials (by quoting relevant sections of course texts), and used clear and unambiguous referencing. Since the module is all about literature, there is scope to say what we personally think a text might be about. These are all the kinds of things that LLMs might be able to do at some level, but not yet to a degree that is thoroughly convincing. To get something convincing, students need to spend time doing ‘prompt engineering’.

This leads us to a final reflection: do we spend a lot of time writing prompts and interrogating an LLM to try to get what we need, or would that time be spent more effectively writing what needed to be written in the first place? If the writing of assessments is all about learning, then does it matter how learning has taken place, as long as the learning has occurred? There is, of course, the important subject of good academic practice, which means becoming aware of what the cultural norms of academic debate and discourse are all about. To offer us a little more guidance, I understand that in the coming months there will be some resources about Generative AI available on OpenLearn.

Acknowledgments

Many thanks to the organisers and facilitators. Thanks to all presenters; there were a lot of them!

Addendum

Edited on 27 November 24, attributing Zoe Tompkins to one of the sessions. During the event, a session was given by Professor Karen Kear, who demonstrated how Generative AI can struggle with very specific tasks: creating useful image descriptions. Generative AI is general; it doesn't understand the context in which problems are applied.


Generative AI: AL Professional Development

Edited by Christopher Douce, Wednesday, 29 May 2024, 12:36

On 23 May 2024 I attended an AL development event (in my capacity as an OU tutor) that was all about Generative AI (abbreviated here to GenAI). This blog sits alongside a couple of other posts that I shared last year, which also relate to GenAI and what it means for education, distance learning, and educational practice.

What follows are some notes that I made during a couple of the sessions I attended, along with the points and themes I took away from them. I also share some critical perspectives. Since GenAI is a fast-moving subject, not just in terms of the technology, but in terms of policy and institutional responses, what is presented here is also likely to age quickly.

Opening keynote

The event opened with a keynote by Mychelle Pride which had the subtitle: Generative AI in Learning, Teaching and Assessment. I won’t summarise it at length. Instead, I’ll share some key points that I noted down.

One important point was that AI isn’t anything new. A couple of useful resources were shared, one from the popular press, How AI chatbots like ChatGPT or Bard work – visual explainer (The Guardian) and another from industry: The rise of generative AI: A timeline of breakthrough innovations (Qualcomm).

An interesting use case was shared through a YouTube video: Be My Eyes Accessibility with GPT-4. Although clearly choreographed, and without any indication of whether any of it was ‘live’, one immediately wonders whether this technology is solving the right problems. Maybe this scenario implies that visually impaired people should adapt to the sighted world, whereas perhaps a better solution might be for the world to adapt to people with visual impairments? I digress.

There are clear risks. One significant concern lies with the lack of transparency. Tools can be trained with data that contains biases; in computing there’s the notion of GiGO: garbage in, garbage out. There’s also the clear potential that GenAI tools may accept and then propagate misinformation. It is clear that “risks need to be considered, along with the potential opportunities”.

A point was shared from a colleague Michel Wermelinger who was quoted saying “academic conduct is a symptom, not the problem”, which directly takes us to the university’s academic conduct policies about plagiarism.

In this session I learnt a new term: “green light culture”. The point here was that there are a variety of positions that relate to GenAI: in HE there are policy decisions that range from ‘forbid’ to ‘go ahead’.

I made a note of a range of searching questions. One of them was: how might students use Generative AI? It might become a study assistant, facilitate language learning, or support creative projects. Another question was: how could pedagogies be augmented by AI? Also, is there a risk of over-dependence in how we use these tools? Could they prevent us from developing skills? How can we assess in a generative AI world? Some answers may be to use project-based assessment, collaborative assessment, complex case studies, and oral assessments.

A key point is that students will be using Generative AI in the future, which means that the university has a responsibility to educate students about it.

Towards the end of the keynote, there was some talk about all this being revolutionary (I’ll say more about this later). This led onto a closing provocative question: what differentiates you (the tutor) from Generative AI?

During the keynote, some interesting resources were shared:

Teaching and learning with AI across the curriculum

The aim of a session by Mirjam Hauck was to explore the connection between AI and pedagogy, and to also consider the issue of ethics.

Just like the previous presentation, there were some interesting resources that were shared. One of them was a talk: TED Talk: How AI could save (not destroy) education.

Another resource was a recent book, Practical Pedagogy: 40 New Ways to Teach and Learn by Mike Sharples which students and staff can access through the OU Library.

I had a quick look at it, just to see what these 40 new ways were. Taking a critical perspective, I realised that the vast majority of these approaches were already familiar to me, in one way or another. They are not necessarily ‘new’, but are instead presented in a new way, in a useful compendium. The text also shares a lot of informal web links, which immediately limits its longevity. It does highlight academic articles, but it doesn’t always cite them within a description of a pedagogy. My view is: do consider this text as something that shares a useful set of ideas, rather than something definitive.

During this session, there were some complementary reflections about how GenAI could be linked with pedagogy: it could be used to help with the generation of ideas (but to be mindful that it might be regenerating ideas and bits of text that may be subject to copyright), play a role within a Socratic dialogue, or act as a digital assistant for learning (which was sometimes called an AIDA – an AI digital assistant).

Power was mentioned in this session, with respect to the power that is exerted by the corporations that develop, run, and deploy AI tools. The point I had in my mind during this part of the session was: ‘do be mindful about who is running these products, why, and what they hope to get from them’.

A brief aside…

Whilst I was prepping this blog, I was sent a related email from Hello World magazine, which is written for computing educators. In that email, there was a podcast which had the title: What is the role of AI in your classroom? 

There was an interesting discussion about assessment, and asking the question of ‘how can this help with pedagogy?’ and ‘how can we adapt our own practices?’ A further question is: ‘is there a risk that we dumb down creativity?’

A scholarship question?

A few times this year tutors have been in touch with me to ask: ‘I’ve seen a student’s answer in a script that makes me think they may well have used Generative AI. What do I do?’ Copying TMA questions, or any other elements of university materials, into a Generative AI tool represents a breach of university policy, and can potentially be viewed as an academic conduct issue. The question is: what do tutors do about this? At the moment, and without any significant evidence, tutors must mark what they have been given.

An important scholarship question to ask is: how many tutors think they are being presented with assessments that may have been produced by Generative AI tools?

Reflections

There was a lot to take on board during this session. I need to find the time to sit down and work through the various resources that were shared, which is (in part) the reason for this blog.

When I was a computing undergraduate I went to a couple of short seminars about the development of the internet. When it came to the topic of the web browser, our lecturer said: “this is never going to catch on; who is going to spend time creating all these web pages and HTML tags?” Every day I make use of a web browser; it is, of course, an important bit of software that is embedded within my phone. This connects with an important point: it is notoriously difficult to predict the future, especially when it comes to how technologies are used. There are often unintended consequences, both good and bad.

Being a former student of AI (late in the last century), I’m aware that the fashions that surround AI are cyclical. With each cycle of hype, there are new technologies and tools. From an early (modern) cycle of AI, I remember a project called SHRDLU, which demonstrated an imaginary world that users could interact with using natural language. This led to claims that the key challenges had been solved, and that all that needed to be done was to scale everything up. Reality, of course, is a whole lot more complicated.

A really important point to bear in mind is that GenAI (in the general sense) cannot reason. You can’t play chess with it. There are, however, other tools within the AI toolset that can do reasoning. As a postgrad student, I had to write an expert system that helped to solve a problem: to figure out a path through a university campus.
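By way of contrast, the sketch below shows the kind of explicit, rule-following search that classical AI tools perform. It is not the original expert system, just a hypothetical campus represented as a graph and explored with breadth-first search.

```python
from collections import deque

# A made-up campus as an adjacency list. Breadth-first search finds the
# shortest route between two locations by following explicit rules,
# something a purely generative model does not do.

campus = {
    "library": ["quad", "cafe"],
    "quad": ["library", "labs"],
    "cafe": ["library", "labs"],
    "labs": ["quad", "cafe"],
}

def find_path(graph, start, goal):
    """Return the shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = find_path(campus, "library", "labs")  # → ['library', 'quad', 'labs']
```

Every step of the answer can be traced and justified against the rules, which is precisely the transparency that generative models lack.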

I’ve also been around for long enough to see various cycles of hype regarding learning technologies: I joined when e-learning objects were the fashion of the day, then there was the virtual learning environment, and then there was a craze that related to learning analytics. In some respects, the current generation of AI feels like a new (temporary) craze.

Embedding AI into educational tools isn’t anything new. I remember first hearing about the integration of neural networks in the early 2000s.  In 2009 I was employed on a project that intended to provide customised digital resources for learners who have different requirements and needs.

The bigger the models get, the more data they hoover up, and the greater the potential for these tools to generate nonsense. And herein lies a paradox: to make effective use of GenAI within education, you need education.

Perhaps there is a difference between generally available generative AI and generative AI that is aligned to particular contexts. This takes me to an important reflection: no GenAI tool or engine can ever know what your own context is. You might ask it some questions and get a sensible-sounding response, but it will not know why you’re asking a question, or what purpose your intended answer may serve. This is why the results produced by a GenAI tool might look terrible, or suspicious, if submitted as part of an assessment. Context is everything, and assessments relate to your personal understanding of a very particular learning context.

Although the notion of power and digital corporations was mentioned, there’s another type of power that wasn’t mentioned: electrical power. I don’t have figures to hand, but large language models require an inordinate amount of electrical energy to do what they do. Their use has real environmental consequences. It's easy to forget this.

Here is my view: it is important to be aware of what GenAI is all about, but it is also really important not to get carried away and caught up in what could be thought of as technological froth. It’s also important to always remember that technology can change faster than pedagogy. We need to apply effective pedagogy to teach about technology. 

In my eyes, GenAI, or AI in many of its other forms, isn’t a revolution that will change everything, nor an existential threat to humanity; it is an evolution of a set of existing technologies.

It’s important to keep everything in perspective.

Resources

A number of resources were highlighted in this session which are worth having a quick look at:

Acknowledgements

Many thanks to the presenters of this professional development event, and the team that put it together. Lots to look at, and lots to think about.


