Christopher Douce

Generative AI Professional Development

Edited by Christopher Douce, Wednesday, 18 June 2025, 10:48

On 11 June 2025 I attended a continuing professional development event about Generative AI that was intended for tutors. The ALSPD team set up an online conference site to complement the session. What follows is a set of notes I made during the session, along with some accompanying thoughts and reflections.

Keynote Address: Reclaiming the Pace of Digital Productivity in the Age of AI

The opening keynote was by Maren Deepwell. The abstract of her talk emphasised the pace of digital content production, and mentioned the need for uncovering our own ‘critical cadence’. A key phrase I picked up on was, of course, digital literacies. This is now morphing into the related idea of AI literacies, where we become more aware of the ethical, cultural and social implications of AI tool use.

I made a note of a couple of links from OpenED Culture projects:

Moving on from AI literacy, a further theme that was highlighted was digital wellbeing, a term I had not heard before, but one that can mean different things to different people. It can relate to the use of technology for healthcare and physical activity, or it can be linked to the notion of a ‘digital detox’, a phrase that I struggle with. We were introduced to a resource called the Data Detox Kit, which contains a number of guides.

Another curious term was ‘digital rewilding’, which I had never heard before. There was a reference to a site called HackThisCourse. I spent a few minutes looking at this page, and was puzzled; I couldn’t immediately grasp what it was all about – but it does look as if it relates back to the theme of digital pedagogy, and creative uses of digital technology. This point, of course, links to the emergence of GenAI.

An important role of a keynote is to challenge and to guide. An interesting question is: what is next? We were directed to Helen Beetham’s substack, which features articles and podcasts about AI.

A key question asked: what do digital capabilities for the AI age look like? During the presentation, I noted that GenAI can be useful if you’re in a position to critically evaluate what it produces.

Session 1 - Short Talks

Teaching and assessing students' use of GenAI

Jonquil Lowe, Senior Lecturer in Economics from the Faculty of Arts and Social Sciences (FASS), asks the question: “what exactly should we be teaching and assessing about GenAI?” I noted three important suggestions relating to teaching: include acceptable and ethical use, cover generic GenAI skills, and cover discipline-specific skills. I noted the suggestion that perhaps every module might need a GenAI aspect, which would change and develop as students moved to higher levels of study. I also noted an interesting point that GenAI may offer useful guidance for students for whom English is not their first language.

Within the chat, a link to an OpenLearn course was shared: Getting started with generative artificial intelligence (AI) 

Online academic misconduct 2025: real location, face to face assessment the only effective response

Next up was David Pell, Associate Lecturer from FASS. David was refreshingly direct in his opinions. In comparison to collusion, plagiarism and essay mills, GenAI is something different; it is ‘huge’, and a useful tool for potential cheaters. David shared a direct opinion, one that I agree with: only real-life, proctored assessment provides a guarantee against academic misconduct.

Some research was shared:

The very clear point is that GenAI can produce text that is not detectable by educators.

Session 2: The AI Hydra in Assessment - a Nine Point Response to the Nine-headed Beast

The next session, by Mark Campbell, Associate Lecturer, STEM, presents a model of potential responses to GenAI. These are: progression, policing, procedures, pilots, principles and strategy, policies, plans, programmes and modules, and practices.

In addition to these useful responses, I also made a note of the following resources:

  1. OU Generative AI Staff Development
  2. JISC Innovation resources: Embrace artificial intelligence (AI) with confidence
  3. AI maturity toolkit for tertiary education
  4. OU being digital resources (library services)

Session 3 – Short Talks

Transformative Teaching with AI - re-purposing the meaning of student-centred learning

Denise Dear, Associate Lecturer, STEM, moves from the topic of assessment to the topic of teaching, and asks “how lecturers can use AI to enhance student topic knowledge retention, increase student engagement, improve student confidence, reduce student stress levels and assist students as a revision tool”. I haven’t used any of these tools as a part of my teaching, so I was curious about how they might be applied.

GenAI was presented as a potential study buddy. It has the potential to provide summaries of passages of text, maybe also to provide suggestions about essay structures (which, as an arts student, terrifies me), and to generate interactive quizzes. In other words, it has the potential to provide support that is tailored to the needs of individual students.

During the session tutors were asked: how do you use it? Answers included case scenarios, sample computer code, scenario simulations, and generating personas – but a word of caution was highlighted: it gets maths wrong. (LLMs also have no idea how to play chess; they know about language, but they cannot reason.)

The discussion of teaching led us back to assessment. How do we assess learning? In an activity, some answers were: viva voce assessments, feedback from small group work, asking students to make an audio recording.

Generative AI and Good Academic English

The final session of the day was by Claire Denton, Associate Lecturer, FASS. Use of GenAI can suggest a lack of confidence with academic English. Telltale signs may be the use of a generic tone, no references to module materials, and no supporting evidence. Alternatively, students might begin in their own voice, which then switches where text from AI is woven into their answer. A question that tutors face is: how do tutors provide feedback to students when this happens?

Anyone from any faculty can access the various subject sites. There is something called the Arts and Humanities Writing Centre which contains some useful writing resources. The site also provides a link to good academic English, and a phrase bank. (There are, of course, other study skills resources available. I’ve shared a summary of some of them through this article Study Skills Resources: what is available?)

Claire shared some great tips that could be passed on to students, including: if you have spent some hours looking at a piece of writing, stop. Take your time to read it aloud. You will then pick up whether you need additional punctuation, have used too many of the same words, or have repeated a point you made earlier. The key point is, of course, that if tutors spot that GenAI might have been used, there may lie opportunities to provide additional help and guidance.

Reflections

The tone of this event implies that GenAI is going to be profoundly transformative. With this in mind, I remember the tone (and the enthusiasm) that accompanied the development of MOOCs and Open Educational Resources, and the view that they both had the potential to fundamentally change the nature and character of education. I've not heard MOOCs mentioned for a while. A lot of froth has been created in the wake of the emergence of GenAI.

Some of the important themes I picked up on from this session were the importance of policies, the challenges to academic conduct and integrity, and the potential opportunities that targeted GenAI may present. It was interesting to hear GenAI being pitched as a ‘study buddy’. This is an interesting use case, but I’m also mildly worried by the fact that these free tools are remembering every single interaction we have ever had with them.

I’m a GenAI sceptic. In my time as a computer scientist, I have seen AI bubbles come and go. In the 1970s, researchers claimed they had solved all the key problems, and that it was just a matter of scaling up until we each had our own personal robotic butler. When it comes to GenAI, I do feel that we’re approaching the Peak of Inflated Expectations and it won’t be long before we crash into the Trough of Disillusionment (see: Gartner hype cycle, Wikipedia). If we’re not at the GenAI hype peak already, we probably will be in 18 months’ time. (I'll be very happy to eat these words if proved wrong.)

I remember a computer book from the mid 1980s. It contained a program that you could type into your home computer so that it would ‘talk back to you’. It was written in a programming language called BASIC and was only three or four pages long. It recycled your words; it was a simpler version of a 1967 computer program called ELIZA. I once described GenAI as ‘ELIZA on steroids’. GenAI is seductive, since LLMs uncannily reflect back to us echoes of our own humanity. Looking at the bigger picture, philosophers still can’t really define what intelligence is. I don’t know who said it, but I always return to the phrase that ‘AI is really clever people getting really dumb machines to do things that look clever’.
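
To give a flavour of how little machinery that kind of word-recycling program needed, here is a minimal ELIZA-style sketch in Python. The keyword rules and responses are my own invented examples, not the original DOCTOR script, but the technique – match a keyword pattern, capture the user’s own words, and echo them back inside a template – is the same.

```python
import re

# A few illustrative keyword rules, loosely in the spirit of ELIZA's
# DOCTOR script; these patterns and responses are invented examples.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Recycle the user's own words using simple pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Strip trailing punctuation from the captured fragment
            # before weaving it into the canned response.
            fragment = match.group(1).rstrip(".!?")
            return template.format(fragment)
    return DEFAULT

if __name__ == "__main__":
    print(respond("I am worried about marking essays"))
    # -> Why do you say you are worried about marking essays?
```

There is no understanding here at all: the program simply reflects the user’s words back, which is precisely why the effect felt uncanny then, and why far more sophisticated reflection feels uncanny now.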

All this takes me to a mild (and yet important) constructive criticism of the event: it was significantly light on ethics. We need to slow down and ask the question: who runs and owns these GenAI tools? What are the consequences for us, and for society, when these are used? To what extent does an LLM create a model of its users? How are these models shared? What terms and conditions have we signed up to? There are, of course, the slightly important issues of transparency and copyright. It is likely that these words are, in some way or other, hoovered up by an LLM engine.

Out of curiosity, I asked Copilot the following question: “Who is Chris Douce and what does he think about artificial intelligence?” Here’s the answer: “Chris Douce is a Senior Lecturer in the School of Computing and Communications at the Open University. He has a background in software engineering and educational technology, with a focus on distance learning and accessibility. Regarding his views on artificial intelligence (AI), Chris Douce has expressed interest in how AI can be integrated into educational technologies to enhance learning experiences. He has discussed the potential of AI to provide personalized learning and support for students, particularly in distance learning environments. However, he also emphasizes the importance of ethical considerations and the need to address challenges related to accessibility and inclusivity”. That’s a pretty good summary, but I think it is, perhaps, an understatement.

One of the useful aspects of this session was the sharing of links to many related resources and references. There is always homework to do. Now, that’s something that GenAI can’t directly help me with. It’s down to me to do the work. Anything else would be cheating.

Addendum

I shared this post with a colleague, who recommended two interesting resources, a book and a podcast:

Book: Bender, E. M. & Hanna, A. (2025) The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Bodley Head.

Podcast: Mystery AI Hype Theater 3000 by Bender and Hanna (Distributed AI Research Institute)

Acknowledgements

Many thanks to all the presenters, and to the team who facilitated the event. Thanks are also extended to David Hales.
