Christopher Douce

Generative AI Professional Development


On 11 June 2025 I attended a continuing professional development event concerning Generative AI that was intended for tutors. The ALSPD team has set up an online conference site to complement the session. What follows is a set of notes I made during the session, along with some accompanying thoughts and reflections.

Keynote Address: Reclaiming the Pace of Digital Productivity in the Age of AI

The opening keynote was by Maren Deepwell. The abstract of her talk emphasised the pace of digital content production, and mentioned the need for uncovering our own ‘critical cadence’. A key phrase I picked up on was, of course, digital literacies. This is now morphing into a related idea of AI literacies where we become more aware of the ethical, cultural and social implications of AI tool use.

I made a note of a couple of links from OpenED Culture projects.

Moving on from AI literacy, a further theme that was highlighted was digital wellbeing, a term I had not heard before, but one that can mean different things to different people. It can relate to the use of technology for healthcare and physical activity, or it could be linked to the notion of a ‘digital detox’, which is a phrase that I struggle with. We were introduced to a resource called the Data Detox Kit, which contains a number of guides.

Another curious term was ‘digital rewilding’, which I had never heard before. There was a reference to a site called HackThisCourse. I spent a few minutes looking at the page and was puzzled; I couldn’t immediately grasp what it was all about, but it does appear to relate back to the theme of digital pedagogy and creative uses of digital technology. This point, of course, links to the emergence of GenAI.

An important role of a keynote is to challenge and to guide. An interesting question is: what is next? We were directed to Helen Beetham’s Substack, which features articles and podcasts about AI.

A key question asked: what do digital capabilities for the AI age look like? During the presentation, I noted that GenAI can be useful if you’re in a position to critically evaluate what it produces.

Session 1 - Short Talks

Teaching and assessing students' use of GenAI

Jonquil Lowe, Senior Lecturer in Economics, from the Faculty of Arts and Social Sciences (FASS) asks the question: “what exactly should we be teaching and assessing about GenAI?” I noted three important suggestions which relate to teaching: include acceptable and ethical use, cover generic GenAI skills, and cover discipline-specific skills. I noted the suggestion that perhaps every module might need a GenAI aspect, which would change and develop as students move to higher levels of study. I also noted an interesting point that GenAI may offer useful guidance for students for whom English is not their first language.

Within the chat, a link to an OpenLearn course was shared: Getting started with generative artificial intelligence (AI) 

Online academic misconduct 2025: real location, face to face assessment the only effective response

Next up was David Pell, Associate Lecturer from FASS. David was refreshingly direct in his opinions. In comparison to collusion, plagiarism and essay mills, GenAI is something different; it is ‘huge’. It was described as a useful tool for potential cheaters. David shared a direct opinion, and one that I agree with: only real-life, proctored assessment provides a guarantee against academic misconduct.

Some research was shared. The very clear point is that GenAI can produce text that is not reliably detectable by educators.

Session 2: The AI Hydra in Assessment - a Nine Point Response to the Nine-headed Beast

The next session, by Mark Campbell, Associate Lecturer, STEM, presents a model of potential responses to GenAI. These are: progression, policing, procedures, pilots, principles and strategy, policies, plans, programmes and modules, and practices.

In addition to these useful responses, I also made a note of the following resources:

  1. OU Generative AI Staff Development
  2. JISC Innovation resources: Embrace artificial intelligence (AI) with confidence
  3. AI maturity toolkit for tertiary education
  4. OU being digital resources (library services)

Session 3 – Short Talks

Transformative Teaching with AI - re-purposing the meaning of student-centred learning

Denise Dear, Associate Lecturer, STEM, moves from the topic of assessment to the topic of teaching, and asks “how lecturers can use AI to enhance student topic knowledge retention, increase student engagement, improve student confidence, reduce student stress levels and assist students as a revision tool”. I haven’t used any of these tools as a part of my teaching, so I was curious about how they might be applied.

GenAI was presented as a potential study buddy. It has the potential to provide summaries of passages of text, perhaps also to provide suggestions about essay structures (which, as an arts student, terrifies me), and to generate interactive quizzes. In other words, there is potential for it to provide support that is tailored to the needs of individual students.
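To make this concrete, here is a made-up illustration of my own rather than one from the session: a student might paste a passage from the module materials and ask, ‘Summarise this in five bullet points, then write three multiple-choice questions to test my understanding, giving the answers at the end.’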

During the session tutors were asked: how do you use it? Answers included case scenarios, sample computer code, scenario simulations, and generating personas, but a word of caution was highlighted: it gets maths wrong. (LLMs also have no idea how to play chess; they know about language, but they cannot reason.)

The discussion of teaching led us back to assessment. How do we assess learning? In an activity, some answers were: viva voce assessments, feedback from small group work, asking students to make an audio recording.

Generative AI and Good Academic English

The final session of the day was by Claire Denton, Associate Lecturer, FASS. Use of GenAI can suggest a lack of confidence with academic English. Telltale signs may be the use of a generic tone, no references to module materials, and no supporting evidence. Alternatively, students might begin in their own voice, which then switches as text from AI is woven into their answer. A question that tutors face is: how do we provide feedback to students when this happens?

Anyone from any faculty can access the various subject sites. There is something called the Arts and Humanities Writing Centre which contains some useful writing resources. The site also provides a link to good academic English, and a phrase bank. (There are, of course, other study skills resources available. I’ve shared a summary of some of them through this article Study Skills Resources: what is available?)

Claire shared some great tips that could be passed on to students, including: if you have spent some hours looking at a piece of writing, stop. Take your time to read it aloud. You will then pick up whether you need additional punctuation, have used too many of the same words, or have repeated a point you made earlier. The key point is, of course, that if tutors spot that GenAI might have been used, there may be opportunities to provide additional help and guidance.

Reflections

The tone of this event implies that GenAI is going to be profoundly transformative. With this in mind, I remember the tone (and the enthusiasm) that accompanied the development of MOOCs and Open Educational Resources, and the view that they both had the potential to fundamentally change the nature and character of education. I've not heard MOOCs mentioned for a while. A lot of froth has been created in the wake of the emergence of GenAI.

Some of the important themes I picked up on from this session were the importance of policies, the challenges to academic conduct and integrity, and the potential opportunities that targeted GenAI may present. It was interesting to hear GenAI being pitched as a ‘study buddy’. This is an interesting use case, but I’m also mildly worried by the fact that these free tools remember every single interaction we have ever had with them.

I’m a GenAI sceptic. In my time as a computer scientist, I have seen AI bubbles come and go. In the 1970s, researchers claimed they had solved all the key problems, and that it was just a matter of scaling up until we got our own personal robotic butlers. When it comes to GenAI I do feel that we’re approaching the Peak of Inflated Expectations and it won’t be long before we crash into the Trough of Disillusionment (see: Gartner hype cycle, Wikipedia). If we’re not at the GenAI hype peak already, we probably will be in 18 months’ time. (I’ll be very happy to eat these words if proved wrong.)

I remember a computer book from the mid 1980s. It contained a program that you typed into your home computer so that it would ‘talk back to you’. It was written in a programming language called BASIC and was only three or four pages long. It recycled your words; it was a simpler version of a 1967 computer program called ELIZA. I once described GenAI as ‘ELIZA on steroids’. GenAI is seductive, since LLMs uncannily reflect back to us echoes of our own humanity. Looking at the bigger picture, philosophers still can’t really define what intelligence is. I don’t know who said it, but I always return to the phrase that ‘AI is really clever people getting really dumb machines to do things that look clever’.
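For the curious, here is a minimal sketch, in Python rather than BASIC, of the kind of trick that program performed. This is my own reconstruction, not the original listing: it recognises a couple of hard-coded sentence patterns and reflects the speaker’s words back at them.

    import re

    # Swap first- and second-person words so the echo reads naturally.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my", "me": "you"}

    def reflect(text):
        return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

    def respond(sentence):
        # Two hard-coded patterns; the real ELIZA worked from a script of many.
        match = re.match(r"i am (.*)", sentence, re.IGNORECASE)
        if match:
            return f"How long have you been {reflect(match.group(1))}?"
        match = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        return "Tell me more."

    print(respond("I am worried about my assignment"))
    # How long have you been worried about your assignment?

There is no understanding anywhere in this; it is just pattern matching and substitution. The uncanny effect comes entirely from seeing our own words returned to us.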

All this takes me to a mild (and yet important) constructive criticism of the event: it was significantly light on ethics. We need to slow down and ask the question: who runs and owns these GenAI tools? What are the consequences for us, and for society, when these are used? To what extent does an LLM create a model of its users? How are these models shared? What terms and conditions have we signed up to? There are, of course, the slightly important issues of transparency and copyright. It is likely that these words are, in some way or other, hoovered up by an LLM engine.

Out of curiosity, I asked Copilot the following question: “Who is Chris Douce and what does he think about artificial intelligence?” Here’s the answer: “Chris Douce is a Senior Lecturer in the School of Computing and Communications at the Open University. He has a background in software engineering and educational technology, with a focus on distance learning and accessibility. Regarding his views on artificial intelligence (AI), Chris Douce has expressed interest in how AI can be integrated into educational technologies to enhance learning experiences. He has discussed the potential of AI to provide personalized learning and support for students, particularly in distance learning environments. However, he also emphasizes the importance of ethical considerations and the need to address challenges related to accessibility and inclusivity”. That’s a pretty good summary, but I think it is, perhaps, an understatement.

One of the useful aspects of this session was the sharing of links to many related resources and references. There is always homework to do. Now, that’s something that GenAI can’t directly help me with. It’s down to me to do the work. Anything else would be cheating.

Addendum

I shared this post with a colleague, who recommended two interesting resources, a book and a podcast:

Book: Bender, E. M. & Hanna, A. (2025) The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Bodley Head.

Podcast: Mystery AI Hype Theater 3000 by Bender and Hanna (Distributed AI Research Institute)

Acknowledgements

Many thanks to all the presenters, and to the team who facilitated the event. Thanks are also extended to David Hales.

Christopher Douce

Generative AI and the future of the OU


On 15 June 2023 I attended a computing seminar about generative AI, presented by Michel Wermelinger.

In some ways the title of his seminar is quite provocative. I did feel that his presentation relates to the exploration of a very specific theme, namely, how generative AI can play a role in the future of programming education; a topic which is, of course, being explored by academics and students within the school.

What follows is a brief summary of Michel's talk. As well as sharing a number of really interesting points and accompanying resources, Michel did a lot of screensharing, where he demonstrated what I could only describe as witchcraft.

Generative AI tools

Michel showed us Copilot, which draws on code submitted through GitHub. Copilot is said to use something called OpenAI Codex. The witchcraft bit I mentioned was this: Michel provided a couple of comments in a development environment, which were parsed by Copilot, which then generated readable and understandable Python code. There was no messing about with internet searches or looking through instruction books to figure out how to do something. Copilot offered immediate and direct suggestions.
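To give a flavour of the exchange Michel demonstrated, here is a reconstruction of my own, not his actual example. The developer types only the comment and the function signature; the assistant proposes a plausible body.

    import re
    from collections import Counter

    # Return a dictionary mapping each word to the number of times it
    # occurs in the text, ignoring case and punctuation.
    def count_word_frequencies(text):
        words = re.findall(r"[a-z']+", text.lower())
        return dict(Counter(words))

    print(count_word_frequencies("The cat sat on the mat. The mat sat."))
    # {'the': 3, 'cat': 1, 'sat': 2, 'on': 1, 'mat': 2}

The body is not hard to write; the point is that it appears instantly, in context, without a search engine or a manual.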

Copilot isn’t, of course, the only tool that is out there. A taxonomy of different types of AI tools is now emerging. There are tools where you pay for access. There are tools that are connected with integrated development environments (IDEs) that are available on the cloud, and there are tools where the AI becomes a pair-programmer chatbot. There are other tools, such as learning environments that offer both documentation and the automated assessment of programming assignments.

The big tech companies are getting involved. Amazon has something called CodeWhisperer. Google DeepMind has something called AlphaCode, which has participated in competitive programming competitions, leading to an article in Nature which asks whether ChatGPT and AlphaCode are going to replace programmers. There’s also something called StarCoder, which has likewise been trained on GitHub sources.

AI can, of course, be used in other ways. It could be used to offer help and support to students who have additional requirements. AI could be used to transcribe lectures, and to help students navigate across and through learning materials. The potential of AI as a useful learning companion has been a long-held dream, and one that I can certainly remember from my undergraduate days, which were in the last century.

Implications

An important reflection is that Copilot and all these other AI tools are here to stay. It wouldn’t be appropriate to try to ban them from the classroom since they are already being used, and they already have a purpose. Michel also mentioned that there is already a textbook which draws on Generative AI: Learn AI-assisted Python programming.

Irrespective of what these tools are and what they do, everyone still needs to know the fundamentals. Copilot does not replace the need to understand language syntax and semantics and know the principles of algorithmic thinking. Developers and engineers need to know what is meant by thorough testing, how to debug software, and to write helpful documentation. They need to know how to set breakpoints, use command prompts, and also know things about version and configuration management.
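As a small sketch of my own of why the fundamentals persist: whoever, or whatever, wrote a function, a human still has to decide what correct behaviour is and pin it down as tests, edge cases included.

    import unittest

    def median(values):
        ordered = sorted(values)
        n = len(ordered)
        middle = n // 2
        if n % 2:
            return ordered[middle]
        return (ordered[middle - 1] + ordered[middle]) / 2

    class TestMedian(unittest.TestCase):
        def test_odd_length(self):
            self.assertEqual(median([3, 1, 2]), 2)

        def test_even_length(self):
            self.assertEqual(median([4, 1, 3, 2]), 2.5)

        def test_empty_input(self):
            # A generated implementation may never consider this case;
            # it is the tester who has to ask the question.
            with self.assertRaises(IndexError):
                median([])

    if __name__ == "__main__":
        unittest.main()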

An important question to ask is: how do we assess understanding? One approach is an increasing use of technical interviews, which can be used to assess understanding of technical concepts. This won’t mean an academic viva, but instead might mean some practical discussions which both help to assess students’ knowledge and help them to prepare for the inevitable technical interviews which take place in industry.

New AI tools may have a real impact on not only what is taught but how teaching is carried out, particularly when it comes to higher levels of study. This might mean the reformulation of assignments, perhaps developing less explicit requirements to expose learners to the challenge of working with ambiguity, which students must then intelligently resolve.

Since these tools have the potential to give programmers a performance boost, assignments may become bigger and more substantial. Irrespective of how assignments might change, there is an imperative that students must learn how to critically assess and evaluate whatever code these tools might suggest. It isn’t enough to accept what is suggested; it is important to ask the question: “does the code that I see here make sense, or does it pose any risks, given what I’m trying to do?”

A term that is new to me is: prompt engineering. This is the need to communicate in a succinct and precise way with an AI to get results that are practical and useful within a particular context. To get useful results, you need to be clear about what you want.
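A made-up illustration of my own: ‘write some Python to process my data’ gives a model almost nothing to work with, whereas ‘write a Python function that takes a list of (name, score) pairs and returns the names of anyone scoring below 40, preserving the original order’ states the input, the output and a constraint. The second prompt is far more likely to produce something usable first time.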

What is the university doing?

To respond to the emergence of these tools the university has set up something called the Generative AI task and finish group. It will be producing some interim guidance for students and will be offering some guidance to staff, which will include the necessity to be clear about the ethical and transparent use of AI. The guidance is also said to highlight capabilities and limitations. There will also be guidance for award boards and module results panels. The point here is that Generative AI is being looked at.

Michel suggested the need for a working group within the school; a group to look at what papers are coming out, what the new tools are, and what is happening across the sector at other institutions. A thought was that it might be useful to widen it out to other schools, such as the School of Physical Sciences, and any others which make use of any aspect of coding and software development.

Reflections

Michel’s presentation was a very quick overview of a set of tools that I knew very little about. It is now pretty clear that I need to know a lot more about them, since there are direct implications for the practice of teaching and learning, implications for the school, and implications for the university. There is a fundamental imperative that must be emphasised: students must be helped to understand that a critical perspective about the use of AI is a necessity.

Although I described Michel’s demonstration of Copilot as witchcraft, all he did was demonstrate a new technology.

When I was a postgraduate student, a lecturer once told me that one of the most fundamental and important concepts in computing was abstraction. When developers are faced with a problem that becomes difficult, they can be said to ‘abstract up’ a level, to get themselves out of trouble, and towards another way of solving a problem. In some senses, AI tools represent a higher level of abstraction; it is another way of viewing things. This doesn’t, of course, solve the problem that code still needs to be written.
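A minimal sketch of my own of what ‘abstracting up’ can look like in code: rather than writing a fresh loop for every new counting question, a developer writes one higher-order function and then asks each question as a one-liner.

    # Lower level: a hand-written loop for each new question.
    def count_long_books(books):
        count = 0
        for book in books:
            if book["pages"] > 300:
                count += 1
        return count

    # Higher level: one abstraction answers a whole family of questions.
    def count_where(items, predicate):
        return sum(1 for item in items if predicate(item))

    books = [{"pages": 120}, {"pages": 450}, {"pages": 320}]
    print(count_long_books(books))                         # 2
    print(count_where(books, lambda b: b["pages"] > 300))  # 2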

I have also heard that one of the fundamental characteristics of a good software developer or engineer is laziness. When a programmer finds a problem that requires solving time and time again, they invariably develop tools to do their work for them. In other words, why write more code than you need to, when you can develop a tool that solves the problem for you?

My view is that abstraction and laziness are connected principles.

Generative AI tools have the potential to make programmers lazy, but programmers must gain an appreciation of how and why things work. They also need to know how to make decisions about what bits of code to use, and when.

It takes a lot of effort to become someone who is effective at being lazy.
