
Generative AI: AL Professional Development

Edited by Christopher Douce, Wednesday, 29 May 2024, 12:36

On 23 May 2024 I attended an AL development event (in my capacity as an OU tutor) that was all about Generative AI (abbreviated here as GenAI). This post sits alongside a couple of other posts that I shared last year that also relate to GenAI and what it means for education, distance learning, and educational practice.

What follows are some notes that I made during a couple of the sessions I attended, along with the points and themes I took away from them. I also share some critical perspectives. Since GenAI is a fast-moving subject, not just in terms of the technology but also in terms of policy and institutional responses, what is presented here is likely to age quickly.

Opening keynote

The event opened with a keynote by Mychelle Pride which had the subtitle: Generative AI in Learning, Teaching and Assessment. I won’t summarise it at length. Instead, I’ll share some key points that I noted down.

One important point was that AI isn’t anything new. A couple of useful resources were shared, one from the popular press, How AI chatbots like ChatGPT or Bard work – visual explainer (The Guardian) and another from industry: The rise of generative AI: A timeline of breakthrough innovations (Qualcomm).

An interesting use case was shared through a YouTube video: Be My Eyes Accessibility with GPT-4. Although clearly choreographed, and without any indication of whether any of it was ‘live’, one immediately wonders whether this technology is solving the right problems. Maybe this scenario implicitly suggests that visually impaired people should adapt to the sighted world, whereas perhaps a better solution might be for the world to adapt to people with visual impairments? I digress.

There are clear risks. One significant concern lies with the lack of transparency. Tools can be trained with data that contains biases; in computing there’s the notion of GIGO: garbage in, garbage out. There’s also the clear potential that GenAI tools may accept and then propagate misinformation. It is clear that “risks need to be considered, along with the potential opportunities”.

A point was shared from a colleague, Michel Wermelinger, who was quoted as saying “academic conduct is a symptom, not the problem”, which takes us directly to the university’s academic conduct policies about plagiarism.

In this session I learnt a new term: “green light culture”. The point here was that there are a variety of positions that relate to GenAI: in HE there are policy decisions that range from ‘forbid’ to ‘go ahead’.

I made a note of a range of searching questions. One of them was: how might students use Generative AI? It might become a study assistant, it might facilitate language learning, or it might support creative projects. Another question was: how could pedagogies be augmented by AI? Also, is there a risk of over-dependence in how we use these tools? Could they prevent us from developing skills? How can we assess in a generative AI world? Some answers to this question may be to use project-based assessment, collaborative assessment, and complex case studies, and to consider the use of oral assessments.

A further point is that students will be using Generative AI in the future, which means that the university has a responsibility to educate students about it.

Towards the end of the keynote, there was some talk about all this being revolutionary (I’ll say more about this later). This led onto a closing provocative question: what differentiates you (the tutor) from Generative AI?

During the keynote, some other interesting resources were shared.

Teaching and learning with AI across the curriculum

The aim of a session by Mirjam Hauck was to explore the connection between AI and pedagogy, and to also consider the issue of ethics.

As in the previous presentation, some interesting resources were shared. One of them was a TED Talk: How AI could save (not destroy) education.

Another resource was a recent book, Practical Pedagogy: 40 New Ways to Teach and Learn by Mike Sharples which students and staff can access through the OU Library.

I had a quick look at it, just to see what these 40 new ways were. Taking a critical perspective, I realised that the vast majority of these approaches were already familiar to me, in one way or another. They are not necessarily ‘new’ but are instead presented in a new way, in a useful compendium. The text also shares a lot of informal web links, which immediately limits its longevity. It does highlight academic articles, but it doesn’t always cite them within the description of a pedagogy. My view is: do consider this text as something that shares a useful set of ideas, rather than something that is definitive.

During this session, there were some complementary reflections about how GenAI could be linked with pedagogy: it could be used to help with the generation of ideas (but to be mindful that it might be regenerating ideas and bits of text that may be subject to copyright), play a role within a Socratic dialogue, or act as a digital assistant for learning (which was sometimes called an AIDA – an AI digital assistant).

Power was mentioned in this session, with respect to the power that is exerted by the corporations that develop, run, and deploy AI tools. The point I had in my mind during this part of the session was: ‘do be mindful about who is running these products, why, and what they hope to get from them’.

A brief aside…

Whilst I was preparing this post, I received a related email from Hello World magazine, which is written for computing educators. In that email there was a link to a podcast with the title: What is the role of AI in your classroom?

There was an interesting discussion about assessment, asking the questions ‘how can this help with pedagogy?’ and ‘how can we adapt our own practices?’ A further question was: ‘is there a risk that we dumb down creativity?’

A scholarship question?

A few times this year tutors have been in touch with me to ask the question: ‘I’ve seen a student’s answer in a script that makes me think they may well have used Generative AI. What do I do?’ Copying TMA questions, or any other elements of university materials, into a Generative AI tool represents a breach of university policy, and can potentially be viewed as an academic conduct issue. The question is: what do tutors do about this? At the moment, and without any significant evidence, tutors must mark what they have been given.

An important scholarship question to ask is: how many tutors think they are being presented with assessments that may have been produced by Generative AI tools?

Reflections

There was a lot to take on board during this event. I need to find the time to sit down and work through some of the various resources that were shared, which is (in part) the reason for this post.

When I was a computing undergraduate I went to a couple of short seminars about the development of the internet. When it came to the topic of the web browser, our lecturer said: “this is never going to catch on; who is going to spend time creating all these web pages and HTML tags?” Every day I make use of a web browser; it is, of course, an important bit of software that is embedded within my phone. This connects with an important point: it is notoriously difficult to predict the future, especially when it comes to how technologies are used. There are often unintended consequences, both good and bad.

Being a former student of AI (late in the last century) I’m aware that the fashions that surround AI are cyclical. With each cycle of hype, there are new technologies and tools. From an early (modern) cycle of AI, I remember a project called SHRDLU, which demonstrated an imaginary world that users could interact with through natural language. This led to the view that the key challenges had been solved, and that all that needed to be done was to scale everything up. Reality, of course, is a whole lot more complicated.

A really important point to bear in mind is that GenAI (in the general sense) cannot reason. You can’t play chess with it. There are, however, other tools within the AI toolset that can do reasoning. As a postgrad student, I had to write an expert system that solved a practical problem: figuring out a path through a university campus.
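To give a flavour of the kind of explicit, inspectable reasoning I mean (as opposed to the statistical text generation of GenAI), here is a minimal sketch in Python. It is not the expert system I wrote as a postgrad: it uses a simple breadth-first search over an invented campus map, and all of the building names are hypothetical.

```python
from collections import deque

# A toy campus map as an adjacency list.
# The buildings and connections are entirely made up for illustration.
CAMPUS = {
    "Library": ["Lecture Hall", "Cafe"],
    "Lecture Hall": ["Library", "Computing Lab"],
    "Cafe": ["Library", "Computing Lab", "Sports Hall"],
    "Computing Lab": ["Lecture Hall", "Cafe"],
    "Sports Hall": ["Cafe"],
}

def find_path(start, goal):
    """Breadth-first search: returns a path with the fewest hops, or None."""
    queue = deque([[start]])   # each queue entry is a partial path
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in CAMPUS.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route exists between the two buildings

print(find_path("Library", "Sports Hall"))
# ['Library', 'Cafe', 'Sports Hall']
```

The point of the sketch is that every step the program takes can be traced and explained, which is exactly what a large language model does not offer.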

I’ve also been around for long enough to see various cycles of hype regarding learning technologies: I joined the university when e-learning objects were the fashion of the day, then there was the virtual learning environment, and then there was a craze for learning analytics. In some respects, the current generation of AI feels like a new (temporary) craze.

Embedding AI into educational tools isn’t anything new. I remember first hearing about the integration of neural networks into educational software in the early 2000s. In 2009 I was employed on a project that aimed to provide customised digital resources for learners who have different requirements and needs.

The bigger the models get, the more data they hoover up, and the greater the potential of these tools generating nonsense. And here lies a paradox: to make effective use of GenAI within education, you need education.

Perhaps there is a difference between generally available generative AI and generative AI that is aligned to particular contexts. This takes me to an important reflection: no GenAI tool or engine can ever know what your own context is. You might ask it some questions and get a sensible-sounding response, but it will not know why you’re asking a question, or what purpose your intended answer may serve. This is why the results produced by a GenAI tool might look terrible, or suspicious, if submitted as a part of an assessment. Context is everything, and assessments relate to your personal understanding of a very particular learning context.

Although the notion of power and digital corporations was mentioned, there’s another type of power that wasn’t mentioned: electrical power. I don’t have figures to hand, but large language models require an inordinate amount of electrical energy to do what they do. Their use has real environmental consequences. It's easy to forget this.

Here is my view: it is important to be aware of what GenAI is all about, but it is also really important not to get caught up in what could be thought of as technological froth. It’s also important to remember that technology can change faster than pedagogy. We need to apply effective pedagogy to teach about technology.

In my eyes, GenAI, or AI in many of its other forms, isn’t a revolution that will change everything, nor is it an existential threat to humanity; it is an evolution of a set of existing technologies.

It’s important to keep everything in perspective.

Resources

A number of resources were highlighted during the event which are worth having a quick look at.

Acknowledgements

Many thanks to the presenters of this professional development event, and to the team that put the event together. Lots to look at, and lots to think about.
