
Christopher Douce

ChatGPT and Friends: How Generative AI is Going to Change Everything

Edited by Christopher Douce, Sunday, 2 Apr 2023, 10:37

On 23 March 2023 the OU Knowledge Media Institute hosted a hybrid event with the curious title: How Generative AI is Going to Change Everything. More information about the event is available through the GenAI KMi site.

I think I was invited to this event after sharing the results of a couple of playful ChatGPT experiments on social media, which may have been seen by John Domingue, the OU KMi director. In my posts, I shared fragments of poetry which had been generated about the failures of certain contemporary political figures.

The KMi event was said to be about “ChatGPT and related technologies, such as DALL E 2 and Stable Diffusion” and was described as an “open forum” to “allow participants to first get an understanding of what lies underneath this type of AI (including limitations)” with a view to facilitating discussions and potentially setting up an ethics workshop.

What follows is a very brief summary of some of the presentations, taken from notes I made during each of the talks. Please do view this blog post as simply that: a set of notes. Some of these may well contain errors and misrepresentations, since these textual sketches were composed quite quickly. Do feel free to contact the individual speakers for clarification.

Introduction and basics of ChatGPT/GPT-3/GPT-4

The event was opened by John, who described it as a kick-off event intended to bring people together. He introduced the topic, characterising GPT as a very sophisticated text predictor, with GPT-3 being described as “a text predictor on steroids”. An abbreviation that was used regularly was LLM, short for “large language model”, a term that I hadn’t heard before.
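
As an aside of my own (not something presented at the event), the “text predictor” idea can be made concrete with a deliberately tiny sketch: count which word tends to follow which in a small body of text, then predict the most frequently observed follower. Real LLMs predict the next token using neural networks trained on vast amounts of text, so this toy is nothing like the real thing in scale or sophistication, but the underlying task is the same kind of prediction.

    # A toy next-word predictor (illustrative only): it counts which word follows
    # which in a tiny corpus, then predicts the most frequently observed follower.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follower_counts[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequently observed follower of `word`, if any."""
        followers = follower_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))  # 'cat' (seen most often after 'the')
    print(predict_next("sat"))  # 'on'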

We were introduced to the differences between the different versions of GPT. An interesting difference is the amount of text these LLMs have processed and how much text they can generate. We were told that GPT-2 was released in 2018 and that the current version, GPT-4, can make use of images (but I’m not quite sure how).

John shared a slide that described something called the OU’s AI agents ecosystem, which carried the subtitle of an AI strategy for the OU.

There were some pointers towards the future. Some of these newfangled tools are going to find their way into Microsoft 365. I’m curious to learn how these different tools might affect or change my productivity.

What follows is a summary of some of the presentations that were made during the event. Most of the presentations lasted around five minutes, so the presenters had to pack a lot into a very short amount of time. There is, of course, a risk that I may well have misrepresented some aspects of the presentations, but I hope I have done a fair job in capturing the main points and themes each speaker expressed.

Short presentations

ChatGPT: Safeguards, trustworthiness and social responsibility

The first short presentation was by Shuang Ao from the Knowledge Media Institute. Shuang suggested that LLMs are “uncontrollable, not transparent and unstable” and have limitations in terms of their current ability to demonstrate reasoning and logic. They may also present factual errors, and demonstrate bias and discrimination, which presents real ethical challenges.

But can it make decisions?

Next up was Lucas Anastasiou, also from the Knowledge Media Institute. Lucas had carried out some experiments. ChatGPT can’t play chess at all well, although it does know how to open a game, since it knows something about chess opening theory. But how about poker? Apparently there’s something called a poker IQ test. I’m not sure if I remember exactly, but I seem to recall that these models are not great at playing poker. How about managing a stock portfolio, or geopolitical forecasting? We were offered a polite reminder that a computer can never be held accountable, but perhaps its users and developers could be?

ChatGPT attempts OU TMAs

The next speaker was Alistair Willis, from the School of Computing and Communications. Alistair is the module chair for TM351 Data management and analysis. He asked a simple question, but one that has important implications: can ChatGPT answer one of his TMA questions?

His TMA was a guided investigation, split into two parts: a coding bit and an interpretation bit. The conclusion was that ChatGPT was good at the coding bit (or, potentially, at helping with the coding bit), but rubbish at the interpretation. Overall, a student wouldn’t get a very high score.

From the module team perspective, a related question was: could it be used to create module materials?

These questions are all very well, but if text and answers can be generated, is there a way to determine whether a fragment of prose was generated by ChatGPT? Apparently, there is a tool which can highlight which bits of text may have been written using ChatGPT.

Five key learnings from our use of Chatbots

Barry Verdin has an interesting role within the OU; he is an assistant director for student support innovation. I have heard of Barry before; he keeps inviting me to meetings about systems thinking, but I keep being too busy to attend (although I do welcome his invitations!). His interest lies in supporting a chatbot that offers support to students. He shared an interesting statistic: the chatbot can answer around 80% of queries. Clearly, AI has the possibility of helping with some types of student enquiries.

Experiments with ChatGPT

It was my turn. I wear a number of hats. I’m a student, an associate lecturer, and a staff tutor.

Wearing a student hat

Whilst wearing my student hat, I’ve been studying a module called A230 Reading and studying literature. When I had completed and submitted one of my Tutor Marked Assignments, I submitted an abridged version of my TMA question to ChatGPT. The question I gave it was: “Compare and contrast Shelley’s Frankenstein with Wordsworth’s Home at Grasmere”. I admit that there was a part of me that took pleasure in asking an artificial intelligence what it thought about Frankenstein.

I found the response that I got interesting. Firstly, it was pretty readable, and secondly, it helped me to check what I had understood when preparing the assignment. For example, it enabled me to check my own understanding of what literary romanticism was all about. Another point was that there was no way that ChatGPT could have responded to the specifics of the essay question, since we were asked to interpret a very specific section of Wordsworth’s epic (and we have already learnt that ChatGPT isn’t good at logic). The text that we were working with was only available to OU students in a very specific form.

My study of literature helps me to develop specific skills, such as close reading, and adopting a critical approach to texts. Students, of course, also need to show an understanding of module materials too. If large language models don’t have access to those texts, they’re not going to even attempt to quote from them. This means that a vigilant tutor is likely to raise a curious eyebrow if a student submits a neatly written essay which is devoid of quotes from texts, or from module materials.

Wearing a tutor hat

Picking up on the role of a tutor, another hat I wear is that of a tutor for M250 Object-oriented Java programming. I confess to doing something similar to Alistair. I fed ChatGPT a part of a TMA question which instructed a student to write bits of code to model a scenario. It did well, but it did too much: it produced bits of code that were not asked for. This said, drawing on my experience of programming (and of teaching), I could understand why it produced what it did.

From the tutor’s perspective, if I had received a copy of what had been produced, I would be pretty suspicious, since I would be asking: “where did our student get all that experience from, when this is a module that is all about introducing key concepts?”

Wearing a staff tutor hat

For those who are unfamiliar with the role, a staff tutor is a tutor line manager. We’re a bit of the academic and administrative glue that makes the OU system work. We get to deal with a whole number of different issues on a day-to-day basis, and a couple of times a year academic conduct issues cross my desk.

The university has to deal with and work with a number of existing threats to academic integrity, such as well-known websites where students can ask questions from subject matter experts and fellow students. Sometimes solutions to assignments are shared through these sites. Sometimes, these solutions contain obvious errors, which we can identify.

Responses to these threats to academic integrity include the use of plagiarism detection software (such as TurnItIn), the use of collusion detection systems (such as CopyCatch), the vigilance of tutors and module teams, the referral of cases to university Academic Conduct Officers, the running of individual support sessions to help students develop their study skills so that they do not accidentally commit plagiarism, and effective record keeping to tie everything together.

When arriving at this event, one question I did have was: could it be possible to create an AI to detect answers that had been produced by an AI? Alistair’s earlier reference to a checker had partially answered my own question. Further questions are, of course: how should such detection tools be used within an institution, and to what extent should academic policies be adapted and changed to take account of large language models?

Bring textual wishes to life

Christian Nold from the School of Engineering and Innovation (E&I) shared some information about an eSTEeM project with Georgy Holden. Students were encouraged to send postcards about their experience of level 1 study, sharing three wishes. The question that I noted down was: how can we use AI tools to generate personas from three wishes? Tools such as ChatGPT integrate different bits of text together, and the generated personas could help us to think differently.

Core-GPT

Matteo Cancellieri and David Pride, both from the Knowledge Media Institute, gave what was pitched as a KMi product announcement: they introduced CORE-GPT. Their project aims to combine open access materials with AI for credible, trustworthy question answering. The aim is to reduce the number of ‘hallucinations’ (made up stuff) that might be produced through tools such as ChatGPT, by drawing on information from open access papers. More information about the initiative is available through a blog article, Combining Open Access research and AI for credible, trustworthy question answering, and through the CORE website.
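
As I understand it, the general pattern behind this kind of system is often called retrieval-augmented generation: first retrieve relevant passages, then ask the language model to answer using only those passages, so that answers can be grounded in (and cited against) real documents. What follows is a minimal sketch of that pattern under my own assumptions; it is not CORE-GPT’s actual implementation, and the naive keyword matching simply stands in for proper search over open access papers.

    # A toy retrieval-then-generate pipeline (my own illustration, not CORE-GPT).
    # Step 1: retrieve the passages that best match the question (naive word overlap).
    # Step 2: build a prompt asking a language model to answer *only* from those passages.
    papers = {
        "Paper A": "Large language models can generate fluent but factually wrong statements.",
        "Paper B": "Retrieval over open access articles can ground answers in published evidence.",
        "Paper C": "Chess openings have been catalogued extensively in the literature.",
    }

    def retrieve(question, k=2):
        """Rank passages by crude word overlap with the question."""
        q_words = set(question.lower().split())
        return sorted(
            papers.items(),
            key=lambda item: len(q_words & set(item[1].lower().split())),
            reverse=True,
        )[:k]

    def build_prompt(question):
        passages = "\n".join(f"[{title}] {text}" for title, text in retrieve(question))
        return (
            "Answer the question using ONLY the passages below, citing the paper titles.\n"
            f"{passages}\n\nQuestion: {question}\nAnswer:"
        )

    # In a real system, this prompt would be sent to a language model.
    print(build_prompt("How can open access articles help ground answers?"))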

ChatGPT and assessment

Dhouha Kbaier from the School of Computing and Communications shared some concerns and points about assessment. Dhouha is module chair of TM355 Communications Technology. Following the Covid-19 pandemic, students are assessed through a remote exam. In their exam, students need to draw on discussion materials, and find resources and articles. Educators need to make students aware that there are tools that can detect text generated by large language models, and that AI tools can create errors (and hallucinations).

One of the points I noted was: there is a potential need to adapt our assessment approaches. Educators also have a responsibility to do what they can to remove a student’s motivation for cheating. Ultimately, cheating isn’t in students’ best interests.

Can students self-learn with ChatGPT?

Irina Rets from the OU Institute of Educational Technology (IET) asked some direct questions, such as: can students learn through ChatGPT? Also, can AI be a teacher? In some respects, these are not new questions; a strand of research that links to AI and education has been running for a very long time. Some further questions were: who gets excluded? Also, what are the learning losses, and learning gains? Finally, how might researchers use these tools?

ChatGPT - Content Creation with AI

Manoj Nanda from the School of Computing and Communications also suggested that AI might be useful for idea generation. Manoj highlighted a couple of tools that I had not heard of before, such as DALL-E 2 (OpenAI website), which can generate an image from a textual description. Moving to an entirely different modality, he also highlighted Soundraw.io. Manoj emphasised that a key skill is using appropriate prompts. This relates to an old computing adage: if you put garbage in, you’ll get garbage out (GIGO).

Developing playful and fun learning activities

Nicole Lotz from the School of Engineering and Innovation (E&I) sees tools such as ChatGPT as potentially useful for creative exploration. Nicole is module chair of U101 Design thinking, which is a first level design module. The ethos of the module is all about playfulness, building confidence, and learning through reflection. Subsequently, there may be opportunities to use what ChatGPT might produce as a basis for further reflection, development and refinement.

"I am the artist Riv Rosenfeld" - How ChatGPT is your new neoliberal friend

Tracie Farrell, from the Knowledge Media Institute, works at the intersection of AI and social justice. Tracie asked ChatGPT to write a paragraph about her friend and artist, Riv Rosenfeld. There was a clear error: ChatGPT got their pronouns wrong. An important point is that “ChatGPT doesn’t know your truth”. In other words, the perspective that is generated by large language models comes from what is written or known about you, and this may be at odds with your own perspective. There are clear and obvious risks: marginalised groups are not as visible, and biases are perpetuated. Some key questions are: who will be harmed, who will be helped, and to what extent (and how) will these emerging tools reinforce inequality?

Discussion

After the short presentations, we went into a plenary discussion. It wasn’t too long before the history of AI was highlighted. John highlighted the two schools of thought about AI: a symbolic camp, and a statistical camp, and suggested that in the future, there might be a combination of the two. This related to the earlier point that these AI tools can’t (yet) do logic very well.

A further comment reflected an age-old intractable problem that hasn’t been solved, and might never be solved, namely: we still haven’t defined what intelligence is. In terms of AI, the measure of intelligence has moved from playing chess, through to having machines do things that humans find intrinsically easy, such as assessing a visual scene and communicating with each other using natural language. The key point in the discussion was, of course: we need to ask again, what do we mean by intelligence?

Whenever a technology is discussed, an accompanying discussion of a potential digital divide is never too far away. AI may present its own unique divides: those who know how to use AI tools and can use them effectively, and those who don’t know about them, and are not able to use them. There are clear links to the importance of equity and access.

During the discussion, I noted down the words: “If you’re a novice programmer, what blocks you is your first bug”. In other words, knowing the fundamentals and having knowledge is important. Another phrase I noted down was: “It is perhaps best to view them as fallible assistants”.

Given their fallibility, making judgements about when to trust what an AI tool has produced, and when not to, is really very important. In other words: it is important to think critically, and this is something that only we humans can do.

Reflections

This was a popular event; approximately 250 people attended the first few presentations.

The presentations were quite different to each other. Some explored the question “to what extent might these tools present risks to academic integrity?” Others explored “how can these tools help us with creativity and problem solving?” The important topic of ethics was clearly highlighted. It was also interesting to learn about work being carried out within KMi, and the reference to the emergence of an institutional AI strategy (although I do hold the view that this should be thoroughly and critically evaluated).

I enjoyed the discussion section. In some respects, it felt like coming home. I studied AI as an undergraduate and a postgraduate student over 20 years ago, when the focus was primarily on symbolic AI. At the time, statistical methods, which include neural networks, were only just beginning to make an appearance. It was really interesting to see the different schools of thought being highlighted and discussed. During the discussion session I shared the following memorable definition: AI is really clever people making really stupid machines to do things that look clever.

I confess to having been around long enough to know of a number of AI hype cycles. When I was a postgraduate student, I learnt about the first generation of AI developments. I learnt about chess and problem solving. I remember that proponents at the time were suggesting that the main problems with AI had been solved, which had the obvious implication that we would soon have our own personal robots to help us with our everyday chores.

The reality, of course, turned out to be different, since some of those very human problems, such as vision, sound and language were a lot harder to figure out. This meant there were no personal robotic assistants, but instead we did get a different kind of personal digital assistant.

Despite my cynicism, one aspect of AI that I do like is that it has been described as “applied philosophy”. When you start to think about AI, you cannot get away from trying to define what intelligence is. In other words, the machine becomes a mirror to ourselves; the computer helps us to think about our own thinking.

I once heard a fellow computer scientist say that one of the greatest contributions of computing is abstraction. In other words, when making sense of a difficult problem, you look at all its elements, and then you go on to create a new representation (or form) of the problem which then enables you to make sense of it all. I remember another computer science colleague saying, “when you get into trouble, abstract your way out of difficulty”. This can also be paraphrased as: “go up a level”.

We’ve all been in that situation when we’ve had multiple search engine tabs open, and we’re eyeballing tens of thousands of different search results. In these circumstances, we don’t know where to begin. Perhaps this is the problem that these large language models aim to resolve: to produce a neat summary of an answer we’re searching for in a neatly digestible format.

To some degree, generative AI can be thought of as “going up a level”, but the way you go up a level may well be driven by the data that is contained within a large language model. That data, of course, might well be incorrect. Even if you do “go up a level”, you might be going up in entirely the wrong direction.

All these points emphasise the importance of taking a critical perspective on what all these new-fangled AI tools produce, but this does require those interpreting any results to have developed a critical perspective in the first place. We need a critical perspective to deal with instances where an AI tool might well provide us with not just machine generated “hallucinations” but also misinformation.

During my bit of the event, I shared a perspective that I feel is pretty important, which is: “the most important thing in education isn’t machines or technologies, it’s people”. When we’re thinking about AI, this is even more true than ever. A screen of text looks like a screen of text. A teacher, tutor or lecturer can tell you not only what is important, but why, and what its consequences might mean to others.

I do feel that it is very easy to get carried away by the seemingly magical results that ChatGPT can produce. I also feel that it is important to view these tools with a healthy dose of AI cynicism and scepticism. If AI is applied philosophy, and this new form of AI enables us to more readily hold up a mirror to ourselves, it is entirely possible that we might not like what we see.

It is entirely possible that generative AI tools may well “read” this summary, and these reflections might well help these uncanny tools answer the question “how do humans perceive generative AI?” I’ll be interested to see what answer it produces.

Returning to the implicit question presented in the title of this event, “how is generative AI going to change everything?”, the cynic in me answers: “I doubt it”. It is, however, likely to change some things.

Other resources

A few weeks before this event, I was made aware of another related event which took place on 16 March, entitled Teaching with ChatGPT: Examples of Practice (YouTube playlist). This event was a part of a series of Digitally Enhanced Education Webinars from the University of Kent. These presentations are certainly worth a visit, if only to hear other voices sharing their perspectives about this topic.

After this blog was published, Arosha Bandara sent me a link to the following article: Stephen Wolfram writings: What Is ChatGPT Doing ... and Why Does It Work? It is quite a long read, and it is packed with detail. It's also one of those articles that will take more than a few hours to work through. I'm sharing it here for two reasons: so I know where to find it again, and just in case others might find it of interest.

Acknowledgements

The event was a KMi Knowledge Makers event. Many thanks to John for inviting me, and encouraging me to participate. Many thanks to all the presenters; I hope I have managed to share some of the key points of your presentations, and apologies that I haven’t managed to capture everyone’s presentation. The event was organised by Lucas Anastasiou (PhD Research Student), Shuang Ao (PhD Research Student), Matteo Cancellieri (Lead Developer - Open Research), John Domingue (Professor of Computer Science), David Pride (Research Associate) and Aisling Third (Research Fellow). Thanks are also extended to Arosha for sending me the Wolfram article.

Addendum

A couple of weeks after the event, I was sent a note by a colleague. Someone in KMi may have asked ChatGPT to write a summary of this article. A link to that summary is available through a KMi blog. I have no idea to what extent it may have been edited by humans. This made me wonder how ChatGPT might summarise the summary.
