Christopher Douce

Generative AI and the future of the OU

Edited by Christopher Douce, Tuesday, 20 June 2023, 10:24

On 15 June 2023 I attended a computing seminar about generative AI, presented by Michel Wermelinger.

In some ways the title of his seminar is quite provocative. I felt that his presentation explored a very specific theme, namely how generative AI can play a role in the future of programming education; a topic which is, of course, being explored by academics and students within the school.

What follows is a brief summary of Michel's talk. As well as sharing a number of really interesting points and accompanying resources, Michel did a lot of screensharing, where he demonstrated what I could only describe as witchcraft.

Generative AI tools

Michel showed us Copilot, which draws on code submitted to GitHub. Copilot is said to use something called OpenAI Codex. The witchcraft bit I mentioned was this: Michel typed a couple of comments into a development environment, which were parsed by Copilot, which then generated readable and understandable Python code. There was no messing about with internet searches or looking through instruction books to figure out how to do something; Copilot offered immediate and direct suggestions.
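
To give a flavour of what this looks like (a hypothetical reconstruction of my own, not Michel's actual example), a developer might type nothing more than the opening comment, and Copilot would propose a complete function along these lines:

    # Compute each student's average score from a CSV file of (name, score)
    # rows and return a dictionary mapping name to average.
    import csv
    from collections import defaultdict

    def average_scores(filename):
        totals = defaultdict(float)
        counts = defaultdict(int)
        with open(filename, newline="") as f:
            for name, score in csv.reader(f):  # assumes two columns per row
                totals[name] += float(score)
                counts[name] += 1
        return {name: totals[name] / counts[name] for name in totals}

The point is not this particular function, but the workflow: the comment is the prompt, and the code arrives as a suggestion to be accepted, amended, or rejected.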

Copilot isn’t, of course, the only tool that is out there. A taxonomy of AI tools is now emerging. There are tools where you pay for access; tools connected to integrated development environments (IDEs) available on the cloud; tools where the AI becomes a pair-programmer chatbot; and other tools, such as learning environments, that offer both documentation and the automated assessment of programming assignments.

The big tech companies are getting involved. Amazon has something called CodeWhisperer. Google DeepMind has something called AlphaCode, which has participated in competitive programming competitions, leading to an article in Nature asking whether ChatGPT and AlphaCode are going to replace programmers. There is also something called StarCoder, which has also been trained on GitHub sources.

AI can, of course, be used in other ways. It could be used to offer help and support to students who have additional requirements. AI could be used to transcribe lectures, and to help students navigate across and through learning materials. The potential of AI as a useful learning companion has been a long-held dream, and one that I can certainly remember from my undergraduate days, which were in the last century.

Implications

An important reflection is that Copilot and all these other AI tools are here to stay. It wouldn’t be appropriate to try to ban them from the classroom, since they are already being used and they already have a purpose. Michel also mentioned that there is already a textbook which draws on generative AI: Learn AI-Assisted Python Programming.

Irrespective of what these tools are and what they do, everyone still needs to know the fundamentals. Copilot does not replace the need to understand language syntax and semantics, or to know the principles of algorithmic thinking. Developers and engineers need to know what is meant by thorough testing, how to debug software, and how to write helpful documentation. They need to know how to set breakpoints, use command prompts, and understand version and configuration management.
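
As a minimal sketch of the kind of fundamentals in question (my own illustration, not one from the talk), here is the sort of small unit test a developer should be able to write and reason about, whatever tool suggested the code under test:

    import unittest

    def mean(values):
        """Return the arithmetic mean of a non-empty sequence of numbers."""
        return sum(values) / len(values)

    class TestMean(unittest.TestCase):
        def test_typical_values(self):
            self.assertAlmostEqual(mean([70, 80, 90]), 80.0)

        def test_empty_input_fails(self):
            # Whether this behaviour is acceptable is a design judgement
            # that no AI tool can make on the developer's behalf.
            with self.assertRaises(ZeroDivisionError):
                mean([])

    if __name__ == "__main__":
        unittest.main()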

An important question to ask is: how do we assess understanding? One approach is an increasing use of technical interviews, which can be used to assess understanding of technical concepts. This won’t mean an academic viva, but instead might mean some practical discussions which both help to assess students’ knowledge and help them to prepare for the inevitable technical interviews which take place in industry.

New AI tools may have a real impact not only on what is taught but on how teaching is carried out, particularly at higher levels of study. This might mean the reformulation of assignments, perhaps developing less explicit requirements to expose learners to the challenge of working with ambiguity, which they must then intelligently resolve.

Since these tools have the potential to give programmers a performative boost, assignments may become bigger and more substantial. Irrespective of how assignments might change, there is an imperative that students learn how to critically assess and evaluate whatever code these tools might suggest. It isn’t enough to accept what is suggested; it is important to ask the question: “does the code that I see here make sense, or present any risks, given what I’m trying to do?”
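
To make that concrete (a hypothetical suggestion of my own devising, not one from the seminar), consider a function an AI assistant might plausibly offer, and the question a critical reader should ask of it:

    # A plausible-looking suggestion: compute a student's average mark.
    def average(scores):
        return sum(scores) / len(scores)  # crashes if scores is empty

    # The critical question: what should happen when there are no marks yet?
    # Accepting the suggestion unexamined bakes a crash into the program;
    # a considered version makes the edge case explicit.
    def average_checked(scores):
        if not scores:
            raise ValueError("cannot average an empty list of scores")
        return sum(scores) / len(scores)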

A term that is new to me is prompt engineering: the need to communicate in a succinct and precise way with an AI to get results that are practical and useful within a particular context. To get useful results, you need to be clear about what you want.
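
As a hypothetical illustration of the difference this makes (the prompts are mine, not Michel's), compare a vague comment-prompt with a precise one that states the data shape, the ordering, and the tie-break:

    # Vague prompt: likely to yield a generic, possibly unsuitable suggestion.
    # sort the results

    # Precise prompt: Sort a list of (surname, forename, score) tuples by
    # score, highest first, breaking ties alphabetically by surname; return
    # a new list. A suggestion along these lines might come back:
    def sort_results(results):
        return sorted(results, key=lambda r: (-r[2], r[0]))

    # e.g. sort_results([("Smith", "Anna", 72), ("Jones", "Ben", 72)])
    # returns [("Jones", "Ben", 72), ("Smith", "Anna", 72)]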

What is the university doing?

To respond to the emergence of these tools the university has set up something called the Generative AI task and finish group. It will be producing some interim guidance for students, and will be offering some guidance to staff, which will include the necessity to be clear about the ethical and transparent use of AI. It is also said to highlight capabilities and limitations. There will also be guidance for award boards and module results panels. The point here is that generative AI is being looked at.

Michel suggested the need for a working group within the school; a group to look at what papers are coming out, what the new tools are, and what is happening across the sector at other institutions. A thought was that it might be useful to widen it out to other schools, such as the School of Physical Sciences, and any others which make use of any aspect of coding and software development.

Reflections

Michel’s presentation was a very quick overview of a set of tools that I knew very little about. It is now pretty clear that I need to know a lot more about them, since there are direct implications for the practice of teaching and learning, for the school, and for the university. There is a fundamental imperative that must be emphasised: students must be helped to understand that a critical perspective on the use of AI is a necessity.

Although I described Michel’s demonstration of Copilot as witchcraft, all he did was demonstrate a new technology.

When I was a postgraduate student, a lecturer once told me that one of the most fundamental and important concepts in computing was abstraction. When developers are faced with a problem that becomes difficult, they can be said to ‘abstract up’ a level, to get themselves out of trouble, and towards another way of solving a problem. In some senses, AI tools represent a higher level of abstraction; it is another way of viewing things. This doesn’t, of course, solve the problem that code still needs to be written.

I have also heard that one of the fundamental characteristics of a good software developer or engineer is laziness. When a programmer finds a problem that requires solving time and time again, they invariably develop tools to do their work for them. In other words, why write more code than you need to, when you can develop a tool that solves the problem for you?

My view is that abstraction and laziness are principles that are connected.

Generative AI tools have the potential to make programmers lazy, but programmers must gain an appreciation of how and why things work. They also need to know how to make decisions about what bits of code to use, and when.

It takes a lot of effort to become someone who is effective at being lazy.
