
Christopher Douce

ChatGPT school seminar

Visible to anyone in the world
Edited by Christopher Douce, Sunday, 21 May 2023, 09:49

On 19 April 2023, I arrived slightly late for an online seminar about ChatGPT and generative AI. This blog post shares some of the notes that I made during the session. It might be useful to read this post in conjunction with an earlier blog post on the same topic, which summarises a workshop organised by the OU Knowledge Media Institute (KMI). These notes are pretty rough-and-ready, since they were edited together a month after the event took place.

Seeking opinions

Mike Richards, from the School of Computing and Communications, began by summarising some research that he had carried out with a number of colleagues. Five tutors were interviewed. When it comes to reviewing and marking assignments, it was noted that tutors are sensitive to changes in formatting style, voice and vocabulary.

Tutors rely on module teams and central systems for plagiarism detection, but they can and do pick up on things themselves. ALs don’t like referring students to disciplinary processes. They are cautious; they usually have a very high level of suspicion before they contact staff tutors and invoke the academic conduct processes. In the cases where they identify issues, they take opportunities to make a teaching point to students.

Tutors wish to maintain positive relationships with students, but they are worried about the implications of raising academic conduct referrals, and about potential professional consequences if they raise unwarranted academic conduct concerns. In practice, there are no such consequences for tutors; it is the academic conduct officers who make the decisions.

Key points

During the session, I captured the following important points. The first point was that assessment is vulnerable to ChatGPT. Specifically, highly structured essays are vulnerable, but these types of essays are used to develop student skills.

ChatGPT performs less well with anything to do with reflections about learning, since anything that is produced will not sound genuine.

There is a role for ChatGPT (or generative AI) detection software, but there are issues with detection tools, since they present a high rate of false positives. A detector only gives you a probability that something is synthetic; unlike TurnItIn, it doesn’t provide supporting evidence.

Tutors are very important. They are able to spot synthetic solutions; they can identify bland, superficial, repetitive and irrelevant materials in a way that automated tools cannot. To assist with this, and to help our tutors, the university needs to provide better plagiarism training.

A recognised issue is that ChatGPT will generate superficially compelling references that are completely fake. Asking ALs to scrutinise the referencing would go some way towards determining whether a chunk of text has been automatically generated. ChatGPT doesn’t currently do referencing, but there is a possibility this might change if it is connected to public databases.

The next step of this project is to write up findings and to have conversations with other faculties. There is also a university working group which aims to generate an assessment authoring guide to mitigate against generative AI. There is, of course, the need to do more studies. There might also be the need to adopt subject or discipline specific approaches. 

The closing thoughts shared during the seminar are important: we need to teach all students about the consequences of AI. Perhaps there needs to be some Open Educational Resources on the topic, perhaps something on OpenLearn that offers a sketch of what it can and cannot do. A closing point was that there are no ‘no-cost’ options. The university needs to carefully consider the role and purpose of assessments. Doing nothing is not an option.

During the discussion session, I noted down a couple of interesting questions: what question types would cause large language models to perform sufficiently badly? Also, what limits their abilities? ChatGPT writes in generalities. Its responses come from how questions are worded. There is also the issue of concreteness. Assessment tasks are often related to specifics, in terms of activities, texts, module materials, and forum posts. If generative AI cannot access the texts that students need to access and critically evaluate to develop their skills, its uses are, of course, limited.

Reflections

One of the key points that was emphasised was the importance of the tutor. They have such an important role to play in not only identifying instances of potential academic misconduct, but also in educating students about generative AI, and the risks these tools present.

It is also useful to reflect on the point that tutors can spot changes in writing style. There is the possibility that the stylistic quality of generated text is a characteristic that could be used to respond to not only ChatGPT, but also contract cheating. At the time of writing, anti-plagiarism detection tools such as TurnItIn only evaluate individual assignments. In the arms race to ensure academic integrity, the next generation of tools might analyse text across a number of submissions whilst taking into account the characteristics or structure of individual assessments.

I expect there will be a multi-faceted institutional response to generative AI. There will be education: of students, tutors, and module teams. Students will be informed about the ethical risks of using generative AI, and the practical consequences of academic misconduct. Tutors will be provided with more information about what generative AI is, and offered more development to facilitate sessions to help students. Module teams will have an increasing responsibility to develop assessment approaches that proactively mitigate against the development of generative AI. Also, technology will play a role in detecting academic misconduct, and new procedures will be developed to assist academic conduct officers.

Acknowledgements

An acknowledgement is due to Mike Richards and everyone who took part in the research summarised here. A thank you goes to Daniel Gooch, who facilitated the event.

Christopher Douce

ChatGPT and Friends: How Generative AI is Going to Change Everything

Edited by Christopher Douce, Sunday, 2 Apr 2023, 10:37

On 23 March 2023 the OU Knowledge Media Institute hosted a hybrid event, which had the curious title: How Generative AI is Going to Change Everything. More information about the details of this event is available through a GenAI KMi site.

I think I was invited to this event after sharing the results of a couple of playful ChatGPT experiments on social media, which may have been seen by John Domingue, the OU KMi director. In my posts, I shared fragments of poetry which had been generated about the failures of certain contemporary political figures.

The KMi event was said to be about “ChatGPT and related technologies, such as DALL E 2 and Stable Diffusion” and was described as an “open forum” to “allow participants to first get an understanding of what lies underneath this type of AI (including limitations)” with a view to facilitating discussions and potentially setting up an ethical workshop.

What follows is a very brief summary of some of the presentations, taken from notes I made during each of the talks. Please do view this blog as simply that, a set of notes. Some of these may well contain errors and misrepresentations, since these textual sketches were composed quite quickly. Do feel free to contact individual speakers.

Introduction and basics of ChatGPT/GPT-3/GPT-4

The event was opened by John who described it as a kick-off event, intended to bring people together. He introduced the topic, characterising the GPT projects as a very sophisticated text predictor, with GPT3 being described as “a text predictor on steroids”. An abbreviation that was regularly used was: LLM. This is short for “large language model”; a term that I hadn't heard before.
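John’s characterisation of GPT as “a very sophisticated text predictor” can be made concrete with a toy sketch (my own illustration here, not something presented at the event): a bigram model that predicts the next word purely from counts of which word followed which in a tiny corpus. Real LLMs use neural networks trained on vast corpora, but the underlying task, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# A toy "text predictor": count which word follows which in a small
# corpus, then predict the most frequent successor of a given word.
corpus = "the cat sat on the mat the cat ran".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The gulf between this and GPT is, of course, enormous, but the sketch hints at why generated text can be fluent yet ungrounded: the model only reproduces statistical patterns in its training data.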

We were introduced to the differences between the versions of GPT. An interesting difference is the amount of text these LLMs have processed and how much text they can generate. We were told that GPT-2 was released in 2018 and that the current version, GPT-4, can make use of images (but I’m not quite sure how).

John shared a slide that described something called the OU’s AI agents ecosystem, which had the subtitle of being an AI strategy for the OU.

There were some pointers towards the future. Some of these new-fangled tools are going to find their way into Microsoft 365. I’m curious to learn how these different tools might affect or change my productivity.

What follows is a summary of some of the presentations that were made during the event. Most of the presentations were made over a course of 5 minutes; the presenters had to pack in a lot over a very short amount of time. There is, of course, a risk that I may well have misrepresented some aspects of the presentations, but I hope I have done a fair job in capturing the main points and themes each speaker expressed.

Short presentations

ChatGPT: Safeguards, trustworthiness and social responsibility

The first short presentation was by Shuang Ao from the Knowledge Media Institute. Shuang suggested that LLMs are “uncontrollable, not transparent and unstable” and had limitations in terms of their current ability to demonstrate reasoning and logic. They also may present factual errors, and demonstrate bias and discrimination, which presents real ethical challenges.

But can it make decisions?

Next up was Lucas Anastasiou, also from the Knowledge Media Institute. Lucas had carried out some experiments. ChatGPT can’t play chess at all well, but it does know how to open a game well, since it knows something about chess opening theory. But how about poker? Apparently there’s something called a poker IQ test. If I remember correctly, ChatGPT isn’t great at playing poker either. How about a stock portfolio, or geo-political forecasting? We were offered a polite reminder that a computer can never be held accountable; but perhaps its users, and its developers, could be?

ChatGPT attempts OU TMAs

The next speaker was Alistair Willis, School of Computing and Communications. Alistair is a module chair for TM351 Data management and analysis. He asked a simple question, but one that has important implications: can ChatGPT answer one of his TMA questions?

His TMA was a guided investigation, and was split into two parts: a coding bit, and an interpretation bit. The conclusion was that ChatGPT was good at the coding bit (or, potentially, at helping with the coding bit), but rubbish at the interpretation. Overall, a student wouldn’t get a very high score.

From the module team perspective, a related question was: could it be used to create module materials?

These questions are all very well, but if text and answers can be generated, is there a way to determine whether a fragment of prose was generated by ChatGPT? Apparently, there is a tool which can highlight which bits of text may have been written using ChatGPT.

Five key learnings from our use of Chatbots

Barry Verdin has an interesting role within the OU; he is an assistant director for student support innovation. I have heard of Barry before: he keeps inviting me to meetings about systems thinking, but I keep being too busy to attend (although I do welcome his invitations!) His interest lies in supporting a chatbot that offers support to students. He shared an interesting statistic: the chatbot can answer around 80% of queries. Clearly, AI has the possibility of helping with some types of student enquiries.

Experiments with ChatGPT

It was my turn. I wear a number of hats. I’m a student, an associate lecturer, and a staff tutor.

Wearing a student hat

Whilst wearing my student hat, I’ve been studying a module called A230 Reading and studying literature. When I had completed and submitted one of my Tutor Marked Assignments, I submitted an abridged version of my TMA question to ChatGPT. The question I gave it was: “Compare and contrast Shelley’s Frankenstein with Wordsworth’s Home at Grasmere”. I admit that there was a part of me that took pleasure in asking an artificial intelligence what it thought about Frankenstein.

I found the response that I got interesting. Firstly, it was pretty readable, and secondly, it helped me to understand what I had understood when preparing the assignment. For example, it enabled me to check my own understanding of what literary romanticism was all about. Another point was that there was no way that ChatGPT could have responded to the specific details of the essay question, since we were asked to interpret a very specific section of Wordsworth’s epic (and we had already learnt that ChatGPT isn’t good at logic). The text that we were working with was only available to OU students in a very specific form.

My study of literature helps me to develop specific skills, such as close reading, and adopting a critical approach to texts. Students, of course, also need to show an understanding of module materials too. If large language models don’t have access to those texts, they’re not going to even attempt to quote from them. This means that a vigilant tutor is likely to raise a curious eyebrow if a student submits a neatly written essay which is devoid of quotes from texts, or from module materials.

Wearing a tutor hat

Picking up on the role of a tutor, another hat I wear is that of a tutor for M250 Object-oriented Java programming. I confess to doing something similar to Alistair. I fed ChatGPT a part of a TMA question which instructed a student to write bits of code to model a scenario. It did well, but it did too much: it produced bits of code that were not asked for. This said, drawing on my experience of programming (and of teaching), I could understand why it suggested what it had produced.

From the tutor’s perspective, if I had received a copy of what had been produced, I would be pretty suspicious, since I would be asking: “where did our student get all that experience from, when this is a module that is all about introducing key concepts?”

Wearing a staff tutor hat

For those who are unfamiliar with the role of a staff tutor, a staff tutor is a tutor line manager. We’re a bit of academic and administrative glue in the OU system which makes things work. We get to deal with a whole number of different issues on a day-to-day basis, and a couple of times a year academic conduct issues cross my desk.

The university has to deal with and work with a number of existing threats to academic integrity, such as well-known websites where students can ask questions from subject matter experts and fellow students. Sometimes solutions to assignments are shared through these sites. Sometimes, these solutions contain obvious errors, which we can identify.

Responses to the threats to academic integrity include the use of plagiarism detection software (such as TurnItIn), the use of collusion detection systems (such as CopyCatch), the vigilance of tutors and module teams, the referral of cases to university Academic Conduct Officers, running of individual support sessions to help students to develop their study skills to ensure they do not accidentally carry out plagiarism, and effective record keeping to tie everything together.

When arriving at this event, one question I did have was: could it be possible to create an AI to detect answers that had been produced by an AI? Alistair’s earlier reference to a checker had partially answered my own question. Further questions are, of course: how should such detection tools be used within an institution, and to what extent should academic policies be adapted and changed to take account of large language models?

Bring textual wishes to life

Christian Nold from the School of Engineering and Innovation (E&I) shared some information about an eSTEeM project with Georgy Holden. Students were encouraged to send postcards about their experience of level 1 study, sharing three wishes. The question that I noted down was: how can we use AI tools to generate personas from three wishes? Tools such as ChatGPT integrate different bits of text together, and the generated personas could help us to think differently.

Core-GPT

Matteo Cancellieri and David Pride, both from the Knowledge Media Institute, gave what was pitched as a KMi product announcement: they introduced CORE-GPT. Their project aims to combine open access materials with AI for credible, trustworthy question answering. The aim is to attempt to reduce the number of ‘hallucinations’ (made up stuff) that might be produced through tools such as ChatGPT, drawing on information from open access papers. More information about the initiative is available through a blog article, Combining Open Access research and AI for credible, trustworthy question answering, and through the CORE website.

ChatGPT and assessment

Dhouha Kbaier from School of Computing and Communications shared some concerns and points about assessment. Dhouha is module chair of TM355 Communications Technology. Following the Covid-19 pandemic, students are assessed through a remote exam. In their exam, students need to draw on discussion materials, and find resources and articles. Educators need to make students aware that there are tools that can detect text generated by large language models, and AI tools can create errors (and hallucinations).

One of the points I noted was: there is the potential need to adapt our assessment approaches. Educators also have a responsibility to do what they can to remove a student’s motivation for cheating. Ultimately, it isn’t in their best interests.

Can students self-learn with ChatGPT?

Irina Rets from the OU Institute of Educational Technology (IET) asked some direct questions, such as: can students learn through ChatGPT? Also, can AI be a teacher? In some respects, these are not new questions; a strand of research that links to AI and education has been running for a very long time. Some further questions were: who gets excluded? Also, what are the learning losses, and learning gains? Finally, how might researchers use these tools?

Chat GPT - Content Creation with AI

Manoj Nanda from the School of Computing and Communications also suggested that AI might be useful for idea generation. Manoj highlighted a couple of tools that I had not heard of before, such as DALL-E 2 (OpenAI website), which can generate an image from a textual description. Moving to an entirely different modality, he also highlighted Soundraw.io. Manoj emphasised that a key skill is using appropriate prompts. This relates to an old computing adage: if you put garbage in, you’ll get garbage out (GIGO).

Developing playful and fun learning activities

Nicole Lotz from the School of Engineering and Innovation (E&I) sees tools such as ChatGPT as potentially useful for creative exploration. Nicole is module chair of U101 Design thinking, which is a first level design module. The ethos of the module is all about playfulness, building confidence, and learning through reflection. Subsequently, there may be opportunities to use what ChatGPT might produce as a basis for further reflection, development and refinement.

"I am the artist Riv Rosenfeld" - How ChatGPT is your new neoliberal friend

Tracie Farrell, from the Knowledge Media Institute, works at the intersection between AI and social justice. Tracie asked ChatGPT to write a paragraph about her friend and artist, Riv Rosenfeld. There was a clear error, which was that ChatGPT got their pronouns wrong. An important point is that “ChatGPT doesn’t know your truth”. In other words, the perspective that is generated by large language models comes from what is written or known about you, and this may be at odds with your own perspective. There are clear and obvious risks: marginalised groups are often not as visible. Biases are perpetuated. Some key questions are: who will be harmed, who will be helped, and to what extent (and how) will these emerging tools reinforce inequality?

Discussion

After the short presentations, we went into a plenary discussion. It wasn’t too long before the history of AI was highlighted. John highlighted the two schools of thought about AI: a symbolic camp, and a statistical camp, and suggested that in the future, there might be a combination of the two. This related to the earlier point that these AI tools can’t (yet) do logic very well.

A further comment reflected an age old intractable problem that hasn’t been solved, and might never be solved, namely: we still haven’t defined what intelligence is. In terms of AI, the measure of intelligence has moved from playing chess, through to having machines do things that humans find intrinsically easy to do, such as assess a visual scene, and communicate with each other using natural language. The key point in the discussion was, of course: we need to ask again, what do we mean by intelligence?

Whenever a technology is discussed, an accompanying discussion of a potential digital divide is never too far away. AI may present its own unique divides: those who know how to use AI tools and can use them effectively, and those who don’t know about them, and are not able to use them. There are clear links to the importance of equity and access.

During the discussion, I noted down the words: “If you’re a novice programmer, what blocks you is your first bug”. In other words, knowing the fundamentals and having knowledge is important. Another phrase I noted down was: “It is perhaps best to view them as fallible assistants”.

Given their fallibility, making judgements about when to trust what an AI tool has produced, and when not to, is really very important. In other words: it is important to think critically, and this is something that only us humans can do.

Reflections

This was a popular event; approximately 250 people attended the first few presentations.

The presentations were quite different to each other. Some explored the question “to what extent might these tools present risks to academic integrity?” Others explored “how can these tools help us with creativity and problem solving?” The important topic of ethics was clearly highlighted. It was also interesting to learn about work being carried out within KMi, and the reference to the emergence of an institutional AI strategy (although I do hold the view that this should be thoroughly and critically evaluated).

I enjoyed the discussion section. In some respects, it felt like coming home. I studied AI as an undergraduate and a postgraduate student over 20 years ago, when the focus was primarily on symbolic AI. At the time, statistical methods, which include neural networks, were only just beginning to make an appearance. It was really interesting to see the different schools of thought being highlighted and discussed. During the discussion session I shared the following memorable definition: AI is really clever people making really stupid machines to do things that look clever.

I confess to having been around long enough to know of a number of AI hype cycles. When I was a postgraduate student, I learnt about the first generation of AI developments. I learnt about chess and problem solving. I remember that proponents at the time were suggesting that the main problems with AI had been solved, which had the obvious implication that we would soon have our own personal robots to help us with our everyday chores.

The reality, of course, turned out to be different, since some of those very human problems, such as vision, sound and language were a lot harder to figure out. This meant there were no personal robotic assistants, but instead we did get a different kind of personal digital assistant.

Despite my cynicism, one aspect of AI that I do like is that it has been described as “applied philosophy”. When you start to think about AI, you cannot get away from trying to define what intelligence is. In other words, the machine becomes a mirror to ourselves; the computer helps us to think about our own thinking.

I once heard a fellow computer scientist say that one of the greatest contributions of computing is abstraction. In other words, when making sense of a difficult problem, you look at all its elements, and then you go on to create a new representation (or form) of the problem which then enables you to make sense of it all. I remember another computer science colleague saying, “when you get into trouble, abstract your way out of difficulty”. This can also be paraphrased as: “go up a level”.

We’ve all been in that situation when we’ve had multiple search engine tabs open, and we’re eyeballing tens of thousands of different search results. In these circumstances, we don’t know where to begin. Perhaps this is the problem that these large language models aim to resolve: to produce a neat summary of an answer we’re searching for in a neatly digestible format.

To some degree, generative AI can be thought of as “going up a level”, but the way you go up a level may well be driven by the data that is contained within a large language model. That data, of course, might well be incorrect. Even if you do “go up a level”, you might be going up in entirely the wrong direction.

All these points emphasise the importance of taking a critical perspective of what all these new-fangled AI tools produce, but this does require those interpreting any results to have developed a critical perspective in the first place. We need a critical perspective to deal with instances where an AI tool might well provide us with not just machine generated “hallucinations” but also misinformation.

During my bit of the talk, I shared a perspective that I feel is pretty important: “the most important thing in education isn’t machines or technologies, it’s people”. When we’re thinking about AI, this is even more true than ever. A screen of text looks like a screen of text. A teacher, tutor or lecturer can tell you not only what is important, but why, and what its consequences might mean to others.

I do feel that it is very easy to get carried away by the seemingly magical results that ChatGPT can produce. I also feel that it is important to view these tools with a healthy dose of AI cynicism and scepticism. If AI is applied philosophy, and this new form of AI enables us to more readily hold up a mirror to ourselves, it is entirely possible that we might not like what we see.

It is entirely possible that generative AI tools may well “read” this summary, and these reflections might well help these uncanny tools answer the question “how do humans perceive generative AI?” I’ll be interested to see what answer it produces.

Returning to the implicit question presented in the title of this event, “how is generative AI going to change everything?”, the cynic in me answers: “I doubt it”. It is, however, likely to change some things.

Other resources

A few weeks before this event, I was made aware of another related event which took place on 16 March, entitled Teaching with ChatGPT: Examples of Practice (YouTube playlist). This event was a part of a series of Digitally Enhanced Education Webinars from the University of Kent. These presentations are certainly worth a visit, if only to hear other voices sharing their perspectives about this topic.

After this blog was published, Arosha Bandara sent me a link to the following article: Stephen Wolfram writings: What Is ChatGPT Doing ... and Why Does It Work? It is quite a long read, and it is packed with detail. It's also one of those articles that will take more than a few hours to work through. I'm sharing it here for two reasons: so I know where to find it again, and just in case others might find it of interest.

Acknowledgements

The event was a KMi Knowledge Makers event. Many thanks to John for inviting me, and encouraging me to participate. Many thanks to all the presenters; I hope I have managed to share some of the key points of your presentation, and apologies that I haven’t managed to capture everyone’s presentation. The event was organised by Lucas Anastasiou (PhD Research Student), Shuang Ao (PhD Research Student), Matteo Cancellieri (Lead Developer - Open Research), John Domingue (Professor of Computer Science), David Pride (Research Associate) and Aisling Third (Research Fellow). Thanks are also extended to Arosha for sending me the Wolfram article.

Addendum

A couple of weeks after the event, I was sent a note by a colleague. Someone in KMi may have asked ChatGPT to write a summary of this article. A link to that summary is available through a KMi blog. I have no idea to what extent it may have been edited by humans. This made me wonder: I wonder how ChatGPT might summarise the summary.

Christopher Douce

Study Skills Resources: what is available?

Edited by Christopher Douce, Monday, 1 Mar 2021, 17:05

The Open University provides a lot of study skills resources, but these are scattered across a number of different sites. This blog post is intended to provide a quick 'summary page' of some of the resources that might be useful for anyone who is studying with the OU (or, in fact, studying at any other university).

Firstly, a book

After enrolling for my first OU module, I was sent a textbook called The Good Study Guide by Andrew Northedge. I didn't ask for this book, and I had never seen this book before. In fact, I was really surprised to get an unexpected book!

I found the time to sit down and read it, and this was time well spent; it offered a wealth of study tips, resources and strategies.

If you're an OU student and you don't have this book, then do get a copy. If you're an existing OU student, then do make the time to look over this book time and time again: it's really useful.

I think I have once written that I hold the view that if I had learnt about this book during my undergraduate days, I might have got better scores in both my essays and my exams!

Skills for Study: a really useful resource

There are some really useful resources that are available online. I particularly recommend that everyone visits the Open University Skills for Study website.

There are two really useful parts of the site (which is separated into tabs): a section about preparing and writing assignments and another section that is about revision and examinations. The preparing and writing assignments section is particularly useful; it offers ideas about how to begin an assignment, create a draft, and think about how to edit what has been written.

There are also a set of downloadable study skills booklets. Key topics include: thinking critically, reading and taking notes, and developing effective study strategies. One particularly useful booklet is: preparing assignments (PDF). It contains some really useful sections about paraphrasing, quoting and referencing, and improving your written English.

Library resources

The OU library is massive: it enables students to access papers and publications that are about anything and everything. The library has developed a set of useful study skills resources, but these are not very easy to find.

In the help section, there is a link to a section that is all about Referencing and Plagiarism (OU Library website); it contains a really nice animation that explains things. One thing to remember is that plagiarism is a term that can be pretty emotive. A key point is that it's important to make sure that you reference all the sources that you use. Appropriate referencing does two things: (1) it shows your tutor how much you've been reading, and (2) it shows how you are becoming familiar with what it means to do academic writing.

A further link leads to something called the avoiding plagiarism pathway (OU being digital). This is one page of a wider set of library resources called Being Digital (OU Library services site), which is all about developing digital literacy skills. These pages contain a set of really useful interactive activities (OU being digital) that aim to develop computing, IT, and digital literacy skills.

The library also provides a link to something called the OU Harvard referencing guide. This shows you how to refer to any kind of resource: books, academic papers, conference proceedings, blogs, news articles and videos. If you're not sure whether you can reference something, do check out the OU Harvard guide; this should offer a bit of useful guidance.

Developing good academic practice

The library resource about Referencing and Plagiarism links to a short course that is called Developing Good Academic Practice (OU DGAP website). Although this is a short resource, it is very useful. It helps you to understand what good academic practice is and why it is important.

English language development and Open Learn resources

Some programmes aim to integrate English language development and skills into their modules; this is what Computing and IT does. Other subjects or programmes are slightly different: there is a module called L185 English for Academic Purposes which some Science students might study. Business studies students might study LB170 Communication skills for business and management.

One really cool thing that the Open University does is make a small percentage of its modules available to everyone for free through a site called OpenLearn (OU OpenLearn website). Up to ten percent of all OU modules may be available through OpenLearn, and it also makes some older modules available too.

Essentially, OpenLearn offers free courses. There are a series of English language skills courses (OpenLearn site) that anyone can access. One course, entitled English: skills for learning, looks to be particularly useful. Here's a description:

“This course is for anybody who is thinking of studying for a university degree and would like to develop the English reading and writing skills needed to succeed. You'll be introduced to academic reading and effective note-making strategies. You'll develop your essay writing. You'll look at academic style and vocabulary-building strategies. You'll also enhance your understanding of sentence structure and punctuation. You will learn through a range of engaging activities aimed at extending your existing language skills.”

A more recent OpenLearn resource has the title: Am I ready to be a distance learner? The summary of this module says it "will help to boost your confidence. You'll explore useful skills so you can discover how ready you are to study and how to develop your study skills in six steps to become a successful distance learner." Sounds useful!

There are also a range of courses that come under the broad title of 'learning to learn'. One course that jumped out at me as being particularly important was called: Learning to learn: Reflecting backward, reflecting forward; I'm mentioning this since reflective writing is particularly important at higher levels of study.

There are also some more OpenLearn resources for postgraduate modules, called Succeeding in postgraduate study; certainly worth a look if you're considering taking an MSc.

Resources from other institutions

Students in other universities face exactly the same challenges as students in the OU. Since study skills and writing are important issues, other universities have developed their own resources. A small sample of what is available is given below.

One thing to add is: if you're an OU student, do look at the OU resources first before looking elsewhere. It's not that other institutions will offer bad or wrong advice (I always believe that different perspectives can be really useful in terms of understanding things), it's more a matter of terminology: the OU loves its abbreviations and sometimes has a certain way of doing things.

Final thoughts

This post contains links to many different resources and it might feel a bit overwhelming. The trick is to figure out what you need, to consider how you learn, and then to have a look at some of the resources to see if you find them useful. If you need additional help in figuring out what you need, you should also consider giving your subject student support team a ring.

Acknowledgements

I would like to thank Tricia Cronin and Ann Matsunaga; I have drawn on some of the links they have provided in their Resource to support students with English as a second language document.

Updated 1 March 2021

Permalink 2 comments (latest comment by Reaghan Reilly, Thursday, 29 Jul 2021, 01:32)
Christopher Douce

Academic conduct symposium – Towards good academic practice (day 2)

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 23 Feb 2021, 18:57

This is the second post in a series of two about an academic conduct symposium that I attended at the Open University between 20 March and 21 March 2013.

The difference between the first day of the conference and the second was that the first day was more focussed towards the student and the essential role of the associate lecturer.  The second day (in my opinion!) seemed to be more focussed towards those who have the role of dealing with and working with academic conduct issues. Below is a brief summary of the three workshop sessions, followed by some final reflections on the whole symposium.

Student perspectives on good academic practice

Pete Smith, from the Faculty of Education and Languages, was the facilitator for my first workshop of the day.  This session addressed a different perspective to all the previous workshops.  It aimed to ask the question: 'what does the published literature say about the student perspective [or 'views'] on academic conduct?'  Pete presented what was, in essence, a short literature review of the subject.  I was really struck by the wealth of information that Pete presented (which means that I'm only going to pick out a number of points that jumped out at me).  If you're interested in the detail of the research that Pete has uncovered (which is almost akin to a masters thesis), it might be a good idea to contact him directly.

Some key notes that I've made from the session include the point that learners can perceive themselves in different roles in terms of how they relate to issues of academic conduct.  There are also differences of perceived seriousness and attitudinal differences.  Factors such as topic knowledge, cultural influences, demographic variables, new technology and conflicting advice are all considered to play a part.

The reasons for academic misconduct are multiple, ranging from a genuine lack of understanding, through attempts to gain greater levels of efficiency, to temptation, cultural differences and beliefs.

When looking more deeply at the research, it was commented that there was a lack of robust evidence about the success of interventions.  We don't know what works, and we also don't have consistent guidance about how to begin to tackle this issue.  One important perspective is that everyone is different, and knowledge and understanding of a learner is needed to make the best judgement about the most appropriate approach to take.

What resources are available?

This session was facilitated by Jenny Alderman from the Open University Business School and another colleague who works in the Academic Conduct Office.

One of the reasons why academic conduct is considered to be so important is that there is an important principle of ensuring that all students are given fair and equitable treatment.  Jenny reminded us that there are considerable costs in staffing the academic conduct office, running the central disciplinary and appeal committees and supporting the academic conduct officers.

An interesting debate that emerged from this session related to the efficacy of tools.  Whilst tools such as TurnItIn can be useful, it is necessary to take time to scrutinise the output.  There will be some clear differences between submissions for different faculties.  Some more technical subjects (such as mathematics) may lead to the production of assignments that are necessarily similar to one another.  This has the potential to generate false positives within plagiarism detection systems.

Key resources: code of practice for student assessment, university policy on plagiarism, developing good academic practice website (which was linked to earlier), and the skills for study website which contains a section entitled developing academic English (Skills for Study).

Other resources that could be useful include Time Management Skills (Skills for Study), Writing in your own Words (Skills for Study), Use of source Materials (Skills for Study) and Gathering Materials for preparing for your assignments (Skills for Study).

The library have also produced some resources that can be useful.  These include a video about avoiding plagiarism (which features 'Bob').  The library have some resources about digital literacy entitled 'being digital'.  There is also a plagiarism pathway (Being Digital, Open University Library), which contains a number of activities.  (At the time of writing, I hadn't seen these before - many of these resources were pretty new).

As an aside, I had some discussions with colleagues about the need to more fully embed academic English into either individual modules or programmes of study, and I was directed to a module entitled L185 English for Academic Purposes.  Two fundamental challenges that need to be overcome include that of will and resource.  This said, there are three sections of the L185 module that are available freely on-line through OpenLearn.  These are: Paraphrasing Text, Summarising Text and How to be a Critical Reader.

Since the workshop, I've also been directed towards a resource entitled, Is my English good enough?  This page contains a link to the English for OU study pages.

What works?

The final session, facilitated by Jonathan Hughes, was all about what interventions might successfully nurture good academic practice (and what we might be able to learn from student casework).

Connecting back to earlier debates surrounding the use of technology to detect plagiarism, the issue of spurious reports was discussed.  In instances where we are unsure of the situation, we were reminded that the right thing to do is refer cases to the faculty academic conduct officer.

I've noted that academic conduct is an issue of education, and an important part of this is sharing the university view of what plagiarism is.  It is also connected with the judicious application of technology in combination with human judgement, and the adoption of the necessary processes to ensure appropriate checks and balances.  (Again, all this is from the notes that I made during the event).

During this session I remember a debate about whether it was possible to create something called a 'plagiarism proof assignment'.  One contributor said, 'if you write a question, if you can do a quick internet search for an answer, then it is a poor question'.  The point being that there is an intrinsic connection between academic conduct and good instructional design.

One question that arose was whether the university should be telling our students more about tools such as TurnItIn and Copycatch.  Another approach is, of course, to have students submit their own work through these detection tools and also permit them to see their reports (which is an approach that other institutions adopt). 

Final thoughts

This conference or symposium was very different to other conferences I've been to before.  It seemed to have two (if not more) main objectives.  The first was to inform other people within the university about the current thinking on the subject and to share more information about the various policies and procedures that the university employs.  The second was to find a space to debate the different conceptions, approaches and challenges which come with the difficult balancing act of supporting students and policing academic conduct.

In terms of offering a space that informs and facilitates debate, I felt the conference did a good job, and I certainly feel a bit more equipped to cope with some of the challenges that I occasionally face.  Moving forward, my own objective is to try my best to share information about the debates, policies and resources with my immediate colleagues. 

I came away with three take away points.  The first relates to the definition of what 'plagiarism' is.  It now strikes me that there are almost two different definitions.  One definition is the internal definition which acknowledges that students can both deliberately and inadvertently fail to acknowledge the work of others.  The other more common definition is where plagiarism can be interpreted (almost immediately) as maliciously and deliberately copying someone else with the clear intention of passing someone's work off as your own.  Although the difference is one that is very subtle, the second definition is, of course, much more loaded.

The second take away point lies with the policies and procedures.  I now have a greater understanding of what they are and the role of the academic conduct office.  I can clearly see that there are robust processes that ensure fairness in academic conduct cases.  These processes, in turn, help to maintain the integrity and validity of the qualifications.

The final take away point is that I am now a lot clearer in understanding what I need to do, from my perspective, to help both students and tutors deal with different types of academic conduct.

Copies of slides and videos are now available on the Academic Conduct Site (Open University staff only)

Permalink 1 comment (latest comment by Jonathan Vernon, Thursday, 18 Apr 2013, 21:20)
Christopher Douce

Academic conduct symposium – Towards good academic practice (day 1)

Visible to anyone in the world
Edited by Christopher Douce, Tuesday, 23 Feb 2021, 18:58

This is the first of two posts about an academic conduct symposium that I attended at the Open University between 20 March and 21 March 2013.  I'm mainly writing this as a broad 'note to self', a reminder of some of the issues that emerged from the event, but I hope it will be useful for my OU colleagues and others too.

The symposium was kicked off by Peter Taylor who spoke briefly about an academic practice project that ran in 2007 which led to the last conference (which coincided with the launch of policies) in 2009.  Peter emphasised the point that the issue of academic conduct (and dealing with plagiarism cases) is fundamental to the academic integrity of the university and the qualifications that it offers.

Each day of the symposium had three parallel sessions which comprised three different workshops.  Each workshop covered a slightly different aspect of academic conduct.  I'll do my best to present a quick summary of each one.

Keynote: Carol Bailey, EFL Senior Lecturer

Carol Bailey, who works as an English as a Second Language lecturer at the University of Wolverhampton, gave a keynote that clearly connected with many of the challenges that the symposium aimed to address. 

One of Carol's quotes that I particularly remember is a student saying, 'I never wrote such a long essay before'.  This is a quote that I can directly relate to.  It also relates to the truth that academic writing is a fundamentally challenging endeavour; it is one that requires time and experience.  To some, the process of writing can be one that is both confusing and stressful.   Students might come to study having experienced very different academic approaches to the one that they face either within the Open University or within other UK institutions - situations where the teachers provide all the resources necessary to complete study, situations where access to information technology may be profoundly limited.

When it comes to study, particularly in distance education, writing is a fundamental high-level skill that is tested from the very start of a module.  Students need to quickly grasp the idiolect of a discipline and its sets of subject words to begin to understand what it means to become a part of a 'discourse community'.  It takes time to develop an understanding of what is meant by the 'casual elegance' of academic writing.

There is also the tension between accuracy and personal expression.  When faced with new study challenges where students are still grappling with the nuances and rules of expression, misunderstandings of what is required can potentially lead to accidental academic misconduct.  The challenge of presenting your ideas in your own voice is one that is fundamental to study within the Open University.

Hide and Seek: Academic Integrity

Liz McCrystal and Encarna Trinidad-Barnes ran what was my first workshop of the symposium.  The premise of this workshop was that 'Information is hidden and we need to seek it out'.  Encarna opened with a question, which was, 'what do you understand by academic integrity?'  Some answers included: honesty, doing it right, following academic conventions, crediting other people - all these answers resonated with all the participants.

We were then directed to some group work.  We were asked a second question, which was, 'how do you find information [about academic integrity]?'  Our group came up with a range of different answers.  Some of them were: official notes offered to tutors by module teams, the developing good academic practice site (OpenLearn version), assessment guides (also provided by the module team), helpful colleagues and representatives of module teams.

Another question was, 'when would you expect students to look at or be directed to the information?'  Answers included: ideally part of the induction process, before the first assignment, feedback from an assignment, tutorials (and associated connections with the on-line forums).  One perspective was that issues surrounding good academic practice should be an integral part of the teaching (and learning) that is carried out within a module.

A final question that I noted down was, 'is it clear what academic integrity is?'  The answer that we arrived at was that the information is there, but we have to actively seek it out; there's also a responsibility for the university, and for those who work for the university, to offer proactive guidance to students too.

A useful resource that was mentioned a couple of times was Writing in your own words (OpenLearn), which contains a very useful podcast.

Plagiarism: Issues, Policy and Practice

The second workshop I attended was facilitated by Anne Martin from the Faculty of Health and Social Care.  In comparison to the first workshop, this workshop had a somewhat different focus.  Rather than focussing on how to find stuff, the focus was on the importance of policies and practice.  Key phrases that I noted included: university and policy context, definitions of terms and the importance of study skills.

On the subject of process, there was some discussion about the role of a university body called the academic conduct office.  The office accepts evidence, such as reports (from plagiarism detection tools), explanations from students, script feedback, whether additional support has been arranged for a student.  An important point was made that students always have the right to appeal.

One of the (very obvious) points that I've noted is that there is no one 'gold standard' in terms of detecting academic conduct issues (there are also different ways of dealing with the issue).  The role of the associate lecturer (AL) or tutor is just as important as automated tools such as TurnItIn (website) and Copycatch. 

Technology, of course, isn't perfect, but technology can be used to highlight issues before they may become significant.

Fuzzy Lines: Determining between good and bad academic practice

The third and final workshop of the day was facilitated by Arlene Hunter and Lynda Cook.  When faced with a report from a plagiarism detection system (such as TurnitIn) it's important to ask the question of 'what has happened here?'  Very often, things are not at all clear cut.  The reports that we are presented with can be, without a doubt, very ambiguous.

During this session I was introduced to some different ways to characterise or to think about evidence that relates to academic practice.  Examples include poor paraphrasing and shadow writing, excessive use of quotations, and the use of homework sites and social networking tools.  (I now understand shadow writing to be where a writer might use different words but follows almost the same structure as another document or source).  I also remember that there was some discussion that related to the university social networking policy.

In many (if not most) situations there is no distinct line between poor study skills and plagiarism.  A key point was: if in doubt, pass it on to the academic conduct office.  On the other hand, it is imperative to help tutors to help students focus on developing academic writing and literacy skills.

Plenary

The final session of the day was a short plenary session which highlighted many of the issues that were brought to the fore.  These included the tension between policing academic standards whilst at the same time helping students to develop good academic practices.  There was also some debate that related to the use of tools.  The university makes use of plagiarism detection tools at the module team level and there was some debate as to whether it might also be useful to provide access to detection software to associate lecturers, since they are arguably closer to the students. 

Another challenge is that of transparency, i.e. how easy it is to get information about the policies and procedures that are used by the university.  It was also mentioned that it is important to embed the values of good academic practice within modules and that the university should continue, and ideally do more, to support its associate lecturers when it comes to instilling good academic practice amongst its students.  An unresolved question that I had which related to supporting of students whose English is a second language was touched on during the second day.

All in all, it was a useful day.  Of the two days, this first day was the one that was more closely aligned to the challenges that are faced by the tutors.  What I took away from it was  a more rigorous understanding and appreciation of the processes that have been created to both support students but also to maintain academic integrity.


