On the evening of 3 Feb 2021 I attended a briefing for a new module, Machine learning and artificial intelligence (TM358), that is scheduled to begin in October 2021.
This blog post offers a short summary of what was covered during the briefing. I must, however, begin with a disclaimer: some of the detail presented here may well be subject to change as the module moves to production.
The briefing began with Neil Smith, module team chair, who said that “machine learning has been one of the biggest changes in computing in the past 20 years”. Neil also said that artificial intelligence has not featured within the computing curriculum for 5 years. In some ways, this module fills an important gap in the computing undergraduate curriculum.
TM358 is a part of a new qualification: BSc (Hons) Data Science. R38, as it is known, is a joint degree with the schools of Maths and Stats and Computing. To study the module, there are two important prerequisite modules: MU123 Discovering Mathematics and M269 Algorithms, data structures and computability.
From the computing side of the pathway, students can also study TM351 Data management and analysis and TM356 Interaction design and the user experience. Students would begin their level 1 computing studies by studying TM111 Introduction to computing and information technology 1 and TM112 Introduction to computing and information technology 2.
TM358 has a particular focus on deep learning with neural networks. I made a note that the module adopts an engineering approach and makes use of toolkits and languages that may already be familiar to some students. There is also a strong thread of social impact and the importance of ethics. Key tools that students will use include Python (featured in TM112) and Jupyter notebooks (featured in TM351). Datasets that students will be using will be provided by the module team.
Like many modules, it begins with an introductory (or foundation) section, and then subjects are introduced through a series of study blocks.
This first section sets the scene and also presents a historical perspective. It also introduces what is called the “compute environment”, which is the environment that students will be using and studying. This first section will introduce different types of data, mention datasets, and introduce concepts and terms which will be explored later.
Block 1: Introduction to neural networks/deep learning
This first main block introduces artificial neural networks and some accompanying mathematics. The module offers students a handwriting recognition example. It also looks at AI and machine learning transparency challenges and what they may broadly mean to society.
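To make the idea of a trainable network concrete, here is a toy sketch of my own (not module material): a single sigmoid neuron learning the logical AND function by gradient descent. All of the numbers and names are invented for illustration.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and target outputs for logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # two weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

for epoch in range(5000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = (y - t) * y * (1 - y)  # chain rule through the sigmoid
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

Real handwriting recognition replaces this single neuron with many layers of them, but the training loop follows the same pattern: compute an output, measure the error, and nudge the weights.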
Block 2: Image recognition with convolutional neural networks
This block looks at limitations of traditional neural network systems. It examines the challenge of image classification. Students will be introduced to the concepts of neural network training, and to issues of data bias.
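The core operation behind convolutional image recognition can be sketched in a few lines (my own illustration, not the module's): a small filter slides across an image and responds wherever the pattern it encodes appears — here, a vertical dark-to-bright edge.

```python
# A tiny 4x4 "image": dark on the left, bright on the right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A 2x2 filter that responds strongly to a vertical edge.
kernel = [[-1, 1],
          [-1, 1]]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            # Multiply the filter against the window at (r, c) and sum.
            total = sum(img[r + i][c + j] * k[i][j]
                        for i in range(kh) for j in range(kw))
            row.append(total)
        out.append(row)
    return out

print(convolve(image, kernel))  # the middle column lights up where the edge is
```

In a convolutional network the filter values are not hand-picked like this; they are learned during training.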
Block 3: Recurrent neural networks and long short term memory networks
Some key questions that are asked by this block include: why do we need sequential modelling, and how does it differ from the previous types of learning? Applications such as speech recognition and sentiment analysis (which is about looking at whether things are viewed positively or negatively) are used. Recurrent neural networks (RNNs), bidirectional RNNs and long short term memory networks (LSTMs) are studied.
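The distinctive feature of a recurrent network is that the same weights are reused at every step of a sequence, with a hidden state carrying information forward. A minimal sketch (an assumed example of mine, with made-up weight values):

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    # The new hidden state combines the current input with the previous
    # hidden state; the same weights are shared across all time steps.
    return math.tanh(w_x * x + w_h * h + b)

sequence = [1.0, 0.0, 1.0, 1.0]
h = 0.0  # initial hidden state
for x in sequence:
    h = rnn_step(x, h)
print(round(h, 3))  # the final state summarises the whole sequence
```

LSTMs elaborate on this step with gates that control what the hidden state remembers and forgets, which helps with longer sequences.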
Block 4: Unsupervised learning and autoencoders
I noted down a question that is addressed in this block, which is: what is unsupervised learning? Another topic is autoencoders and their structure. I also made a note that there is a section about ethical issues.
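The characteristic structure of an autoencoder is an encoder that squeezes the input through a narrow bottleneck and a decoder that tries to reconstruct it. The sketch below is my own hand-written stand-in for what a trained network would learn (a 4–2–4 mapping, with the "weights" replaced by explicit code):

```python
def encode(x):
    # 4-dimensional one-hot input -> 2-dimensional binary code (the bottleneck).
    index = x.index(1)
    return [index // 2, index % 2]

def decode(code):
    # 2-dimensional code -> reconstructed 4-dimensional one-hot vector.
    index = code[0] * 2 + code[1]
    return [1 if i == index else 0 for i in range(4)]

for i in range(4):
    x = [1 if j == i else 0 for j in range(4)]
    assert decode(encode(x)) == x  # perfect reconstruction through the bottleneck
print("all four inputs reconstructed")
```

In a real autoencoder both halves are neural networks trained together, and the interesting part is the compressed representation that emerges in the middle — no labels are needed, which is what makes it unsupervised.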
Block 5: Alternatives to deep learning
Although there appears to be an emphasis on neural networks, they aren't the only approach. This block says something about other approaches, such as decision trees and Bayesian methods, exploring the reasons why different approaches might be chosen. Students will be using notebooks to study different datasets.
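One reason a decision tree might be chosen over a neural network is that the model is simply a sequence of readable tests on the features. A toy tree, written out by hand with invented features and thresholds:

```python
def classify(temperature, humidity):
    """Decide whether to play outside -- an invented example."""
    if temperature > 25:
        if humidity > 70:
            return "stay in"   # warm but uncomfortably humid
        return "play"          # warm and dry
    return "stay in"           # too cold

print(classify(30, 50))  # → play
print(classify(30, 90))  # → stay in
```

A learning algorithm would choose these tests and thresholds automatically from data, but the resulting model stays inspectable in a way that a trained neural network generally does not.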
Block 6: Handling data
There is another question to answer, which is: why do we need to pre-process the data? I noted down the concept of discretisation and discretisation techniques. Another question that is addressed is: what is the effect of imbalanced data on learning algorithm performance? The block also covers solutions for the classification of imbalanced data.
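To illustrate one of the pre-processing techniques mentioned, here is a minimal sketch of equal-width discretisation (my own assumed example, not the module's): continuous values are mapped into a fixed number of bins of equal width.

```python
def discretise(values, n_bins=3):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    bins = []
    for v in values:
        b = int((v - lo) / width)
        bins.append(min(b, n_bins - 1))  # clamp the maximum into the last bin
    return bins

ages = [18, 22, 35, 41, 60, 64]
print(discretise(ages))  # → [0, 0, 1, 1, 2, 2]
```

Other discretisation schemes (equal-frequency, for instance) make different trade-offs, which is presumably the sort of comparison the block explores.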
Tuition and assessment model
Tutors will be required to give 10 hours of tutor or progression time. Progression time refers to time that isn’t allocated to tutorials but is used to help with student support and guidance. All tutorials will be delivered online through 2 clusters (groups of tutors). There is expected to be a tutorial to help students to prepare for each TMA and the EMA, with another tutorial for each block.
The module will use something called single component assessment, which means that the TMA results directly contribute to the final score, as opposed to students having to get distinctions in both the TMAs and an examinable component to gain a distinction.
There will be 3 TMAs (with an increasing percentage of the overall score), with an EMA contributing 60% of the final score. For the EMA, “students [will] be given a dataset and a task to accomplish using the techniques and tools taught in the module”. Students will also “write a report detailing the actions taken, justifications for the actions and decisions taken, results achieved, their understanding of the results and any ethical issues.”
I studied AI as an undergraduate student, and again as a postgrad. My undergraduate AI module contained a lot about how to solve problems by searching (we also used a fancy language called Prolog). My postgraduate studies touched on the interesting philosophical questions that thinking about intelligence immediately provokes. I also remember that the last AI module that the OU used to have, M366 (if I remember the module code correctly) had a slightly different character to it.
There were terms in the TM358 briefing that I didn’t recognise, which suggests that things have certainly moved on a lot since I last studied AI. Two big changes may be the substantial increase in processing power that we now have at our disposal, and the availability of tools that we can draw upon to analyse data.
In terms of this module, its practical focus clearly comes through from the briefing. It seems to be about doing stuff, understanding tools and, significantly, understanding some of the ethical issues that accompany the use of these tools.
Since I have enough on as a tutor (I’m tutoring on a second level module, and a project module), I don’t have any capacity to even consider making an application. This said, I do encourage others to consider making an application, since it does look fun, and challenging too. It strikes me that there is certainly a lot to learn.
As with all tutor briefings, thanks go to all members of the module team, led by Neil Smith, who gave presentations during this short briefing session. Some of the notes presented within this blog are drawn from a PowerPoint presentation that was shown during the recruitment briefing. Acknowledgements are also given to curriculum manager Sarah Bohn, and Staff Tutors Christine Gardner and Frances Chetwynd.