Christopher Douce

TM470 On Reflection

In the study materials that describe the EMA, in the section that has the title ‘What you need to demonstrate in your Final Project Report’ there are two important bullet points which are related to reflection. These are:

  • Critically review how you have tackled the project.
  • Learn independently and reflect on what has been done, with a view to improving skills and knowledge.

These bullet points are learning outcomes 5 and 8. To get marks for any learning outcome, you need to show evidence of attaining it in your EMA report through what you write. Learning outcomes 5 and 8 are grouped together, along with learning outcome 7, which is all about effective communication.

The study materials that have the title ‘evaluating your work’ include a useful section called ‘carrying out the evaluation’. It splits reflection into two parts: reflection before action, and reflection after action.

Reflection before action is defined as “purposeful thinking through of something before starting it”. You can use your project log to make notes of these thoughts as you progress throughout your project. Purposeful thinking is also demonstrated through the extent of your planning, and the way that you refer to resources, skills, and risks.

Reflection after action refers to questions that you can ask yourself about the effectiveness of what you have achieved, what has been learnt, what went well and what didn’t go well, and what you might have done differently.

What follows are some reflections on how to respond to two of the learning outcomes that relate to, well, reflection. For clarity, I’ve also included bits from the indicative marking scheme that everyone has access to. This is then followed with some suggestions about how you could conclude your project report for maximum impact.

LO5 (Critically reviewing approach)

This learning outcome is quite concise: “critically review how you have tackled the project”. The indicative marking scheme offers a bit of helpful guidance about what the examiners are looking for: “all students should update a project log/reflective journal on a weekly basis. This should be included as an appendix. Within the main body of the TMA, referring to the log/journal, identifying what went well and why; identifying what didn’t go well, why and what to do as a result.”

It is useful to pull this apart. Clearly, it is important to include your project log as an appendix. If you haven't been creating one as you go, don’t go making one up; that would be a waste of your time. Instead, just provide what you have. If you only have a couple of entries, this is a fine opportunity to share your views about the idea of a project log. When it comes to reflections, strong opinions are welcome.

The second bit of the guidance encourages you to identify ‘what went well and why’ and ‘what didn’t go well [and] why’. Reflect on your choice of project model by asking yourself “why did I do what I did?”, “did it work?” and “was it easy to work with?”

Taking a very practical approach, one suggestion is to take those questions and answer them as directly as you can. Another question to ask yourself is: ‘were there any surprises?’ If there were no surprises and everything ran as expected, then perhaps that might have been a surprise in itself.

Sometimes, projects do not go to plan. If you were rudely interrupted by the vicissitudes of life during your project, do tell the examiner what happened, and how you responded to the challenges you faced. A good project report is one that is interesting. As long as your reflections are clear and reasonable, you will get the marks.

LO8 (Learning Independently)

This learning outcome is as follows: “Learn independently and reflect on what has been done, with a view to improving skills and knowledge”. As before, here is the guidance from the indicative marking scheme: “Makes progress under own supervision, communicating regularly and accurately in respect of progress. Seeks guidance when needed but offers own ideas when doing so. Has clearly developed new skills and knowledge.”

As with the previous learning outcome, it is useful to break apart the guidance for this learning outcome too. Essentially, there are two parts: evidence about communication with your tutor (which can be used as evidence for your learning during the course of your project), and the bit about “new skills and knowledge”.

For the first bit, a good idea is to provide copies of all the email messages you have exchanged with your tutor. If you have exchanged a lot of messages, it is okay to summarise them, or to give only the titles.

For the “new skills and knowledge”, ask yourself a long stream of questions beginning with ‘WH’ words (or interrogative words). These are questions beginning with: what, when, who, where, why and how. You might begin by asking yourself the following: “What did I find difficult? What did I find easy? What has helped me? What did I enjoy? Where did I go to find out information to solve my problems?”. The exact questions you choose will depend on what is important within your project. A further practical recommendation is to share something about a variety of different subjects from your project.

As mentioned earlier, make your reflections interesting. The more personal they are the more interesting they can become. If you found something profoundly difficult, or have realised that you absolutely hate JavaScript and never want to work with it ever again, it is okay to mention it. This will give your report an edge of authenticity.

Bringing everything to a close

I recommend that everyone add a short summary section to the end of their EMA report, just to wrap everything up. This summary section can become a really nice way to emphasise some of your reflections.

I would begin a summary section by mirroring your introduction. Remind the examiner what your project was all about, then say what you did by referring to the project model that you adopted. When you have done this, summarise what you have built or what you have achieved. When you get to the reflection section, say something about what you have learnt from carrying out the management of your project, say something about what you have learnt by solving the problem or doing the work, and then say something about what you have learnt about yourself.

Reflections

This section is an opportunity to practise what I have been preaching.

Whilst writing this blog post, I have realised that I have been a project tutor for almost ten years. Initially, I found it quite hard going, because every project was different, and the project module had a very different character to the other modules I had tutored. After the first presentation, I realised the importance of the learning outcomes, and adjusted my tuition practice. My role, of course, is to help everyone to get their highest possible mark. This meant that I felt that I had to make the learning outcomes more visible to all the students I support. This led me to start to blog about the project module.

A significant reflection is that a well written and well-structured reflection section can turn a good project into one that is excellent. It really is about the learning, and the expression of what learning has taken place has the power to instil confidence in examiners.

Ask a lot of ‘WH’ questions. Tell us about everything that went wrong, but also say something about why you thought it went wrong. If everything went well, tell us what you are most proud of. Then tell us what you learnt from it.

Digital skills

I recently noticed the following post made to an arts and humanities module forum. I'm sharing with permission, whilst also taking the liberty of lightly adjusting one of the sections, to share a bit more information that is specific to computing modules.

The guidance here is also especially applicable to students who are studying the project module. Knowing your way around the university’s virtual library, and how to 'dig out papers' from it, is an essential skill.

Building stronger digital skills

Whether you’re an undergraduate student, a researcher, or someone who uses technology for work, improving your digital literacy can open up a new world of opportunities and options.

If you’re looking to become a more confident user of digital tools or improve your ability to critically evaluate information, Being digital pathways from the OU Library features over 35 activities designed to develop your digital literacy skills – from managing your digital identity to making the most of online networks.

Each chunk of learning should take no more than 10 minutes to complete, making it easy to fit Being digital activities into your schedule. Why not try…

Referencing your sources

Learn how to reference books, ejournals, module materials and websites with confidence. Each activity includes a quiz to test your understanding.

Communicating online

How can you ensure your interactions with others online are appropriate and effective? How do you write for different online spaces?

Effective searching

Learn how to focus your search effectively, avoid common searching pitfalls and ensure you retrieve the best information for your needs.

Exploring your information landscape

Introducing you to the world of information at your disposal, including the Open University online library and its wide range of resources.

The Selected resources for your study list is really helpful. If you are studying computing (or software engineering), I do recommend looking at the Computing collection.

The ACM Digital Library and the IEEE Xplore libraries are extensive. If you are studying software engineering, the following journals are particularly relevant:

Acknowledgements

The above post was shared on an A335 forum, with permission from the tutor who shared the original version. The links were edited to work more directly with this blog.

TM470 Considering prototyping

If you decide to build something as a part of your project, you might want to consider building a prototype. The software engineering module defines a prototype as ‘a simplified software system that is a subset of the eventual product’. In fact, a common aim of a project is to create a prototype.

This blog post summarises different approaches to prototyping. If you use prototyping within your project, make sure that your project report justifies which approach you adopted, and why.

It is also important to make sure that the different elements of your project are aligned with each other. If you choose prototyping, you should choose a project model that works with prototyping. The interaction design project model is one that emphasises the building of prototypes, but you can also embed the building of prototypes within other iterative approaches to project management. You also need to consider any risks, any skills you need to develop, and any resources that you may need to apply.

Approaches to prototyping

There are two broad approaches: you can build a prototype which you later throw away, where your whole objective is to learn something. Alternatively, you can build a prototype with the aim of incrementally refining it into a potentially usable product.

Through the process of prototyping, you can learn about the requirements your product will need to satisfy. You can do this by asking potential users (or stakeholders) what they think about what you have built. This is the process of evaluation. You should be able to learn new things with every new version of a prototype you create.

Types of prototyping

There are two main types of prototyping: horizontal prototyping and vertical prototyping.

A horizontal prototype is a prototype that appears to show all the main functionality of a product, but not in a lot of depth. A horizontal prototype is all about breadth. It allows whoever is evaluating your prototype to get a feel for what a product does or looks like, but it might not work in any meaningful way.

A vertical prototype is a prototype that focuses on depth. It aims to present only a small amount of functionality, in a detailed way that suggests how a product may operate. In some cases, a vertical prototype may be implemented using software code. A vertical prototype is likely to focus on the essence of what a product does. Users of a vertical prototype will be able to get a feel for how a product is likely to work.

Prototyping approaches

Broadly speaking, there are two prototyping approaches: low fidelity and high fidelity.

Low fidelity prototyping is prototyping where the design of a prototype may not directly or substantially resemble the final product. A low fidelity prototype is one that is likely to be thrown away. It is typically used to find out something about the problem. It may reflect an initial design that is to be refined further when it has been evaluated. A low fidelity prototype might not do much, or it might simulate the beginnings of potential interactions.

High fidelity prototypes, on the other hand, go some way to solving the problem. They might be interactive, or they might be proof of concept demonstrators that have been created using a technology that is different to the technology used in the final version of the software product.

Low fidelity prototyping

Paper prototypes

When you think about paper prototyping, think about pens, paper, scissors and Sellotape.

Paper prototypes should be messy. Messiness is a virtue for a simple reason: they are easy to change. They can be put together quickly. If a design or an idea doesn’t work, they can be thrown away quickly.

If you have spent hours creating a detailed interface design that looks beautiful, your refined design will be harder to change, and throwing it away will become difficult. It is easier to change and to disregard something that appears to have little value. The value with paper prototyping as an approach lies with its scrappiness.

The interaction design text by Preece et al. shows different types of sketches that can be considered, in one way or another, a paper prototype:

Storyboards: a simple sketch of the environment in which a product is used, divided into cells that show a sequence of interactions. The emphasis is on the ‘sketchiness’ of the sketch. Users or stakeholders can be represented using stick figures. The interactions that are depicted have value; the quality of the sketching does not.

Card-based prototypes: a set of options or choices that are made available to the user, shared on a card. A sequence of cards is used to get an understanding of the interactions that are embodied within the product.

Interface sketches: a sketch of a screen or a display. This might include labels or icons, as well as buttons and product logos. Everything is very sketchy. You might create a number of different versions, and carry out an evaluation to choose which of the interfaces is best. Evaluation will be discussed later.

Wireframing

Although paper prototyping has obvious and clear advantages due to its low cost and high speed, senior managers may become confused by the appearance and apparent chaos of a scrappy, sketched-out design. A middle way between a paper prototype and a high fidelity prototype lies with the creation of wireframes, which can be created using dedicated wireframing software, or by creating simple mock-ups using tools that you are familiar with.

One approach to wireframing is to create a series of simple and unrefined HTML pages. Only front-end pages need be created, which can be linked together to suggest an interaction or a connection between different pages.

There are lots of wireframing tools to choose from. Three notable products are:

If you do choose a wireframing product, you need to explain why you have done so, and also explain what you hope to gain from applying it. You should also list it as a resource and say something about your skills.

PowerPoint

Sometimes tools can be used in combination with each other.

PowerPoint is, of course, a popular presentation tool. PowerPoint files can be read and opened by most devices. Rather than showing actual paper prototypes to potential users, you could instead take photographs of your paper prototypes and create a PowerPoint presentation with them. The advantage of doing this is that you could then create sequences of interactions. If you are a skilled and confident PowerPoint user, you can also have users jump to slides which show alternative interactions.

You can also use PowerPoint to show a series of wireframes that have been created using wireframing tools, especially if your chosen tool doesn’t have the ability to share a series of interactions.

Wizard of Oz

Creating computer code is expensive. It can take a lot of time. If you want to see whether an idea works or makes sense, one approach is to ‘simulate’ the computer using a variation of the Wizard of Oz experiment.

Imagine you have a paper prototype. On its own, it doesn’t do anything. During an evaluation, you might ask a user the question ‘what are you going to press?’ When they have pointed to a bit of an interface sketch, you could replace one paper prototype ‘screen’ with another, fulfilling the action as if all your bits of paper were a working product. You become the computer, or the product that is to be designed.

High fidelity prototyping

Returning to our earlier definition of a prototype, ‘a simplified software system that is a subset of the eventual product’, a high fidelity prototype might be a bigger bit of a product, but its most important characteristic is that it does something. Your product might partially solve a problem. Relating back to earlier terms, your high fidelity prototype might be a horizontal prototype or a vertical prototype. Really good projects tend to produce vertical prototypes, for the reason that they can demonstrate depth of skill and knowledge.

Physical prototyping

Some projects combine physical design and software design. This may be especially the case for students who have studied a combination of design and computing. Physical prototyping (in the sense that it is considered here) can be considered to be analogous to paper prototyping. Creative mock artefacts can be created to embody some of the key ideas within a project.

The notion of physical prototyping has been discussed in an earlier blog, TM356 Prototyping Hackathon. Just as elements of paper can be Sellotaped together for a paper-based prototype, physical items can be glued or tied together for a physical prototype. Physical prototyping can add a second meaning to the notion of a card-based prototype. A ‘low fidelity’ physical prototype isn’t going to realise any functionality; it is more about expressing and understanding ideas.

Evaluating prototypes

There are different ways to evaluate a prototype.

If you have access to users, you could do some usability testing. If you do this, do consider preparing some useful resources, such as a project information sheet to tell the participant what it is all about, a consent form, a script (to make sure that everyone is asked the same thing), a set of tasks, and a data collection form.

If you don’t have access to participants who are likely to be the intended users, you can always use what are known as ‘proxy users’. These will be participants who pretend to be the users. You might ask friends and family members, for instance.

If you don’t have access to anyone you can ask, it is possible to carry out your own critical evaluations. You could do this by applying techniques such as heuristic evaluations or cognitive walkthroughs. For further information about these approaches, do refer to the interaction design textbook which is mentioned in the resources section.

Resources

Requirements are important in all software projects. How you approach requirements will depend on the aims and objectives of your project. One of the strengths of prototyping is that it can help you to uncover requirements for what you need to build to solve your problem. The blog TM470 requirements revisited also offers some useful guidance. An important point is that how you treat your requirements will very much depend on the character of the problem that you are trying to solve.

An important recommendation is to look to the modules you have previously studied. Prototyping of interface designs is covered in some depth in the school’s interaction design module. The module also makes use of the following set text, which is especially helpful:

Sharp, H., Rogers, Y. and Preece, J. (2023) Interaction Design: Beyond Human-Computer Interaction. 6th edn. Milton: Wiley. Available at: https://library-search.open.ac.uk/permalink/44OPN_INST/la9sg5/alma9953131933002316

This text also contains a lot of really helpful references, many of which you can find through the OU library.

You might also look to the web technologies module if you wish to create a simple HTML prototype.

Reflections

The project module is all about showing off skills and knowledge gained from earlier studies. Creating a series of prototypes is a really effective way to do this. The best projects that use prototypes show how product designs or software solutions are refined and developed over a series of iterations. Don’t dismiss low fidelity techniques, such as paper prototyping, because they may appear to be unprofessional. It isn’t what they look like, it is what they can do.

One of the points I share is: do consider using some of the tools that are highlighted in the interaction design book. There are a lot of useful tools that could be used in addition to prototyping, such as scenarios, personas and use cases. The DECIDE framework is particularly helpful when it comes to making decisions about how to plan and run evaluations. I encourage students to apply tools in a critical way that makes sense in the context of their project.

A335 Journal - September 2025

1 September 2025

I managed to finish reading Thoreau’s Walking yesterday. I quite liked it. It did feel like an extension of Walden. There was less description of actual walking than I had expected, but Thoreau did continue his enthusiastic description of squirrels, which was something I appreciated.

Over the last few weeks, I’ve discovered that there are quite a few really helpful resources on BBC Sounds. There is an episode of Free Thinking that covers The Mill on the Floss. There are also In Our Time episodes on Dickens as well as Calvino. After listening to the one about Calvino one night (whilst trying to get to sleep), I started to read the introduction to Cosmicomics. Since it was pretty long, and I wasn’t quite sure how much of that text we would be reading, I decided to put it to one side.

My last bit of reading yesterday was the very beginning of Season of Migration to the North (Wikipedia). I didn’t get very far. I read the introduction, which really piqued my interest, and then fell asleep on the sofa. That will teach me to listen to podcasts about Calvino in the very early hours of the morning.

My objective is to try to finish (or, restart) reading the Salih text this week.

11 September 2025

I’ve not read Salih yet, but I’ve packed the text for a trip to the midlands (along with Mayhew, and a book about stand-up comedy).

From the WhatsApp group chat, I was reminded that the texts of the assignments were available. I had a quick look to see what they are all about, after putting all the TMA cut-off dates in my diary, so I know what is going on. (I’ve entered them with the heading ‘A335 25J’ so I can find them using my calendar search function pretty easily.)

TMA 1 looks a bit like a ‘warm up’, for which I’m thankful. We have three set texts to choose from. Although it’s too soon to decide about which way I’m going to jump (I’m going to attend as many tutorials as I can, since I’m a swot), I think I’ve ruled one of the texts out.

I really like the look of TMA 2, and I appreciate its emphasis on the identification of critical sources, and how they relate to a question. Like with TMA 1, we have to make a choice – and they all look pretty difficult, if I’m being honest.  In a masochistic way, I’m ‘kind of’ looking forward to this one.

TMA 3 is all about collaboration and group work, which I know many students hate. There’s an interim cut-off date, and a period where we must work together with each other. I’ve put both of these in my diary. I’ve read through the question quickly, and it all sounds a bit involved, but I’m not going to worry. The harder bit looks to be the second half, the essay.

The final TMA looks tricky, perhaps because it is quite a few months away. It’s longer than the others, and we’ve got to pick two texts (and other than the Rhys text, I’m not too keen on the options). It does look like there’s a bit of flex in it, in the sense that one of the options permits a wider choice of texts beyond those that are suggested by the module team. This one might be interesting.

The EMA question looks suitably demanding (since the question is quite ‘searching’), and mentions some digital texts that we’ll cover towards the end of the module.  I’m sure it’ll make sense when we get to it.

I tried to book in to as many tutorials as I could, but they were not yet available. I’ll keep my eye on the WhatsApp group.

My final activity today was to review Generative AI Literacy for Arts and Humanities, which is located within the Arts and Humanities subject centre area, under the Study Skills Activities subheading. There is a helpful section on referencing and academic conduct. I really liked the flowchart which has the heading ‘is it safe to use ChatGPT for your task?’ From my own perspective, I’m going to avoid using any generative AI for a very simple reason: it makes stuff up, and I’m going to have enough to read; I could do without having to read computer generated nonsense.

Software Engineering Radio: Design and Designing

A quick search of software engineering radio yields a number of themes that relate to design: experience design, design for security and privacy, design of APIs, and also architectural design.

This post shares some useful points from two episodes. The first asks questions about, and reflects on, the software design process. The second, which is complementary, relates to how engineers share their designs with others through the use of diagrams. An earlier blog post, TM354 Working with diagrams, addresses the same subject, but in a slightly different way, emphasising diagramming tools.

Before I get into what struck me as interesting in both of those podcasts, it is useful to have a quick look at Chapter 3 of SWEBOK, which contains a useful summary of some software design principles (3-4):

  • Abstraction: Identify the essential properties of a problem.
  • Separation of concerns: Identify what concerns are important to stakeholders.
  • Modularization: Break larger components into smaller elements to make the problem easier to understand.
  • Encapsulation: Hides unnecessary detail.
  • Separation of interface and implementation: Every component should have a clear interface.
  • Uniformity: Ensure consistency in terms of how the software is described.
  • Completeness: It does what it is supposed to “and leaves nothing out”.
  • Verifiability: A characteristic of software to determine whether what has been built can be checked to make sure it meets requirements.

These principles seem very ‘code centric’. When we look at the titles of other software engineering radio podcasts, we can clearly see that we need to consider wider perspectives: we need to think about architecture, organisational structure, and what we need to do to realise important non-functional requirements such as accessibility and usability.

Software design

In SE Radio 333: Marian Petre and André van der Hoek on Software Design, the guests discuss not only software design, but also the practice of expert software engineers. It opens with the point that it is important to distinguish between the outcome of design and the process of designing. There was also a reflection that design (and designing) ‘is something that is done throughout [the software development lifecycle]’, beginning with eliciting (designing) requirements.

We are also introduced to an important term: design thinking. This is defined as the process of ‘understanding the relationship between how we understand the problem and how we understand the solution, and thinking reflectively about the relationship between the two’. Design thinking is mentioned in the SWEBOK, where it is defined in a similar (but slightly different) way: “design thinking comprises two essentials: (1) understanding the need or problem and (2) devising a solution” (3-2). The SWEBOK definition of design thinking is, arguably, quite a narrow one, since the term can refer to both an approach and a mindset, and it is a mindset that can reside “inside all the individuals in the organisation”.

A related question is: how do experts go about design? Experts go deep (into a problem) as well as going broad (when looking for solutions). When experts go deep, they can dive into a problem quickly. Van der Hoek shared an interesting turn of phrase, suggesting that developers “talked on the whiteboards”. It is important and necessary to externalise ideas, since ideas can then be exposed to others, discussed and evaluated. An expert designer needs to be able to listen, to disagree, and to have the ability to postpone making decisions until they have gathered more information. Experts, it is said, borrow, collaborate, sketch, and take breaks.

Expert software designers also have an ability to identify which parts of a problem are most difficult, and then begin to solve those bits. They are able to see the essence of a problem. In turn, they know where to apply attention, investing effort ‘up front’. Rather than considering which database to choose, they might tackle the more fundamental question of what a database needs to do. Expert designers also have a mindset focussed toward understanding, and strive for elegance, since “an elegant solution also communicates how it works” (41:00).

Experts can also deliberately ‘break the rules’ to understand problem constraints or boundaries (43:50). Expert designers may also use design thinking practices to generate ideas and to reveal assumptions by applying seemingly odd or surprising activities. Doing something out of the ordinary and ‘using techniques to see differently’ may uncover new insights about problems or solutions. Designers are also able to step back and observe and reflect on the design process.

Organisational and cultural constraints can also play a role. The environment in which a software designer (a software engineer or architect) is employed can constrain them from working towards results and applying creative approaches. This cultural context can ‘suppress’ design and designers, especially if organisational imperatives are not aligned with development and design practices.

Marian Petre, an emeritus professor of computing at the OU, referred to a paper by David Walker which describes a ‘soup, bowl, and table’ metaphor. A concise description is shared in the abstract of the article: “The soup is the mix of factors that stimulates creativity with respect to a particular project. The bowl is the management structure that nurtures creative outcomes in general. And the table is the context of leadership and vision that focuses creative energies towards innovative but realizable objectives.” You could also argue that soup gives designers energy.

The podcast also asked: what do designers need to do to become better designers? The answer was simple, namely, experts find the time to look at the code and designs of other systems. Engineers ‘want to understand what they do and make it better’. An essential and important point is that ‘experts and high performing teams are reflective’; they think about what they do, and what they have done.

Diagrams in Software Engineering

An interesting phrase that was shared in Petre and van der Hoek’s podcast was that developers ‘talked using whiteboards’. The sketching and sharing of diagrams is an essential practice within software engineering. In SE Radio 566: Ashley Peacock on Diagramming in Software Engineering, different aspects of the use and creation of diagrams are explored. Diagrams are useful because of “the ease in which information is digestible” (1:00). Diagrams and sketches can be short-lived or long-lived. They can be used to document completed software systems, summarise software that is undergoing change, and share ideas before they are translated into architecture and code.

TM354 Software Engineering makes extensive use of a graphical language called the Unified Modelling Language (UML). UML defines over a dozen types of diagram, of which two or three are most frequently used. Class diagrams are used to share key abstractions (ideas within the problem domain and a design) and their dependencies (how the bits relate to each other). Sequence diagrams can be used to show the interactions between different layers of software. Activity diagrams can be used to describe the connections between software and the wider world. UML is important since it provides a shared diagramming language that can be used and understood by software engineers.

Reflections

One of the aspects that I really appreciated from the first podcast was that it emphasises the importance and significance of the design process. One of my first duties after becoming a permanent staff tutor at the OU was to help to support the delivery of some of the design modules. I remember there were three of them: U101 Design thinking: creativity for the 21st century; T217, an earlier version of T240 Design for Impact; and T317 Innovation: Designing for Change. Even though I was familiar with a sister module from the School of Computing and Communications, TM356 Interaction design and the user experience, being exposed to the design modules opened my eyes to a breadth of different approaches that I had never heard of before and that could have applicability within computing.

U101 introduced me to the importance of play. T217 (and the module that came before it, T211) introduced me to the contrasting ideas of divergent and convergent thinking. The idea of divergent thinking relates to the idea mentioned in the first podcast of thinking beyond the constraints. I was also introduced to the double-diamond design process (Design Council, PDF). Design processes are different in character to software development processes since they concern exploring different ways to solve problems as opposed to distilling solutions into architectures and code.

A really important point from the first podcast is that design can (and should) happen across the entire software development lifecycle. Defining (and designing) requirements at the start of a project is as much a creative process as the creation and specification of tests and test cases.

It is important and necessary to highlight the importance of reflection. Thinking about what we have, how well our software has been created, and what we need, all helps to refine not just our engineered artefacts, but also our engineering processes. Another point that resonates is the role that organisational structures may play in helping to foster design. To create good designs, we rely on the support of others, but our creativity may be attenuated if the value of ‘play’ is viewed as frivolous or without value.

Effective designers will be aware of different sets of principles, why they are important, and how they might be applied. This post opened by sharing a set of software design principles that were featured in the SWEBOK. As suggested, these principles are very code centric. There are, of course, other design principles that can be applied to user interfaces (and everyday things), such as those by Donald Norman. Reflecting on these two sets of principles, I can’t help but feel that there is quite a gap in the middle, and a need for software architecture design principles. Bass et al. (2021) is a useful reference, but there are other resources, including those by service providers, such as Amazon’s Well-Architected guidance. Engineers should always work towards understanding. Reflecting on what we don’t yet fully understand is as important as reflecting on what we do understand.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice, 4th edn, Upper Saddle River, NJ, Addison-Wesley.

SWEBOK v.4 (2024) Software Engineering Body of Knowledge SWEBOK. Available at: https://www.computer.org/education/bodies-of-knowledge/software-engineering

Walker, D. (1993), The Soup, the Bowl, and the Place at the Table. Design Management Journal (Former Series), 4: 10-22. https://doi.org/10.1111/j.1948-7169.1993.tb00368.x

Software Engineering Radio: Testing

The term ‘software testing’ can be associated with a very simple yet essential question: ‘does it do what it is supposed to do?’

There is, of course, a clear and obvious link to the topic of requirements, which express what software should do from the perspective of different stakeholders. A complexity lies in the fact that different stakeholders can have requirements that can sometimes conflict with each other.

Ideally it should be possible to trace software requirements all the way through to software code. The extent to which formal traceability is required, and the types of tests you need to carry out will depend on the character of the software that you are building. The tests that you need for a real-time healthcare monitor will be quite different to the tests you need for a consumer website.

Due to the differences in the scale, type and character of software, software testing is a large topic in software engineering. Chapter 5 of SWEBOK v4, the software engineering body of knowledge highlights different levels of test: unit testing, integration testing, system testing, and acceptance testing. It also highlights different types of test: conformance, compliance, installation, alpha and beta, regression, prioritization, non-functional, security, privacy, API, configuration, and usability.

In the article The Practical Test Pyramid, Ham Vocke describes a simple model: a test pyramid. At the bottom of the pyramid are unit tests that test code. These unit tests run quickly. At the top, there are user interface tests, which can take time to complete. In the middle there is something called service tests (which can also be known as component tests). Vocke’s article is pretty long, and quickly gets into a lot of technical detail.

What follows are some highlights from some Software Engineering radio episodes that are about testing. A couple of these podcasts mention this test pyramid. Although testing is a broad subject, the podcasts that I’ve chosen emphasise unit testing.

The first podcast concerns the history of unit testing. The last podcast featured in this article offers some thoughts about where the practice of ‘testing’ may be heading. Before sharing some personal reflections, some other types of test are briefly mentioned.

The History of JUnit and the Future of Testing

Returning to the opening question, how do you know your software does what it is supposed to do? A simple answer is: you get your software to do things, and then check to see if it has done what you expect. It is this principle that underpins a testing framework called JUnit, which is used with software written using the Java programming language.

The episode SE Radio 167: The History of JUnit and the Future of Testing with Kent Beck begins with a short history of the JUnit framework (3:20). The simple idea of JUnit is that you are able to write tests as code; one bit of code tests another. All tests are run by a test framework which tells you which tests pass and which tests fail. An important reflection by Beck is that when you read a test, it should tell you a story. Beck goes on to say that someone reading a test should understand something important about the software code. Tests are also about communication; “if you have a test and it doesn’t help your understanding … it is probably a useless test”.
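
To make this concrete, here is a minimal sketch of what a JUnit test can look like. I’m assuming JUnit 5 here, and the Calculator class is a made-up example of mine, not something from the podcast:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // One bit of code (the test) exercises another (the class under test).
    class CalculatorTest {

        @Test
        void addReturnsTheSumOfItsArguments() {
            Calculator calculator = new Calculator();
            // The assertion tells a story: given 2 and 3, add should return 5.
            assertEquals(5, calculator.add(2, 3));
        }
    }

    // A tiny class under test, included so the sketch is self-contained.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

A test framework (run from an IDE or a build tool) executes every method marked @Test and reports which tests pass and which tests fail.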

Beck is asked to explain the concept of Test Driven Development (TDD) (14:00). He describes it as “a crazy idea that when you want to code, you write a test that fails”. The test only passes when you have written code that does what the test expects. The podcast discussion suggests that a product might contain thousands of tiny tests, with the implication that there might be as much testing code as production code; the code that implements features and solves problems.
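
As a hedged illustration of that rhythm (the example is mine, not Beck’s): the test below is written first and fails, because the class it describes does not exist yet; the class is then written to make the test pass.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Step 1 ('red'): write a failing test that describes the behaviour you want.
    class GreeterTest {

        @Test
        void greetsAPersonByName() {
            Greeter greeter = new Greeter();
            assertEquals("Hello, Ada", greeter.greet("Ada"));
        }
    }

    // Step 2 ('green'): write just enough production code to make the test pass.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name;
        }
    }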

When considering the future of testing (45:20) there was the suggestion that “tests will become as important to programming as the compiler”. This implies that tests give engineers useful feedback. This may be especially significant during periods of maintenance, when code begins to adapt and change. There was also the notion that engineers could “design for testability”, which gives unit tests more value.

Although the podcast presents a helpful summary of unit testing, there is an obvious question which needs asking: what unit tests should engineers be creating? One school of thought is that engineers should create tests that cover as much of the software code as possible, a measure known as code coverage. Chapter 5 of SWEBOK shares a large number of useful test techniques that can help with the creation of tests (5-10).

Since errors can sometimes creep into conditional statements and loops, a well-known technique is boundary-value analysis. Put more simply, given a problem, such as choosing a numbered item from a menu, does the software do what it is supposed to do if the highest number is selected (say, 50)? Also, does it continue to work if the number just before the boundary is selected (say, 49)?
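
A hedged sketch of what this can look like as JUnit tests follows; the Menu class and its numbers are invented for illustration:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // Boundary-value tests: probe values just below, on, and just above the boundary.
    class MenuTest {

        private final Menu menu = new Menu(50);  // 50 is the highest valid choice

        @Test
        void choiceJustBelowTheBoundaryIsAccepted() {
            assertTrue(menu.isValidChoice(49));
        }

        @Test
        void choiceOnTheBoundaryIsAccepted() {
            assertTrue(menu.isValidChoice(50));
        }

        @Test
        void choiceJustAboveTheBoundaryIsRejected() {
            assertFalse(menu.isValidChoice(51));
        }
    }

    // A tiny implementation, included so the sketch is self-contained.
    class Menu {
        private final int highest;

        Menu(int highest) {
            this.highest = highest;
        }

        boolean isValidChoice(int choice) {
            return choice >= 1 && choice <= highest;
        }
    }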

Working Effectively with Unit Tests

Another podcast on unit testing is SE Radio 256: Jay Fields on Working Effectively with Unit Tests. Between 30:00 and 33:00, there is an interesting discussion that highlights some of the terms that feature within Vocke’s article. A test that doesn’t cross any boundaries and focuses on a single class could be termed a ‘solitary unit test’. This can be contrasted with a ‘sociable unit test’, where tests work together with each other; one test may influence another. Other terms are introduced, such as stubs and mocks, which are again mentioned by Vocke.
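
To illustrate the ‘solitary’ idea, here is a sketch using names I have invented; real projects often use a mocking library rather than a hand-rolled stub:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // The collaborator that the class under test depends on.
    interface ExchangeRateService {
        double rateFor(String currencyCode);
    }

    // The class under test.
    class PriceConverter {
        private final ExchangeRateService rates;

        PriceConverter(ExchangeRateService rates) {
            this.rates = rates;
        }

        double convert(double amount, String currencyCode) {
            return amount * rates.rateFor(currencyCode);
        }
    }

    // A 'solitary' unit test: the real service (which might call a network)
    // is replaced by a stub that returns a canned value, so the test stays
    // within the boundary of the class under test.
    class PriceConverterTest {

        @Test
        void convertsUsingTheSuppliedRate() {
            ExchangeRateService stub = currencyCode -> 2.0;
            PriceConverter converter = new PriceConverter(stub);
            assertEquals(20.0, converter.convert(10.0, "USD"));
        }
    }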

Automated Testing with Generative AI

To deliberately mix a metaphor, a glimpse of the (potential) future can be heard within SE Radio 633: Itamar Friedman on Automated Testing with Generative AI. The big (and simple) idea is to have an AI helper look at your software, and to ask it to generate test cases for you. A tool called CoverAgent was mentioned, along with an article entitled Automated Unit Test Improvement using Large Language Models at Meta (2024). An important point is: you still need a software engineer to sense check what is created. AI tools will not solve your problems, since these automated code centric tools know nothing of your software requirements and your software engineering priorities.

Since we are beginning to consider artificial intelligence, this leads onto another obvious question, which is: how do we go about testing AI? Also, how do we make sure AI tools do not embody or perpetuate biases or security risks, especially if they are used to help solve software engineering problems?

Different types of testing

The SWEBOK states that “software testing is usually performed at different levels throughout development and maintenance” (p.5-6). The key levels are: unit, integration, system and acceptance.

Unit testing is carried out on individual “subprograms or components”, typically, but not always, by the person who wrote the code (p.5-6). Integration testing “verifies the interaction among” the components of the system under test. This is testing where different parts of the system are brought together, and it may need different test objectives to be completed. System testing goes even wider and “is usually considered appropriate for assessing non-functional system requirements, such as security, privacy, speed, accuracy, and reliability” (p.5-7). Acceptance testing is all about whether the software is accepted by key stakeholders, and relates back to key requirements. In other words, “it is run by or with the end-users to perform those functions and tasks for which the software was built”.

To complete a ‘test level’ a number of test objectives may need to be satisfied or completed. The SWEBOK presents 12 of these. I will have a quick look at two of them: regression tests, and usability testing.

Regression testing is defined as “selective retesting of a SUT to verify that modifications have not caused unintended effects and that the SUT still complies with its specified requirements” (5-8). SUT is, of course, an abbreviation for ‘system under test’. Put another way, a regression test checks to make sure that any change you have made hasn’t messed anything up. One of the benefits of unit testing frameworks such as JUnit is that it is possible to quickly and easily run a series of unit tests, to carry out a regression test.

Usability testing is defined as “testing the software functions that support user tasks, the documentation that aids users, and the system’s ability to recover from user errors” (5-10), and sits at the top of the test pyramid. User testing should involve real users. In addition to user testing there are, of course, automated tools that help software engineers to make sure that a product deployment works with different browsers and devices.

Reflections

When I worked as a software engineer, I used JUnit to solve a very particular problem. I needed to create a data structure that is known as a circular queue. I wouldn’t need to write it in the same way these days since Java now has more useful libraries. At the time, I needed to make sure that my queue code did what I expected it to. To give me confidence in the code I had created, I wrote a bunch of tests. I enjoyed seeing the tests pass whenever I recompiled my code.

I liked JUnit. I specifically liked the declarative nature of the tests that I created. My code did something, but my tests described what my code did. Creating a test was a bit like writing a specification. I remember applying a variety of techniques. I used boundary-value analysis to look at the status of my queue when it was in different states: when it was nearly full, and when it was full.
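
My original code is long gone, but here is a minimal sketch of the kind of thing I mean (an illustrative fixed-capacity queue, not my actual implementation), together with a boundary-value test of the ‘nearly full’ and ‘full’ states:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // A minimal fixed-capacity circular queue of ints.
    class CircularQueue {
        private final int[] items;
        private int head = 0;   // index of the oldest item
        private int size = 0;   // number of items currently stored

        CircularQueue(int capacity) {
            items = new int[capacity];
        }

        boolean isFull() {
            return size == items.length;
        }

        void enqueue(int value) {
            if (isFull()) {
                throw new IllegalStateException("queue is full");
            }
            items[(head + size) % items.length] = value;  // wrap around the array
            size++;
        }

        int dequeue() {
            if (size == 0) {
                throw new IllegalStateException("queue is empty");
            }
            int value = items[head];
            head = (head + 1) % items.length;
            size--;
            return value;
        }
    }

    class CircularQueueTest {

        @Test
        void queueIsFullOnlyWhenCapacityIsReached() {
            CircularQueue queue = new CircularQueue(2);
            queue.enqueue(1);
            assertFalse(queue.isFull());  // the 'nearly full' state
            queue.enqueue(2);
            assertTrue(queue.isFull());   // the 'full' boundary state
        }
    }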

Reflecting on Beck’s comments, I appreciated that my tests also told a story. I also appreciated that these tests might not only be for me, but might be useful for other developers who might have the misfortune of working with my code in the future.

The other aspect of unit testing that I liked was that it proactively added friction to the code. If I started to maintain it, pulling apart functions and classes, the tests would begin to break. The tests became statements of ‘what should be’. I didn’t view tests in terms of their code coverage (to make sure that every single bit of software was evaluated) but in terms of simple practical tools that gave alternative expressions of the purpose of my software. In turn, they helped me to move forward.

It is interesting and useful to reflect on the differences between the test pyramid and the SWEBOK test levels. In some respects, the UI testing of the pyramid can be aligned with the acceptance testing of the SWEBOK. I do consider the intermediate levels, integration and system testing, to be helpful.

An important point that I haven’t discussed is the question of when a software engineer should carry out testing. A simple answer is, of course, as soon as practically possible. The longer it takes to identify an issue, the more significant the impact and the greater the economic cost. The ideal of early testing (or early problem detection) is reflected in the term ‘shift-left’ testing, which essentially means ‘try to carry out testing towards the left hand side of your project plan’. Put even more simply: the earlier the better.

Returning to the overriding aim of software testing, testing isn’t just about figuring out whether your software does what it is supposed to do. It is also about managing risk. If there are significant societal, environmental, institutional or individual impacts if software doesn’t work, you or your organisation needs to do whatever it can to ensure that everything is as correct and as effective as possible. Another point is that sometimes the weak spot isn’t the code, but the spaces where people and technology intersect. Testing is socio-technical.

To conclude, it is worth asking a final question: where is software testing heading? Some of these podcasts suggest some pointers. In the recent past, we have seen the emergence of automation and the engineering of software development pipelines to facilitate the continual deployment or delivery of software. I do expect that artificial intelligence, in one form or another, will influence testing practice, but AI tools can’t know everything about our requirements. There will be testing using artificial intelligence and testing of artificial intelligence. As software reaches into so many different areas of society, there will also be testing for sustainability.

Resources

JUnit is one of many bits of technology that can help to automate software testing. Two other tools I have heard of are Cucumber and Selenium. Cucumber implements a language called Gherkin, a formal but human-readable language which is used to describe test cases. Selenium is “a suite of tools for automating web browsers”.

Since software testing is such an important specialism within software engineering, there are a series of industrial certifications that have been created by the International Software Testing Qualifications Board (ISTQB). As well as offering foundation level certifications, there are also certifications for specialisms such as agile, security and usability. Many of the topics mentioned in the certifications are also mentioned in Chapter 5 of SWEBOK v4.

I was alerted to a site called the Ministry of Testing which shares details of UK conferences and events about testing and software quality.

One of the points that I picked up from the podcasts was that, when working at the forefront of an engineering subject, a lot of sharing takes place through blogs. A name that was mentioned was Dan North, who has written two articles that resonate: We need to talk about testing (or how programmers and testers can work together for a happy and fulfilling life), and Introducing BDD (BDD being an abbreviation for Behaviour Driven Development).

Acknowledgements

Many thanks to Josh King, a fellow TM354 tutor, who was kind enough to share some useful resources about testing.

Software Engineering Radio: Technical Debt

Imagine you’re taking a trip to catch up with one of your best friends. You also need to get a few things from the shop; let’s say, a couple of pints of milk. You have a choice. You could head directly to your friend’s house and be on time, and do the shopping later. Or alternatively, you could change your route, visit the shop, and arrive at your friend’s place a little bit later than agreed. This really simple dilemma encapsulates what technical debt is all about.

When it comes to software, engineers may prioritise some development activities over others due to the need either to ship a product, or to get a service working. This prioritisation may have implications for the software engineers who have to take over the work at a later date.

As suggested earlier, software can have quality attributes: it can be efficient, it can be secure, or it can be maintainable. In some cases, a software engineer might prioritise getting something working over its maintainability or comprehensibility. This means that the software that is created might be more ‘brittle’, or harder to change later on. This is where the ‘debt’ bit of technical debt comes in: you might avoid ‘paying’ (investing) time now to get something working earlier, but you may well need to ‘pay down’ the debt in the future, for example when you need to migrate your software to a new operating environment.

On Managing Technical Debt

In SE Radio 481: Ipek Ozkaya on Managing Technical Debt, Ozkaya is asked a simple question: why should we care about technical debt? The answer is also quite simple: it gives us a term to help us to think about trade-offs. For example, “we’re referring to the shortcuts that software development teams make. … with the knowledge they will have to change it; rework it”. Technical debt is a term that is firmly related to the topic of software maintenance.

Another question is: why does it need to be managed (5:10)? A reflection is that “every system has technical debt. … If you don’t manage it, it accumulates”. When the consequences of design decisions begin to be observed or become apparent, then it becomes technical debt, which needs to be understood, and the consequences need to be managed. In other words, carrying out software maintenance will mean ‘doing what should have been done earlier’ or ‘adapting the software so it more effectively solves the problems that it is intended to solve’. My understanding is that debt can also emerge over time, since the social and organisational contexts in which software exists naturally shift and change.

Interestingly, Ozkaya outlines nine principles of technical debt. The first one is: ‘Technical debt reifies an abstract concept’. This principle speaks to the whole concept. Reification is the ‘making physical’ of an abstract concept. Ultimately, it is a tool that helps us to understand the work that needs to be carried out. A note I made during the podcast was that there is a ‘difference between symptoms and defects’. Expressed in another way, your software might work okay, but it might not work as effectively or as efficiently (or be as maintainable) as you would like, due to technical debt.

It is worth listening to Ozkaya’s summary of the principles, which are also described in Kruchten et al. (2019). Out of all the principles, principle 5, ‘architecture technical debt has the highest cost of ownership’, strikes me as being very significant. This relates to the subject of architectural choices and architectural design.

Kruchten et al. suggest that “technical debt assessment and management are not one-time activities. They are strategic software management approaches that you should incorporate as ongoing activities” (2019). I see technical debt as a useful conceptual tool that can help engineers to make decisions about practical actions, to decide what work needs to be done, and to communicate to others about that work and why it is important.

Reflections

I was introduced to the concept of technical debt a few years ago, and the concept instinctively made a lot of sense. Thinking back to my time as a software engineer, I was often faced with dilemmas and trade-offs. Did I spend time ‘right now’ to change how I gathered real-time data from a hardware device, or did I live with it and just ship the product?

The Kruchten text introduces the notion of ‘bankruptcy’. External events can cause business bankruptcy. In the case of software, bankruptcy might be brought about by a software vendor ending support for a whole product line, creating the need to rewrite a software product using different languages and tools.

When looking through Software Engineering Radio I noticed that an earlier podcast, SE Radio 224: Sven Johann and Eberhard Wolff on Technical Debt, covers the same topic. Interestingly, they reference a blog post by Fowler, Technical Debt Quadrant, in which Fowler suggests a simple tool that can be used to think about technical debt, based on four quadrants: debt can be taken on recklessly or prudently, and deliberately or inadvertently. Decisions about technical debt should be ‘prudent and deliberate’.

Returning to the opening dilemma: do I go straight to my friend’s house, or do I get some milk on the way and end up being late? It depends on who the friend is and why I am meeting them. It depends on the consequences. If I’m going round there for a cup of tea, I’ll probably get the milk.

Resources

Fowler, M. (2009) Technical Debt Quadrant. Available at: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

Kruchten, P., Nord, R. and Ozkaya, I. (2019) Managing Technical Debt: Reducing Friction in Software Development. 1st edition. Addison-Wesley Professional.  Available at: https://library-search.open.ac.uk/permalink/44OPN_INST/la9sg5/alma9952700169402316

Christopher Douce

Software Engineering Radio: Software architecture

Visible to anyone in the world

Software architecture is quite a big topic. I would argue that it ranges from software design patterns all the way up to the design and configuration of cloud infrastructures, and the development of software development and deployment pipelines.

Software architecture features in a number of useful Software Engineering Radio podcasts. What follows is a brief summary of two of them. I then share an article by a fellow TM354 tutor and practising software architect who shares his thoughts from 25 years of professional experience.

An important point is that there are, of course, links between requirements, non-functional requirements and architectural design. Architectures help us to ‘get stuff done’. There are, of course, implicit links and connections to other posts in this series, such as to the one that is about Infrastructure as Code (IaC).

On the Role of the Software Architect

In SE Radio 616: Ori Saporta on the Role of the Software Architect, Saporta suggests that design doesn’t just happen at the start of the software lifecycle. Since software is always subject to change, a software architect has a role across the entire software development lifecycle. Notably, an architect should be interested in the ‘connections between components, systems and people’. ‘You should go from 30,000ft to ground level’ (13:00), moving between the ‘what’ (the problem that needs to be solved) and the ‘how’ (the ways problems can be solved).

Soft skills are considered to be really important. Saporta was asked how engineers might go about ‘shoring up’ their soft skills. A notable quote was: “it takes confidence and self-assurance to listen”. Some specific soft skills were mentioned (29:20). As well as listening, there is the need for empathy and the ability to bridge, translate or mediate between technical and non-technical domains. Turning to the idea of quality, which was addressed in an earlier blog, quality can be understood both as a characteristic and as a part of a process (which reflects how the term was broken down earlier by the SWEBOK).

Being a software architect means “being a facilitator for change, and being open for change”; in other words, helping people across the bridge. An interesting point was that “you should actively seek change”, to see how the software design could improve. An important point, and a reflection, is that a good design accommodates change. In software, ‘the wind [of change] will come’, since the world is always moving around it.

Architecture and Organizational Design

The second podcast I would like to highlight is SE Radio 331: Kevin Goldsmith on Architecture and Organizational Design. Goldsmith is immediately asked about Conway’s Law (Wikipedia), which was summarised as “[o]rganizations … which design systems … produce designs which are copies of the communication structures of these organizations”. Put more simply, the structure of your software architecture is likely to reflect the structure of your organisation.

If there is an existing organisation where different teams do different things “you tend to think of microservices”; a microservice being a small defined service which is supported by a surrounding infrastructure.

If a new software start-up is created by a small group of engineers, a monolithic application may well be created. When an organisation grows and more engineers are recruited, existing teams may split, which might lead to decisions about how to break up a monolith. This process of identifying services, breaking them apart, and relating them to functionality can be thought of as a form of refactoring (which is a fancy word for ‘structured changes to software code’). This leads to interesting decisions: should the organisation host its own services, or should it use public cloud services? The answer, of course, relates back to requirements.

An interesting question was: which comes first, the organisational structure or the software structure (13:05)? Organisations could embrace Conway’s law, or they could perform a ‘reverse Conway manoeuvre’, which means that engineering teams are created to support a chosen software architecture.

A really interesting point is that communication pathways within organisations can play a role; organisations can have their tribes and networks (49:30). It is also important to understand how work moves through an organisation (54:50). This is where, in my understanding, the roles of the business analyst and the software architect can converge.

Towards the end of Goldsmith’s podcast, there was a fascinating reflection about how Conway’s law relates to our brain (57:00). Apparently, there’s something called the entorhinal cortex, “whose functions include being a widespread network hub for memory, navigation, and the perception of time” (Wikipedia). As well as being used for physical navigation, it can be used to navigate social structures. In other words, ‘your brain fights you when you try to subvert Conway’s law’.

Reflections

In my eyes, the key point in Saporta’s podcast was the metaphor of the bridge, which can be understood in different ways. The bridge could run from the technical to the non-technical, or from the detail of code and services to the 30,000ft view of everything.

Goldsmith’s podcast offers a nice contrast. I appreciated the discussion about the difference between monoliths and microservices (which is something that is discussed in the current version of TM354). An important point is that when organisations flex and change, microservices can help to move functionality away from a monolith (or other microservices). A microservice can be deployed in different ways, and realised using different technologies.

I found the discussion about the entorhinal cortex fascinating. Towards the end of my doctoral studies, I created a new generation of software metrics that was inspired by the understanding that software developers need to navigate their way across code bases. It had never occurred to me that the same neural systems may be helping us to navigate our connections with others.

On a different (and final) note, I would like to highlight an article called Software Architecture Insights by Andrew Leigh. Andrew is a former doctoral student from the OU School of Computing and Communications, a current software engineering tutor, and a practising software engineer. He shares four findings which are worth having a quick look at, and suggests some further reading.

References

Leigh, A. (2024) Software Architecture Insights, ITNow, 66(3), pp. 60–61. Available at: https://doi.org/10.1093/itnow/bwae102.

Christopher Douce

Software Engineering Radio: Security and secure coding

Visible to anyone in the world
Edited by Christopher Douce, Sunday 5 October 2025 at 10:10

Digital security is an important specialism in computing. The OU offers a BSc (Hons) in Cyber Security which features TM359 Systems penetration testing. Security is clearly important within software engineering: the extent to which security is required should be made explicit within non-functional requirements, and any software product should be created and deployed with security in mind.

There are a number of podcasts in Software Engineering Radio that address security from different perspectives, such as SE Radio 640: Jonathan Horvath on Physical Security and SE Radio 467: Kim Carter on Dynamic Application Security Testing.

One of the podcasts that caught my attention was about secure coding.

Secure coding

SE Radio 658: Tanya Janca on Secure Coding discusses secure coding from a number of perspectives: code, tools and processes to help create robust software systems. She begins at 1:50 (until 2:11) by introducing the principle of least privilege. This led to a discussion of user security and the significance of trust. The CIA triad (Confidentiality, Integrity and Availability) is introduced between 10:00 and 11:45.

A notable section of this podcast is the discussion about secure coding guidelines, between 27:00 and 32:12. Some of the principles shared included the need to validate and sanitise all input, to sense-check data, and to always use parameterised [database] queries, since how and where you write your database queries is important. The software development lifecycle was mentioned between 41:32 and 50:18, which led to a discussion about different types of testing tools and approaches (static and dynamic testing, pen testing and QA testing).
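
Returning to the point about parameterised queries, here is a minimal sketch that uses Python’s built-in sqlite3 module. The table, the data and the variable names are all made up for illustration. The unsafe version splices user input directly into the SQL string, which opens the door to SQL injection; the safe version passes the value separately, so the database driver handles it.

    # A sketch of parameterised queries using Python's sqlite3 module.
    # The table and data are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # a classic injection attempt

    # Unsafe: the input is spliced directly into the SQL string.
    unsafe_query = f"SELECT email FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe_query).fetchall())  # matches every row

    # Safe: the '?' placeholder passes the value separately from the SQL.
    safe_query = "SELECT email FROM users WHERE name = ?"
    print(conn.execute(safe_query, (user_input,)).fetchall())  # matches nothing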

A quote that I noted down was the reflection that “software ages very poorly”. There are simple reasons for this. Requirements can change. They change because of changes to the social and technical contexts in which software is used.

Reflections

The podcast scratches the surface of a much bigger topic. One thing that I have picked up from other podcasts is that it is possible to embed code checking within a CI/CD software deployment process. Having a quick look around, I’ve discovered an article by OWASP called OWASP DevSecOps Guideline which discusses ‘linting code’.

The concept of ‘lint’ and ‘linting’ deserves a little bit of explanation. The ‘lint’ in software engineering is, of course, a metaphor. Lint (Wikipedia) is bits of fluff or material that can accumulate on your jumper or trousers. You can get rid of lint using a lint roller, or Sellotape.

There used to be a program called ‘lint’ which ‘went over’ any source code that you had written. Although your source code might compile and run without any problems, this extra program would identify additional bits of code that might potentially be problematic. Think of these bits of code as pieces of white tissue paper that are sitting on your black trousers. Your ‘lint’ software (which is also called static analysis software) will highlight potential problems that you might want to have a look at.
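
As a hedged illustration, here is a small fragment of Python that runs without error but contains exactly the kind of ‘lint’ that a static analysis tool (pylint and flake8 are two common examples) would typically grumble about. The function itself is invented for the example.

    # This runs without error, but a linter would still complain.
    # The function is invented purely to illustrate typical warnings.
    import os  # likely flagged: imported but never used

    def average(values):
        total = 0
        count = 0  # likely flagged: assigned but never used
        for v in values:
            total = total + v
        if len(values) == 0:  # some linters suggest 'if not values:'
            return None
        return total / len(values)

    print(average([1, 2, 3]))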

Continuing with OWASP, I was recently alerted to the OWASP Top Ten list, which is described as “a standard awareness document for developers and web application security. It represents a broad consensus about the most critical security risks to web applications”. It presents a summary of common security issues that software engineers need to be aware of.

Each of these items is described in a lot of detail, and the descriptions go a lot further than my simplistic knowledge of secure coding. A personal reflection is: software engineers need to know how to read these summaries. This also means: I need to know how to read these summaries.

Python is going to be used in TM113. I’ve been made aware of Six Python security best practices for developers, which comes from an organisation called Black Duck (thoroughly in keeping with the yellow rubber duck theme of this new module).

A bit more searching took me to the National Cyber Security Centre (NCSC) and the 8 principles of its Secure development and deployment guidance (2018). This set of principles takes a broad perspective, ranging from individual responsibility and learning, through effective and maintainable code, to the creation of a software deployment pipeline.

A final reflection is that none of this discussion about security is new. Just as there are some classic papers on modular decomposition within software engineering, I’ve been made aware of a 1975 paper entitled The Protection of Information in Computer Systems. I haven’t seen this paper before, and I’ve not read it yet; it requires a whole lot of dedicated reading that I need to find time for.

The geek in me is quite excited at the references to old (and influential) operating systems of times gone by. The paper’s set of eight principles (a bit like the NCSC guidelines) contains one of the most important principles in security I know of, namely, the principle of “Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error”.

I have some reading to do.

This article mentions “architectural structures - whether hardware or software - that are necessary to support information protection”. This takes me directly onto the next blog, which is all about software architecture.

Acknowledgements

Thank you to Lee Campbell for sharing those additional resources.

Christopher Douce

Software Engineering Radio: Software quality

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 16:19

In one way or another, all the previous blogs which draw on Software Engineering Radio podcasts have been moving towards this short post about software quality. In TM354 Software Engineering, software quality is defined as “the extent to which the customer is satisfied with the software product delivered at the end of the development process”. It offers a further definition, which is the “conformance to explicitly stated requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software”. The implicit characteristics can relate to non-functional requirements, or characteristics such as maintainability and readability.

The Software Engineering Body of Knowledge (SWEBOK) emphasises the importance of stakeholders: “the primary goal for all engineered products is to deliver maximum stakeholder value while balancing the constraints of development, maintenance, and operational cost, sometimes characterized as fitness for use” (SWEBOK v4, 12-2).

The SWEBOK also breaks ‘software quality’ into a number of subtopics: fundamentals, management processes, assurance processes, and tools. Software quality fundamentals relates to software engineering culture and ethics, notions of value and cost, models and certifications, and software dependability and integrity levels.

Software quality

After doing a trawl of Software Engineering Radio, I discovered the following podcast: SE Radio 637: Steve Smith on Software Quality. This podcast is understandably quite wide ranging. It can be related to earlier posts (and podcasts) about requirements, testing and process (such as CI/CD). There are also connections to the forthcoming podcasts about software architecture, where software can be built with different layers. The point about layers relates to an earlier point that was made about the power and importance of abstraction (which means ‘dealing with complexity to make things simpler’). For students who are studying TM354, there is a bit of chat in this podcast about the McCabe complexity metric, and the connection between testing and code coverage.
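
For the curious, here is a small worked example of the McCabe (cyclomatic) complexity idea. For a single function, the complexity can be counted as the number of decision points plus one. The grading function below is invented purely for illustration.

    # A worked example of McCabe (cyclomatic) complexity.
    # For a single function: complexity = decision points + 1.
    # The grading function is invented for illustration.

    def grade(score):
        if score >= 85:    # decision point 1
            return "distinction"
        elif score >= 70:  # decision point 2
            return "merit"
        elif score >= 40:  # decision point 3
            return "pass"
        else:
            return "fail"

    # Three decision points, so the complexity is 3 + 1 = 4, which
    # suggests at least four test cases for full branch coverage:
    for score in (90, 75, 50, 20):
        print(score, grade(score))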

Towards the end of the podcast (45:20) the connection between organisational culture and quality is highlighted. There is also a link between quality and lean manufacturing approaches, which have inspired some agile practices, such as Scrum.

Reflections

Software quality is such an important topic, but it is something that is quite hard to pin down without using a lot of words. Its ethereal quality may explain why there are not as many podcasts on this topic when compared to more tangible subjects, such as requirements. Perhaps unsurprisingly, the podcasts that I have found appear to emphasise code quality over the broader perspective of ‘software quality’.

This reflection has led to another thought, which is: software quality exists across layers. It must lie within your user interface design, within your architectural choices, within your source code, within your database designs, and within your processes.

One of the texts that addresses software quality which I really like is by Len Bass et al. In part II of Software Architecture in Practice, Bass et al. identify a number of useful (and practical) software quality attributes: availability, deployability, energy efficiency, integrability, modifiability, performance, safety, security, testability, and usability. They later go on to share some practical tactics (decisions) that could be made to help address those attributes.

As an aside, I’ve discovered a podcast which features Bass, which is quite good fun and worth a listen: Stories of Computer Science Past and Present (2014) (Hanselminutes.com). Bass talks about booting up a mainframe, punched card dust, and the benefit of having two offices.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice. 4th edn. Upper Saddle River, NJ: Addison-Wesley.

Christopher Douce

Software Engineering Radio: Code reviews

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 10:44

Although software engineering is what you would call a technical discipline, the people element is really important. Software engineering processes are as much about people as they are about product. Some software engineering processes apply a practice known as a code review.

Code reviews

In SE Radio 400: Michaela Greiler on Code Reviews, a code review is described as ‘a process where a peer offers feedback about software code that has been created’ (1:11). Greiler goes on to say that whilst testing can check the functionality of software, code reviews can help to understand ‘other characteristics of software code’, such as its readability and maintainability (2:00). The notion of these other characteristics relates to the notion of software quality, which is the subject of the next blog. Significantly, code reviews can facilitate the sharing of knowledge between peers. Since software is invisible, it is helpful to talk about it. Valuable feedback is described as feedback that leads to change.

Greiler’s podcast suggests that reviews can be carried out in different ways (8:00). They can be formal or informal. Code can be emailed, or code could be ‘pulled’ from source code repositories. Feedback could be shared by having a chat, or it could be mediated through a tool. The exact extent and nature of a review will be dictated by the characteristics of the software. The notion of risk plays a role in shaping the process.

One of the parts of this podcast I really liked was the bit that emphasised the importance of people skills in software engineering. Tips were shared on giving (and receiving) feedback (19:20). I noted that a reviewer should aim to add value, and the engineer whose code is being reviewed should try to be humble and open minded. A practical tip was to give engineers a ‘heads up’ about what is going to happen, since it gives them a bit of time to prepare, and to be explicit about the goals of a review. The podcast also highlighted a blog post by Greiler that had the title: 10 Tips for Code Review feedback.
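
To make the idea of valuable feedback a little more concrete, here is an invented fragment showing the kind of comments a reviewer might leave. Both the Python code and the review comments are made up for illustration; the aim is to show feedback that adds value rather than finds fault.

    # An invented example of code-review feedback, written as comments.

    def get_users(db):
        # Reviewer: nice and small. Could we rename 'db' to 'connection'
        # so that it matches the rest of the module?
        results = db.execute("SELECT * FROM users").fetchall()
        # Reviewer: do we need every column here? Selecting only the
        # fields we actually use would make the intent clearer.
        return results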

Towards the end, there was a comment that code reviews are not typically taught in universities (34:25). I certainly don’t remember being involved in one when I was an undergraduate. In retrospect, I do feel as if it would have been helpful.

A final question was ‘how do you get better at code reviews?’ The answer was simple: show what happens during a review, and practice doing them.

Reflections

When I was a software engineer, I spent quite a bit of time reading through existing code. Although I was eventually able to figure out how everything worked, and why particular lines of code and functions were written, what would have been really useful was the opportunity to have a chat with the original developer. This only happened once.

Although this conversation wasn’t a code review (I was re-writing the software that he had originally written in a different programming language), I did feel that the opportunity to speak with the original developer was invaluable. I enjoyed the conversation. I gained confidence in what I was doing and understanding, and my colleague liked the fact that I had picked up on a piece of work he had done some years earlier.

I did feel that one of the benefits of having that chat was that we were able to appreciate characteristics of the code that were not immediately visible or apparent, such as decisions that related to the performance of the software. The next blog is about the sometimes slippery notion of software quality (as expressed through the conversations in the podcasts).

Christopher Douce

Software Engineering Radio: Infrastructure as Code (IaC)

Visible to anyone in the world

In the last post of this series, I shared a link to a podcast that described CI/CD. This can be broadly described as a ‘software engineering pipeline where changes are made to software in a structured and organised way which are then made available for use’. I should add that this is my own definition!

The abbreviation CI/CD is sometimes used with the term DevOps, which is a combination of two words: development and operations. In the olden days of software engineering, there used to be two teams: a development team, and an operations team. One team built the software; the other team rolled it out and supported its delivery. To all intents and purposes, this division is artificial, and also unhelpful. The idea of DevOps is to combine the two together.

Looking at all these terms more broadly, DevOps can be thought of as a philosophical ideal about how work is done and organised, whereas CI/CD relates to specific practices. Put more simply, CI/CD makes DevOps possible.

A broader question is: how do we make CI/CD possible? The answer lies in the ability to run processes and to tie bits of infrastructure together. By infrastructure, we might mean servers. With cloud computing, we have choices about which servers and services we use.

All this takes us to the next podcast.

Infrastructure as Code (IaC)

In SE Radio 482: Luke Hoban on Infrastructure as Code, Hoban is asked a simple question: what is IaC and why does it matter (2:00)? The paraphrased answer is that IaC can describe “a desired state of the [software] environment”, where that environment is created using cloud infrastructure. An important point in the podcast is that “when you move to the cloud, there is additional complexity that you need to deal with”. This software environment (or infrastructure) can also be an entire software architecture which comprises different components that do different things (and help to satisfy your different requirements). IaC matters, since creating an infrastructure by hand introduces risk: engineers may forget to carry out certain steps. A checklist, in some senses, becomes code.

When the challenge becomes “how to compose the connections between … thousands of elements”, infrastructure becomes a software problem. Different approaches to solving this software problem are mentioned. There is a declarative approach (you declare stuff in code), and an imperative approach (you specify a set of instructions in code). There are also user interfaces as well as textual languages. Taking a declarative approach, there are models that make use of formalisms (or notations) such as JSON or YAML. A scripting approach, through the use of application programming interfaces, may make use of familiar programming languages, such as Python, which allow you to apply existing software engineering practices. When you start to use code to describe your infrastructure, you can then begin to use software engineering principles and tools on that code, such as version control.
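
To make the declarative/imperative distinction a bit more concrete, here is a hedged sketch in Python. The cloud module and all of its functions are hypothetical (real tools such as Terraform, Pulumi and CloudFormation each have their own syntax), but the shape of the two approaches is roughly as follows.

    # A sketch of declarative versus imperative IaC. The 'cloud' module
    # and its functions are hypothetical, so the calls are commented out.

    # Declarative: describe the desired end state; the tool works out
    # what to create, change, or delete to get there.
    desired_state = {
        "servers": [
            {"name": "web-1", "size": "small", "region": "eu-west"},
            {"name": "web-2", "size": "small", "region": "eu-west"},
        ],
        "load_balancer": {"name": "lb-1", "targets": ["web-1", "web-2"]},
    }
    # cloud.apply(desired_state)

    # Imperative: issue the step-by-step instructions yourself.
    # cloud.create_server("web-1", size="small", region="eu-west")
    # cloud.create_server("web-2", size="small", region="eu-west")
    # cloud.create_load_balancer("lb-1", targets=["web-1", "web-2"])

Because the desired state is just structured data, it can be version controlled, reviewed and compared like any other code, which is exactly the point made in the podcast.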

Towards the end of the podcast, a question was asked about testing (52:30), which is a topic that will be discussed in a later blog. Engineers may create unit tests to check which elements have been created and to validate characteristics of a deployed infrastructure. Integration testing may be carried out using a ‘production like’ staging environment before everything is deployed.

Reflections

A computing lecturer once gave a short talk during a tutorial that changed how I looked at things. He said that one of the most fundamental principles in computing and computer science is the principle of abstraction. Put in other words, if a problem becomes too difficult to solve in one go, break the problem down into the essential parts that make up that problem, and work with those bits instead.

A colleague (who works in the school) once expressed the same idea in another way, which was: “if you get into trouble, abstract up a level”. In the context of the discussion, “getting into trouble” means everything becoming too complicated to control. The phrase “abstract up a level” means breaking the problem down into bits, and putting those bits into procedures or functions, which you can then start to manage more easily.

Infrastructure as Code is a really good example of “abstracting up” to solve a problem that started to become too complicated.

IaC has facilitated the development of a CI/CD pipeline. Interestingly, a CI/CD pipeline can also facilitate the development of IaC.

Christopher Douce

Software Engineering Radio: CI/CD

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 18:12

This is one of a series of posts that draw on Software Engineering Radio. This post is one of a group that relate to software engineering processes. At one extreme there is a waterfall approach, where everything is planned in advance. At the other extreme, there is ‘agile’ in all its different forms. Agile isn’t about not planning. Agile is about understanding that not everything is known, and about sharing technical and process information between developers.

Within each approach, there is the need to build software, and to put what has been built into production. This is where Continuous Integration/Continuous Delivery (CI/CD) comes in. CI/CD can be considered in terms of a pipeline of ‘stuff’ that happens from the point where changes are made to source code, to the point where changes to your software are used by customers or employees.

Continuous Integration/Continuous Delivery (CI/CD)

Two important points I’ve taken away from the Scrum podcast are that the team makes decisions about how to manage their own work, and that they might apply CI/CD.

In SE Radio 498, James Socol discusses Continuous Integration and Continuous Delivery (CI/CD). An important question is, of course: what is Continuous Integration (CI)? (1:40) CI is the ability to make small changes that are integrated into the main codebase of a product. A codebase is typically stored within a source code version management system, such as GitHub. CI also solves the problem of how to combine the work of different developers together (2:40), which can sometimes be tricky, especially if there are multiple developers working on the same bits of software at the same time.

The next question is: what is Continuous Delivery? (4:40) CD means taking the changes that have been integrated together and creating a version of a product that can be used. The decision about whether to use that version may be made by the software engineering team. Continuous Deployment (another version of CD) means that the updates (which have been integrated together) are ‘going into production right now’.

A practical question is: what type of tools are needed? (8:50) Engineering teams need a way to create an automated build process (perhaps using products such as Jenkins or GitHub Actions). Build tools gather bits of code that are to be combined (perhaps compiled) together, run a variety of tests (unit tests, integration tests or security tests) and then create a built artefact. The artefact that is created could be a ‘container’, which could then be deployed to cloud servers. A further point was: if things go wrong and deployed software doesn’t work as expected, it should be possible to ‘roll back’ to a known version that does work.
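
As a rough illustration of what a pipeline stage boils down to, here is a minimal Python sketch. It assumes a project with a tests/ directory and a Dockerfile; the image name is made up, and a real pipeline would normally express these steps in a tool-specific configuration file rather than a hand-rolled script.

    # A minimal sketch of a CI build stage: run the tests, then build an
    # artefact. Assumes pytest and Docker are installed; the image name
    # is illustrative.
    import subprocess
    import sys

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

    try:
        run(["pytest", "tests/"])                            # run unit tests
        run(["docker", "build", "-t", "myapp:latest", "."])  # build artefact
    except subprocess.CalledProcessError:
        sys.exit("A step failed; no artefact will be published.")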

The closing section of this podcast is interesting (48:00). It is important to monitor how a deployed system is running. This point relates to a fundamental of engineering: it is important and necessary to take measurements, since doing so enables you to gain an understanding of how effectively everything is running. This is reflected in the following passage within the SWEBOK: “In continuous development paradigms (such as DevOps), other metrics have evolved that focus not on the architecture directly but on the responsiveness of the process, such as metrics for lead time for changes, deployment frequency, mean time to restore service, and change failure rate - as indicative of the state of the architecture.” (SWEBOK, 2-11). There is a lot to unpack here: software can be deployed on, or across, an architecture, and architectural choices relate to important requirements.
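
Two of the metrics mentioned in that SWEBOK passage are easy to derive from a simple deployment log. The sketch below is my own illustration (the data is invented): it computes deployment frequency and change failure rate from a list of deployments.

    # Computing two DevOps metrics from an invented deployment log.
    deployments = [
        {"date": "2025-09-01", "failed": False},
        {"date": "2025-09-03", "failed": True},
        {"date": "2025-09-05", "failed": False},
        {"date": "2025-09-08", "failed": False},
    ]

    days_observed = 7
    frequency = len(deployments) / days_observed
    failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

    print(f"Deployment frequency: {frequency:.2f} per day")
    print(f"Change failure rate: {failure_rate:.0%}")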

Another podcast on the same topic is SE Radio 585: Adam Frank on Continuous Delivery vs Continuous Deployment. Similar concepts are discussed in this podcast, but there are further discussions about the relationship between CI/CD and how the principles relate to differences between monolithic applications and microservices (which we will get into when we get to the post on architecture).

There are also further comments about the different types of tests that could be integrated into the CI/CD pipeline (21:50), and a hint about further complexities which might emerge, such as deployment to ‘staging environments’ before a product is moved towards production with real users. This links to the importance of managing risk.

Reflections

When I worked as a professional software engineer at the start of the century, CI/CD wasn’t a thing, but it was something that I desperately needed.

I was forever switching between different programming environments to create different bits of software. Code, in the form of stored procedures, had to be added to databases. Databases had to be saved into specific directories. Server-side software written in C# had to be built into what were called assemblies. Everything had to be combined together into installation packages. If you forgot one bit, you had to do it all again. I’m pretty sure that I had a manual checklist. This checklist would now take the form of code. Of course, everything would now all be done on the cloud.

This neatly takes us to the next podcast, which is all about Infrastructure as Code (or IaC).

Christopher Douce

Software Engineering Radio: Waterfall versus Agile

Visible to anyone in the world

The first Software Engineering Radio podcast featured in this blog speaks to a fundamental question within software projects, which is: how much do we know? If we know everything, we can plan everything. If we don’t know everything, then we need to go a bit more carefully, and figure things out as we go. This doesn’t necessarily mean that we don’t have a plan. Instead, we must be prepared to adjust and change what we do.

Waterfall versus Agile

In SE Radio 401: Jeremy Miller on Waterfall Versus Agile, two different approaches are discussed; one is systematic and structured, whereas the other is sometimes viewed as being a bit ‘looser’. In this podcast, I bookmarked a couple of small clips. The first is between 16:20 and 19:00, where there is a question about when the idea of agile was first encountered. This then led to a discussion about eXtreme Programming (XP) and Scrum. The second fragment runs between 45:40 and 47:21, which returns to the point about people. This fragment highlights conflicts within teams, the significance of compromise, and the importance of considering alternative perspectives. This not only emphasises the importance of people in the processes, but also the importance of people skills within software engineering practices.

Following on from this discussion, I recommend SE Radio 60: Roman Pichler on Scrum. Roman is asked ‘what is Scrum and where does it come from?’ An important inspiration was considered to be ‘lean thinking’ and an article called ‘The New Product Development Game’. Scrum was later described as ‘an agile framework for developing software systems’ (47:50) which focuses on project and requirements management practices. Scrum can be thought of as a wrapper within which other software development practices can be used (such as eXtreme Programming, and continuous integration and deployment).

It is worth highlighting some key Scrum principles and ideas, which are discussed from 2:50 onwards. An important principle is the use of a small, autonomous, multidisciplinary, self-organising team (21:10) of 7 (plus or minus 2) people, comprising developers, a product owner and a Scrum master. The Scrum master (24:15) is responsible for the ‘health’ of the team and removes barriers to progress. The team is empowered to make their own decisions about how they work during each development increment, which is called a sprint. A sprint (7:20) is a mini project that has a goal, where something is built that ‘has value’ to the customer (such as an important requirement, or group of requirements), and is also ‘potentially shippable’.

Decisions about what is built during sprints are facilitated through something called a product backlog (28:50), which is a requirements management tool, where requirements are prioritised. How requirements are represented depends on the project. User stories were mentioned as ‘fine grained’ requirements. In Scrum, meetings are important. There is a daily Scrum meeting (13:10), sprint reviews, and a retrospective (43:35). The retrospective is described as an important meeting in Scrum, which takes place at the end of each sprint to help the team reflect on what went well and what didn’t go well.
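
As a small illustration of the product backlog as a prioritised list of requirements, here is a hedged Python sketch; the user stories, the priority scheme and the sprint capacity are all invented for the example.

    # A product backlog as a prioritised list of user stories.
    # Stories, priorities and capacity are invented for illustration.
    backlog = [
        {"story": "As a customer, I can reset my password", "priority": 2},
        {"story": "As a customer, I can pay by card", "priority": 1},
        {"story": "As a manager, I can export a sales report", "priority": 3},
    ]

    # The team pulls the highest-priority stories into the next sprint.
    backlog.sort(key=lambda item: item["priority"])
    sprint_capacity = 2
    sprint_backlog = backlog[:sprint_capacity]

    for item in sprint_backlog:
        print(item["priority"], item["story"])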

Reflections

When I was an undergraduate, we were all taught a methodology that went by the natty abbreviation of SSADM. I later found out that SSADM found its way into a method called PRINCE, which is an approach used for the management of large projects. (PRINCE is featured in the OU’s postgraduate project management module.)

I was working in industry when Beck’s book about XP came out. When I worked as a software engineer, I would say that we combined a small amount of ‘agile’ with a more traditional project management methodology. We used techniques from XP, such as pair programming, and continually kept a Gantt chart up to date.

At the time, none of us knew about Scrum. Our Gantt chart was our version of Scrum’s burn down chart. We didn’t have a product backlog, but we did have an early form of ‘ticket system’ to keep track of what features we needed to add, and what bugs needed to be fixed.

One of the things that we did have was version control. Creating a production version of our software products was quite a labour-intensive process. We had to write release notes, which had to be given a number and a date, and saved in a directory. We built new installation routines, and manually copied them to a CD printing machine, which was asking for trouble. What we needed was something called CI/CD, which is the topic of the next post.

Christopher Douce

Software Engineering Radio: Software engineering processes

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 17:41

Software engineering is about the creation of large software systems, products or solutions in a systematic way with a team of people.

Since software can serve very different needs and has necessarily different requirements, there are a variety of ways that software can be built. These differences relate to the need to take account of different levels of risk. You would use different processes to create a video game than you would to create an engine management system. Software engineering processes are also about making the ‘invisible stuff’ of software visible to software engineers and other stakeholders.

One of the abbreviations that is sometimes used is SDLC, an abbreviation for Software Development Lifecycle. Software has a lifecycle which begins with requirements and ends with maintenance. Although software never wears out, it does age, since the context in which it sits changes. Processes can be applied to manage the stages of the software lifecycle, and the movement between them.

Different terms are used to refer to the development of software systems. There can be greenfield systems, brownfield systems, or legacy systems. Legacy systems can be thought of as ‘old systems that continue to do useful stuff’. Legacy systems are also brownfield systems, which software engineers maintain to make sure they continue to work. Greenfield systems are completely new software products. In the spirit of honesty, more often than not, software engineers will typically be working on brownfield and legacy systems rather than greenfield systems; that is, systems that are towards the end of the software lifecycle rather than at the start.

The blogs that follow highlight different elements of the software development process. The series begins with a discussion about the differences between waterfall and agile. It then goes on to say something about a technique known as Continuous Integration/Continuous Delivery (CI/CD), which has emerged through the development of cloud computing. CI/CD has been made possible through the development of something called ‘infrastructure as code’, which is worth spending a moment looking at (or listening to). Before we move on to the important subject of software quality, I then share a link to a podcast that discusses a process that aims to enhance quality: code reviews.

Christopher Douce

Listening to Software Engineering Radio

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 15:39

From time to time, I dip into (and out of) a podcast series called Software Engineering Radio, which is produced by the IEEE. It’s a really useful resource, and one that I’ve previously mentioned in the blog post Software engineering podcasts.

This is the first of a series of blog posts that shares some notes I’ve made from a number of episodes that I have found especially interesting and useful. In some ways, these posts can be thought of as a mini course on software engineering, curated using the voices of respected experts.

Towards the end of each blog, I share some informal thoughts and reflections. I also share some links to both earlier posts and other relevant resources.

I hope this series is useful for students who are studying TM354 Software Engineering, or any other OU module that touches on the topic of software engineering or software development.

Software engineering as a discipline

When doing some background reading (or listening) for TM113, I found my way to SE Radio 149: Difference between Software Engineering and Computer Science with Chuck Connell.

In this episode, there are a couple of sections that I bookmarked. The first is 10:20 through to 12:20, where there is a discussion about differences between the two subjects. Another section runs between 24:10 and 25:25, where there is an interesting question: is software engineering a science, an art, or a craft? The speaker in the podcast shares an opinion which is worth taking a moment to listen to.

According to the software engineering body of knowledge (SWEBOK), engineering is defined as “the application of a systematic, disciplined, quantifiable approach to structures, machines, products, systems or processes” (SWEBOK v4.0, 18-1). Put in my own words, engineering is all about building things that solve a problem, in a systematic and repeatable way that enables you to evaluate the success of your actions and the success of what you have created.

An early point in chapter 18 of SWEBOK is the need to “understand the real problem”, which is expanded to the point that “engineering begins when a need is recognized and no existing solution meets that need” (18-1). Software, it is argued, solves real world problems. This leads to a related question: how do we define what we are building? That question takes us to the next post, which is all about requirements.

Before having a look at requirements, it is useful to break down ‘software engineering’ a little further. The SWEBOK is divided into chapters. The chapters that begin with the word ‘software’ are: requirements, architecture, design, construction, testing, maintenance, configuration management, engineering management, engineering process, models and methods, quality, security, professional practice, economics.

There are three other chapters which speak to some of the foundations (and have the word ‘foundations’ in their title). They are: computing, mathematical, and engineering.

Reflections

Software is invisible. It is something that barely exists.

The only way to get a grasp on this ‘imaginary mind stuff’ of software is to measure it in some way. The following bit of the SWEBOK is helpful: “Knowing what to measure, how to measure it, what can be done with measurements and even why to measure is critical in engineering endeavors. Everyone involved in an engineering project must understand the measurement methods, the measurement results and how those results can and should be used.” (SWEBOK v4, 18-10). The second sentence is particularly interesting since it links to the other really important element in software engineering: people. Specifically, everyone must be able to understand the same thing.

The following bit from the SWEBOK is also helpful: “Measurements can be physical, environmental, economic, operational or another sort of measurement that is meaningful to the project”.

Next up is software requirements. By writing down our requirements, we can begin to count them. In turn, we can begin to understand and to control what we are building, or working with.

Christopher Douce

A335 Journal - August 2025

Visible to anyone in the world
Edited by Christopher Douce, Monday 1 September 2025 at 10:46

11 August 2025

I’ve just come back from a couple of weeks of much needed leave.

A few days before heading away I was struck down by a nasty stomach bug which meant that I couldn’t travel. Whilst recovering, I listened to a couple of audio books: a selection of stories by Katherine Mansfield (but not the exact same selection that has been chosen for the module), and Under Milk Wood by Dylan Thomas, as narrated by Richard Burton. I liked Mansfield, but I loved the Thomas text. I remembered fragments from seeing a production of it in the 1990s at the National Theatre.

I was planning on taking a lot of texts on holiday with me, but I culled the collection down to a practical core. Plus, I was told I might be getting a couple of the texts for my birthday.

When I finally got stuck into my holiday, I began with Good Morning, Midnight by Jean Rhys, which I adored. It felt quite contextually topical, since I was going to be travelling via Paris on the way back (and I had been drinking a bit of wine to celebrate my break). Next up was Between the Acts by Woolf, which I hated. I found the introduction to the text really helpful, which attuned me to her prose style, but I felt that it was artificial and detracted from what was going on. Perhaps I’ll change my views when I get into the module materials.

Next up was The Playboy of the Western World by Synge. Interestingly, there’s a production of this which is going to take place at the National Theatre in the new year (I was tipped off about this by some chat in the Facebook group). Tickets have been booked. I have no idea whether it will coincide with the TMA schedule (the module website isn’t open yet). I quite liked it, and I started to think about the meaning of the various characters, and how much there was to decode. I’m sure we’ll get onto this when we get into the module materials.

I followed this with The Good Soldier by Ford Madox Ford. I really liked this one. Due to the focus on ‘the soldier’ I was reminded of the film The Talented Mr Ripley, which is probably a spurious comparison. I also thought of The Secret History by Donna Tartt, which I read when it came out. One of the characters’ names is shared between these two texts.

Two of the birthday texts I received were The Mill on The Floss by George Eliot, and the Norton edition of Walden by Thoreau, which is (of course) different from the version that I had downloaded for my Kindle. I tore through The Mill on The Floss in about three days. I really like Eliot’s writing style; her very considered descriptions. I was really interested to learn that the novel is set in Gainsborough, Lincolnshire. I’m guessing that the river Floss might have been inspired by the river Trent.

In the final couple of days, I managed to start the Thoreau text beginning with his essay Civil Disobedience, which has striking resonance today. I then found my way to the start of Walden, reading his chapter on the economics of living by a lake. I was then drawn to the biography section, asking myself the question: ‘who is this chap?’ I was also struck by how young he died.

I should also mention that I got the A334 result I was hoping for, which is a relief. I felt that the EMA assessment was very fair, and I can clearly see where I could have done better and gained a higher mark. There is always learning to be had.

I’m now back to my day job, triaging my inbox. I’m going to try to keep up the reading momentum and shall try to find some time to read some Thoreau every day; I feel it’s important to get ahead. I then want to move on to Season of Migration to the North by Salih, which looks like an interesting read.

16 August 2025

The module website is open. It’s taken a few days to get there, but I’m starting to have a look around. I’ve read the introduction, and I’ve skim-read the assessment guide. There are specific bits about TMAs 1 and 3. I think I’m going to enjoy TMA 3, even though it looks like it is going to be group work. Relating to the module website, a further task is to identify whether there are some resources that I can send to the Kindle. Before getting to this, I was directed to a short series of YouTube videos about Women Writers: Voices in Transition, beginning with Katherine Mansfield.

A final note on this entry is that I’ve become an A335 WhatsApp group co-moderator. There are already a number of messages. To answer a question, I need to look at what the A335 GenAI policy is. I’m sure it will be somewhere on the module website.

29 August 2025

Over the last week or so I’ve been trying to snatch a few moments here and there to read Walden. Just before a long drive, I downloaded a Walden audiobook on Audible. It turns out there are quite a few of them, and one was even included in my Audible subscription. The audiobook ran to ten hours, but I realised that I had the equivalent of four hours remaining, which was pretty much the length of my drive.

I’m going to have to go over it again if I use this text for any of my assignments, but I have concluded that Thoreau writes really well about squirrels.

I’ve downloaded his essay Walking, which runs to an hour and three quarters, and which I’m going to listen to whilst going on some walks. After this, I’ll continue my pre-reading with Salih. Then there’s the sci-fi novel, but I’m in no rush to read that one.

Christopher Douce

TM470 Requirements revisited

Visible to anyone in the world
Edited by Christopher Douce, Wednesday 27 August 2025 at 12:32

Software requirements are important. If you are building a software product, your software requirements will describe what your software product is intended to do. Since software has the potential to do so many different things and solve so many different problems, your requirements can have different characteristics.

This blog post follows on from an earlier blog post, TM470 Considering software requirements. In this earlier blog post, I shared some questions. In this post, I would like to further refine and develop these questions.

When considering your project, it is important to ask yourself the following:

  1. When in your project will you allocate time to uncovering the requirements for your project?
  2. How will you go about gathering your requirements?
  3. How will you go about describing your requirements in your EMA report (and to the examiner)?
  4. How will you go about testing your product to check to see if it meets your requirements?

It is quite easy to answer the important first question. You would have chosen a lifecycle model for your project. Whatever approach you choose, the requirements ‘bit’ of your project should (ideally) occur not long after the start of your project.

The final question is also especially important. If you don’t define your requirements in a clear way, it may not be possible to know whether your software product meets those requirements. In other words, if you don’t have a clear picture of what you are building, you won’t know whether or not your project is a success.

Answering the second and third questions is more difficult, since so much depends on the type of software you are creating. Arguably, the approaches you adopt for testing will also be different, but the key point remains: you need requirements to know what you need to test. Testing, of course, also needs to fit within your project plan.

Functional and non-functional requirements

In TM354, a software requirement (or requirement) is defined as ‘a desired feature, property or behaviour of a software system’ (according to the TM354 glossary). This term can be broken down into two types of requirement. A functional requirement is defined as ‘a specified action that a system has to perform’. In other words, what a software product should do. A non-functional requirement, by way of contrast, is defined as ‘a quality that a system must have’. Expanding this slightly, the idea of a ‘quality’ relates to a characteristic of a software product, such as its performance, security, reliability or usability.

The activity of keeping track of requirements is known, perhaps expectedly, as requirements engineering. If you feel that you need to keep track of your product requirements in a systematic way, it might be worthwhile briefly doing a bit of background reading into requirements engineering (in the context of your project) and sharing evidence of your reading within the literature review section of your project. Do remember to critically evaluate any articles and sources that you uncover. You should apply methods and techniques which help you to solve whatever problem you are trying to solve.
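
If all you need is a lightweight way of keeping track of requirements, even a very simple structure can help. The sketch below is one possible illustration in Python (the fields and the example requirements are invented); the point is to record each requirement, its type, and how you would check it.

    # A lightweight requirements register, illustrating the functional /
    # non-functional split. The fields and examples are invented.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        ident: str
        description: str
        kind: str   # "functional" or "non-functional"
        check: str  # how you would test that the requirement is met

    register = [
        Requirement("R1", "A user can reserve a table online",
                    "functional", "make a booking; check the confirmation"),
        Requirement("R2", "Pages load in under two seconds",
                    "non-functional", "measure load times on a typical connection"),
    ]

    for r in register:
        print(f"{r.ident} ({r.kind}): {r.description}")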

Scenarios

One of the reasons why software engineering is such a fascinating subject is how software can be used to solve so many different problems. Here are three examples of very different contexts in which software can be used:

  1. Embedded software
  2. Logistics software
  3. Consumer software

Embedded software

Embedded software is software that makes physical hardware work. It is software that ‘does something’ that is often important, and is often invisible. A useful example of embedded software is the software that runs on your internet router that you may use to connect to your broadband connection (you may well have learnt more about broadband technology in other modules).

An internet router operating system must satisfy a number of important non-functional requirements: your router software must be efficient (it must not slow down your use of the internet), it must be secure, and must run effectively whilst using limited memory and processing power.

An important question is, of course: what does a router need to do? From a very high level perspective, it must provide some kind of user interface that enables either yourself or an engineer to configure it so it can be used. It will also, of course, offer internet connectivity, either through a wired connection or through a WiFi interface.

This leads to a lot of further questions, such as: how are these interfaces defined? Wireless and internet protocols are, of course, already very well defined. They have been created by international standards organisations (such as the IEEE), corporations and committees over a long period of time. Technology needs standards to be able to work, and software engineers who have the unenviable task of building software for consumer internet routers must know what they are.

When faced with this level of functional and non-functional complexity, software engineers need to express their requirements in a concise and clear way, and in a form that ensures they can be tested. Testing is, of course, important, since we need to know whether our software does what we think it should do.

Some requirements are more important than others. With our internet router, we may be able to tolerate a slight degradation in performance if there is more than one member of our household using our broadband connection. We would not, however, wish to compromise on security, since this would put all users at risk. In other cases, some requirements are non-negotiable. Consider, for a moment, software used to control an engine (or motor) management system for a vehicle. The consequences of that software either failing, or not functioning as expected, have the potential to be catastrophic. On one hand, the driver of a vehicle might be mildly inconvenienced if they can’t get it started, but on the other hand, there could be consequences for passengers, pedestrians and the wider environment. A notable example is the Volkswagen Diesel Emissions Scandal, which is summarised by Jung and Park (OU Library). Since the software that software engineers create impacts on people and society, software engineers have a responsibility to reflect on the decisions that they make, as well as the ultimate impact of the tasks they are asked to carry out.

Before moving onto the next example, a useful question to ask is: what project model is best suited to the development of embedded software for an internet router, where all the requirements are clearly defined? If everything is known from the start of your project, a traditional waterfall project model may well be most appropriate.

Logistics software

This second category can be thought of as ‘software that manages workflow’. The term ‘workflow’ can be used to describe any series of actions that contribute towards solving a problem or satisfying a need. The effect of some software isn’t immediately visible to us. Our experience at a supermarket or at a restaurant is often facilitated by software. Behind the scenes, requests for products are sent to one or more suppliers. These suppliers must then find a way of delivering your order to the shop or restaurant you visit. The importance of such software becomes very apparent when there are problems, as illustrated by a cyber attack on the Co-op chain of supermarkets (BBC News).

The functional and non-functional requirements for business-to-business logistics software will, of course, have a different character when compared to the formality that is needed for very highly specified embedded software. It is, of course, really important to clearly describe what is expected, by which stakeholder (a stakeholder being defined as ‘anyone who is affected by the introduction of a software system’), and in what format.

It is also useful to reflect on the ways in which requirements may be gathered. Whilst the requirements for an internet router may exist within formal standards documentation, the requirements for logistical operations may have to be discovered by studying existing business processes and ways of working. Depending on the organisation, this may be the responsibility of a business analyst. In smaller organisations, it may be the responsibility of the software engineer to ask important questions and document their findings. Software is, of course, always about people.

What project model choice would be appropriate for this category of project? The answer, of course, depends on the detail, but let us assume that we know a lot about the operation of the business processes and the underlying technologies we may wish to use. In this case, we might adopt a variation of the iterative waterfall model, where more communication between the different stakeholders is embedded into the planning.

Consumer software

Consider a website that supports a community coffee shop. The website shares useful information for potential customers, such as its location, a menu of cakes and drinks, and its opening times. The manager has decided that it would be useful to enhance the website to offer more functionality. Some key ideas include a way to advertise special events, such as gallery evenings, book groups and a ‘toddler club’, a way to reserve tables during busy times, and even a way to allow some customers to order their coffee and cake before they physically visit the café.

Where there are requirements, there are also questions. In this scenario we might ask where our requirements come from. Do they come from the manager, or should they come from the staff who will be working in the café, or from its customers? The manager, for example, may have an idea about what they need, but might not have a very clear vision of what they want.

The software requirements for our coffee shop website are a lot less clear than the requirements of a logistics product, and much less clear than the requirements for an embedded system. In each scenario, the requirements need to be discovered, but the approach for discovering those requirements is different in each example.

When we are faced with unclear requirements and have access to potential users (who may have strong opinions about what they do and don’t want) we may adopt an iterative project model where we create a series of prototypes of what a product might look like. We might even apply something called an ‘agile method’ that emphasises producing versions of working software as early as possible, whilst also fostering collaboration between developers and communication with key stakeholders.

Discovering requirements

These three scenarios implicitly suggest there are different ways of uncovering requirements. The first scenario suggests that some essential requirements are likely to exist in the form of detailed standards and protocol documents. The second scenario suggests that requirements exist within the business, and may need to be discovered, perhaps by business analysts or by working with others who have a detailed understanding of the problem area. The third scenario suggests that although end users may have a solution in mind, they might not have a clear idea about what form it might take.

Here is a summary of some methods that could be used to learn more about software requirements:

  1. Study documents that are connected with the problem that is being solved. As well as standards, this could include documents that describe national or international legislation, sector guidelines, or corporate policies.
  2. Look at existing products, or study how existing products are currently used to solve the problems that your software product will ideally solve.
  3. Ask users. Interview them individually or by using focus groups.
  4. Observe users. Study, with permission, what happens within an existing system. Alternatively, ask users to record their own observations.
  5. Examine the context or wider environment of the problem to identify any further stakeholders, social or technical constraints, and concerns that need to be thought about.
  6. Build a prototype which is then evaluated. The process of evaluation can then lead to the discovery of more questions, and more requirements.

The information you gather from each of these approaches must be analysed. Analysis is a critical and creative process that must also take account of ethics; it is necessary to consider the impact of a software product on individuals, communities, and societies.

When you have decided on what your requirements are, how should you represent them? Should you write them down, or are there other approaches you could use? Your decision about which approach to use may well depend on your approach to risk, and the consequences of what might happen if your software product were to go wrong.

Representing requirements

The best way to represent or to describe your requirements depends on the characteristics of your software system. Looked at very simply, there is a continuum which relates to formality. At one end of the requirements continuum there are very formal requirements that could be expressed in a formal mathematical language. At the other end, the requirements may not be very well known and are expressed using some slides or images created using a prototyping tool.

What follows is a summary of different approaches you can use to represent requirements:

Formal methods: One of the main difficulties of using natural language to describe software requirements is that natural language can lead to ambiguity. To get around this challenge, software researchers have invented software specification languages that are based on mathematics. One of the advantages of a formal approach is that it becomes theoretically possible to formally verify whether program code (software) logically satisfies (matches) a specification. You would use a formal approach for safety-critical systems, which are often embedded software systems. The difficulty of using formal methods lies, of course, with their formality. Since they use a formal language, they can be very difficult to work with.

Volere shells: Although natural language can express ambiguities, one way to manage this is to control the way in which language is used. Volere shells are essentially templates which contain sections that present a concise description of a requirement, the motivation for the requirement, and a fit criterion that can be used to measure or determine whether it has been implemented successfully. For more information about Volere shells, do consult the OU’s software engineering module and Mastering the Requirements Process by Reed, Robertson and Robertson (2024) (OU Library).
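
To make this concrete, here is a minimal sketch of how a Volere-style shell might be captured as a simple data structure. It is written in Python purely for illustration; the field names are my own simplification of the template, and the example requirement is borrowed from the coffee shop scenario discussed earlier.

    from dataclasses import dataclass

    @dataclass
    class RequirementShell:
        """A simplified, illustrative Volere-style requirement shell."""
        identifier: str     # a unique reference, so engineers can discuss the requirement by name
        description: str    # a concise description of the requirement
        rationale: str      # the motivation for the requirement
        fit_criterion: str  # a measurable way to determine successful implementation

    reserve_table = RequirementShell(
        identifier="REQ-12",
        description="A customer can reserve a table for a chosen date and time.",
        rationale="Reservations reduce queuing and crowding during busy periods.",
        fit_criterion="A confirmed reservation appears in the cafe diary within one minute.",
    )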

Use cases: Use cases are all about describing what happens when a stakeholder interacts with a software system with the intention of carrying out a task or completing a goal. OU module materials describe two types of use case: textual use cases and graphical use cases. Textual use cases present a set of steps that are numbered, with alternatives. Graphical use cases are described using the Unified Modelling Language (UML) and summarise actions and actors (which can be stakeholders). Use cases are given names, which allows software engineers to easily discuss them with others.
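
As a brief illustration, a textual use case for the coffee shop scenario introduced earlier might look something like this (the steps are my own invention):

    Use case: Reserve a table
    Actor: Customer
    1. The customer selects a date and a time.
    2. The system displays the available tables.
    3. The customer confirms the reservation.
    4. The system records the reservation and sends a confirmation.
    Alternative: at step 2, if no tables are available, the system suggests other times.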

User stories: The OU software engineering module describes a user story as ‘a story written by an intended user of a system; it describes some functionality that is of value to the person(s) writing the story. It represents a user’s expectation of the system’ (TM354 module glossary, 2025). User stories can take the form: ‘As a <user> I want to <description of activity> so I can <explanation of why activity is important>’. Like use cases, user stories can be given names (or even numbers) to help groups of software engineers to discuss them with each other, and within meetings. Since software requirements are, of course, related to software testing, an approach known as Behaviour-driven Development (BDD) builds on the concept of user stories, allowing them to be embedded within software test scenarios.
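
To illustrate how a user story can feed into testing, here is a minimal BDD-flavoured sketch written as a plain Python test. Real BDD tools, such as Cucumber or behave, use a dedicated Given/When/Then syntax; the CoffeeShop class here is entirely hypothetical and stands in for a real booking system.

    # User story: as a customer, I want to reserve a table
    # so I can be sure of a seat during busy times.

    class CoffeeShop:
        """Hypothetical stand-in for a real booking system."""
        def __init__(self, tables):
            self.free_tables = tables

        def reserve_table(self):
            """Reserve a table if one is free; report success or failure."""
            if self.free_tables > 0:
                self.free_tables -= 1
                return True
            return False

    def test_customer_can_reserve_a_table():
        # Given a cafe with one free table
        shop = CoffeeShop(tables=1)
        # When a customer reserves a table
        reserved = shop.reserve_table()
        # Then the reservation succeeds and no free tables remain
        assert reserved
        assert shop.free_tables == 0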

Prototypes: There are many different types and forms of prototype. A very simple prototype could be created from a series of pencil sketches, or prepared using one of many user design tools. Alternatively, a prototype could be a simulation of a software product, perhaps built using a presentation tool such as PowerPoint. A prototype might also be a semi-functional product. Sometimes the terms ‘horizontal’ or ‘vertical’ are used. A horizontal prototype shows a lot of functionality but in not much depth; a vertical prototype shows a small amount of functionality in a lot of depth. Unlike other representation techniques, prototypes embody requirements rather than having them written down. It may, of course, be necessary to add additional description, especially regarding non-functional requirements, to provide further information and context for software engineers.

Planning for implementation

You have used different approaches to gather your requirements, and you have decided on a way to represent them. The next question is: how should you prioritise your requirements? The process of prioritisation, like the identification of requirements, is also a creative process. There are, however, tools that can be helpful. One tool is called the MoSCoW prioritisation tool (DSDM Agile Project Framework, MoSCoW Prioritisation).

MoSCoW is an abbreviation for ‘Must Have’, ‘Should Have’, ‘Could Have’ and ‘Won’t Have this time’. The most important requirements are obviously those that your stakeholders ‘must have’. The Agile Project Framework resource offers the following direct guidance: ‘ask the question ‘what happens if this requirement is not met?’ If the answer is ‘cancel the project – there is no point in implementing a solution that does not meet this requirement’, then it is a Must Have requirement’ (Agile Business Consortium).
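
One lightweight way of recording MoSCoW decisions is simply to group requirements by category. The sketch below, again in Python and using entries drawn loosely from the coffee shop scenario, shows the idea; the categorisation itself is hypothetical rather than a recommendation.

    # Requirements grouped by MoSCoW category (illustrative entries only).
    prioritised_requirements = {
        "Must Have": [
            "Display the cafe's location, menu and opening times.",
        ],
        "Should Have": [
            "Allow customers to reserve tables during busy times.",
        ],
        "Could Have": [
            "Allow customers to order coffee and cake before visiting.",
        ],
        "Won't Have this time": [
            "A customer loyalty scheme.",
        ],
    }

    # A 'Must Have' is a requirement whose absence would mean cancelling the project.
    for category, requirements in prioritised_requirements.items():
        print(f"{category}: {len(requirements)} requirement(s)")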

If you are considering running a project that adopts an iterative approach, it will be useful to highlight which requirements will be implemented within which iteration. If you have begun with a prototype, taking a practical approach, it will be necessary to convert the requirements embodied within the prototype into specific development tasks. Working with requirements is a process of making invisible needs visible, so they can be converted into working software.

A recommendation

If you are creating a software product, include a separate requirements section in your account of project work chapter. Use your literature review chapter to show your awareness of different approaches to requirements, and use your project work chapter to show how you have critically applied your understanding to the problem that you are solving.

Use an appendix to present a summary of all your requirements. Use the body of your report to share examples of the most significant or important requirements in your project. In the reflection chapter, consider writing something about what you learnt by working with requirements in a systematic way, and which requirements were the most difficult to implement. If there were some requirements that were too difficult to implement or define, do say something about why you thought that was the case, and what you might have done differently.

Resources

When writing your project report (your EMA) it is important and necessary to show your understanding of principles and ideas from modules you have studied before you got to your project. A number of the level three computing modules address the topic of software requirements. What follows is a very brief summary of three important modules. If you have not studied these modules during your degree, there is an opportunity to learn more about the topics they contain, and apply them to your project.

TM353 IT Systems: Planning for Success

This module helps you to consider the connection between technology and people, and the way in which they can interact with each other to create a sociotechnical system. TM353 emphasises the importance of understanding broader and wider perspectives. It introduces a methodology, known as soft systems methodology, and techniques such as the creation of rich pictures, which can be used to depict and explore the environment that your software product inhabits. By considering the broader perspective, it is then possible to consider the way in which software may influence communities of users and stakeholders.

TM354 Software Engineering

TM354 introduces a number of practical tools that can be used to explore and share requirements. It describes a UML notation called an Activity Diagram which allows you to express business processes and accompanying interaction with software products. It also describes different types of use case, both textual and diagrammatic. It describes Volere Shells in quite a bit of detail, and offers some very clear examples about how to write user stories. It also draws upon the Reed, Robertson and Robertson text, and directs students to a useful interview.

TM356 Interaction Design and the User Experience

TM356 makes extensive use of a well-known textbook, Interaction Design: Beyond Human-Computer Interaction by Preece, Rogers and Sharp (OU Library). Although the text focuses substantially on the design of user interfaces in all their different forms, understanding user requirements is a fundamental theme for a very simple reason. If there is a mismatch between the user’s expectations and the design of a product, that product is not likely to be as usable as expected. If you haven’t studied the OU interaction design module, this text can be especially useful. It offers some practical guidance about what techniques can be used to gather requirements, and what approaches can be used to perform usability evaluations of prototypes.

To find your requirements, it is sometimes necessary to carry out applied research to discover what users do and need in the contexts and environments that they inhabit. Like the software engineering module, the interaction design module discusses user stories, use cases, and also mentions Volere Shells. Notably, it adds a few further tools that can help to further uncover and develop requirements. An essential use case, for example, is a use case that gets to the ‘essence’ of an interaction, without regard to how the interaction is implemented. Personas, scenarios and storyboards are practical tools that help us to think about who the users are, what they will be doing when they use the software, and the context in which it will be used. Understanding requirements can then lead to the design of a prototype, which can then be evaluated and further refined.

Reflections

I’ve been thinking about writing an article like this ever since I noticed a lack of detailed discussion about requirements in TM470 EMA project submissions. Good projects will always consider them in depth. The best projects will also answer the question ‘how should I best represent my requirements to meet the needs of my project?’. There should, of course, be evidence of studying the topic of requirements within the literature review chapter. The best projects will also go on to use requirements within a testing process to determine whether a development has been successful.

The more you start to think about requirements, the more slippery the concept becomes. It is impossible to create one simple set of rules that works for all projects. There is a simple reason. Every project is different, and every project has its own unique set of characteristics. Gathering and describing requirements are creative activities, which take software engineers on to other creative activities, such as software development and testing.

Differences between projects and problems can also mean differences in software engineering culture. Culture is expressed through differences in behaviours, practices and traditions. The culture of a software company that builds real-time safety-critical embedded software is likely to be different to the culture of a company that builds consumer websites. How requirements are expressed is intrinsically linked to institutional values and non-functional requirements.

A final point is that software is always subject to change. Change can occur due to technology changes, changes in government legislation or corporate processes, or changes in work practices. Businesses are, of course, subject to mergers and acquisitions, and sometimes technology and software providers are changed for economic or practical reasons. In some cases, the speed of workplace change can outstrip the changes that need to take place within information systems. Since the role that software plays within organisations continually changes, so do the software requirements.

As well as supporting TM470, this blog serves a second purpose. Some of the ideas presented within this article may find their way into the new TM113 module, which features an important software engineering component.

Acknowledgements

Many thanks are given to the members of two module teams: the TM470 module team, and the TM113 module team. When I’m referring to ‘module teams’ in this section, I am also referring to tutors. I continue to hold the view that tutors are the most important people in the university.

Christopher Douce

Introducing ICEBERG

Visible to anyone in the world

On 11 July 2025, in my capacity as a TM113 module team member, I attended a continuing professional development (CPD) event about something called ICEBERG.

ICEBERG is an abbreviation for Integrated, Collaborative, Engaging, Balanced, Economical, Reflective and Gradual. It is a tool used during learning and curriculum design, and is intended to embody best practice. The session was facilitated by learning designer Paul Astles, who is from the OU unit Learner and Discovery Services (LDS) (I think that is what LDS means).

What follows is a set of notes, which I am sharing with permission. It is hoped they are useful to anyone who is involved in learning design (including my colleagues from the TM113 module team). My advance apologies for anything obvious that I have missed, any mistakes I have included, and how long it has taken to pull together this set of notes. I always endeavour to offer thorough citations, but some sentences may have been taken verbatim from a useful presentation that Paul shared during his session.

Considering draft materials

The starting point of the session was also our starting point: our first drafts of our module materials, which are known as a ‘D0’ (or, module materials that we have started to sketch out). To help us think about our D0s, we looked at three ICEBERG principles: Integrated, Collaborative, and Engaging. Each principle has a set of ‘corresponding design tips’. Here are the tips that I’ve noted down, which come from Van Ameijde et al. (2018):

  • Integrated: A well-integrated curriculum constitutes a coherent whole where all the parts work together in a meaningful and cohesive way. This means that there is constructive alignment between learning outcomes, assessments, activities and support materials which all contribute effectively to helping students to pass the module.
  • Collaborative: Meaningful student collaboration and communication helps students in engaging in deep learning and making concepts and ideas their own (e.g., Garrison et al., 2001; Johnson & Johnson, 1999). It also serves as a mechanism for social support where students feel part of an active academic community of learners (see Tinto, 1975) which makes it more likely that they are retained.
  • Engaging: An engaging curriculum draws students in and keeps them interested, challenged and enthusiastic about their learning journey. Where the curriculum matches student interests and aligns with their educational and career aspirations, students are more likely to be retained. Using relevant case studies and readings and keeping these up-to-date as well as including a variety of different types of activities contribute to an engaging curriculum.

We were asked to look at a bit of material that was content heavy and to consider a question: how do we relate our draft materials to these points on the framework?

During our discussions, I made a couple of notes. Regarding Integrated, scene setting is important, since it adds context. Collaborative can be useful, particularly a bit later on in the module when tools are introduced (collaboration is a really important skill within software engineering). Also, Engaging can and should directly align with educational and career aspirations.

A key point that I took away from this part of the session was the need to emphasise ‘the people bit’. Also, since TM113 has three key themes, a question I had was ‘how do we integrate them together?’ There are also, of course, other themes that are important to the module, such as employability, skills development, ethics and sustainability. In some respects, software engineering can be a linking theme, since it is all about people, tools, management of complexity, and communication.

The student learning journey

After a short break, the next part of the session related to the ‘student learning journey’. We again returned to the definitions of Van Ameijde et al. (2018):

  • Balanced: Balanced in this context refers to the workload that students face when studying the curriculum and the extent that this workload is well-paced and evenly distributed. Research has pointed out a negative correlation between average weekly workload and student outcomes, including satisfaction and pass rates, making it particularly important that we don’t overload students whilst keeping the workload appropriate for the level of study.
  • Economical: Economical refers to the extent to which a module or qualification is efficient in delivering the learning outcomes without providing too much additional material. There might be a temptation to provide students with an overwhelming array of interesting facts, ideas, theories and concepts in a given subject area.
  • Reflective: For students to effectively pass a module and engage in deep learning, it is important that they are able to reflect on their learning and study progress and have the time and space to do so. This includes regular opportunities for students to test their understanding through, for instance, self-assessment questions, formative quizzes and iCMAs. It also includes opportunities for students to reflect on their learning practices and progress, and set goals. Such opportunities for reflection and feedback help keep students engaged with the curriculum and makes retention more likely.

Of these three principles, one of them is causing me a mild amount of worry: the ‘economical’ principle. There is an inherent challenge within pedagogy, which is: to learn some higher level concepts, you may need to learn a lot of lower level concepts. This learning of ‘lots of useful stuff’ can be difficult. There is also an important related question, which is: where do we tell students about all these lower level concepts, if we’re being asked to do it in a cognitively economical way? Interesting facts, ideas and theories can be useful.

We didn’t get the chance to have a chat about ‘G’, which is Gradual. Also drawing on Van Ameijde et al. (2018):

  • Gradual: In an effective learning journey, students will gradually encounter increasingly complex and challenging concepts, ideas, materials, tasks and skills development. Where knowledge, skills and assessments all occur over a manageable gradient which builds on acquired knowledge, provides timely opportunities to learn and practice study skills and prepares them for achieving the defined learning outcomes, it is more likely that students will not be overwhelmed and therefore more likely be retained.

The key point that I’ve taken away from this bit is the importance of practice (relevant ‘student answered questions’ which can be presented in the module materials).

Resources

During this session (and a related session) links to a number of useful resources were shared, including, of course, the Van Ameijde et al. (2018) article that was mentioned earlier.

Reflections

During this session, we didn’t have much time to apply the framework to our module materials, since we still had much to figure out. Not only were we still figuring out ICEBERG, we were also still figuring out the nature and form of our module materials.

One question I did have about ICEBERG was: where is the tutor in all this? I think the answer is that the tutor is implicitly embedded within all parts of the framework. Tutors, of course, make module materials come alive. In turn, they can magnify whatever learning design decisions have been made by the module team.

I get the impression that ICEBERG is a tool that specifically applies to individual modules, rather than qualifications – or, in other words, groups of modules. Can ICEBERG be applied to qualifications? Referring to the original article by Van Ameijde et al., the definition of economical ‘refers to the extent to which a course or qualification is efficient in delivering the learning outcomes’, which suggests that it may well have a wider role. An interesting research question could be: ‘how might all the principles of ICEBERG be used to analyse the learning design of qualifications from different faculties?’ In the meantime, I’m going to concentrate on TM113.

Acknowledgements

Many thanks to Paul for his useful presentation, LDS, and all colleagues who have contributed to the development of ICEBERG. Thanks are also extended to fellow TM113 colleagues who attended the session.

Christopher Douce

Results day

Visible to anyone in the world
Edited by Christopher Douce, Monday 21 July 2025 at 10:33

Today or tomorrow my A334 result is released. I'm a bit on edge. I know what result I would like to get, and will probably be grumpy if I don't get what I'm hoping for. Whilst having a quick look at all my Facebook updates, I noticed the following post from colleague, tutor and student, Cath Brown, who offers some really helpful practical advice. Cath has given me permission to share her post.

A post about results  

If you got what you want - many congratulations! Go and celebrate!

No offence, but the rest of this isn’t for you...

a) If you passed but are disappointed with the grade

It is OK to be disappointed if it’s not what you want - you don’t have to be glad to have passed if that’s not how it feels to you 

Once you’ve had a bit of time to process it:-

  1. check how much difference it makes to your final classification using my classification calculator in the group files. You may be pleasantly surprised 
  2. if it’s creating a problem you could look, if your degree allows it, at doing a different module in its place and unlinking it. Student Finance does allow 120 credits of leeway above the total for this sort of thing 
  3. drastic move of course - but if your degree doesn’t allow it, changing degrees is a thing 
  4. IMPORTANT - if you are at the end of your degree and don’t want to get the classification this result gives you - DO NOT ACCEPT THE DEGREE OFFER!

b) if you failed

Take time to scream/cry/drink/mainline chocolate!

Then - remember lots and lots of people have failed a module and gone on to succeed.

Practicalities 
  1. In many cases you can resit/resubmit (usually for early September). If you do this your grade is usually capped at a Pass 4. Have a look at the classification calculator to see if you are OK with that. You can ask for an individual support session via student support to help with your preparation (there may be more support depending on the module). If you pass the resit you can progress to next year as you originally planned.
  2. if you don’t want a capped grade you can redo the entire module - all the grades are then available. 
  3. or you could potentially ditch the module and do something new - see above in disappointing grade section. 

c) if you got a “pending”

There can be various reasons for this - occasionally it is the whole module, or it could be missing evidence for special circs (obviously provide it as soon as you are asked if it’s this!)

Other reasons
  1. you are going to be asked to do an “additional assessment” - this is if they can’t decide which grade you are on based on what they’ve seen. Doing it can only help you 
  2. In a few modules, you are going to be asked to redo a TMA or submit one you didn’t submit. This is to enable you to get over the pass threshold. It can only take you from fail to pass - it can’t make things worse 
  3. Academic conduct issues such as collusion or unauthorised use of Gen AI. If it is this you will be told it is quite quickly but actually processing the case is likely to take quite a lot longer unfortunately. When it is processed you will be given the chance to explain yourself.

Your tutor and student support will not have any further information I’m afraid.

The Students Union’s Individual Representation service may be able to help if you have a case against you 

Key thing - if you are upset for any of the above reasons - IT IS NOT THE END OF THE ROAD! Whatever went wrong - there is something you can do! You can come back from it and go on to succeed!

Reflections

Cath's advice is really helpful. I didn't know about the point regarding the additional credits that can be studied under a student loan. (The idea of taking an additional module had also crossed my mind). The only point I can add is: do feel free to also seek advice from student services. After taking a bit of time to gather your thoughts and review all the guidance the university has shared with you, do give student services a ring.

Acknowledgements

Many thanks to the illustrious Cath Brown. Good luck with your own results for the module that you're currently studying.

Christopher Douce

A335 Journal – June 2025

Visible to anyone in the world

18 June 2025

A former A335 student has been kind enough to send me most of the set books they had bought for their studies, for which I am immensely grateful. There’s a couple of books that are missing which I need to buy. It’s always good fun finding copies of books online. (In return, I’m going to give away my A334 sets to someone. If you’re after a set of A334 texts, do get in touch!)

So far, I’ve listened to 5 hours of Bleak House whilst driving. I’ve also started to read the text, since my concentration necessarily drifts whilst overtaking juggernauts on the A1. Since starting to read the text, I’m getting more of a feeling for the characters and their importance. I’ve packed my newly acquired (and much thumbed) edition of Bleak House for my current travels, but I’m going to be taking my Kindle on holiday; 900 pages is a lot of pages to get through.

Since my mystery package of books also contained the module blocks, I’ve had a quick skim through the first two. I’m really looking forward to Mayhew’s London Labour and the London Poor and, as mentioned, the George Eliot text. The lack of drama texts in this module is striking, but I understand we’ll be looking at two drama texts: one by Synge and another by Hare.

Christopher Douce

Communicating with students: student communications framework

Visible to anyone in the world
Edited by Christopher Douce, Thursday 3 July 2025 at 08:55

On 5 February 2025, I was sent a link to some files that described a new student communications framework (which was then later updated in June 2025). The aim of this very long document was to offer a summary of some of the messages that might be sent to students before, during and between periods of study. It also contained suggestions of messages that could be used by faculties and module teams.

Accompanying this guidance, there is also something called ‘the student communication schedule - example for ALs’. What follows is an edited summary of that guidance, which has been prepared for an October presentation. Interestingly (and usefully) it also offers some useful practical suggestions about additional actions that could be carried out by tutors to support students. For concision, I have omitted some references to additional links and resources that tutors can use. Full credit for this guidance goes to the student support hub, and the team who put it together.

The schedule is introduced as follows: “it is particularly useful for new ALs and those new to online teaching. It is a set of suggestions that you might find helpful as a tutor to structure your communication with students to offer proactive support, particularly at the start of the module and ahead of the first Tutor Marked Assignment (TMA). You are not expected to adhere to it; please use the resource as required and where beneficial, bearing in mind that not all points of contact will be applicable in all situations.”

What follows is a very lightly edited version of the schedule. I have added additional notes and comments in [square brackets].

Communication schedule

Student group is allocated  

Welcome email to all students, mentioning the module website and the forum.

[I mention these in a letter, which I attach to my first email, which also contains my contact details and availability.]

A few days before the module start

Post on the module forum; include an introduction and an ice-breaker activity – for example, a question that every student can respond to (e.g., ‘What is your favourite period of History that you have studied so far?’ for a History module.)

[I tend to do this just before the sending out of my introductory email, so I can share a link to the forum in my letter – I also encourage students to subscribe to the forums that will be used during the module.]

End of Week 1

  • Check and encourage the use of the VLE and the forum.
  • Follow-up email or phone call to students who have not replied to the welcome email.
  • Email to all students introducing them to tutorials; different types of tutorials and different rooms, and what to expect, as well as that they can watch recordings and book the times and sessions that suit them.

[I mention the date of the first tutorial in my introductory letter, also suggesting that they put the dates of tutorials in their diary. I mention why attending tutorials is important: it can help students to get higher scores in their assessments.]

Week 2

Check for students who have registered late and send them a welcome email. Alternatively, you can issue the welcome email to newly registered students as and when you receive a notification of a late registration.

End of Week 2

Follow up with late-registered students who have not replied to the welcome email.

Refer any students who have not replied to the follow-up contact to Student Support Team using eSRF [an electronic student referral form, which can be found on your TutorHome Page].

[I adopt a three-stage approach to try to communicate with students. I begin with an email. If I haven’t heard back from them, I send a text message (I don’t personally have any concerns about sharing my personal phone number, but other tutors might not want to do this), and if I haven’t received a text message back, I give them a ring, leaving a voicemail. I only send in a referral when I have tried all three approaches.]

Week 3

Refer any late-registered students who have not replied to the follow-up contact to Student Support Team. Check Early Alert Indicators and identify students who are predicted not to submit TMA01 or receive a low grade. Reach out to them with guidance and resources if appropriate.

[In addition to using the early alert indicators, do make sure that you have a good look at a student’s study history. The information shared on TutorHome can also be a good reflection of what is summarised within the early alerts tool.]

Two weeks before TMA01

  • Email to all students offering support and useful links ahead of the first assignment and encourage them to submit a dummy TMA. Remind students that they have the opportunity to submit TMA00 to test the eTMA system and formatting requirements.
  • Reassure the students that, although the eTMA system may show a deadline for TMA00, there is no official deadline.
  • Check the Early Alert Indicators dashboard.

[The advice about the dummy TMA is more applicable to level 1 students, but can be helpful for students on other levels. I tend to send a group email in the run up to the first TMA, encouraging students to get in contact if they have any questions.]

A week before TMA01

Email to remind students about the TMA submission deadline and methods and key resources.

One day after TMA01 deadline

Reach out to students who have not submitted their TMA to offer support.

[With this point, I could debate the use of the term ‘reach out’ – I much prefer ‘contact’, but I’ll move on. I give it a day or so after the deadline before emailing students about their TMA. If I haven’t heard from them, I give them a ring, and if I still haven’t heard from them, I send in a referral to student services.]

After TMA01 results are published

  • Congratulate students on their results and remind them – particularly Level 1 – to download their marked TMAs and read the feedback on the TMA and PT3 form in addition to looking at their mark.
  • Reach out to students who have failed or received low marks in their first TMA. You may wish to do this even before returning their work to offer support and talk about their next steps.

[Between this step and the previous one, I post a forum message called ‘TMA01 marking updates’, where I let students know when I’ve downloaded the TMAs, when I’m roughly halfway through the marking, and when I’ve returned everything. This way students have a sense of when I am likely to return their assignments. Emphasising the downloading and reading of the feedback is important; it is so easy to just look at the mark and not look at the feedback.]

Before the Christmas break

Email all students to check in and signpost to resources on study skills in preparation for the next TMA, as well as mental wellbeing – encourage to reach out with any questions.

[In this message, I share season’s greetings and tell students something about my availability during the festive period. I continue to check email, but during this break I’m not as responsive. Managing expectations is important.]

Early January

Email all students to welcome them back after the study break. Encourage them to book tutorials. Check VLE use and Early Alert Indicators and reach out to those who are less likely to submit their next TMA, as well as those who didn’t do well in the previous one.

A week before a TMA

Email to remind students about the TMA submission deadline and methods and key resources. Check Early Alert Indicators to reach out to any students who might need additional support or encouragement.

After a TMA

Report any students who have not submitted to Student Support Team. Email these students to encourage a conversation and signpost to information about their options.

[Just as with TMA01, I post a forum thread which has a title ‘TMA0x marking updates’, posting again when I’m roughly half way through my marking.]

After TMA results are published

Reach out to students who have failed or received low marks in their TMA and offer support on next steps.

[There is an awful lot of ‘reaching out’ going on (!) An accompanying point is: do ask students who appear to be struggling whether they would like to have an additional support session. This is a one-to-one meeting, where a tutor can go through difficult and important parts of the module. It is also an opportunity to talk about study skills, and to signpost students to useful resources. Do refer to this article about the different study skills resources and toolkits that are available.]

After the Easter break/before the next TMA

Email to all students to welcome them back after the break. Remind students about the next TMA submission deadline and methods and key resources. Check Early Alert Indicators to reach out to any students who might need additional support or encouragement.

Before the EMA [or examination]

Email all students with resources on the EMA or preparation for exams. Encourage contact and engagement with tutorials. Check Early Alert Indicators to reach out to any students who might need additional support or encouragement.

[Before an exam preparation session, I tend to share a summary of what a session will contain. I also mention that ‘students who attend these preparation tutorials are likely to gain higher scores in their exam’ (which is probably true, since preparing for any type of examinable component always helps). After running an exam preparation session, I share any resources I prepared to the forums, along with links to recordings.]

After EMA/exams

Email students to congratulate them on the completion of their module and encourage [them] to reflect on their progress. Explain when they can find guidance on next steps.

Best practice notes

The following points were also shared in the schedule:

  • Regularly check in and personalise interactions to make students feel valued and understood.
  • Take time to get to know your students and encourage them to share any concerns or additional needs.
  • Define your availability, response times, and scope of support from the start.
  • Ensure to check the students list on TutorHome regularly for any changes in students’ circumstances.
  • Ensure that your emails are comprehensive but succinct, and consider using plain English; provide clear feedback that helps the student grow.

A final recommendation (if it works for you) is to make use of the group email tool that is found in your student list. The reason for mentioning this is that it keeps a copy of the message that is sent, which can then be viewed by colleagues in the student support teams.

Reflections

Before I was sent this schedule, I was thinking of writing my own version. I have also made a note of another version of a communication schedule that was created by a fellow tutor who taught on postgrad modules. My colleague, Arosha Bandara, used a spreadsheet to help guide his messages. Although this sounds terribly cold and impersonal, I got the impression from a presentation he gave at a tutor development event that he spent a lot of time personalising every email that he sent out. The best practice tips matter.

Unlike Arosha, I don’t have a detailed systematic framework to guide when I send out messages to students. The process of reading, editing and sharing this framework has helped me to reflect on my own practice. A key reflection is that this schedule is useful. It is going to help me to become a better tutor. Also, just as you can customise any email, you can also customise your own communication strategy to reflect the module that you tutor, and your own personal style.

Acknowledgements

The headings within the schedule, and the actions that are described, have all been produced by the student communication hub (as far as I am aware). All mistakes and errors should be attributed to that team, but I’m also happy to take some blame for any transcription and editing mistakes which may appear within this blog (there are likely to be many).

Christopher Douce

Preparing an account of practice for the UKCGE

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 1 July 2025 at 09:40

The UKCGE is an abbreviation for the UK Council for Graduate Education. After attending a series of Supervisory Professionalism and Recognition workshops, I thought I would prepare a submission to the UKCGE to become a recognised supervisor. My aim was to consolidate my understanding and to recognise some of the experience that I’ve gained over the years. What follows is a set of notes that I made whilst preparing my submission. I’m sharing them with the hope that they might be useful to others.

Preparing an account

An essential resource is the Good Supervisory Practice Framework from the UKCGE website. The resource contains a library of articles that can be drawn upon. An important observation is that the site is also a catalyst for reading and reflection. Like much independent study, the articles that are shared are just a starting point for further investigation.

  • I started with an empty template, which had the title ‘Reflective Account Form’, and which I had downloaded from the site. I began by adding my name and institution.
  • I completed the introduction, summarising my background and experience, drawing on my CV.
  • I transferred some of the guidance notes directly to the form, summarising them down to bullet points.
  • Drawing on the points, I added some reflective commentary, based on my experiences, making references to my own work, and the work of some of the candidates that I have supported.
  • At the end of each section, I added a word count, noting that the whole submission must not be greater than 5000 words.
  • Taking a practical approach, I chose two articles which resonated with my experience. In some cases, I picked articles that looked the most interesting. I ended up with a library of over 20 articles from different journals that are concerned with different aspects of doctoral education. I only recognised one of the authors. I clearly have a bit of learning to do.
  • I emailed my former doctoral candidate and co-supervisor to ask whether they would be willing to provide references.
  • I got a printout of my notes that were within the form, and reviewed the resources which relate to each section. When reading, I asked myself: how does this article relate to my practice? Also, are there any new ideas that I’m not aware of? Does the article offer me any inspiration or insights?
  • I thoroughly edited each section of the form, further developing my reflective notes, drawing on the references.

Reflections

I found that the process of putting together the UKCGE submission took longer than I had anticipated, for the simple reason that there was more to learn and read than expected. There is clearly an extensive literature about doctoral education which I’m only just starting to become familiar with. It is, quite rightly, a topic of lively research and scholarship. It was also helpful to reflect on how things have changed since I was a doctoral student. The emphasis on professionalism and professional knowledge is helpful and welcome.

I would like to share that I wasn’t immediately successful. It took me two goes before my UKCGE application was accepted. My first application was rushed, and the evidence that I supplied wasn’t as thorough or detailed as it should have been. Failing at something is sometimes helpful. Being unsuccessful the first time said to me, very clearly, that supervision is a serious and important business. Although I was initially grumpy, the effect was to sharpen my writing, and to increase the depth of my reading.

Whilst preparing my submission, I found it useful to reflect on my various roles, which have included being a doctoral supervisor, being a third party monitor, and supporting research interns. Although I do feel that I try to integrate my practice together, I also feel that there is always more I can do to enhance my professional knowledge of supervision to support the candidates that I work with.

Acknowledgements

Many thanks to the OU CPD team that supported my submission (and resubmission) to the UKCGE.

Christopher Douce

TM354 Drawing diagrams during an exam

Visible to anyone in the world
Edited by Christopher Douce, Wednesday 2 July 2025 at 16:47

Exams are always a challenge. Remote exams for TM354 Software Engineering students can be especially challenging since students are sometimes asked to draw diagrams. Students are asked for diagrams, since diagrams and sketches are important communication tools. Software is invisible, and diagrams represent a useful way of sharing information and design ideas with fellow software engineers.

This short blog post shares some practical tips for preparing diagrams for your TM354 exam. What is presented here is a set of suggestions. You should pick and choose the ideas that work best for you, and choose an approach that you feel most comfortable with. The biggest and most important point is that you need to be prepared. When you come to take your exam, there should be no surprises.

Understand the types of diagrams

TM354 makes use of a graphical language called the Unified Modelling Language (UML). Make sure you are familiar with the different types of diagrams you might be asked for.

Consider using diagram tools

Since UML is a well-known language, there are quite a few tools out there that can help you to produce UML diagrams. It is okay to use a tool to help you to produce a diagram, but before you take an exam, you should have a very good idea about how to use it. You should be able to use it fluently and be confident in using it. You don’t want to be in a situation where you battle with your tools during a timed exam.

Consider creating a template

If you know what types of diagrams you might be asked for, and what tool you might want to use, consider creating a template for every type of diagram that you might use. This isn’t cheating; this is effective preparation. When you need to create a new diagram, open up your template and modify it to meet your needs.

Consider sketching by hand

If you don’t like tools, you can always use pen and paper to create your diagrams. You can even dispense with using a ruler, although one can be pretty useful. When you’ve finished your sketch, you can either take a digital photograph of it, or scan it (if you have access to a digital scanner), and then paste your diagram into your exam script.

A key point to remember: in TM354 you don't get any extra points for neatness. You get points for showing you understand how your diagram can be used to communicate concepts about software.

Avoid the Word drawing tool

Microsoft Word is a great tool. It offers a lot of useful features, including a drawing tool. Whilst useful for some tasks, the Word drawing tool is useless for creating UML diagrams. A bit of practical advice: avoid it like the plague. You can spend more time choosing the style of boxes and arrows than communicating the elements of software that are the focus of your question.

Avoid GenAI

This point should be obvious. For some tasks, GenAI may be able to produce diagrams, but GenAI doesn’t know what is in the module materials. This means that the diagrams it creates are invariably wrong or incomplete. Don’t use GenAI for assessment tasks unless you’re explicitly asked to do so.

Master your process

If you use a tool to create your diagram, make sure you know how to transfer your diagram to your exam script. If you need to take a screenshot (and you’re using Windows), push the Print Screen button, and crop the image using the Paint application. If you are creating a diagram by hand, make sure you can easily transfer a digital photograph from your phone to your Word document. Figuring out your process can save you time (and a whole lot of stress).

Practice

My final point: creating diagrams is a skill. Find the time to practise. There are a lot of TM354 past papers that can be downloaded from the OU Students Association shop.

Good luck in your exam!

Acknowledgements

Many thanks to members of the TM354 module team and TM354 module tutors. Thanks are extended to Mike Giddings and Richard Walker.

Christopher Douce

A335 Journal – May 2025

Visible to anyone in the world

25 May 2025

My previous module, A334, hasn’t even finished. There are three more days to go before the EMA cut-off date, and I’m starting to think about what I need to do for the next module.

Today, I’ve done three things. The first was to download an audio version of Bleak House onto my phone, in anticipation of a really long drive. (Bleak House runs for over 40 hours, which is nearly as bad as Rousseau’s Confessions).

Following some chat on the A334 WhatsApp group, I picked up that another set text is Oranges Are Not the Only Fruit. I downloaded that too, and it came in at a slightly more digestible 6 hours.

I’m up to chapter 5 of Bleak House, and I have no idea what’s going on, so it’s time to download a version of the text from Project Gutenberg to my Kindle so I can try to make sense of it.

Wish me luck.
