OU blog


Christopher Douce

Software engineering textbooks

Visible to anyone in the world

I was recently asked whether I could recommend a text to complement TM354 Software Engineering.

What follows is a post that shares some thoughts about a number of textbooks that relate to software engineering. The opening two sections do not offer firm recommendations. Instead, consider them to be comments about some significant texts that I have become aware of. The set of all available texts is, of course, huge; there are loads of other texts (and other editions) that could have been written about. This post also contains a short summary of texts that are used in TM354. The ‘and finally…’ section shares a list of software engineering (related) books that I’ve found interesting.

Software engineering texts

Ravi, S. (2023) Software Engineering: Basic Principles and Best Practices. Cambridge University Press.

I’m mentioning this text since I’ve been asked to review it for a module team.

It covers a lot of ground, and also covers many of the concepts that are studied in TM354. I particularly like the set of exercises that are shared at the end of every chapter. There is also a really useful appendix which describes a software engineering team project (which is especially useful for software engineering educators). I remember having to do a variation of one of these when I was an undergraduate. Ravi suggests the application of Scrum, which is both an interesting and a practical choice.

The text opens with a definition of software engineering and the origins of the subject. It also introduces the challenge of requirements, the concept of software architecture, and then goes on to testing. It has a chapter on use cases which has a direct parallel with what is covered in TM354, and a chapter on testing, which offers a helpful summary of the topic.

There are some differences. There are a couple of chapters that address the topic of architecture. There is one chapter named ‘design and architecture’, and another named ‘architectural patterns’. These topics are treated differently in TM354 (I always struggle with book and document headings that contain the word ‘and’), as is the subject of software metrics, which features in a chapter entitled ‘quality metrics’. Significantly and importantly, Ravi emphasises the importance of social responsibility in software engineering.

Whilst useful, it doesn’t directly complement TM354. It does, however, offer an alternative, more formal perspective. I do feel as if the people aspects in software engineering could have been given greater prominence. It also lacks a more thorough treatment of topics such as security and continuous integration.

Pressman, R. & Maxim, B. R. (2019) Software Engineering: A Practitioner's Approach. McGraw Hill.

This review is going to be exceptionally limited, since I don’t have access to the text! All I have access to is the table of contents. I did, however, once have a copy of an earlier edition, the European Edition, which had been edited by Darrel Ince, who used to work at the OU some years ago. I do remember really liking it. I remember finding it more readable and understandable than the more familiar Sommerville text.

What immediately strikes me about the table of contents is that it is divided into significant parts. It’s useful to share the part titles: the software process, modeling, quality and security, managing software projects, and advanced topics. There is also an appendix about UML, and another about ‘data science for software engineers’. Architecture and design comes under the modeling part. Testing comes under the quality and security part (which makes sense), along with a chapter called ‘software security engineering’ and a chapter on software metrics. The important ‘people’ bit is also addressed in chapter 4: human aspects of software engineering.

Something slightly different

The something slightly different is an open source Software Engineering textbook by Marco Tulio Valente. This is an entire book that you can download for free. I had a quick look at three sections: the process chapter, the architecture chapter and the testing chapter.

The process chapter discusses the agile manifesto and the waterfall approach. In terms of agile, it digs into XP, Scrum, and Kanban before highlighting the spiral model and then mentioning the Rational Unified Process. Much of what is covered in the first block of TM354 is also covered in this chapter.

The architecture chapter begins in an interesting way by immediately turning to a debate about operating system design, and the question of what is best: a monolithic operating system, or an operating system that adopts a microkernel architecture. This is linked to a later discussion about the distinctions between monolithic and microservice architectures. It also discusses layered architecture, tiers, and the model-view-controller pattern (there is a corresponding chapter about design patterns). In terms of TM354, there is a link to some of the architectural themes that are covered in block 3 of the module materials.

The testing chapter begins with the familiar and important test pyramid, explaining the differences between the different levels. Unit testing is dealt with in a helpful way, and this is complemented with the idea of code coverage. Terms that will be familiar to testers, such as mocks and stubs, are also addressed. Other themes include test-driven development and acceptance testing. Testing is, perhaps, a bit of a weak point in TM354. The crossover with TM354 lies with the discussion of open-box (white-box) vs closed-box (black-box) testing, and testing techniques such as equivalence partitioning and boundary-value analysis. Testing is followed by chapters about refactoring and DevOps.
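To give a flavour of those last two techniques, here is a small sketch in Python. The grading rules are invented for illustration; they are not taken from the book.

```python
def classify_score(score: int) -> str:
    """Classify an exam score. Valid range is 0-100; 40 is the pass mark."""
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 40 else "fail"

# Equivalence partitioning: one representative value from each partition
# (too low, fail, pass, too high) stands in for the whole partition.
assert classify_score(-5) == "invalid"
assert classify_score(20) == "fail"
assert classify_score(70) == "pass"
assert classify_score(150) == "invalid"

# Boundary-value analysis: test at, and either side of, each boundary,
# since off-by-one mistakes cluster at the edges of partitions.
for score, expected in [(-1, "invalid"), (0, "fail"), (39, "fail"),
                        (40, "pass"), (100, "pass"), (101, "invalid")]:
    assert classify_score(score) == expected
```

The two techniques complement each other: partitioning keeps the number of tests manageable, while boundary values catch the edge cases that partition representatives can miss.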

Looking at TM354

There are a number of texts that are mentioned in TM354:

Robertson, J., Robertson, S. and Reed, A. (2025) Mastering the requirements process. Fourth edition. Addison-Wesley.

This text has been going for quite a while. Now in its fourth edition, it introduces what I understand to be some of the most important concepts that software engineers really need to understand. It is split into eight parts. Rather than going through each of these in turn, I’m going to enumerate some of the important topics that feature in its table of contents (which are, of course, linked to TM354): the importance of understanding the problem and the problem domain, the significance of stakeholders, the importance and usefulness of prototyping, using use cases and user stories, and using workshops to understand business use cases (BUCs) and their description using activity diagrams.

The text is, of course, linked to the Volere framework, which is described as ‘a framework for discovering, communicating, and managing requirements at all levels and in all environments’ (footnote 1, chapter 1). Within this framework there is the Volere shell, and a fit criterion which ‘means that your customer can tell you whether it is exactly what is needed’. This is, of course, linked to the software engineering concept of acceptance testing.

What I like about this text is its readability. It is informal without being too chatty. As an aside, an interview by Robertson and Robertson, SE Radio 188: Requirements in Agile Projects is featured in an early part of the module. It’s a useful interview. It’s worth a listen.

Fowler, M. (2018) UML Distilled: A Brief Guide to the Standard Object Modeling Language. 3rd ed. Pearson Education.

This text offers a practical and concise summary of the Unified Modelling Language (UML). The latest edition reflects an understanding of which diagram forms have become more popular. It deemphasises the communication diagram, for example, reflecting that the sequence diagram is more popular (and, arguably, more understandable too).

It isn’t the most exciting of books. It is more useful as a reference which offers some clearly presented explanations.

Gamma, E. et al. (1995) Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.

This text is considered to be a classic, and it is interesting to reflect that there haven’t been any updates, which is a testament to the thoroughness of this original catalogue of patterns. Since its publication, further texts have been published that draw on the idea of patterns. A text that complements this one is the Head First: Design Patterns text, which offers a lighter introduction to the subject.

A really good aspect of this book is that it contains simple code and useful diagrams that both express ideas that are quite complex. Other writers have taken the Design Patterns catalogue and have created implementations in particular languages, such as Java: Design Patterns: The Catalog of Java Examples.

This is one of those rare technical books that I can sit down and enjoy puzzling over. By assimilating and internalising the patterns, software engineers develop their skills of thinking about code, and how code can be used to solve problems.
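To illustrate what a pattern from the catalogue looks like in practice, here is a small sketch of the Strategy pattern in Python. The compression example is my own invention, not one from the book; the point is the shape of the pattern: a family of interchangeable algorithms behind a common interface.

```python
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """The Strategy interface: one family of interchangeable algorithms."""
    @abstractmethod
    def compress(self, data: str) -> str: ...

class RunLengthStrategy(CompressionStrategy):
    """A concrete strategy: naive run-length encoding."""
    def compress(self, data: str) -> str:
        out, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            out.append(f"{data[i]}{j - i}")
            i = j
        return "".join(out)

class NullStrategy(CompressionStrategy):
    """Another concrete strategy: leave the data untouched."""
    def compress(self, data: str) -> str:
        return data

class Archiver:
    """The context: delegates to whichever strategy it was configured with."""
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy
    def archive(self, data: str) -> str:
        return self.strategy.compress(data)

print(Archiver(RunLengthStrategy()).archive("aaabbc"))  # a3b2c1
print(Archiver(NullStrategy()).archive("aaabbc"))       # aaabbc
```

The context (`Archiver`) never needs to change when a new algorithm is added; that open-for-extension quality is what makes the catalogue’s patterns so durable.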

Bass, L., Clements, P. and Kazman, R. (2022) Software Architecture in Practice, 4th Edition. Addison-Wesley Professional.

I really like this one since it offers some useful practical guidance. As the title suggests, it is less about software engineering as a whole, and more about a topic that is found within software engineering: software architecture. The TM354 module has been updated to use the latest edition. Some of the advice has been refined. Importantly and significantly, it does offer some pointers about sustainability.

It is worth mentioning that it has four parts: an introductory part, quality attributes, architectural solutions (where cloud computing solutions are discussed), scalable architecture practices, concluding with architecture and the organization (sic). Arguably, this text goes beyond what is covered in the others, but also expects readers to know about software engineering fundamentals, such as process models, requirements, and testing.

As mentioned in another blog, I’ve found a podcast which features Bass, which is quite good fun and worth a listen: Stories of Computer Science Past and Present (2014) (Hanselminutes.com). Bass talks about booting up a mainframe, punched card dust, and why it is useful to have two offices.

All of these texts are available through the O’Reilly Safari bookshelf, which you can find by searching in the OU library.

And finally…

Here are some other textbooks that are pretty important if you’re interested in software engineering or software development.

Beck, K. and Andres, C. (2005) Extreme Programming Explained: Embrace Change. 2nd ed. Addison-Wesley.

 The first edition of this book opens with the words “the basic problem of software development is risk”. It then enumerates a big long list of reasons why software projects can go wrong. It later highlights four values of XP: communication, simplicity, feedback and courage. Essentially, these points all relate back to the first: communication.

When I was working in industry, what I took from this book was the use of pair programming, which could be found within the development strategy chapter. Doing pair programming relates directly to the principles of communication and courage. Working with a fellow software engineer on one screen at the same time was one of the best professional educational experiences I have ever had.

I bought this book at roughly the same time as another of Beck’s books: Test Driven Development. There is, of course, a link between agile development and effective unit testing.

Brooks, F.P. (1995) The Mythical Man-Month: Essays on Software Engineering. Anniversary Edition. Addison-Wesley.

There are not (yet) very many classic texts on software engineering, but this is certainly one of them. Brooks worked on the IBM System/360 mainframe operating system. His observation was simple, yet significant. Brooks’s Law is simply: “adding manpower to a late software project makes it later” (p.25). The reason is that people have to talk with each other. Modern agile practices have teams that are of a limited size for a reason. The Mythical Man-Month is quite a short book and is certainly worth spending a few hours reading.

Fowler, M. et al. (2012) Refactoring: Improving the Design of Existing Code. Addison-Wesley.

Like Design Patterns, which I mentioned earlier, this text was one that caused me to say the words ‘why didn’t I think of this and write this book?’ Refactoring shares a set of obvious tips and small changes that help to change the structure of your code. Connecting with other texts, it helps to refine your craft of software development.
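A tiny illustration of one of the simplest refactorings in the catalogue, Extract Function, might look like this in Python. The invoice example is invented for illustration.

```python
# Before: one function mixes a business rule (the tax calculation)
# with presentation (formatting the invoice line).
def print_invoice_v1(name: str, amount: float) -> str:
    total = amount * 1.2  # 20% tax computed inline
    return f"Invoice for {name}: {total:.2f}"

# After 'Extract Function': the tax rule gets a name of its own,
# so it can be read, tested, and reused independently.
def add_tax(amount: float, rate: float = 0.2) -> float:
    return amount * (1 + rate)

def print_invoice_v2(name: str, amount: float) -> str:
    return f"Invoice for {name}: {add_tax(amount):.2f}"

# Behaviour is preserved -- the defining property of a refactoring.
assert print_invoice_v1("Ada", 100.0) == print_invoice_v2("Ada", 100.0)
```

The structure changes, the observable behaviour does not; that guarantee is why the book pairs so naturally with a good suite of unit tests.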

Raymond, E.S. (2001) The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. 1st edition. O’Reilly.

You could argue that software always intersects with politics since people are involved with its use and creation. This text is a collection of related essays which discusses computing history, hacker culture, and open source software. In my edition of the text, there is the suggestion that the essay The Cathedral and the Bazaar ‘effectively overturns Brooks’s Law’, which is an interesting take, and one that I don’t entirely agree with. Raymond is, of course, deliberately provocative.

Provocation aside, he does emphasise the importance of software engineering and craft, which is something that is unambiguously highlighted in The Pragmatic Programmer. He also shares a point that has always stuck with me, which is this: “Learn to write your native language well. Though it’s a common stereotype that programmers can’t write, a surprising number of hackers (including all the best ones I know of) are able writers”. He later encourages you to “develop your appreciation of puns and wordplay” (p.207).

McConnell, S. (2004) Code Complete, 2nd Edition. Microsoft Press.

I first discovered the first edition of Code Complete whilst browsing through the computing section of a bookshop towards the very end of the last century (I am now starting to feel quite old!) What struck me was that I recognised some of the papers that the text was referencing. What I was interested in was the practices of programming, and whether there was any empirical (experimental) evidence that related to good practice, such as naming, indentation, and so on. This text seemed to bring together a range of different articles.

The second edition was published in 2004. I did once write a review of it for something called the Psychology of Programming Interest Group, but I would rather not share a direct link to it, since my writing has really come on since 2005. This alternative resource https://en.wikipedia.org/wiki/Code_Complete (Wikipedia) offers a useful summary.

Thomas, D. and Hunt, A. (2020) The Pragmatic Programmer: Your Journey to Mastery. Second edition. Addison-Wesley.

I ordered this text when I was working in industry. Returning to it again, I see that the edition I have on my bookshelves contains 70 useful hints and tips. Looking through these again, I remember that I must have taken many of these tips to heart.

Three notable ones are:

  1. Invest Regularly in Your Knowledge Portfolio. I was regularly learning new things, and had a curiosity about different tools and languages. The evolution and change of software has never stopped.
  2. Use the Power of Command Shells. I understood the point that was being made. Continually clicking on buttons to get things done can be annoying, and when you carry out a series of actions regularly, knowing how to automate some tasks and operations can save you time and aggravation.
  3. Learn a Text Manipulation Language. When I was working as a developer, I had to work with a lot of web-based content, and that content had metadata, and sometimes that metadata would change. To make my life easier, I learnt a language called Perl (which many would agree is a terrible language). If you don’t want to learn Perl (and I wouldn’t blame you), there are text manipulation libraries that are likely to work with your favourite language.
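As a small illustration of the kind of metadata rewriting I mean, here is a sketch using Python’s standard re library (the metadata snippet is hypothetical, not the format I actually worked with):

```python
import re

# A hypothetical fragment of web-page metadata of the kind described above.
page = ('<meta name="author" content="C. Douce">\n'
        '<meta name="keywords" content="OU, blog">')

# One substitution rewrites the keywords field while leaving everything
# else (such as the author field) untouched.
updated = re.sub(
    r'(<meta name="keywords" content=")[^"]*(")',
    r'\1software engineering, OU\2',
    page,
)
print(updated)
```

Applied across hundreds of files from a short script, this is exactly the sort of chore that is miserable by hand and trivial with a text manipulation library.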

There are loads of other tips that resonate.

All these texts are also available through the Safari bookshelf available from the OU library.

Books on my ‘to read’ pile

My reading about computing has moved from the practices of programming, to the history of computing, touched on aspects of hacker culture, and skirted studies of gaming culture. Through a novel about DevOps and car spares, I feel as if I’m returning to software engineering texts. There are always more books to read. Along with the Pressman text (if I can get hold of a copy), here are two other texts that are on my ‘to be read’ pile:

DeMarco, T. and Lister, T.R. (2013) Peopleware: productive projects and teams. 3rd edition. Addison-Wesley.

Software is, of course, made by people, for people. Agile practices are all about helping to get people to work together in a way that makes the work that needs to be done more visible to others. I was left a copy of DeMarco and Lister’s Peopleware by a colleague.

Hermans, F. and Skeet, J. (2021) The Programmer’s Brain: What Every Programmer Needs to Know About Cognition. Manning Publications.

I came across this text after discovering an episode of software engineering radio: SE Radio 462: Felienne on the Programmers Brain. It ticks all my ‘academic interest in programming’ boxes.

Although many of these books are available in the Safari bookshelf, I do like to read ‘proper’ books (or eBooks) rather than reading through the Safari portal. Rather than going to a large online book retailer, I tend to go to a second hand reseller, or even a popular online auction site to pick up second hand bargains.

Reflections

The original question I was asked was: ‘what textbook would I recommend to complement the OU’s existing software engineering module?’ The general software engineering texts cover common topics, such as process models, the importance of requirements, design, architecture, and testing. They differ in what they emphasise.

TM354 is designed to be self-contained. Students studying this module shouldn’t need to go outside the module materials, but I hold the view that students studying software engineering should do so, since knowing how to find and access resources and explanations is an important and necessary graduate skill.

Although some colleagues might disagree, I did quite like the clarity of writing in the open source text. It is quick and easy to access, and many of the topics featured in this text can also be found in TM354 – but do always refer to the module material.

A final note is to highlight the importance of something called the SWEBOK v4, the Software Engineering Body of Knowledge (IEEE). This useful resource also offers a wealth of recommendations.

Christopher Douce

Software Engineering Radio: Design and Designing

Visible to anyone in the world
Edited by Christopher Douce, Sunday 5 October 2025 at 10:09

A quick search of software engineering radio yields a number of themes that relate to design: experience design, design for security and privacy, design of APIs, and also architectural design.

This post shares some useful points from two episodes. The first asks questions about, and reflects on, the software design process. The second, which is complementary, relates to how engineers share their designs with others through the use of diagrams. An earlier blog post, TM354 Working with diagrams, addresses the same subject, but in a slightly different way, emphasising diagramming tools.

Before I get into what struck me as interesting in both of those podcasts, it is useful to have a quick look at Chapter 3 of SWEBOK, which contains a useful summary of some software design principles (3-4):

  • Abstraction: Identify the essential properties of a problem.
  • Separation of concerns: Identify what concerns are important to stakeholders.
  • Modularization: Break larger components into smaller elements to make the problem easier to understand.
  • Encapsulation: Hides unnecessary detail.
  • Separation of interface and implementation: Every component should have a clear interface.
  • Uniformity: Ensure consistency in terms of how the software is described.
  • Completeness: It does what it is supposed to “and leaves nothing out”.
  • Verifiability: A characteristic of software that determines whether what has been built can be checked to make sure it meets requirements.

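To make two of these principles (encapsulation, and the separation of interface and implementation) concrete, here is a minimal Python sketch of my own; the bank-account example is not taken from the SWEBOK:

```python
class BankAccount:
    """Encapsulation: the balance can only change via the public interface."""

    def __init__(self) -> None:
        self._balance = 0  # internal detail, hidden behind the interface

    def deposit(self, amount: int) -> None:
        """Part of the public interface; it enforces the invariants."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self) -> int:
        """Read-only view: callers can observe, but not mutate, the state."""
        return self._balance

acct = BankAccount()
acct.deposit(50)
print(acct.balance)  # 50
```

Because callers only depend on `deposit` and `balance`, the representation of the balance (an integer here) could be swapped for something else without touching any client code; that is the interface/implementation separation at work.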
These principles seem very ‘code centric’. When we look at the titles of other software engineering radio podcasts we can clearly see that we need to consider wider perspectives: we need to think about architecture, organisational structure, and what we need to do to realise important non-functional requirements such as accessibility and usability.

Software design

SE Radio 333: Marian Petre and André van der Hoek on Software Design discusses not only software design, but also the practice of expert software engineers. It opens with the point that it is important to distinguish between the outcome of design and the process of designing. There was also a reflection that design (and designing) ‘is something that is done throughout [the software development lifecycle]’, beginning with eliciting (designing) requirements.

We are also introduced to an important term: design thinking. This is defined as the process of ‘understanding the relationship between how we understand the problem and how we understand the solution, and thinking reflectively about the relationship between the two’. Design thinking is mentioned in the SWEBOK, where it is defined in a similar (but slightly different) way: “design thinking comprises two essentials: (1) understanding the need or problem and (2) devising a solution” (3-2). The SWEBOK definition of design thinking is, arguably, quite a narrow definition since the term can refer to both an approach and a mindset, and it is a mindset that can reside “inside all the individuals in the organisation”.

A related question is: how do experts go about design? Experts go deep (into a problem) as well as going broad (when looking for solutions). When experts go deep, they can dive into a problem quickly. Van der Hoek shared an interesting turn of phrase, suggesting that developers “talked on the whiteboards”. It is important and necessary to externalise ideas, since ideas can then be exposed to others, discussed, and evaluated. An expert designer needs to be able to listen, to disagree, and have the ability to postpone making decisions until they have gathered more information. Experts, it is said, borrow, collaborate, sketch, and take breaks.

Expert software designers also have an ability to identify which parts of a problem are most difficult, and then begin to solve those bits. They are able to see the essence of a problem. In turn, they know where to apply attention, investing effort ‘up front’. Rather than considering which database to choose, they might tackle the more fundamental question of what a database needs to do. Expert designers also have a mindset focussed toward understanding and strive for elegance since “an elegant solution also communicates how it works” (41:00).

Experts can also deliberately ‘break the rules’ to understand problem constraints or boundaries (43:50). Expert designers may also use design thinking practices to generate ideas and to reveal assumptions by applying seemingly odd or surprising activities. Doing something out of the ordinary and ‘using techniques to see differently’ may uncover new insights about problems or solutions. Designers are also able to step back and observe and reflect on the design process.

Organisational cultural constraints can also play a role. The environment in which a software designer (a software engineer or architect) is employed can constrain them from working towards results and applying creative approaches. This cultural context can ‘suppress’ design and designers, especially if organisational imperatives are not aligned with development and design practices.

Marian Petre, an emeritus professor of computing at the OU, referred to a paper by David Walker which describes a ‘soup, bowl, and table’ metaphor. A concise description is shared in the abstract of the article: “The soup is the mix of factors that stimulates creativity with respect to a particular project. The bowl is the management structure that nurtures creative outcomes in general. And the table is the context of leadership and vision that focuses creative energies towards innovative but realizable objectives.” You could also argue that soup gives designers energy.

The podcast also asked: what do designers need to do to become better designers? The answer was simple, namely, experts find the time to look at the code and designs of other systems. Engineers ‘want to understand what they do and make it better’. An essential and important point is that ‘experts and high performing teams are reflective’; they think about what they do, and what they have done.

Diagrams in Software Engineering

An interesting phrase that was shared in Petre and van der Hoek’s podcast was that developers ‘talked using whiteboards’. The sketching and sharing of diagrams is an essential practice within software engineering. In SE Radio 566: Ashley Peacock on Diagramming in Software Engineering different aspects of the use and creation of diagrams are explored. Diagrams are useful because of “the ease in which information is digestible” (1:00). Diagrams and sketches can be short-lived or long-lived. They can be used to document completed software systems, summarise software that is undergoing change, and be used to share ideas before they are translated into architecture and code.

TM354 Software Engineering makes extensive use of a graphical language called the Unified Modelling Language (UML). UML 2 defines 14 types of diagrams, of which 2 or 3 types are most frequently used. Class diagrams are used to share key abstractions, ideas within the problem domain and a design, and their dependencies (how the bits relate to each other). Sequence diagrams can be used to show the interactions between different layers of software. Activity diagrams can be used to describe the connections between software and the wider world. UML is important since it provides a shared diagramming language that can be used and understood by software engineers.

Reflections

One of the aspects that I really appreciated from the first podcast was that it emphasises the importance and significance of the design process. One of my first duties after becoming a permanent staff tutor at the OU was to help to support the delivery of some of the design modules. I remember there were three of them: U101 Design thinking: creativity for the 21st century; T217, an earlier version of T240 Design for Impact; and T317 Innovation: Designing for Change. Even though I was familiar with a sister module from the School of Computing and Communications, TM356 Interaction design and the user experience, being exposed to the design modules opened my eyes to a breadth of different approaches that I had never heard of before and could have applicability within computing.

U101 introduced me to the importance of play. T217 (and the module that came before it, T211) introduced me to the contrasting ideas of divergent and convergent thinking. The idea of divergent thinking relates to the idea mentioned in the first podcast of thinking beyond the constraints. I was also introduced to the double-diamond design process (Design Council, PDF). Design processes are different in character to software development processes since they concern exploring different ways to solve problems as opposed to distilling solutions into architectures and code.

A really important point from the first podcast is that design can (and should) happen across the entire software development lifecycle. Defining (and designing) requirements at the start of a project is as much a creative process as the creation and specification of tests and test cases.

It is important and necessary to highlight the importance of reflection. Thinking about what we have, how well our software has been created, and what we need all help to refine not just our engineered artefacts, but also our engineering processes. Another point that resonates is the role that organisational structures may play in helping to foster design. To create good designs, we rely on the support of others, but our creativity may be attenuated if the value of ‘play’ is viewed as frivolous or without value.

Effective designers will be aware of different sets of principles, why they are important, and how they might be applied. This post opened by sharing a set of software design principles that were featured in the SWEBOK. As suggested, these principles are very code centric. There are, of course, other design principles that can be applied to user interfaces (and everyday things), such as those by Donald Norman. Reflecting on these two sets of principles, I can’t help but feel that there is quite a gap in the middle, and a need for software architecture design principles. Bass et al. (2021) is a useful reference, but there are other resources, including those from service providers, such as Amazon’s Well-Architected guidance. Engineers should always work towards understanding. Reflecting on what we don’t yet fully understand is as important as what we do understand.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice, 4th edn, Upper Saddle River, NJ, Addison-Wesley.

SWEBOK v.4 (2024) Software Engineering Body of Knowledge SWEBOK. Available at: https://www.computer.org/education/bodies-of-knowledge/software-engineering

Walker, D. (1993), The Soup, the Bowl, and the Place at the Table. Design Management Journal (Former Series), 4: 10-22. https://doi.org/10.1111/j.1948-7169.1993.tb00368.x

Christopher Douce

Software Engineering Radio: Testing

Visible to anyone in the world
Edited by Christopher Douce, Thursday 2 October 2025 at 13:28

The term ‘software testing’ can be associated with a very simple yet essential question: ‘does it do what it is supposed to do?’

There is, of course, a clear and obvious link to the topic of requirements, which express what software should do from the perspective of different stakeholders. A complexity lies in the fact that different stakeholders can have requirements that can sometimes conflict with each other.

Ideally it should be possible to trace software requirements all the way through to software code. The extent to which formal traceability is required, and the types of tests you need to carry out will depend on the character of the software that you are building. The tests that you need for a real-time healthcare monitor will be quite different to the tests you need for a consumer website.

Due to the differences in the scale, type and character of software, software testing is a large topic in software engineering. Chapter 5 of SWEBOK v4, the software engineering body of knowledge highlights different levels of test: unit testing, integration testing, system testing, and acceptance testing. It also highlights different types of test: conformance, compliance, installation, alpha and beta, regression, prioritization, non-functional, security, privacy, API, configuration, and usability.

In the article, The Practical Test Pyramid, Ham Vocke describes a simple model: a test pyramid.  At the bottom of the pyramid are unit tests that test code. These unit tests run quickly. At the top, there are user interface tests, which can take time to complete. In the middle there is something called service tests (which can also be known as component tests). Vocke’s article is pretty long, and quickly gets into a lot of technical detail.

What follows are some highlights from some Software Engineering radio episodes that are about testing. A couple of these podcasts mention this test pyramid. Although testing is a broad subject, the podcasts that I’ve chosen emphasise unit testing.

The first podcast concerns the history of unit testing. The last podcast featured in this article offers some thoughts about where the practice of ‘testing’ may be heading. Before sharing some personal reflections, some other types of test are briefly mentioned.

The History of JUnit and the Future of Testing

Returning to the opening question, how do you know your software does what it is supposed to do? A simple answer is: you get your software to do things, and then check to see whether it has done what you expect. It is this principle that underpins a testing framework called JUnit, which is used with software written in the Java programming language.

The episode SE Radio 167: The History of JUnit and the Future of Testing with Kent Beck begins with a short history of the JUnit framework (3:20). The simple idea of JUnit is that you are able to write tests as code; one bit of code tests another. All tests are run by a test framework which tells you which tests pass and which tests fail. An important reflection by Beck is that when you read a test, it should tell you a story. Beck goes on to say that someone reading a test should understand something important about the software code. Tests are also about communication; “if you have a test and it doesn’t help your understanding … it is probably a useless test”.
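The core idea, tests as code run by a framework, can be sketched in a few lines. JUnit itself is a Java library; the sketch below instead uses Python’s standard unittest module (which was modelled on JUnit), and the `add` function is a hypothetical stand-in for real production code.

```python
import unittest

def add(a, b):
    # Production code under test: deliberately trivial.
    return a + b

class AddTests(unittest.TestCase):
    # Each test method is code that exercises other code. The method
    # names are part of the story the tests tell a future reader.
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adding_zero_changes_nothing(self):
        self.assertEqual(add(7, 0), 7)

if __name__ == "__main__":
    # The framework runs every test and reports passes and failures.
    unittest.main(exit=False)
```

Running the file reports which tests pass and which fail, which is exactly the feedback loop described in the podcast.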

Beck is asked to explain the concept of Test Driven Development (TDD) (14:00). He describes it as “a crazy idea that when you want to code, you write a test that fails”. The test only passes when you write code that does what the test expects. The podcast discussion suggests that a product might contain thousands of tiny tests, with the implication that there might be as much testing code as production code (the code that implements features and solves problems).

When considering the future of testing (45:20) there was the suggestion that “tests will become as important to programming as the compiler”. This implies that tests give engineers useful feedback. This may be especially significant during periods of maintenance, when code begins to adapt and change. There was also the notion that engineers could “design for testability”, which makes unit tests more valuable.

Although the podcast presents a helpful summary of unit testing, there is an obvious question which needs asking, which is: what unit tests should engineers be creating? One school of thought is that engineers should create tests that cover as much of the software code as possible; the extent of this coverage is known as code coverage. Chapter 5 of SWEBOK shares a large number of useful test techniques that can help with the creation of tests (5-10).

Since errors can sometimes creep into conditional statements and loops, a well-known technique is boundary-value analysis. Put simply, given a problem such as choosing a numbered item from a menu, does the software do what it is supposed to do if the highest number is selected (say, 50)? Does it also continue to work if the number just inside the boundary is selected (say, 49)?
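To make this concrete, here is a small Python sketch (the menu size of 50 and the `select_item` function are hypothetical, not taken from any real system). Boundary-value analysis places test values on, just inside, and just outside each boundary:

```python
MAX_ITEM = 50  # hypothetical highest valid menu number

def select_item(n):
    """Return True if n is a valid menu selection (1..MAX_ITEM)."""
    return 1 <= n <= MAX_ITEM

# Off-by-one errors hide at the edges, so the test values sit on and
# either side of each boundary.
assert select_item(50)       # on the upper boundary
assert select_item(49)       # just inside the upper boundary
assert not select_item(51)   # just outside the upper boundary
assert select_item(1)        # on the lower boundary
assert not select_item(0)    # just outside the lower boundary
```

A subtle bug such as writing `1 <= n < MAX_ITEM` would be caught immediately by the `select_item(50)` case.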

Working Effectively with Unit Tests

Another podcast on unit testing is SE Radio 256: Jay Fields on Working Effectively with Unit Tests. Between 30:00 and 33:00, there is an interesting discussion that highlights some of the terms that feature within Vocke’s article. A test that doesn’t cross any boundaries and focuses on a single class could be termed a ‘solitary unit test’. This can be contrasted with a ‘sociable unit test’, where the code under test works together with its real collaborators; the behaviour of one class may influence the test of another. Other terms are introduced, such as stubs and mocks, which are again mentioned by Vocke.
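The distinction can be sketched using Python’s stdlib unittest.mock (the `PriceService` class and `total` function below are hypothetical illustrations, not from the podcast). A solitary test replaces the collaborator with a stub, so nothing crosses a class or network boundary:

```python
from unittest.mock import Mock

class PriceService:
    """Hypothetical collaborator that would normally call a remote API."""
    def price_of(self, item):
        raise NotImplementedError("network call in the real system")

def total(basket, price_service):
    # Code under test: sums the price of each item in the basket.
    return sum(price_service.price_of(item) for item in basket)

# A 'solitary' unit test: the collaborator is replaced by a mock, so
# total() is exercised in complete isolation from the real service.
stub = Mock(spec=PriceService)
stub.price_of.return_value = 2
assert total(["tea", "milk"], stub) == 4
assert stub.price_of.call_count == 2  # the mock also records its usage
```

A sociable version of the same test would pass in a real (or in-memory) `PriceService`, accepting that a defect in the collaborator could make this test fail too.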

Automated Testing with Generative AI

To deliberately mix a metaphor, a glimpse of the (potential) future can be heard within SE Radio 633: Itamar Friedman on Automated Testing with Generative AI. The big (and simple) idea is to have an AI helper look at your software and ask it to generate test cases for you. A tool called CoverAgent was mentioned, along with an article entitled Automated Unit Test Improvement using Large Language Models at Meta (2024). A key point is: you still need a software engineer to sense check what is created. AI tools will not solve your problems by themselves, since these automated code-centric tools know nothing of your software requirements and your software engineering priorities.

Since we are beginning to consider artificial intelligence, this leads onto another obvious question: how do we go about testing AI? Also, how do we make sure AI systems do not embody or perpetuate biases or security risks, especially if they are used to help solve software engineering problems?

Different types of testing

The SWEBOK states that “software testing is usually performed at different levels throughout development and maintenance” (p.5-6). The key levels are: unit, integration, system and acceptance.

Unit testing is carried out on individual “subprograms or components” by “typically, but not always, the person who wrote the code” (p.5-6). Integration testing “verifies the interaction among” the components of the system under test; this is testing where different parts of the system are brought together, which may require different test objectives to be completed. System testing goes even wider and “is usually considered appropriate for assessing non-functional system requirements, such as security, privacy, speed, accuracy, and reliability” (p.5-7). Acceptance testing is all about whether the software is accepted by key stakeholders, and relates back to key requirements. In other words, “it is run by or with the end-users to perform those functions and tasks for which the software was built”.

To complete a ‘test level’ a number of test objectives may need to be satisfied or completed. The SWEBOK presents 12 of these. I will have a quick look at two of them: regression tests, and usability testing.

Regression testing is defined as “selective retesting of a SUT to verify that modifications have not caused unintended effects and that the SUT still complies with its specified requirements” (5-8). SUT is, of course, an abbreviation for ‘system under test’. Put another way, a regression test checks that any change you have made hasn’t broken anything else. One of the benefits of unit testing frameworks such as JUnit is that it is possible to quickly and easily run a series of unit tests, which amounts to carrying out a regression test.
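As a sketch (again in Python’s stdlib unittest, with a hypothetical `normalise` function standing in for real production code), a regression test is simply the existing suite rerun after every change:

```python
import unittest

def normalise(text):
    # Code under test: collapse runs of whitespace into single spaces.
    # A later 'fix' that changed .split() to .split(" ") would silently
    # break the tab case below -- which is what the suite would catch.
    return " ".join(text.split())

class NormaliseTests(unittest.TestCase):
    def test_collapses_spaces(self):
        self.assertEqual(normalise("a   b"), "a b")

    def test_handles_tabs(self):
        self.assertEqual(normalise("a\tb"), "a b")

# Rerunning the whole suite after any modification is a regression
# test: it verifies the change has not caused unintended effects.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormaliseTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Because the suite is cheap to run, it can be triggered automatically on every commit, which is where regression testing meets CI/CD.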

Usability testing is defined as “testing the software functions that support user tasks, the documentation that aids users, and the system’s ability to recover from user errors” (5-10), and sits at the top of the test pyramid. User testing should involve real users. In addition to user testing there are, of course, automated tools that help software engineers to make sure that a product deployment works with different browsers and devices.

Reflections

When I worked as a software engineer, I used JUnit to solve a very particular problem. I needed to create a data structure that is known as a circular queue. I wouldn’t need to write it in the same way these days since Java now has more useful libraries. At the time, I needed to make sure that my queue code did what I expected it to. To give me confidence in the code I had created, I wrote a bunch of tests. I enjoyed seeing the tests pass whenever I recompiled my code.

I liked JUnit. I specifically liked the declarative nature of the tests that I created. My code did something, but my tests described what my code did. Creating a test was a bit like writing a specification. I remember applying a variety of techniques. I used boundary-value analysis to look at the status of my queue when it was in different states: when it was nearly full, and when it was full.
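My original code was written in Java with JUnit; the sketch below reconstructs the idea in Python, with plain assertions standing in for the test framework. The boundary states, nearly full and full, are exactly where circular buffer bugs tend to hide:

```python
class CircularQueue:
    """A fixed-capacity FIFO queue backed by a circular buffer."""
    def __init__(self, capacity):
        self._items = [None] * capacity
        self._capacity = capacity
        self._head = 0
        self._size = 0

    def is_full(self):
        return self._size == self._capacity

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue is full")
        self._items[(self._head + self._size) % self._capacity] = item
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        item = self._items[self._head]
        self._head = (self._head + 1) % self._capacity
        self._size -= 1
        return item

# Boundary-value checks: nearly full, full, and index wrap-around.
q = CircularQueue(3)
q.enqueue(1); q.enqueue(2)
assert not q.is_full()      # nearly full
q.enqueue(3)
assert q.is_full()          # full
assert q.dequeue() == 1     # FIFO order preserved
q.enqueue(4)                # write index wraps around the buffer
assert q.dequeue() == 2
```

The tests double as a specification: anyone reading the assertions can see what ‘full’ and ‘wrap-around’ are supposed to mean.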

Reflecting Beck, I appreciated that my tests also told a story. I also appreciated that these tests might not only be for me, but might be useful for other developers who might have the misfortune of working with my code in the future.

The other aspect of unit testing that I liked was that it proactively added friction to the code. If I started to maintain it, pulling apart functions and classes, the tests would begin to break. The tests became statements of ‘what should be’. I didn’t view tests in terms of their code coverage (making sure that every single bit of software was evaluated) but as simple practical tools that gave alternative expressions of the purpose of my software. In turn, they helped me to move forward.

It is interesting and useful to reflect on the differences between the test pyramid and the SWEBOK test levels. In some respects, the UI testing of the pyramid can be aligned with the acceptance testing of the SWEBOK. I also consider the intermediate levels of integration and system testing to be helpful distinctions.

An important point that I haven’t discussed is the question of when a software engineer should carry out testing. A simple answer is, of course, as soon as practically possible. The longer it takes to identify an issue, the more significant the impact and the greater the economic cost. The ideal of early testing (or early problem detection) is reflected in the term ‘shift-left’ testing, which essentially means ‘try to carry out testing towards the left-hand side of your project plan’. Put even more simply: the earlier the better.

Returning to the overriding aim of software testing: testing isn’t just about figuring out whether your software does what it is supposed to do. It is also about managing risk. If there are significant societal, environmental, institutional and individual impacts when software doesn’t work, you or your organisation needs to do whatever it can to ensure that everything is as correct and as effective as possible. Another point is that sometimes the weak spot isn’t the code, but the spaces where people and technology intersect. Testing is socio-technical.

To conclude, it is worth asking a final question: where is software testing heading? Some of these podcasts suggest some pointers. In the recent past, we have seen the emergence of automation and the engineering of software development pipelines to facilitate continuous deployment or delivery of software. I do expect that artificial intelligence, in one form or another, will influence testing practice, but AI tools can’t know everything about our requirements. There will be testing using artificial intelligence and testing of artificial intelligence. As software reaches into so many different areas of society, there will also be testing for sustainability.

Resources

JUnit is one of many bits of technology that can help to automate software testing. Two other tools I have heard of are Cucumber, which implements Gherkin, a formal but human-readable language used to describe test cases, and Selenium, which is “a suite of tools for automating web browsers”.

Since software testing is such an important specialism within software engineering, there are a series of industrial certifications that have been created by the International Software Testing Qualifications Board (ISTQB). As well as offering foundation level certifications, there are also certifications for specialisms such as agile, security and usability. Many of the topics mentioned in the certifications are also mentioned in Chapter 5 of SWEBOK v4.

I was alerted to a site called the Ministry of Testing which shares details of UK conferences and events about testing and software quality.

One of the points that I picked up from the podcasts was that, when working at the forefront of an engineering subject, a lot of sharing takes place through blogs. A name that was mentioned was Dan North, who has written two articles that resonate: We need to talk about testing (or how programmers and testers can work together for a happy and fulfilling life), and Introducing BDD (BDD being an abbreviation for Behaviour Driven Development).

Acknowledgements

Many thanks to Josh King, a fellow TM354 tutor, who was kind enough to share some useful resources about testing.

Christopher Douce

Software Engineering Radio: Technical Debt

Visible to anyone in the world
Edited by Christopher Douce, Wednesday 1 October 2025 at 09:42

Imagine you’re taking a trip to catch up with one of your best friends. You also need to get a few things from the shop; let’s say, a couple of pints of milk. You have a choice. You could head directly to your friend’s house and be on time, and do the shopping later. Or alternatively, you could change your route, visit the shop, and arrive at your friend’s place a little bit later than agreed. This really simple dilemma encapsulates what technical debt is all about.

When it comes to software, engineers may prioritise some development activities over others due to the need to either ship a product or to get a service working. Such prioritisation may have implications for the software engineers who have to take over the work at a later date.

As suggested earlier, software can have quality attributes: it can be efficient, it can be secure, or it can be maintainable. In some cases, a software engineer might prioritise getting something working over its maintainability or comprehensibility. This means that the software that is created might be more ‘brittle’, or harder to change later on. The ‘debt’ bit of technical debt captures the deferred cost: you might avoid ‘paying’ or investing time now to get something working earlier, but you may well need to ‘pay down’ the debt in the future, for example when you need to migrate your software to a new operating environment.

On Managing Technical Debt

In SE Radio 481: Ipek Ozkaya on Managing Technical Debt, Ozkaya is asked a simple question: why should we care about technical debt? The answer is also quite simple: it gives us a term to help us to think about trade-offs. For example, “we’re referring to the shortcuts that software development teams make. … with the knowledge they will have to change it; rework it”. Technical debt is a term that is firmly related to the topic of software maintenance.

Another question is: why does it need to be managed (5:10)? A reflection is that “every system has technical debt. …  If you don’t manage it, it accumulates”. When the consequences of design decisions begin to be observed or become apparent, then it becomes technical debt, which needs to be understood, and the consequences need to be managed. In other words, carrying out software maintenance will mean ‘doing what should have been done earlier’ or ‘adapting the software so it more effectively solves the problems that it is intended to solve’. My understanding is that debt can also emerge, since the social and organisational contexts in which software exists naturally shift and change.

Interestingly, Ozkaya outlines nine principles of technical debt. The first one is: ‘Technical debt reifies an abstract concept’. This principle speaks to the whole concept: reification is the ‘making physical’ of an abstract idea. Ultimately, it is a tool that helps us to understand the work that needs to be carried out. A note I made during the podcast was that there is a ‘difference between symptoms and defects’. Expressed another way, your software might work okay, but it might not work as effectively or as efficiently (or be as maintainable) as you would like, due to technical debt.

It is worth listening to Ozkaya’s summary of the principles, which are also described in Kruchten et al. (2019). Out of all the principles, principle 5, ‘architecture technical debt has the highest cost of ownership’, strikes me as being very significant. This relates to the subject of architectural choices and architectural design.

Kruchten et al. suggest that “technical debt assessment and management are not one-time activities. They are strategic software management approaches that you should incorporate as ongoing activities” (2019). I see technical debt as a useful conceptual tool that can help engineers to make decisions about what work needs to be done, and to communicate to others about that work and why it is important.

Reflections

I was introduced to the concept of technical debt a few years ago, and the concept instinctively made a lot of sense. Thinking back to my time as a software engineer, I was often faced with dilemmas and trade-offs. Did I spend time ‘right now’ to change how I gathered real-time data from a hardware device, or did I live with it and just ship the product?

The Kruchten text introduces the notion of ‘bankruptcy’. Just as external events can cause business bankruptcy, external events can bankrupt software: a software vendor ending support for a whole product line, for example, creating the need to rewrite a software product using different languages and tools.

When looking through Software Engineering Radio I noticed that an earlier podcast, SE Radio 224: Sven Johann and Eberhard Wolff on Technical Debt, covers the same topic. Interestingly, they reference a blog post by Fowler, Technical Debt Quadrant, in which Fowler suggests a simple tool that can be used to think about technical debt, based on four quadrants. Decisions about technical debt should be ‘prudent and deliberate’.

Returning to the opening dilemma: do I go straight to my friend’s house, or do I get some milk on the way and end up being late? It depends on who the friend is and why I am meeting them. It depends on the consequences. If I’m going round there for a cup of tea, I’ll probably get the milk.

Resources

Fowler, M. (2009) Technical Debt Quadrant. Available at: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

Kruchten, P., Nord, R. and Ozkaya, I. (2019) Managing Technical Debt: Reducing Friction in Software Development. 1st edition. Addison-Wesley Professional.  Available at: https://library-search.open.ac.uk/permalink/44OPN_INST/la9sg5/alma9952700169402316

Christopher Douce

Software Engineering Radio: Software architecture

Visible to anyone in the world

Software architecture is quite a big topic. I would argue that it ranges from software design patterns all the way up to the design and configuration of cloud infrastructures, and the development of software development and deployment pipelines.

Software architecture features in a number of useful Software Engineering Radio podcasts. What follows is a brief summary of two of them. I then share an article by a fellow TM354 tutor and practicing software architect who shares his thoughts from 25 years of professional experience.

An important point is that there are, of course, links between requirements, non-functional requirements and architectural design. Architectures help us to ‘get stuff done’. There are, of course, implicit links and connections to other posts in this series, such as to the one that is about Infrastructure as Code (IaC).

On the Role of the Software Architect

In SE Radio 616: Ori Saporta on the Role of the Software Architect, Saporta suggests that design doesn’t just happen at the start of the software lifecycle. Since software is always subject to change, a software architect has a role across the entire software development lifecycle. Notably, an architect should be interested in the ‘connections between components, systems and people’. ‘You should go from 30,000ft to ground level’ (13:00), moving between the ‘what’ (the problem that needs to be solved) and the ‘how’ (the ways problems can be solved).

Soft skills are considered to be really important. Saporta was asked how might engineers go about ‘shoring up’ their soft skills? A notable quote was: “it takes confidence and self-assurance to listen”. Some specific soft skills were mentioned (29:20). As well as listening, there is the need for empathy and the ability to bridge, translate or mediate between technical and non-technical domains. Turning to the idea of quality, which was addressed in an earlier blog, quality can be understood as a characteristic, and a part of a process (which reflects how the term was earlier broken down by the SWEBOK).

Being a software architect means “being a facilitator for change, and being open for change”; in other words, helping people across the bridge. An interesting point was that “you should actively seek change”, to see how the software design could improve. An important reflection is that a good design accommodates change. In software, ‘the wind [of change] will come’, since the world is always moving around it.

Architecture and Organizational Design

The second podcast I would like to highlight is SE Radio 331: Kevin Goldsmith on Architecture and Organizational Design. Goldsmith is immediately asked about Conway’s Law (Wikipedia), which was summarised as “[o]rganizations … which design systems … produce designs which are copies of the communication structures of these organizations”. Put more simply, the structure of your software architecture is likely to reflect the structure of your organisation.

If there is an existing organisation where different teams do different things “you tend to think of microservices”; a microservice being a small defined service which is supported by a surrounding infrastructure.

If a new software start-up is created by a small group of engineers, a monolithic application may well be created. When an organisation grows and more engineers are recruited, existing teams may split, which might lead to decisions about how to break up a monolith. This process of identifying and breaking apart services and relating them to functionality can be thought of as a form of refactoring (a fancy word for ‘structured changes to software code’). This leads to interesting decisions: should the organisation use its own services, or should it use public cloud services? The answer, of course, relates back to requirements.

An interesting question was: which comes first, the organisational structure or the software structure (13:05)? Organisations could embrace Conway’s law, or they could perform a ‘reverse Conway manoeuvre’, which means that engineering teams are created to support a chosen software architecture.

A really interesting point is that communication pathways within organisations can play a role; organisations can have their tribes and networks (49:30). It is also important to understand how work moves through an organisation (54:50). This is where, in my understanding, the role of the business analyst and software architect can converge.

Towards the end of Goldsmith’s podcast, there was a fascinating reflection about how Conway’s law relates to our brain (57:00). Apparently, there’s something called the Entorhinal cortex “whose functions include being a widespread network hub for memory, navigation, and the perception of time” (Wikipedia). As well as being used for physical navigation, it can also be used to navigate social structures. In other words, ‘your brain fights you when you try to subvert Conway’s law’.

Reflections

In my eyes, the key point in Saporta’s podcast was the metaphor of the bridge, which can be understood in different ways. There could be a bridge from the technical to the non-technical, or from the detail of code and services to the 30,000ft view of everything.

Goldsmith’s podcast offers a nice contrast. I appreciated the discussion about the difference between monoliths and microservices (which is something that is discussed in the current version of TM354). An important point is that when organisations flex and change, microservices can help to move functionality away from a monolith (or other microservices). A microservice can be deployed in different ways, and realised using different technologies.

I found the discussion about the entorhinal cortex fascinating. Towards the end of my doctoral studies, I created a new generation of software metrics that was inspired by the understanding that software developers need to navigate their way across code bases. It had never occurred to me that the same neural systems may be helping us to navigate our connections with others.

On a different (and final) note, I would like to highlight an article called Software Architecture Insights by Andrew Leigh. Andrew is a former doctoral student from the OU School of Computing and Communications, a current software engineering tutor, and a practicing software engineer. He shares four findings which are worth having a quick look at, and suggests some further reading.

References

Leigh, A. (2024) Software Architecture Insights, ITNow, 66(3), pp. 60–61. Available at: https://doi.org/10.1093/itnow/bwae102.

Christopher Douce

Software Engineering Radio: Software quality

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 16:19

In one way or another, all the previous blogs which draw on Software Engineering Radio podcasts have been moving towards this short post about software quality. In TM354 Software Engineering, software quality is defined as “the extent to which the customer is satisfied with the software product delivered at the end of the development process”. It offers a further definition, which is the “conformance to explicitly stated requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software”. The implicit characteristics can relate to non-functional requirements, or characteristics such as maintainability and readability.

The Software Engineering Body of Knowledge (SWEBOK) emphasises the importance of stakeholders: “the primary goal for all engineered products is to deliver maximum stakeholder value while balancing the constraints of development, maintenance, and operational cost, sometimes characterized as fitness for use” (SWEBOK v4, 12-2).

The SWEBOK also breaks ‘software quality’ into a number of subtopics: fundamentals, management processes, assurance processes, and tools. Software quality fundamentals relates to software engineering culture and ethics, notions of value and cost, models and certifications, and software dependability and integrity levels.

Software quality

After doing a trawl of Software Engineering Radio, I’ve discovered the following podcast: SE Radio 637: Steve Smith on Software Quality. This podcast is understandably quite wide ranging. It can be related to earlier posts (and podcasts) about requirements, testing and process (such as CI/CD). There are also connections to the forthcoming podcasts about software architecture, where software can be built with different layers. The point about layers relates to an earlier point that was made about the power and importance of abstraction (which means ‘dealing with complexity to make things simpler’). For students who are studying TM354, there is a bit of chat in this podcast about the McCabe complexity metric, and the connection between testing and code coverage.

Towards the end of the podcast (45:20) the connection between organisational culture and quality is highlighted. There is also a link between quality and lean manufacturing approaches, which have then inspired some agile practices, such as Scrum.

Reflections

Software quality is such an important topic, but it is something that is quite hard to pin down without using a lot of words. Its ethereal quality may explain why there are not as many podcasts on this topic when compared to more tangible subjects, such as requirements. Perhaps unsurprisingly, the podcasts that I have found appear to emphasise code quality over the broader perspective of ‘software quality’.

This reflection has led to another thought, which is: software quality exists across layers. It must lie within your user interfaces design, within your architectural choices, within source code, within your database designs, and within your processes.

One of the texts that I really like that addresses software quality is by Len Bass et al. In part II of Software Architecture in Practice, Bass et al. identify a number of useful (and practical) software quality attributes: availability, deployability, energy efficiency, integrability, modifiability, performance, safety, security, testability, and usability. They then later go on to share some practical tactics (decisions) that could be made to help to address those attributes.

As an aside, I’ve discovered a podcast which features Bass, which is quite good fun and worth a listen: Stories of Computer Science Past and Present (2014) (Hanselminutes.com). Bass talks about booting up a mainframe, punched card dust, and the benefit of having two offices.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice [Online], 4th edn, Upper Saddle River, NJ, Addison-Wesley.

Christopher Douce

Software Engineering Radio: Code reviews

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 10:44

Although software engineering is what you would call a technical discipline, the people element is really important. Software engineering processes are as much about people as they are about product. Some software engineering processes include a practice known as a code review.

Code reviews

In SE Radio 400: Michaela Greiler on Code Reviews a code review is described as ‘a process where a peer offers feedback about software code that has been created’ (1:11). Greiler goes on to say that whilst testing can check the functionality of software, code reviews can help to understand ‘other characteristics of software code’, such as its readability and maintainability (2:00). The notion of these other characteristics relates to the notion of software quality, which is the subject of the next blog. Significantly, code reviews can facilitate the sharing of knowledge between peers. Since software is invisible, it is helpful to talk about it. Valuable feedback is described as feedback that leads to change.

Greiler’s podcast suggests that reviews can be carried out in different ways (8:00). They can be formal or informal. Code can be emailed, or code could be ‘pulled’ from source code repositories. Feedback could be shared by having a chat, or could be mediated through a tool. The exact extent and nature of a review will be dictated by the characteristics of the software. The notion of risk plays a role in shaping the process.

One of the parts of this podcast I really liked was the bit that emphasised the importance of people skills in software engineering. Tips were shared on giving (and receiving) feedback (19:20). I noted that a reviewer should aim to add value, and the engineer whose code is being reviewed should try to be humble and open minded. A practical tip was to give engineers a ‘heads up’ about what is going to happen, since it gives them a bit of time to prepare, and to be explicit about the goals of a review. The podcast also mentioned a blog post by Greiler with the title: 10 Tips for Code Review feedback.

Towards the end, there was a comment that code reviews are not typically taught in universities (34:25). I certainly don’t remember being involved in one when I was an undergraduate. In retrospect, I do feel as if it would have been helpful.

A final question was ‘how do you get better at code reviews?’ The answer was simple: show what happens during a review, and practice doing them.

Reflections

When I was a software engineer, I spent quite a bit of time reading through existing code. Although I was able to eventually figure out how everything worked, and why lines of code and functions were written, what would have really been useful was the opportunity to have a chat with the original developer. This only happened once.

Although this conversation wasn’t a code review (I was re-writing the software that he had originally written in a different programming language), I did feel that the opportunity to speak with the original developer was invaluable. I enjoyed the conversation. I gained confidence in what I was doing and understanding, and my colleague liked the fact that I had picked up on a piece of work he had done some years earlier.

I did feel that one of the benefits of having that chat was that we were able to appreciate characteristics of the code that were not immediately visible or apparent, such as decisions made that related to the performance of the software. The next blog is about the sometimes slippery notion of software quality (as expressed through the conversations in the podcasts).

Permalink
Share post
Christopher Douce

Software Engineering Radio: Infrastructure as Code (IaC)

Visible to anyone in the world

In the last post of this series, I shared a link to a podcast that described CI/CD. This can be broadly described as a ‘software engineering pipeline where changes are made to software in a structured and organised way which are then made available for use’. I should add that this is my own definition!

The abbreviation CI/CD is sometimes used with the term DevOps, which is a combination of two words: development and operations. In the olden days of software engineering, there used to be two teams: a development team, and an operations team. One team built the software; the other team rolled it out and supported its delivery. To all intents and purposes, this division is artificial, and also unhelpful. The idea of DevOps is to combine the two together.

Looking at all these terms more broadly, DevOps can be thought of as a philosophical ideal about how work is done and organised, whereas CI/CD relates to specific practices. Put more simply, CI/CD makes DevOps possible.

A broader question is: how do we make CI/CD possible? The answer lies in the ability to run processes and to tie bits of infrastructure together. By infrastructure, we might mean servers. When we have cloud computing, we have choices about what servers and services we use.

All this takes us to the next podcast.

Infrastructure as Code (IaC)

In SE Radio 482: Luke Hoban on Infrastructure as Code Hoban is asked a simple question: What is IaC and why does it matter (2:00)? The paraphrased answer is that IaC can describe “a desired state of the [software] environment”, where that environment is created using cloud infrastructure. An important point in the podcast is “when you move to the cloud, there is additional complexity that you need to deal with”. This software environment (or infrastructure) can also be an entire software architecture which comprises different components that do different things (and help to satisfy your different requirements). IaC matters, since creating an infrastructure by hand introduces risk, since engineers may forget to carry out certain steps. A checklist, in some senses, becomes code.

When the challenge becomes “how to compose the connections between … thousands of elements, infrastructure becomes a software problem”. Different approaches to solving this software problem are mentioned. There is a declarative approach (you declare stuff in code), and an imperative approach (you specify a set of instructions in code). There are also user interfaces as well as textual languages. Taking a declarative approach, there are models that make use of formalisms (or notations) such as JSON or YAML. A scripting approach, through the use of application programming interfaces, may make use of familiar programming languages, such as Python, which allow you to apply existing software engineering practices. When you start to use code to describe your infrastructure, you can then begin to use software engineering principles and tools on that code, such as version control.
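The declarative idea can be sketched in a few lines of Python. This is purely illustrative: the `desired_state` dictionary and the `reconcile` function are invented for this post and are not part of any real IaC tool, but they capture the essential shape of the approach, where code states an end state and the tooling works out the steps needed to reach it.

```python
# A minimal sketch of the declarative idea behind IaC. The code states the
# desired end state; a (hypothetical) reconciler works out what must change.

desired_state = {
    "web_server": {"instances": 2, "size": "small"},
    "database": {"engine": "postgres", "storage_gb": 20},
}

def reconcile(current, desired):
    """Return the actions needed to move an environment from its
    current state to the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Example: the environment currently has one small web server and no database.
current_state = {"web_server": {"instances": 1, "size": "small"}}
print(reconcile(current_state, desired_state))
```

Notice that the engineer never writes the ‘create the database, then resize the server’ steps; those fall out of comparing the two states, which is what removes the risk of a forgotten step.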

Towards the end of the podcast, a question was asked about testing (52:30), which is a topic that will be discussed in a later blog. Engineers may create unit tests to check which elements have been created and to validate characteristics of a deployed infrastructure. Integration testing may be carried out using a ‘production like’ staging environment before everything is deployed.

Reflections

A computing lecturer once gave a short talk during a tutorial that changed how I looked at things. He said that one of the most fundamental principles in computing and computer science is the principle of abstraction. Put in other words, if a problem becomes too difficult to solve in one go, break the problem down into the essential parts that make up that problem, and work with those bits instead.

A colleague (who works in the school) once expressed the same idea in another way, which was “if you get into trouble, abstract up a level”. In the context of the discussion, “getting into trouble” means everything becoming too complicated to control. The phrase “abstract up a level” means breaking the problem down into bits, and putting them into procedures or functions, which you can then start to manage more easily.
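As a tiny, invented example of what this looks like in code, here is the same calculation written twice in Python: once as a single tangled block, and once ‘abstracted up’ into named functions that can be understood (and tested) on their own.

```python
# Before: one block of code that mixes everything together.
prices = [10.0, 25.0, 7.5]
total = 0.0
for p in prices:
    total += p
total = total + total * 0.20  # apply a (made up) 20% tax
print(total)

# After "abstracting up a level": each part becomes a named function.
def subtotal(prices):
    """Add up a list of prices."""
    return sum(prices)

def add_tax(amount, rate=0.20):
    """Apply a tax rate to an amount."""
    return amount + amount * rate

print(add_tax(subtotal(prices)))  # same result, easier to manage
```

The behaviour is unchanged, but each named piece can now be reasoned about, reused, and tested separately, which is exactly the sense of ‘managing more easily’ described above.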

Infrastructure as Code is a really good example of “abstracting up” to solve a problem that started to become too complicated.

IaC has facilitated the development of a CI/CD pipeline. Interestingly, a CI/CD pipeline can also facilitate the development of IaC.

Christopher Douce

Software Engineering Radio: Waterfall versus Agile

Visible to anyone in the world

The first Software Engineering Radio podcast featured in this blog speaks to a fundamental question within software projects, which is: how much do we know? If we know everything, we can plan everything. If we don’t know everything, then we need to go a bit more carefully, and figure things out as we go. This doesn’t necessarily mean that we don’t have a plan. Instead, we must be prepared to adjust and change what we do.

Waterfall versus Agile

In SE Radio 401: Jeremy Miller on Waterfall Versus Agile two different approaches are discussed; one is systematic and structured, whereas the other is sometimes viewed as being a bit ‘looser’. In this podcast, I bookmarked a couple of small clips. The first is between 16:20 and 19:00, where there is a question about when the idea of agile was first encountered. This then led to a discussion about eXtreme Programming (XP) and Scrum. The second fragment runs between 45:40 and 47:21, which returns to the point about people. This fragment highlights conflicts within teams, the significance of compromise and the importance of considering alternative perspectives. This not only emphasises the importance of people in the processes, but also the importance of people skills within software engineering practices.

Following on from this discussion, I recommend SE Radio 60: Roman Pichler on Scrum. Roman is asked ‘what is Scrum and where does it come from?’ An important inspiration was considered to be ‘lean thinking’ and an article called ‘the new product development game’. It was later described as ‘an agile framework for developing software systems’ (47:50) which focuses on project and requirements management practices. Scrum can be thought of as a wrapper within which other software development practices can be used (such as eXtreme Programming, and continual integration and deployment).

It is worth highlighting some key Scrum principles and ideas, which are discussed from 2:50 onwards. An important principle is the use of a small autonomous multidisciplinary self-organising team (21:10) of 7 (plus or minus 2) people, comprising developers, a product owner and a Scrum master. The Scrum master (24:15) is responsible for the ‘health’ of the team and for removing barriers to progress. The team is empowered to make its own decisions about how it works during each development increment, which is called a sprint. A sprint (7:20) is a mini project that has a goal, where something is built that ‘has value’ to the customer (such as an important requirement, or group of requirements), and is also ‘potentially shippable’.

Decisions about what is built during sprints are facilitated through something called a product backlog (28:50), which is a requirements management tool, where requirements are prioritised. How requirements are represented depends on the project. User stories were mentioned as ‘fine grained’ requirements. In Scrum, meetings are important. There is a daily Scrum meeting (13:10), sprint reviews, and a retrospective (43:35). The retrospective is described as an important meeting in Scrum, which takes place at the end of each sprint to help the team reflect on what went well and what didn’t go well.

Reflections

When I was an undergraduate, we were all taught a methodology that went by the natty abbreviation of SSADM. I later found out that SSADM found its way into a method called Prince, which is an approach used for the management of large projects. (Prince is featured in the OU’s postgraduate project management module).

I was working in industry when Beck’s book about XP came out. When I worked as a software engineer, I could say that we combined a small ‘agile’ approach with a more traditional project management methodology. We used techniques from XP, such as pair programming, and continually kept a Gantt chart up to date.

At the time, none of us knew about Scrum. Our Gantt chart played the role of Scrum’s burn down chart. We didn’t have a product backlog, but we did have an early form of a ‘ticket system’ to keep track of what features we needed to add, and what bugs needed to be fixed.

One of the things that we did have was version control. Creating a production version of our software products was quite a labour intensive process. We had to write release notes, which had to be given a number, a date and saved in a directory. We built new installation routines, and manually copied them to a CD printing machine, which was asking for trouble. What we needed was something called CI/CD, which is the topic of the next post.

Christopher Douce

Software Engineering Radio: Software engineering processes

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 17:41

Software engineering is about the creation of large software systems, products or solutions in a systematic way with a team of people.

Since software can serve very different needs and necessarily has different requirements, there are a variety of ways that software can be built. These differences relate to the need to take account of different levels of risk. You would use different processes to create a video game than you would for an engine management system. Software engineering processes are also about making the ‘invisible stuff’ of software visible to software engineers and other stakeholders.

One of the abbreviations that is sometimes used is SDLC, an abbreviation for Software Development Lifecycle. Software has a lifecycle which begins with requirements and ends with maintenance. Although software never wears out, it does age, since the context in which it sits changes. Processes can be applied to manage the stages within the software lifecycle, and the transitions between them.

Different terms are used to refer to the development of software systems. There can be greenfield systems, brownfield systems, or legacy systems. Legacy systems can be thought of as ‘old systems that continue to do useful stuff’. Legacy systems are also brownfield systems, which software engineers maintain, to make sure they continue to work. Greenfield systems are completely new software products. In the spirit of honesty, software engineers will more often than not be working on brownfield and legacy systems than on greenfield systems; systems that are towards the end of the software lifecycle rather than at the start.

The blogs that follow highlight different elements of the software development process. It begins with a discussion about the differences between waterfall and agile. It then goes on to say something about a technique known as Continual Integration/Continual Deployment (CI/CD), which has emerged through the development of cloud computing. CI/CD has been made possible through the development of something called ‘infrastructure as code’, which is worth spending a moment looking at (or listening to). Before we move onto the important subject of software quality, I then share a link to a podcast that discusses a process that aims to enhance quality: code reviews.

Christopher Douce

Listening to Software Engineering Radio

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 15:39

From time to time, I dip into (and out of) a podcast series called Software Engineering Radio, which is produced by the IEEE. It’s a really useful resource, and one that I’ve previously mentioned in the blog post Software engineering podcasts.

This is the first of a series of blog posts that shares some notes I’ve made from a number of episodes that I have found especially interesting and useful. In some ways, these posts can be thought of as a mini course on software engineering, curated using the voices of respected experts.

Towards the end of each blog, I share some informal thoughts and reflections. I also share some links to both earlier posts and other relevant resources.

I hope this series is useful for students who are studying TM354 Software Engineering, or any other OU module that touches on the topic of software engineering or software development.

Software engineering as a discipline

When doing some background reading (or listening) for TM113, I found my way to SE Radio 149: Difference between Software Engineering and Computer Science with Chuck Connell.

In this episode, there are a couple of sections that I bookmarked. The first is 10:20 through to 12:20, where there is a discussion about differences between the two subjects. Another section runs between 24:10 and 25:25, where there is an interesting question: is software engineering a science, an art, or a craft? The speaker in the podcast shares an opinion which is worth taking a moment to listen to.

According to the software engineering body of knowledge (SWEBOK), engineering is defined as “the application of a systematic, disciplined, quantifiable approach to structures, machines, products, systems or processes” (SWEBOK v4.0, 18-1). Put in my own words, engineering is all about building things that solve a problem, in a systematic and repeatable way that enables you to evaluate the success of your actions and the success of what you have created.

An early point in chapter 18 of SWEBOK is the need to: “understand the real problem” which is expanded to the point that “engineering begins when a need is recognized and no existing solution meets that need” (18-1). Software, it is argued, solves real world problems. This takes us to a related question, which is: how do we define what we are building? This takes us to the next post, which is all about requirements.

Before having a look at requirements, it is useful to break down ‘software engineering’ a little further. The SWEBOK is divided into chapters. The chapters that begin with the word ‘software’ are: requirements, architecture, design, construction, testing, maintenance, configuration management, engineering management, engineering process, models and methods, quality, security, professional practice, economics.

There are three others which speak to some of the foundations (and have the word ‘foundation’ in their title). They are: computing, mathematical, and engineering.

Reflections

Software is invisible. It is something that barely exists.

The only way to get a grasp on this ‘imaginary mind stuff’ of software is to measure it in some way. The following bit of the SWEBOK is helpful: “Knowing what to measure, how to measure it, what can be done with measurements and even why to measure is critical in engineering endeavors. Everyone involved in an engineering project must understand the measurement methods, the measurement results and how those results can and should be used.” (SWEBOK v4, 18-10). The second sentence is particularly interesting since it links to the other really important element in software engineering: people. Specifically, everyone must be able to understand the same thing.

The following bit from the SWEBOK is also helpful: “Measurements can be physical, environmental, economic, operational or another sort of measurement that is meaningful to the project”.

Next up is software requirements. By writing down our requirements, we can begin to count them. In turn, we can begin to understand and to control what we are building, or working with.

Christopher Douce

Considering a vision for TM354 Software Engineering

Visible to anyone in the world
Edited by Christopher Douce, Thursday 26 June 2025 at 20:12

TM354 Software Engineering is an important module within the OU’s Computing and IT Q62 qualification. It is a module that has been around, in one form or another, for a long time. The current version of the module, which dates back to 2014, is roughly summarised in the blog post Exploring TM354 Software Engineering.

One of the interesting characteristics of TM354 is that it is more theoretical than it is practical. Rather than using lots of software tools to work with and manipulate code and components, students are introduced to diagrammatic tools in the form of the Unified Modelling Language (UML). I don’t think this is necessarily a bad thing. It forces students to slow down, to look at the detail, and to reflect on what it means to think about software. By deliberately pushing any practical tools aside, we avoid worrying about specific issues that relate to implementation. An implicit assumption is that implementation is development, and engineering is about how we go about structuring and organising the building of software.

The practice of software engineering has moved on since 2014. Agile practices and cloud computing have now become mainstream, and there has been the recognition that artificial distinctions between ‘development’ and ‘operations’, the idea that you move software from one team to another, might not be particularly useful. The notion of technical debt has been defined, and this connects with older more established themes of software metrics. There is an increased recognition of tools: of requirement management tools, version management tools, and ticket management tools. All this means that the notion of a theoretical module that is separate from the practical world of software engineering is harder to argue for.

There are, of course, concepts which remain paramount: the importance of different types of requirements, understanding the different types of software development process, conceptions of software quality, principles of software architecture, and different approaches to software testing.

In the sections that follow, I share something of my own personal experience of software engineering education and then share some of my experiences of working as a software engineer. I then go on to share some rough thoughts about what a reimagined TM354 module might look like.

This opinion piece has been written with a couple of audiences in mind: colleagues who work on different modules, and tutors who may have a connection with the OU’s software engineering modules. The sketch that is presented isn’t a firm reflection of what TM354 might morph into (since I don’t have the power to enact that kind of radical change). It is intended to feed into future debates about the future of the module, and modules that accompany it.

A personal history

When I studied Computer Science as an undergraduate in the early to mid-1990s, I studied a software engineering module, and a complementary practical module that was all about software maintenance. I remember my software engineering module was also theoretical. We all had to sit a 3 hour exam which took place in a dusty sports hall. I remember questions about software cost estimation, software reliability and software testing. Out of these three topics, only software testing remains in TM354.

The complementary (and practical) software maintenance module was very different. We were put in a team, given a document that contained a list of changes that needed to be made, and 40k lines of FORTRAN code, and told ‘mind how you go’.

We had a lot to figure out. We needed to figure out how to drive various software tools, how to get compilers to compile, and how to work as a team. At the very end of the project, each team had to make a presentation to representatives ‘from industry’. The team bit didn’t go as well as we would have liked, but all that was okay: it was all about the learning. A key point that I took away from it was that the people bit was as hard (if not harder) than figuring out how to compile ancient FORTRAN code.

Software engineering as a postdoc

My first proper job (if you can call it that) was as a Research Officer at the University of Brighton. My job was to help colleagues with their projects. I was what you could call a ‘floating technical resource’ that could be deployed as and when required.

One memorable project was all about machine translation. It soon struck me that I had to figure out a client-server software architecture. A key piece of the software puzzle was a client program that was written in Java 1.1, which I took ownership of. The code I inherited was a pathological mess. There were classes and objects everywhere. It was a rat’s nest of bafflement. I rewrote it a bit at a time. I didn’t realise it at the time, but I was refactoring my Java code (and testing as I went) before it was known as refactoring. My client software sent data to a server program using a notation that now would look very similar to the JavaScript Object Notation language.

Like the software maintenance project, there was a lot to figure out: tools, languages, code structures, software architectures, operating systems, and where code was hosted so it could be run. The end result seemed to work, and work well. I felt I had figured out OO programming.

Software engineering as a software engineer

After a year of being a research officer, I managed to get a job as a software engineer in a small company that manufactured educational equipment. They needed software to work with their hardware, which was used to demonstrate principles of process control, electronics and telecommunications. I worked on a lot of projects, but I’ll just mention two of them.

The first project was a maintenance project. There was a lot of software written in a combination of Visual Basic for DOS (yes, this was an actual product) and assembly language. The challenge was to rewrite the software in a combination of Java and C++. The Java bit would display results from experiments, and the C++ bit would take voltage measurements and turn digital switches on and off. Eventually, the code would go deeper. I had to learn about device driver development kits and how to begin to write functions for microcontrollers that drove USB interfaces.

The second project was different. The idea (which came from the marketing department and the managing director) was to make a learning management system that wasn’t too different in intent to the software that generates the university’s module web pages, but at a much smaller scale. Rather than providing files and resources to tens of thousands of students, this system was to allow educators to rapidly deploy materials to workstations within a single engineering laboratory. The big question was, of course, how do we do this?

From a technical perspective we ended up doing what these days is called ‘full stack’ development. We created code for the client side using JavaScript at a time when there were not any fancy JavaScript frameworks. On the server side, we had ASP.NET supported by stored procedures that were run in an SQL database. There was also content to deploy that contained Java applets, metadata to figure out, XML documents to parse, a quiz engine to build and report writing facilities to master. Books about eXtreme Programming and Test Driven Development had just come out. We tried pair programming and had a go at applying JUnit. Everything we built had to be installed relatively painlessly with the click of a mouse (but, of course, everything is never that straightforward). My key learning was, of course, that software is so much more than code.

There’s a third point that is worth labouring, and that point relates to process. When I joined, the engineering efforts were just ramping up, which meant there wasn’t much legacy when it came to software engineering processes. A ‘release’ had previously involved transferring software from a development machine to a production machine using a floppy disk. 

A new hire (let’s call him ‘Alan’) joined the team. Having cut his software engineering teeth working on an early generation of mobile phone software, Alan had a huge amount of experience and a desire to make sure that we knew what was in every release. He introduced what would now be known as a ‘ticketing’ system to document software defects, and a formal release process, which also involved the use of version management software.

Software engineering as a research fellow

The experience of working on a learning management system led me to a research project that was hosted at the OU that explored how to enhance the accessibility of virtual learning environments (VLEs) for disabled students. 

Before we ever got to code, there was a lot of discussion about architecture, which came to define the project’s output. The key question that we had to solve sounded quite simple: how do we find a way to present users with content that matches their needs and preferences? To answer this question, we needed to answer other questions, such as: how do we define what a user needs? Also, how do we describe digital learning materials using metadata in such a way they can be efficiently chosen by a VLE?

These simple sounding questions hide complexity. Conditions can vary on a day by day basis. The digital content needed on one day may be different to what is needed on another. Learners are, of course, the experts of their own condition, and learners (who are all different) need to have ways to express their needs and preferences. What is really important is that a VLE offers learners choice about the types of learning materials. If we were to look at this project through a software engineering lens, the most important element of the whole project was the user’s requirements.

Midway through this project, I stepped into another role: I became a Computing staff tutor. This meant that I stepped away from the technical aspects of computing and software engineering, and into a role where I was more involved with delivering the presentation of modules and supporting tutors.

Similarities and differences

These projects differed in terms of the users, the tasks, and the environments in which the software was used. They also were different in terms of the technologies that were applied. There were different databases, operating systems and programming languages. I had to make choices from different frameworks and tools. I’ve mentioned a unit testing framework, but I also used a model-view-controller inspired PHP application framework. There were also different software development kits and libraries to work with. There were also different ways to consume and to invoke webservices.

Turning to the similarities, one of the biggest similarities doesn’t relate to what was chosen, but what was done. In every project, I had to carry out some form of critical assessment and evaluation, to make informed decisions that could be justified. This meant finding things out in terms of what things did, and then how they worked.

Software engineers need to be multi skilled. Not only do they need to know how programming languages, data structures and operating systems work, they need to be systematic researchers, be creative problem solvers, and also be effective communicators. There was a reason why, as undergraduates, we were asked to give a presentation about our software maintenance project.

Software is invisible. Software engineers need to know how to talk about it.

A quick look at QAA

Before writing this piece, I wrote an article called A quick look at the QAA benchmarks (OU blog). When considering the design of a new module, it is worth reviewing the QAA guidance. One aspect that I didn’t extensively review was the Higher Education Qualifications of UK Degree-Awarding Bodies framework.

A bachelor's degree is Level 6 of the FHEQ, and it is worth looking at descriptor 4.15, which states that students must gain “the ability to manage their own learning, and to make use of scholarly reviews and primary sources (for example, refereed research articles and/or original materials appropriate to the discipline)”. Students attaining level 6 should also be able to “communicate information, ideas, problems and solutions to both specialist and non-specialist audiences” and have “the qualities and transferable skills necessary for employment requiring: the exercise of initiative and personal responsibility, decision-making in complex and unpredictable contexts, the learning ability needed to undertake appropriate further training of a professional or equivalent nature”. Another point that jumps out at me is: “the holder of such a qualification will be able to evaluate evidence, arguments and assumptions, to reach sound judgements and to communicate them effectively”. It is clear from this guidance that an entire degree must help to develop students’ critical communication and critical thinking skills.

It's worth also digging into the Computing Benchmark statements to see what it says. When it comes to demonstrating computational problem-solving, to demonstrate excellence, students must “be able to demonstrate sophisticated judgement, critical thinking, research design, and well-developed problem-solving skills with a high degree of autonomy” (QAA Subject Benchmark Statement, Computing, March 2022, p.20). This means that modules must work with students to develop those critical thinking skills.

What the QAA guidance lacks is specific guidance about what a module or programme should contain. This is where something called the IEEE Software Engineering Body of Knowledge comes in (SWEBOK v4.0). There’s enough detail in here to cover a whole degree, never mind a single module. Of particular note is chapter 14, which is all about Software Engineering Professional Practice.

As well as the SWEBOK, there are also the Association for Computing Machinery Curricula Recommendations, which contain a sub-section that concerns Software Engineering. Of these two documents, the SWEBOK is a lot more comprehensive and more up to date than the older 2014 guidance, which is clearly in need of a refresh.

A vision for a new level 3 software engineering module

I hate exams. I also hate end of module assessments (especially when I have to complete one of them), but I hate them less than exams.

An EMA makes a lot more sense in a module like TM354 than a written exam, since it gives students a lot more space to demonstrate their understanding and their professional communication skills.

My proposal is to design a module that combines the teaching of software engineering ideas and concepts with a practical investigation of a software product of a student’s choice. The choice of a product being, of course, guided by a tutor. Like with TM354, I’m suggesting three TMAs, each of which has some emphasis on the development of research skills. By the time students complete TM354, they should end up being better equipped to complete the computing dissertation capstone module, which currently goes by the module code TM470.

Students should ideally arrive at this module having studied a level 2 module, where they have developed an understanding of the principles of object-oriented programming and problem decomposition. They may also be aware of some diagramming languages, such as UML.
Drawing on an interesting approach adopted in other modules, I would like to see independent study options, which enable students to demonstrate (and develop) their reading and investigation skills.

Here’s a suggested structure.

Block 1: Processes and Tools

This module will begin with a reminder about the software development lifecycle (which should have been already covered on an earlier module), which is then discussed in greater depth. The term ‘tools’ is broad. Tools can be used to capture requirements and manage requirements.

Tools also support processes. A discussion about processes would lead us to a discussion about version and configuration management, and onto testing. This is linked to the topics of continuous integration and continuous deployment (CI/CD).

Independent study would include reading articles that are provided within this block.

In terms of the assessment, students must demonstrate their practical understanding or use of tools, and also draw upon a case study (which may well be written by the module team), relating their independent reading to that case study. Students must be able to unambiguously reference both articles and software.

Block 2: Technology and Architectures

This block focuses on teaching important and essential ideas, such as software architecture and design patterns. This block should also cover software engineering abstractions that can have different meanings, such as components, containers and frameworks. Drawing on what is covered in an earlier web and cloud module, the link and relationship with cloud computing and cloud architectures should also be explored. The point here is that software engineers need to be able to recognise decisions that have been made, and to be able to begin to articulate alternative decisions. There might also be the space to highlight AI frameworks, but this is very speculative.
Independent study would involve looking at articles about different aspects of architecture. Part of the point of doing this is to further help students understand what the academic study of software engineering looks like.

Regarding assessment, students must demonstrate knowledge and understanding of key concepts that are introduced in this block, ideally by sharing potential designs and research with each other.

Block 3: Research and Investigation

This final block is about further developing software research skills. Since software maintenance is a significant part of the software lifecycle, software engineers need to be able to find their way through software stacks that are unfamiliar to them. Software engineers need to be critical thinkers; they need to understand what has been done, and why something has been done.

To help students understand what they need to do, students might be guided through an investigation, which could then intersect with different tools, teams and stakeholders. This would lead towards the EMA, which is all about producing a report that describes a software system in terms of processes used, tools applied, technology deployed, and its overall architecture.

To help students, this block would present some materials that offer some guidance about how to structure a report. For their block assessment, students would propose a software system or product to investigate. The system might be one that they are familiar with in their workplace, an open source software application, or a software component or framework that can be used or applied within other software systems. In return, tutors would offer some practical advice, and perhaps follow up with a one-to-one session if students need further advice and guidance.

End of module assessment

A theoretical EMA is to be delivered in two parts: a formal report (70% of the EMA result), followed by a short presentation (30% of the EMA result). Both components need to be passed to pass the EMA (if this is theoretically permissible by the university assessment guidelines). 

The report is to contain:

  • A description of a software component, product, or service that is the target of an investigation.
  • A rationale for the choice of that component.
  • A summary of its architectural elements, and any software components that it uses, or how the software component is used in other products or services.
  • A summary of software technologies or components that are important to its implementation or realisation, such as technology standards, libraries or languages.
  • A description of software development methodologies that may have contributed to its creation, or a summary of methods that may currently be applied.
  • A summary of any tools that are important to its use and development. This may include, for example, version or configuration management tools.
  • A commentary about how the software is to be deployed, and what supporting software or infrastructure may be needed to facilitate its deployment.

For the presentation component, students are to prepare a ten minute PowerPoint presentation that summarises their report, with an additional ten minutes for questions. Their presentation is to contain:

  • A summary of their chosen software component or product and what it is used for, and who the users or stakeholder might be.
  • An outline of what software technologies it makes use of, or what technologies it might be a part of.
  • Any significant ethical or professional concerns that need to be considered.

Students will deliver their presentation to two people: a tutor, and someone who plays the role of a technical manager, who needs to make use of the report that has been created by the software engineer. For accountability and rigour, the presentations are to be recorded, but these recordings will only be used for quality control purposes.

Reflections

All ideas have to come from somewhere. The vision that is shared has been shaped by my own undergraduate studies, industrial experience, by chairing TM354, and looking at other modules, such as M813 Software Development and M814 Software Engineering. In this article I have laboured points about educational and industrial history to emphasise a disconnect between the two.

What is proposed here is a practical amalgam of both my undergraduate modules, and both the OU’s postgraduate modules, but positioned as an undergraduate module. The idea for the presentation assessment comes from M812, where students have to present their summary of a forensic investigation to a pretend ‘court’. This ensures academic rigour of the written assessment, whilst also helping to develop students’ communication skills.

One point that I have a clear opinion about is that software engineers need to be critical thinkers who can carry out applied research. They also need to be mindful about the ethical consequences of their decisions. What is necessary (which is something that is emphasised in other modules I’ve been studying) is the need to develop research skills. By helping students to carry out their own research, students learn more about what it means to study software engineering as an academic subject, and learn more about what it means to study and evaluate software products, which is a necessary and important industrial skill.

Permalink Add your comment
Share post
Christopher Douce

Some notes about agile practices in software engineering

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 1 April 2025 at 21:06

Software engineering is done by people, but what people do to build software depends on the nature of software that is to be created. The culture of individuals, technologies and organisations also plays an important role too.

At the turn of the century, there was a new idea about how to build software; something called agile development. This led to the creation of something called the Manifesto for Agile Software Development. If you’re interested in software development and want to know something about what ‘agile’ means, you need to have a look at the manifesto.

I first learnt about agile through something called eXtreme Programming (Wikipedia), and then something called Scrum (Wikipedia) (Don’t use Wikipedia in your TMAs; always use official references). In my eyes, the notable characteristic about agile (or Agile; there’s a difference between small ‘a’ agile, and large ‘A’ agile) is that it is all about people. Agile (in its different forms) helps to establish rituals which can, in turn, help software engineers to talk about that ‘invisible stuff’ which is software.

I recently asked a colleague, Advait Deshpande, who was the chair of an agile practices microcredential what the latest trends were in agile software development. He was kind enough to share links to some interesting articles and resources.

Articles about agile

Here are some review articles that might be useful to anyone who is starting to study agile:

Edison, H., Wang, X., & Conboy, K. (2021). Comparing methods for large-scale agile software development: A systematic literature review. IEEE Transactions on Software Engineering, 48(8), 2709-2731. Available at https://ieeexplore.ieee.org/abstract/document/9387593/

Vallon, R., da Silva Estácio, B. J., Prikladnicki, R., & Grechenig, T. (2018). Systematic literature review on agile practices in global software development. Information and Software Technology, 96, 161-180. Available at https://www.sciencedirect.com/science/article/pii/S0950584917302975

Other resources

Advait also shared the following two links, which he gave me permission to share here: UK Government: Agile delivery - Agile tools and techniques.

The notion of ‘agile’ has moved beyond software into business. It is important to distinguish between the two. This second link emphasises what agile might mean within a business context: Agile Business Consortium: Business Agility.

Post (or peak) agile

Once, agile was the new thing on the block. Now agile has become mainstream. An accompanying question is: have we reached post (or peak) agile? Also, what comes next? One of the criticisms of agile is that it is best suited to smaller teams, which puts a limit on how it can be applied to bigger projects. There have been several attempts to address this.

Advait directed me to a talk that was hosted on YouTube that had a provocative title.

I know Dave Thomas from a book I have on my shelf at home; a book called ‘the pragmatic programmer’ – it is a good read, and is packed full of some very practical advice. His talk about agile is worth a watch: he presents a critical view of the ‘agile industry’ in a humorous and engaging way. He talks about the origins of the agile manifesto, as well as ‘large agile’. An important point is that when thinking about how to create software, we need to think critically too.

Reflections

When I was learning about software engineering as an undergraduate, I was introduced to something called the software development lifecycle (or SDLC). There are different models; there’s a waterfall model, a spiral model, and there was something called SSADM which bored me to tears. It was only after I graduated that I later learnt about agile in all its different guises.

When I started working as a software engineer, the company that I worked for didn’t have a software development process, so we had to make one. Culture and experience are themes that can influence decisions about what is done. I was lucky enough to work with someone who had had a lot of experience, for which I was really thankful.

We set up policies and processes. We also applied techniques that had an agile flavour, bits of pair programming, and aspects of test driven development. Our processes needed to work for both the products and the people who were developing the software. We needed to be pragmatic to get things done.
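To give a flavour of what test driven development looks like in practice, here is a minimal Python sketch; the function and its tests are my own made-up illustration, not something from that company. The idea is that the tests are written first, and the implementation is then written to make them pass.

```python
import unittest

def normalise_whitespace(text):
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# In a test-first workflow, these tests would exist (and fail) before
# normalise_whitespace was implemented; the implementation is then
# written to satisfy them.
class TestNormaliseWhitespace(unittest.TestCase):
    def test_collapses_internal_runs(self):
        self.assertEqual(normalise_whitespace("a  b\t c"), "a b c")

    def test_trims_leading_and_trailing(self):
        self.assertEqual(normalise_whitespace("  hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```

The discipline matters more than the tooling: each small behaviour gets a failing test before any production code is written.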

Acknowledgements

Thanks are extended to Advait Deshpande. I really appreciated the chat and all the links he shared.

Permalink
Share post
Christopher Douce

Introducing the R88 qualification: Computer Science with Artificial Intelligence

Visible to anyone in the world

For my sins, I’ve found myself on four module teams; two in production (TM113, TM253) and two in presentation (TM354, and TM470). The two production modules are a part of an important new qualification the university is producing.

What follows is a set of notes I’ve made that relates to this new qualification. For the official word about the R88, my recommendation is to have a look at the R88 qualification webpage.

Firstly, a bit of context: a full time degree is made up of 360 academic credits. The equivalent of one year of study at a brick university is 120 credits. The OU also reflects this, and has three levels of study. Degree classification scores are calculated from results from levels 2 and 3. Level 1 is about skills and knowledge development, but level 1 modules do need to be passed. All modules on this qualification are 30 credits. 

Here is a quick summary of what I know.

Level 1

TM110 Computing fundamentals 1: concepts and Python programming

This is the first module to study. It is likely to include some maths just to prepare everyone for the first maths module that follows. Unlike TM111, it makes use of a textual programming language from the outset. Different themes are interleaved with each other. There are two TMAs and an end of module TMA. 

TM113 Computing fundamentals 2: programming, databases, software engineering

The first presentation of this module is planned for October 2026. This obviously has three related components, and like TM110, the topics are interleaved with each other. This uses the same programming language as before, but uses a different programming environment: Visual Studio Code.  Like all these modules, there is a focus on skills development and employability.

TM129 Technologies in practice

This module has three ten point sections: a bit about robotics and AI, a section about virtual machines and the Linux operating system, and a bit about networking. In AI, machines will invariably need to talk with each other. Knowing something about networking is important.

MST124 Essential mathematics 1

This module is produced by the School of Maths and Stats. It builds on ideas that were introduced in TM110.

Level 2

TM253 Programming and software engineering

This new module is planned for October 2027. This picks up where TM113 left off. It is likely to introduce students to a programming language that is different from Java, and is likely to help students to understand more about software design and architecture. There is also likely to be a significant emphasis on object-oriented software (but other programming paradigms might also get mentioned).

TM258 Introduction to machine learning and artificial intelligence

This is a new module which introduces a range of different AI techniques. I know nothing more than this at the moment, but I’ll hazard a guess to say that ‘search’ is likely to be covered.

M269 Algorithms, data structures and computability

It could be argued that M269 is the most computer science of all these computer science modules. It covers the fundamentals, which means searching and sorting.
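Those fundamentals can be illustrated with a classic example. Here is a short Python sketch of binary search, my own illustration rather than anything taken from the M269 materials:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Halving the search range at each step gives O(log n) comparisons,
    versus O(n) for a simple linear scan.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1   # target lies in the upper half
        else:
            high = mid - 1  # target lies in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))  # -1
```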

M249 Practical modern statistics

Stats is important within machine learning (as well as computer science). The module description says that it covers “time series, multivariate analysis, and Bayesian statistics”. 

Level 3

TM342 Investigating intelligence and ethics

As a postgrad student, I studied a module that had the title ‘natural and artificial intelligence’ that was led by the school of psychology. It was a subject that I really enjoyed. I’m looking forward to learning more about what is going to be covered in this module.

TM343 Artificial intelligence in practice

I don’t know anything about this module, other than I know it is going to be hands on, and may well cover the subject of natural language processing (in some way or another).

TM358 Machine learning and artificial intelligence

This is an existing module which is a part of the BSc (Honours) Data Science qualification. The module description says: “you’ll learn about various machine learning techniques but concentrate on deep neural learning”. In other words, neural networks.

TM470 The computing and IT project

This is what is called a capstone module. Students who take this programme are required to complete a project that is likely to have an AI flavour to it. This is also one of the modules that I tutor. I’ve written quite a few articles about TM470 in this blog.

Other qualifications

There are a number of other related qualifications which are worth knowing about.

Reflections

It’s really exciting to be working on the software engineering bit of two new modules. 

In some ways, this takes me back to my undergraduate days where I studied computer science. On the programme, there was a single AI module (which was a third year module) which I quite enjoyed. Things have, of course, moved on a huge amount; there are new techniques and new technologies. I was only taught about symbolic AI, and nothing about statistical approaches. I only came across neural networks as a postgraduate student in the mid-1990s.

It is interesting to see how mathematics is introduced in this programme. It begins slowly with material in TM110. This reflects my own experience as an undergrad. I never studied maths at A or AS level, so I went to a ‘gentle start’ class. This led onto a 'discrete mathematics' class, which could be termed ‘bits of maths that could be useful for those studying computer science’. I didn’t like it much. To this day I remember proofs, matrices (which is useful for computer graphics), and a lot of probability (lots of questions about playing cards). The equivalent of my discrete maths class is, of course, MST124. Given the importance of statistics in machine learning, there is then M249. 

It’s also important to reflect that software engineering has changed since I studied it. Computing is now everywhere, and that is a characteristic that makes it such an interesting subject. It is in your devices, in your appliances, and in the cloud. A personal objective is to work with others to create materials that not only give the materials industrial relevance, but also to share with students what it means to study software engineering as an academic subject.

Looking back to my time as an undergraduate, one of the modules that I recognise most clearly is M269, the data structures and algorithms module. Some fundamentals never change. What does change is how they are used, and how they are realised. I remember reciting Dijkstra’s algorithm in an exam, just as if it were an ode. I also remember getting a bit baffled by the big O notation, which features in M269.
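As a reminder of what that algorithm looks like, here is a short Python sketch of Dijkstra’s shortest-path algorithm using a priority queue; the graph is a made-up example, and with a binary heap the running time is O((V + E) log V):

```python
import heapq

def dijkstra(graph, source):
    """Return a dict of shortest distances from source to every reachable node.

    graph maps each node to a list of (neighbour, weight) pairs.
    """
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# A small made-up graph: each edge is a (neighbour, weight) pair.
example = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(example, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Reciting this in an exam is one thing; the heap-based formulation above is how it tends to be realised in practice today.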

One of the areas that I know I’m weak on is statistics. When I’m through studying my current module, I may well find my way back to maths.

Disclaimers

This qualification (along with all the others) is subject to change and development.

Acknowledgements

Acknowledgements are due to the School of Computing and Communications directors of teaching who have played an important role in establishing this new qualification.

Permalink
Share post
Christopher Douce

UML Sequence Diagram goose-chase

Visible to anyone in the world

Over the last couple of months, I’ve been updating block 2 of the TM354 module materials as a part of a module refresh. This has meant I’ve had to reacquaint myself with the Unified Modelling Language (UML), which I first learnt as a postgraduate student. I went onto use UML for a short time when I worked as a software developer in industry. It’s a really useful tool; it provides a neat graphical language that helps software engineers to communicate with each other when they put pen to whiteboard.

UML describes the structure of a number of diagrams. To explain everything very coarsely, there are two main types of diagrams: structural diagrams (which describe what something is), and behavioural diagrams (which describe what something does).

Introducing the sequence diagram

One of the behavioural diagrams is a sequence diagram, which is a ‘type of’ interaction diagram. A sequence diagram can show what messages are passed between which objects at a particular moment in time.

Objects are presented using something called lifelines. Messages that are sent between objects are presented using arrows. There is a special type of message, which is called a create message, which (unsurprisingly) creates a new object, which is depicted as an object with a new lifeline.

Books and tools

One of my ‘go to’ books is UML Distilled (OU Library). I have a 2nd edition copy of the physical book, but more recently I’ve been referring to the 3rd edition which I can access through the OU library.

A tool I’ve started to use is Visual Paradigm (which is also used in M814 Software Engineering, the postgraduate module). Visual Paradigm enables me to draw UML diagrams, including sequence diagrams pretty easily.

Sequence diagrams can depict the creation of new objects through the creation of a new lifeline. UML Distilled says that a ‘solid’ line is used. Visual Paradigm, on the other hand, insists on using a ‘dashed line’. After an hour of fighting with Visual Paradigm, and uncovering some relevant documentation, I found it would not relent.

Which one is correct?

Digging into the standard

After a bit of digging, I discovered that someone else has been asking the same question (Stackoverflow).

One of the replies directed me to the UML 2.5.1 specification (PDF).

Page 577 contains the following line: ‘An object creation Message (messageSort equals createMessage) has a dashed line with an open arrow head.’

My conclusion? Visual Paradigm implements a more recent version of the UML language than is described in my textbook.

Reflections

Does all this matter? Yes, and no. It depends on how you use UML.

If you use sequence diagrams to occasionally figure out what is happening within a nest of objects, drawing informal diagrams for either yourself or other developers that you work with, it won’t matter whether you use dashed or solid lines.

If you apply UML in a more formal way, it might matter.

My own view is that I prefer the solid lines. I’m more familiar with dashed lines showing the return control flow from objects. On the other hand, a different line highlights a ‘special’ function, of which object creation is one.

Permalink Add your comment
Share post
Christopher Douce

A sketch of M813 Software Development and M814 Software Engineering

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 10 June 2025 at 17:09

After becoming the module chair of TM354 Software Engineering, I had a look at two related postgraduate modules, M813 Software Development and M814 Software Engineering.

These two modules sit alongside a number of other modules that make up the MSc in Computing programme. My intention was to see what related topics and subjects are taught, and whether there were any notable differences about how they were taught. 

This blog aims to highlight some of the key elements of these modules. To prepare this post, I had a good look through the module materials, including the assessment materials, and spoke with each of the module chairs. My intention of looking at these modules is to identify what themes and topics might potentially feed into a future replacement of TM354, or another related module. This summary is by no means comprehensive; the points I pick up on do, of course, reflect my interests.

I hope these notes are useful to anyone who is interested in either software engineering, or postgraduate computing, or both. Towards the end of the blog, I share a quick compare and contrast between the two modules and share some links to resources for anyone who might be interested.

M813 Software Development

M813 aims “to provide the skills and knowledge necessary to develop software in accordance with current professional practice, approaches and techniques”.

The key module learning aims are to:

  • teach you a variety of fundamental techniques for software development across the software lifecycle, and to provide practice in the use of these techniques
  • give you enough knowledge to be able to choose between different development techniques appropriate for a software development context
  • make you aware of design and technology trade-offs involved in developing enterprise software systems
  • enable you to evaluate current software development practices
  • give you an understanding of current and emerging issues in software development
  • give you the research skills needed to stay at the leading edge of software development.

The module description suggests that students “will have an opportunity to engage with an organisational problem of your choice, working towards a fit-for-purpose software solution” and students “will also have an opportunity to carry out some independent research into issues in software development, including analysing, evaluating and presenting results”.

It makes use of a set text, Head First Design Patterns, accessed through the university library. To help students with the more technical bits, it shares some resources about a graphical tool, Visual Paradigm, which enables students to create diagrams using the Unified Modelling Language (UML).

The module has 10 units of study, which are spread over four blocks. The module’s assessment strategy is summarised below, followed by each of the blocks.

Assessment strategy

Like many other modules, there are two parts of assessment: tutor marked assessments (TMAs), and an examinable component, which is an end of module assessment (EMA). Interestingly, the TMAs adopt a more practical and software development skills perspective, whereas the EMA is more about carrying out research which is applied to a study context. To pass the module, students need to gain an average score of 50% in both of the components.

TMAs 1 and 3 each account for 30% of the continually assessed part of the module. Due to the practical focus of TMA 2, this assessment accounts for 40% of the overall TMA score.

Block 1: Software development and early lifecycle

This block is described as helping to “learn the principles and techniques of early software lifecycle, from requirements and domain analysis to software specification. You will engage with a number of practices, including capturing and validating requirements, and UML (Unified Modelling Language) modelling with activity and class diagrams.”

The module opens with a research activity which involves finding and reading academic articles. There are three other research activities which build on this first searching activity. These activities help students to understand what the academic study of software engineering looks like. Plus, when working as a practicing software engineer, it’s important to know how to find and evaluate information about methods, approaches, and frameworks.

This unit begins to introduce students to a tool that they will use during the module: Visual Paradigm. Throughout the module, students will learn more about different UML diagrams, such as use cases, class diagrams, and activity diagrams.

Unit 1, introducing software development, shares a couple of perspectives: a philosophical perspective and a historical perspective (history is always useful), before mentioning risk, quality and then moving onto starting to look at UML.

Unit 2, requirements and use cases, covers the characteristics of requirements and the forms in which they can be presented. Unit 3, from the context to the system, starts with activity diagrams (which are all about representing a context) through to class diagrams, which are all about beginning to realise a design of software using abstractions. Finally, unit 4, specifying what the system should do, touches on more formal aspects of software specification.

Block 2: Design and code

This next block explores “principles and techniques of software design, construction, testing and version control”. Other topics include design patterns, UML modelling with state diagrams and the creation of software using the Java language. Out of all the blocks in the module, this is the one that has a really practical focus.

In addition to links to further video tutorials about Visual Paradigm, there’s some guidance about how to start to use Microsoft Visual Studio Code, and some initial development activities.

Unit 5, design, introduces some basic design principles, and new forms of diagram: communication diagrams and object diagrams. Unit 6, from design to code, shares a bit more detail about the principles of object-oriented programming, and goes onto introducing the topic of configuration management. Unit 7, design patterns, continues the theme of object-oriented programming by introducing a set of patterns from the Gang of Four text, which is complemented by a software development activity. 
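To illustrate the kind of pattern the Gang of Four text describes, here is a brief Python sketch of the Strategy pattern; the class names and the discount scenario are my own invention, not taken from the module materials:

```python
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    """Strategy interface: each concrete strategy encapsulates one algorithm."""
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

class Checkout:
    """Context: delegates discounting to whichever strategy it is given,
    so new pricing rules can be added without changing this class."""
    def __init__(self, strategy: DiscountStrategy):
        self.strategy = strategy

    def total(self, price: float) -> float:
        return self.strategy.apply(price)

print(Checkout(NoDiscount()).total(100.0))            # 100.0
print(Checkout(PercentageDiscount(20)).total(100.0))  # 80.0
```

The point of the pattern is that the context is closed against change: varying behaviour lives behind a common interface rather than in a growing chain of conditionals.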

Block 3: Software architectures and systems integration

Block 3 goes up a level to explore how to “develop software solutions based on software architectures and frameworks”. 

Unit 8, software architectures, introduces the notion of architectural patterns, and how to model patterns using UML. Another useful topic introduced is state machines. An important theme that is highlighted is the idea of layers of software which, in turn, is linked to the notion of persistence (which means ‘how data can be saved’). This is complemented by unit 9, component-based architectures, which offers a specific example. The module concludes with unit 10, service-oriented architectures.

Block 4: EMA preparation

This fourth block relates to the module’s end of module assessment (EMA), where students have to carry out some applied research into a software context with which they are familiar. To help students to prepare, there are some useful preparatory resources.

Reflections

I really liked that this module brings in a bit of history, describing the history of object-oriented programming. I also liked that it shared some really useful descriptions about the differences between scholarship and research. There are some common elements between M813 and TM354, such as requirements and the use of UML, but I’ll say more about this in a later section.

M814 Software Engineering

M814 is “about advanced concepts and techniques used throughout the software life cycle” and replaces two earlier 15 point modules: M882 Software Project Management and M883 Requirements Engineering.

The module aims are to:

  • develop your ability in the critical evaluation of the theories, practices and systems used in a range of areas of Computing
  • provide you with a specialised area of study in order that you can experience and develop the frontiers of practice and research in focused aspects of Computing and its application
  • encourage you, through the provision of appropriate educational activities, to develop study and transferable skills applicable to your employment and continuing professional development
  • enable you to develop a deeper understanding of a specialist area of Computing and to contribute to future developments in the field.

Although this module is less ‘applied’ than M813, there are some important elements. Students make use of Git and GitHub, and use a simulation and modelling tool, InsightMaker.

The module has four study blocks, containing 26 study units; a lot more than M813. These are summarised in the following sections. Students are also required to consult a set text, Mastering the requirements process by Robertson and Robertson, which is also available through the OU Library.

Assessment strategy

The module has three TMAs and an end of module exam, which is taken remotely (as opposed to an EMA). TMAs 1 and 3 have a weighting of 30% each, with TMA 2 being slightly more substantial, accounting for 40%. Students have to pass both the TMAs and the exam, gaining an average of at least 50% in each.

The exam covers all module learning outcomes and is split into two sections. For the second section, students need to be familiar with a research article.

Block 1: Software engineering context

The first two units, unit 1, software in the information society, and unit 2, the organisational and business context, introduce software engineering. This is followed by an introduction to the organisational context through unit 3, organisational context, codes and standards. The title of this unit refers to professional codes, and professional and technical standards. Accompanying topics include software and the law, which covers intellectual property, trademarks, patents, and data protection (GDPR) legislation. The final unit, unit 4, addresses ethics and values in software engineering.

Block 2: Software engineering methods and processes

Block 2 concerns software engineering methods and processes. The first two units highlight the notion of the process model, project management, and quality management, which includes the ISO 9001 standard and the Capability Maturity Model Integration (CMMI). These are presented in unit 6, software activities, and unit 7, software engineering processes.

The module then covers unit 8, agile processes, and unit 9, managing resources, which include materials about SCRUM, Kanban, and something called the SAFe framework, a set of workflow patterns for implementing agile practices. There is also a case study which describes how agile is used in practice. I remember seeing some photographs that show how developers have been sharing information about project status using whiteboards and other displays. The block concludes with unit 10, managing uncertainty and risk, and unit 11, software quality.

A part of this block makes use of simulation, introducing a ‘simulation modelling tool’ which can be used to experiment with the concept of Brooks’ law. As an aside, this reminds me of a short article that touched on a similar topic. In the context of M814, I like how the idea of simulation has been applied in an interesting and pedagogically helpful way.
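
Brooks’ law (adding people to a late software project makes it later) is easy to caricature in code. The module itself uses InsightMaker for this; what follows is my own toy sketch in Python with made-up parameters, not the module’s model. It charges each new hire an onboarding cost and makes communication overhead grow with the number of pairwise links in the team:

```python
def days_to_finish(work_units, team_size, extra_hires=0, hire_day=0,
                   onboarding_cost=20, unit_rate=1.0, comms_penalty=0.02):
    """Toy model: count the days needed to finish a fixed pile of work.

    New hires bring an onboarding cost (paid out of the team's capacity)
    and every extra person adds communication overhead, which grows with
    the number of pairwise links, n * (n - 1) / 2.
    """
    done, day, n, debt = 0.0, 0, team_size, 0.0
    while done < work_units:
        day += 1
        if extra_hires and day == hire_day:
            n += extra_hires
            debt += extra_hires * onboarding_cost  # onboarding work owed
        overhead = comms_penalty * n * (n - 1) / 2
        effective = n * unit_rate * max(0.0, 1 - overhead)
        if debt > 0:  # onboarding eats into productive capacity first
            absorbed = min(debt, effective)
            debt -= absorbed
            effective -= absorbed
        done += effective
    return day


baseline = days_to_finish(1000, team_size=5)
crashed = days_to_finish(1000, team_size=5, extra_hires=5,
                         hire_day=baseline // 2)
```

With these invented numbers, doubling the team halfway through makes the project finish later, not earlier, which is exactly the point of the law.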

Block 3: Software deployment and evolution

Block 3 concerns software deployment and evolution; in other words, what happens after implementation. It includes some materials about DevOps (the integration of development with the operation of software), and continuous integration and delivery. There are three units: unit 12, software configuration management, which introduces Git and GitHub; unit 13, software deployment; and unit 14, software maintenance and evolution.

This block returns to simulation, specifically exploring Lehman’s 2nd law (Wikipedia): software complexity increases unless work is done to reduce it. Students are also directed to a textbook, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, by Humble and Farley.
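
Lehman’s second law can be caricatured in the same way. Again, this is my own sketch with invented numbers, not the module’s simulation: complexity drifts upwards with every release unless deliberate refactoring effort pays some of it back down:

```python
def complexity_after(releases, refactor_every=0, growth=1.0, payoff=5.0):
    """Toy model of Lehman's 2nd law: every release adds a little
    accidental complexity unless periodic refactoring pays it back."""
    complexity = 10.0  # arbitrary starting complexity
    for release in range(1, releases + 1):
        complexity += growth  # each change makes the system a bit messier
        if refactor_every and release % refactor_every == 0:
            complexity = max(0.0, complexity - payoff)
    return complexity
```

Comparing fifty releases with and without periodic refactoring shows the unchecked version ending up far more complex.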

Block 4: Back to the beginning

The final block returns to the beginning by looking at requirements engineering, extensively drawing on the set text, Mastering the Requirements Process. It introduces what is meant by requirements engineering, a subtopic within software engineering. Unit titles for this block include scoping the business problem, functional and non-functional requirements, fit criteria and rationale, ensuring quality of requirements, and reusing requirements. The block concludes with a useful section: unit 26, current trends in software engineering.

Reflections

I really liked the introductory sections to this module; they adopt a philosophical tone. I also really like how it uses case studies. What is notable is that there are a lot of materials to get through, but all the topics and units are certainly appropriate and are needed to cover the module in a good amount of depth.

Similarities and differences

There is understandably some crossover between M813 and M814; they complement each other. M813 is more of an ‘applied’ module than either M814 or TM354, but M814 does contain a few practical elements. Its use of simulations is particularly interesting. In comparison to the undergraduate software engineering module, TM354, the two postgraduate modules clearly require the application of higher academic skills, such as understanding what it means to carry out scholarship.

In my opinion, there appear to be more similarities between M813 and TM354 than with M814. It is worth noting that TM354 introduces topics that can be found in both postgraduate modules.

TM354 and M813 both emphasise design patterns. An important difference is that in M813, students are required to demonstrate how patterns might be applied, whereas in TM354 students have to demonstrate their understanding of design patterns that have been chosen by the module team. Both modules also explore the notions of software architectures and state machines.

There are differences between TM354 and M813 in terms of tools. TM354 steers away from the use of diagramming tools, but by way of contrast, M813 makes extensive use of Visual Paradigm. TM354 makes use of NetBeans for the design patterns task, whereas M813 introduces students to Visual Studio Code.

By way of contrast, M814 covers a wider variety of concepts which are important to the building of ‘software in the large’: the importance of software maintenance and the characteristics of software quality.

UML is featured in all three modules. They all refer to software development methods and requirements engineering. Significantly, they all use the Robertson and Robertson text. They differ in the depth to which they explore the topic.

To conclude, software development and software engineering are huge subjects. The three modules that are mentioned in this blog can only begin to scratch the surface. Every problem will have a unique set of requirements, and every problem will require different methods. There are two key elements: people and technology. Software is designed by people and used by people. Where there are people, there is always complexity. Adding technology into the mix adds a further dimension of complexity.

Resources

The following links take you to some useful OpenLearn resources:

Acknowledgements

Many thanks to Arosha Bandara, who spent some time introducing me to some of the key elements of M814. I also extend thanks to Yujin Yu. Both Arosha and Yujin are professors of software engineering. The current chair of M814 is Professor Andrea Zisman, who is also a professor of software engineering. Thanks are also extended to the TM354 module team: Michael Ulman, Richard Walker, Petra Wolf and Andrea Zisman.

Christopher Douce

Exploring TM354 Software Engineering

Visible to anyone in the world

Over the last year I’ve taken over as module chair for TM354 Software Engineering from Leonor Barroca, who has done a brilliant job ever since the module was launched back in 2014. I first learnt about TM354 through a module briefing which took place in September 2014.

What follows is a summary of the various elements that can be found within the TM354 module website. I’ve written this blog whilst wearing my ‘tutor hat’, to help students who are new to this module.

It goes without saying that two of the most important elements are, of course, the module calendar, and the assessment page which provides access to all the TMAs. One thing that I tend to do whenever I study a module is to get a printout of each of the TMAs, using the ‘view as single page’ option, just so I get an early idea about what I have coming up. You should also take some time to review the module guide and the accessibility guide.

Key resources: the blocks

TM354 is based around three printed blocks which can also be downloaded as PDFs by visiting the resources tab:

  • Block 1: Units 1–4: From domain to requirements
  • Block 2: Units 5–8: From analysis to design
  • Block 3: Units 9–12: From architecture to product

Complementing these blocks is, of course, the module glossary, which can be accessed through the resources pages.

In OU modules, the glossary is pretty important. It presents the module team’s definition of key terms. If there is an exam or an EMA question which calls for a definition, you should always draw on terms that are defined in the glossary. A practical tip: do spend time going through the module glossary.

Software

There are three bits of software that you will need to use, and the first of these is optional:

A sketching tool: In your TMAs you will be required to draw some sketches using a graphical language called the unified modelling language (UML). UML is a really useful communication tool. It can be used to depict the static structure of software (which bits it contains) and the dynamic interaction between components (how they are used with each other). How you draw your diagrams is completely up to you. You can draw a sketch by hand, draw a sketch using the tools that you have in your word processor, or you can download a tool to help you. My recommendation is to use a tool that specifically helps you to draw UML diagrams. This way, the software gives you a bit of help, saving you time (although you have to spend a bit of time learning the tool). I use a tool called Visual Paradigm, which is available under a student licence, but other tools, such as UMLet, might be useful. There are a lot of tools available, but if you’re pressed for time, a pencil, ruler and paper, and a digital photograph will be sufficient.

ShareSpace: this is an OU tool which you will use to share some of your software designs with fellow students. Software engineering is a team sport. ShareSpace is used to simulate the sharing and collaboration between fellow software engineers. As well as posting your sketches, you will be asked to comment on the designs of fellow students. In return, you will be able to see comments on your own design.

NetBeans: NetBeans is an integrated development environment; a tool for developing software. You will use NetBeans in the final block of the module to look at, and change, some code that relates to design patterns. If you’re familiar with other development environments, such as IntelliJ, or even BlueJ (from earlier studies with M250), you could use those instead.
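
The design patterns task itself uses code supplied with the module. Purely as an illustration of what a design pattern is (this is my own minimal sketch, in Python for brevity, with hypothetical names), here is the Observer pattern: a subject keeps a list of observers and notifies each of them whenever its state changes:

```python
class Subject:
    """Minimal Observer pattern: a subject notifies every registered
    observer whenever its state changes."""

    def __init__(self):
        self._observers = []  # observers are plain callables here
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:
            observer(state)


# Wire up an observer and trigger a change of state.
seen = []
subject = Subject()
subject.attach(seen.append)
subject.set_state("released")
```

Fuller implementations usually add an explicit detach operation and an observer interface, but the overall shape is the same.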

Forums

The module has a number of forums. A practical recommendation is to subscribe to each of these, so you are sent email copies of the messages that are posted to them. 

There is a module forum, where you can ask questions about the module, and a forum for each of the TMAs. You can use these TMA forums to ask questions about the assessments if you’re unclear about what you need to do. Do bear in mind that the moderator can only offer guidance and might direct you towards relevant bits of the module materials.

There is a tutor group forum, where you can interact with your TM354 tutor. Your tutor may well share some materials through this forum, so it is important that you subscribe to it, or check it from time to time.

There is what is called an ‘online tutorial forum’. Tutorials are run in clusters. What this means is that groups of tutors work together to offer a programme of tutorials (which are sometimes known as learning events). These tutors will use this forum to share resources that relate to their tutorials. They may, for example, post copies of PowerPoint presentations that formed the basis of their tutorials, which may contain useful notes in the notes section of each slide.

Finally, there is the café forum. This is an informal area to chat with fellow students about TM354 and OU study. This area isn’t extensively monitored by the forum moderator.

One thing to note is that sometimes the names of these forum areas can and do change. The names of the forums here might not be the names of the forums that you have on your module website.

Study guides

Although most of the module materials are available through the printed blocks, there are some important elements of the module that are only available online. Within the module calendar, you will see study guide pages. Make sure you go through each of these. Sometimes, these guides are presented alongside other accompanying online resources that you need to work through to answer some of the TMA questions.

Resources pages

The Resources page (which is sometimes known as the resources tab) is a place that collates everything together: all the guides (module, accessibility and software guides), PDF versions of the blocks, online versions of each of the units (which can be found within each of the blocks), and any additional resources that need to be studied:

  • Choosing closed-box test cases
  • Monoliths versus microservices
  • Introducing Jakarta EE
  • Implementing use cases

Towards the bottom of this page, there is a link to a zip file which contains some source code that is used with TMA 3, along with some NetBeans software installation instructions.

The final bit of the Resources pages that I would like to emphasise is the Download link, which can be found on the right hand side of the page. Through this link, you can access all the module resources in different formats. You can, for example, download some of the media files onto your mobile device for you to review later, or you can download ePub versions of all the study guides and units onto an e-reader.

iCMAs

TM354 also has a set of interactive computer-marked assessments (iCMAs). These are designed to help you to learn and to remember some of the key module concepts. The iCMAs do not formally contribute to your overall assessment result.

Tutorials

Before my final section, I’ll say something about tutorials. There are tutorials that introduce you to each of the blocks and help to guide you through what is required for the TMAs. There are also a series of exam revision tutorials. Do try to attend as many as you can, since different tutors will present ideas in different ways.

Reflections

There is quite a lot to TM354; there are a lot of resources, which take a lot of reading. To familiarise myself with the materials I’ve taken an incremental approach: studying a bit at a time. Although the printed blocks are central to the module, it is important to pay attention to the online materials too.

My biggest tips are:

  • Get a printout of the module guide.
  • Get a printout of each of the TMAs.
  • Make sure that you thoroughly read the module guide. You might want to get a printout of this too.
  • Do remember to regularly refer to the module glossary. These definitions are important.
  • Attend as many tutorials as you can.

Christopher Douce

Software engineering podcasts

Visible to anyone in the world
Edited by Christopher Douce, Thursday 16 May 2024 at 09:45

On the TM354 Software Engineering module forum, the following question was posed: ‘does anyone know of any software engineering podcasts?’ TM354 tutor Tony Bevis gave a comprehensive reply. With permission, I am sharing selected elements from Tony’s post, listed in no particular order.

SE Radio

SE Radio (se-radio.net) is pitched as the podcast for professional software engineers. The following sentences are drawn from the SE Radio about page: ‘The goal is to be a lasting educational resource, not a newscast. …  we talk to experts from throughout the software engineering world about the full range of topics that matter to professional developers’. It is interesting that this podcast has a formal link to a recognised publication: ‘SE Radio is managed by the volunteers and staff of IEEE Software, a leading technical magazine for software professionals published by the IEEE Computer Society. All content is licensed under the Creative Commons 2.5 license’. Episodes appear to be quite long; an hour or so.

What the Dev?

What the Dev? is a podcast from SD Times magazine. It is said to ‘cover the biggest and newest topics in software and technology’. The magazine has an accompanying weekly email newsletter which contains a summary of current technology news items and a weekly podcast. Each podcast appears to be relatively short. The ones I have listened to were approximately 20 minutes.

Agile Toolkit Podcast

Agile is an important software development approach. The Agile Toolkit podcast aims to share ‘conversations about agile development and delivery’ through an archive that runs from 2005 through to the current day. The episodes appear to be pretty long, so if you are listening to learn more about agile, it is important to be selective about which episodes you choose.

Open Source Podcasts

Open Source technology is an important subject to software engineers. When doing a bit of internet searching, I discovered something called the Open Source Podcasts last.fm channel which aims to share ‘conversations and advice from Open Source technologists on a wide range of topics’ and summarises links to a range of different podcasts.

A quick search for the term Software Engineering on last.fm takes me to a podcast channel called Software Engineering Daily. It really does appear that there is a topic or a technology made available practically every day. These podcasts range in length between half an hour and an hour.

Hello World

Hello World is a magazine published by the Raspberry Pi Foundation. It is free for computer science educators. I am regularly sent email updates about new episodes. The focus is primarily on computing education in schools. The Hello World podcasts are a good and interesting listen, especially if you're interested in moving towards computing education.

Reflections

There are a lot of resources out there. There are so many podcasts and recordings, that I feel overwhelmed. I have yet to establish a regular podcast listening habit, and I have yet to find a convenient way (that works for me) to access these different channels.

I quite like What the Dev? since the episodes are quite short; I can listen to a couple of these whilst getting on with other things. It is good to note that the first podcast mentioned in this blog is recognised by IEEE Software magazine, and it deserves a more detailed look. The daily software engineering podcast looks to be of interest too.

What is surprising to me is how many of the technologies that feature in these podcasts I don’t recognise; a lot is new to me. I’m hoping that some of these podcasts will enable me to learn more about new technologies, understand their role and purpose, and how software engineers might use them.

Acknowledgements

A big thank you to Tony. I’m going to be doing a lot of listening!

Christopher Douce

Degree apprenticeship: DTS Themes

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 5 March 2024 at 08:44

The Open University offers a Computing undergraduate degree apprenticeship for England and Wales. The English degree apprenticeship is known as the Digital and Technology Solutions Professional Degree Apprenticeship.

The current version of the standard, which is defined by the Institute for Apprenticeships, is version 1.2 (Institute for Apprenticeships).

Apprentices need to pass two elements: their degree bit (the academic element), and the apprenticeship (the work-based learning element). Both of these elements are broken down into smaller parts. The academic bit is broken down into academic modules. The apprenticeship is defined in terms of knowledge, skills and behaviour (KSB) attributes which are, in turn, grouped together into sets of themes. To gain their apprenticeship, apprentices must provide evidence of being able to satisfy all the KSBs which make up the themes.

Passing the apprenticeship bit is a two-step process. Apprentices must demonstrate competency across these themes before entering what is called a ‘gateway’ process, which takes them to something that is called an endpoint assessment (EPA). The EPA is a professional conversation where the apprentice speaks with an assessor.

What follows is a summary of the themes for both parts of one apprenticeship pathway: the software engineering professional pathway. Some themes can only attract what could be called a pass grade, whereas others can attract a distinction grade. For concision, only the criteria that relate to a pass are highlighted here. A further note is that the themes are split into two bits: a core set of themes, and themes that relate to a specific pathway. For detailed information, do refer to the DTS standard.

All the themes highlighted here, which can be found within the standard, are also mentioned within the apprenticeship ePortfolio tool (which is known as MKM). Where there is a heading, there will also be a space to record evidence.

Towards the end of this summary, there is some guidance about the recording of evidence. This is important; without evidence it is not possible to pass through the gateway process, or to complete the final end point assessment.

DTS apprenticeship themes

Core themes

The Organisational Context

Reviews the roles, functions and activities relevant to technology solutions within an organisation. (K7)

Core Technical Concepts

Critically evaluates the nature and scope of common vulnerabilities in digital and technology solutions (K11)

Explains core technical concepts for digital and technology solutions, including:

  • The approaches and techniques used throughout the digital and technology solution lifecycle and their applicability to an organisation’s standards and pre-existing tools. (K6)
  • Data gathering, data management, and data analysis. (K12/K14)
  • Computer networking concepts. (K16)

Applied Technical Solutions

Demonstrates the use of core technical concepts for digital and technology solutions, including:

  • Initiate, design, code, test and debug a software component for a digital and technology solution. (S4)
  • Security and resilience techniques. (S9)
  • Initiates, designs, implements and debugs a data product for a digital and technology solution. (S10)
  • Plans, designs and manages simple computer networks. (S12)
  • Applies the principles of data analysis for digital and technology solutions. (K13/S11)

Leading and Working Together

Explains how teams work effectively to produce a digital and technology solution applying relevant organisational theories using up to date awareness of trends and innovations. (K8/S7/B4/B6/B7)

Describes the concepts and principles of leadership and management as they relate to their role and how they apply them. (K9/K10/S8)

Social Infrastructure - Legal, Ethical and Sustainability

Applies relevant legal, ethical, social and professional standards to digital and technology solutions considering both technical and non-technical audiences and in line with organisational guidelines. (K19/S15/B1/B2/B5)

Explains sustainable development approaches within digital technologies as they relate to their role including diversity and inclusion. (K20/B8)

Software Engineer themes

Underlying Principles

Describes scenarios covering all stages of a development lifecycle, identifying which techniques and methods are applied in each case. (K21/SEK1)

Explains the principles of a range of development techniques, for each stage of the software development cycle that produce artefacts and the contexts in which they can be applied. (K22/SEK2)

Explains the principles of a range of development methods and approaches and the contexts in which they can be applied. (K23/SEK3)

Technical Solutions

Describes how to interpret and implement a design, compliant with functional, non-functional and security requirements. (K24/SEK4)

Describes how tools that support teamwork can be used effectively. (K28/SEK8)

Innovation and Response

Describes how they respond to changing priorities and problems arising within software engineering projects by making revised recommendations, and adapting plans as necessary, to fit the scenario being investigated. (S20/SES5)

Explains how they determine, refine, adapt and use appropriate software engineering methods, approaches and techniques to evaluate software engineering project outcomes. (S21/SES6)

Legal, Ethics and Landscape

Describes how they extend and update software development knowledge with evidence from professional and academic sources by undertaking appropriate research to inform best practice and lead improvements in the organisation. (S23/SES8)

Preparing for the End Point Assessment

Towards the end of the apprenticeships, apprentices need to complete a significant work-based project. As well as writing a 6k word report, there must be evidence collected that relates to the following themes.

Core themes

The Organisational Context

Identifies the role digital technology solutions play in gaining a competitive advantage by adapting and exploiting them (K1)

Explains the principles of strategic decision making concerning the acquisition or development of digital and technology solutions. (K2)

Project Requirements

Analyses relevant evidence to produce a proposal for a digital and technology based project in line with legal, ethical and regulatory requirements whilst ensuring the protection of personal data, safety and security (S3/B3)

Project Planning and Resources

Produces a project plan which estimates risks and opportunities and determines mitigation strategies. (K3/S2)

Evaluates appropriate techniques and approaches that are used in creating a business case (K4)

The project applies techniques to estimate cost and time resource constraints. (K15)

Researches information on innovative technologies/approaches and investigates and evaluates them in the development of a digital and technology solution. (S14)

Solution Proposal

Analyses the business problem behind the project proposal to identify the role of digital and technology solutions. (S1)

Project Delivery

Carries out the identified solution proposal utilising a range of digital tools and standard approaches. (K5/S5)

Manages the project delivery to achieve digital and technology solutions. (S6)

Project Evaluation

Justifies their methods of research and evaluation which determined the selection of digital and technology solutions identified for the project. (K18)

Presents an overview of the project to appropriate stakeholders using appropriate language and style. (K17/S13/B5)    

Software Engineer themes

Technical Solutions

Analyses the factors affecting product quality and the approaches controlling them throughout the project development process. (K25/SEK5)

Selects and applies software tools appropriate to the Software Engineering project solution. (K26/SEK6)

Outlines approaches to the interpretation and use of artefacts. (K27/SEK7)

Innovation and Response

Identifies and defines a non-routine, unspecified software engineering problem. (S16/SES1)  

Recommends a software engineering solution that is appropriate for the project brief. (S17/SES2)

Selects and applies analysis methods, approaches and techniques in software engineering projects to deliver an outcome that meets requirements. (S18/SES3)

Demonstrates how they implement software engineering projects using appropriate software engineering methods, approaches and techniques. (S19/SES4)

Evaluates their selection of approach, methodology, analysis and outcomes to identify both lessons learned and recommendations for improvements to future software engineering projects. (S22/SES7)

Evidence for the themes

Evidence for all these themes must be uploaded to the apprenticeship ePortfolio. There are two types of evidence: witness statements, or evidence through academic study. For the apprenticeship element, witness statements are considered to be a stronger form of evidence than completed tutor marked assessments.

Witness statements can be prepared by a line manager, or a delegated mentor or colleague. They present a narrative summary of what an apprentice has done or achieved and should be anything between 100 and 150 words. These statements should be uploaded to the apprentice’s ePortfolio tool by the apprentice.

Acknowledgments

The key reference for this post is, of course, the DTS standard. The text for some of these headings has been drawn from the MKM ePortfolio.

Christopher Douce

Book review: Two novels about DevOps

Visible to anyone in the world

When I started to do some background reading into how TM354 Software Engineering might need to be updated, I was guided towards two curious novels. 

From October 23 I start to study A233 Telling stories: the novel and beyond, as a part of my gradual journey through a literature degree. For quite a while, I have been thinking that there has been very little to connect novels and software engineering, other than the obvious: the word processing tools that can be used to write novels, and the Amazon cloud infrastructure used to distribute eBooks.

What follows is a very short (and not very thorough) review of two books that are all about DevOps: The Phoenix Project, by Gene Kim (and others), and The Unicorn Project, which is also by Kim.

The Phoenix Project

I shall begin by sharing an honest perspective: the idea of a novel about software development did not excite me in the least. The text has a subheading that seemed to strengthen my prejudices: “a novel about IT, DevOps, and Helping Your Business win”. This is no crime drama or historical novel. The closest genre that one could attribute to The Phoenix Project is: thriller. I feel it occupies a genre all of its own, which could be labelled: IT business thriller.

The main protagonist is Bill Palmer, who has the unenviable job title of “Director of Midrange Technology Operations”. Bill works for a mysterious American company called Parts Unlimited. A lot happens in the early chapters: Bill is invited in to have a chat with a manager, who gives him a promotion. He is then asked to take the lead in getting the Phoenix Project, a new mission-critical software system, to work. Failure means the business would lose any potential competitive advantage, and the IT infrastructure might be outsourced, which means that people would lose their jobs.

Before Bill can get settled, he is hit by a payroll outage, which means the employees and unions are angry. He also quickly realises that the whole IT setup is in a complete state. Kim and his co-writers do a good job of conveying a sense of paralysis and panic. The reason for this is expressed through the notion of ‘technical debt’, which means that the existing IT infrastructure has become increasingly complicated over time. Quick fixes now can, of course, lead to further problems down the line. Parts Unlimited has not been ‘paying down’ its technical debt.

An important element of the novel is the division between the Ops (operations) bit of IT and the development division. Other competing teams also play a role: the QA (quality assurance) team, and the security team. Security is important, since if an organisation doesn’t keep its auditors happy, the directors may face legal consequences.

I think it would be mean to describe the characters as one-dimensional, since plot clearly takes precedence over characterisation. The main protagonist, Bill, is the most richly described. His organisational skills and sense of calm in the face of chaos are explained through his military background.

Ubergeek Brent plays an important role, but I really wanted to know what made him tick. Erik Reid, an unofficial mentor to Bill, plays the role of a Yoda-like mystic who provides insightful advice, drawing on his extensive knowledge of lean manufacturing. A notable character is Sarah Moulton, the Senior Vice President of Retail Operations, who takes on the unenviable role of the villain.

What struck me was the amount of technical detail that exists within the text. There are references to services, languages, and source code management. There is also the important notion of the ‘release’, a persistent problem which pervades both this text and its follow-up. Whilst I enjoyed the detail, I’m unsure about the extent to which the lay reader would grasp the main point that the book makes: to gain efficient business value from IT, it is best to combine operations and development. Doing this enables the creation of tighter feedback loops and reduces operational risk. Along the journey, there are moments which raise an eyebrow. An example is the unambiguous contrition from a security manager once he sees an error in his thinking.

Bill identifies barriers and instigates change. After a “challenging” release of Phoenix, he ultimately prevails. During the updates, there is the emergence of a ‘side project’, which makes use of new-fangled cloud technology to deliver value to the business. In turn, this generates income that makes shareholders happy. Political battles ensue, and Bill then gets on a fast track to a further promotion.

Apparently, The Phoenix Project was popular amongst developers when it first came out, but I’ve been peripherally distant from the domain of software engineering, which means I’ve been a bit late to the party. Before providing further comment, I’ll move onto the sequel: The Unicorn Project.

The Unicorn Project

When I read The Phoenix Project, one of my criticisms was about the identity of the main protagonist. Novelists can not only use their craft to share a particular reality, they also have the potential to effect change. Whilst I liked Bill and the positive role that he took within the novel, given the clear and persistent gender disparities in the sector, I did feel that a female protagonist would have been welcome. This unarticulated request was answered in The Unicorn Project in the form of Maxine Chambers, the lead protagonist of Kim’s follow-up novel.

Maxine is collateral damage from the payroll failure. Despite being hugely talented, she is side-lined; temporarily reassigned to the Phoenix Project. Her starting point is to try to get a build of all the software that is being developed, but she faces persistent complexity, not just in terms of software, but in terms of finding out who to speak with to get things done.

Whilst the main project is saturated with bureaucratic burden, Maxine gradually finds “her people”: smart, like-minded people who are also frustrated by the status quo. She also speaks with the business mystic and mentor, Erik Reid, who is very happy to share his words of wisdom. Ubergeek Brent also makes an appearance, but his backstory remains opaque.

A really interesting part of the text is where Maxine ‘goes into the field’ to learn what happens in the Parts Unlimited stores. Drawing on the notion of ethnographic observation, she learns first-hand of the difficulties experienced by the store workers. Another interesting element, which occurs towards the end of the novel, is the movement towards embedding institutional learning and drawing upon the creativity that exists within the workforce. In comparison to The Phoenix Project, there is more emphasis on culture; specifically, on developing a no-blame culture.

A key theme of The Unicorn Project is shared with The Phoenix Project: it is important to combine development and operations, and it is helpful to integrate continually, since users can gain access to new features more readily. A notable section highlighted the challenge of carrying out code merges during marathon meetings. If code is continually integrated, then there isn’t the need for all those uncomfortable meetings. Significantly, The Unicorn Project also goes further than The Phoenix Project, since it is also about the power of teamwork, collaboration and the potential of smaller projects positively affecting and influencing others. Like Bill, the formidable Maxine is successful.

Reflections

My initial scepticism of these novels comes from my view that novels are made from story and character, not technology. What is very clear is that although technology plays an important role, people are, by far, the most important element. The novels foreground the role of teams and their organisation, the importance of sharing knowledge, and the importance of collaboration and leadership. It is clear that soft skills matter for the simple reason that software is invisible; developers must be able to talk about it, and to each other. This is also why organisational culture is so important.

An important reflection is that both Bill and Maxine have difficult and very stressful jobs. They are both shown to work ridiculously long hours, often over the weekend. In the novels, IT is depicted as a difficult occupation, and one that is far from being family friendly. The families of both protagonists are featured, and they both suffer.

Although both of these novels are stories about the success of heroes battling against impossible odds, the hyperreality of the chaos within Parts Unlimited makes their success difficult to believe. Conversely, the hyperreality that is expressed through the impossible administrative burdens of the ticketing systems offers a warning to those who have to work with these systems on a daily basis.

The mystical mentor Erik is, of course, difficult to believe. He is a device used to share the pragmatic business and manufacturing theories that are central to the themes that are common to both books. I didn’t mind Erik. As with Brent, I wanted to know more of his backstory, but with a limited word count and a lot of themes to work through, I understand why creative trade-offs were made to foreground more pressing technical topics.

Whilst I found the broader context, automotive spares, mildly interesting, I found myself becoming bored by the theme of IT being used to gain ever increasing amounts of money through the persistent and relentless pursuit of the customer. Although I accept that IT can be thought of as a product of capitalism, there are more interesting ways that IT can be used and applied. Technology can be used to reflect humanity, just as humanity is reflected in technology. Whilst capital is important, there are other subjects that are more interesting. I think I would like to read an IT business thriller about cyber security, as opposed to one about a business that has found a new way to sell engine monitoring apps.

To conclude, these two novels were fun. They were also informative without being overly didactic. Although the IT business thriller is not my favourite genre, I can say that I enjoyed reading them. I’m more a fan of Victorian romances.

Epilogue

In 2004, I was working as a Software Engineer in a small company that designed and manufactured educational equipment used to teach the principles of electrical engineering and computing. 

One day in April, the managing director bounded into the office where I worked.

“We’re selling our e-learning division! This means that we won’t be able to sell our flagship learning management system anymore. We need to find a solution. We had been working on an earlier project, but that didn’t work out. So, we need you to head up the development of a new learning management system”.

That new learning management system was given an internal codename. It wasn’t very original. 

We called it Project Phoenix.

References

Kim, G. et al. (2014) The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win. 1st edition. IT Revolution Press.

Kim, G. (2019) The Unicorn Project. IT Revolution Press.

Christopher Douce

SEAD/LERO Research Conference ‘23

Visible to anyone in the world
Edited by Christopher Douce, Thursday 20 July 2023 at 09:56

I attended my first joint OU SEAD/LERO research conference, which took place between 4 July and 6 July 2023. SEAD is an abbreviation for the Software Engineering and Design research group, which is hosted within the OU’s School of Computing and Communications. The conference was joined by members of LERO, the Science Foundation Ireland Research Centre for Software, which is based in Limerick.

What follows is a summary of the two days I attended. There was a third day that I didn’t attend, which was all about further developing some of the research ideas that were identified during the conference, and researcher professional development.

The summary is intended for the delegates of the conference, and for anyone else who might be interested in what happens within the SEAD research group. All the impressions (and any accompanying mistakes in my note taking) are completely my own. What is summarised here isn’t an official summary. Think of it as a rough set of notes intended to capture some of the themes that were highlighted. It is also used to share some potential research directions and areas that are intended to be further developed and explored.

Day 1: Introductions and research discussions

Bashar Nuseibeh kicked off the day by highlighting the broad focus of the conference: to consider the role of software in society. Although I missed the first few minutes of his opening address due to traffic, there was a clear emphasis on considering important related themes, such as social justice.

The first session was an ice breaker session. This was welcome, since I was an incomer to the group, and there were many delegates who I had not met before. We were asked to prepare the answers for three questions: (1) Who you are, including where you are based and your role? (2) What is your main research area/interest?, and (3) Something you love about your research and something you dislike. (Not bureaucracy!)

Having a go at answering these myself: I work as a staff tutor. My research interests have moved and changed, depending on what role I’ve been doing. Most recently, it has been about the pedagogy of online teaching and learning. When I was a researcher on an EU funded project, I was looking at the accessibility of online learning environments and supporting students who have additional requirements. Historically, my research has been situated firmly in the area of software engineering; specifically, the psychology of computer programming, maintenance of object-oriented software, and software metrics (informed by research about human memory). I have, however, returned to the domain of software engineering, moving from the individual to communities of developers by starting to consider the role of storytelling in software engineering, working with colleagues Tamara Lopez and Georgia Losasso.

What I like about the research is that it is really interesting to discover how different disciplines can be applied to create new insights. What can be difficult is that different disciplines can sometimes use different languages.

Invited talk: navigating the divided city

Next up was an invited talk by Prof. John Dixon from the OU’s Social Psychology research group. John’s presentation was about “intergroup contact, conflict, desegregation, and re-segregation in historically divided societies”. John described how technology was used to explore human mobility preferences, drawing on research carried out as part of the Belfast Mobility Project. The project studies, broadly speaking, where people go when they navigate their way through spaces, and can be said to sit at an intersection between social science and geography. Technology was used by researchers to study activity space segregation and patterns of informal segregation, which can shed light on social processes.

John also highlighted tensions that a researcher must navigate, such as the tension between open science (where data can be made available to other researchers) and the extent to which it is ethical to share detailed information about the movement of people across a city.

There was a clear link between the talk and the theme: the connection between software and society. The talk also resonated with me personally: as a regular user of an activity tracker called Strava, I was already familiar with some of the ethical concerns that were shared. After becoming a user of Strava, I changed a couple of settings to ensure that my identity is disguised. Also, a year ago I noticed that the activity tracker had started to hide the start point and the end point of any activity that I was publicly sharing. A final point from this part of the day is that both technology and software can lead to the development of new methods and approaches.

Fishbowl: Discussing society and software

Talking of new methods and approaches, John’s talk (and a lunch break) was followed by an event that was known as the ‘fishbowl session’, which introduced a ‘conference method’ that I had never heard of before.

In some respects, the ‘fishbowl’ session was a discussion with rules. Delegates sat on one of ten chairs in the middle of the room and had a conversation with each other, whilst trying to connect their points either to the main theme of the discussion (software and society) or to some of the topics that emerged from the discussions. We were encouraged to discuss “anything where software has a role to play”.

The fishbowl discussed the consequences of technology, collective education, critical thinking (of users), the power of automation, the concentration of power (in corporations), the use of AI (such as large language models), trade-offs, and complex systems. On the subject of AI, one view I noted down was that perhaps the use of AI ought to be limited to low risk domains, leaving the critical thinking to people (but this presupposes that we understand all the risks). There was also a call for AI tools to explain their “reasoning”, but this also implicitly links back to points about the skills and knowledge of users. This is linked to the question: how do we empower people to make decisions about the systems that they use?

Choices were also discussed. Choices by consumers, and by developers, especially in terms of what is developed, and what is good to develop. Also, when uncovering and specifying requirements, it is important to consider what the negatives might be (an observation which reminds me of the concept of ‘negative use cases’ which is highlighted in the OU’s interaction design module).

I noted down some questions that were highlighted: how do we present our discipline? Do we research how to “do software” and leave it up to industry? Should we focus on the evaluation of the impact of software on communities and society? An interesting quote was shared by Bashar, which was: “working in software research is working for society”.

A final reflection I noted was that societal problems (such as climate change) can be thought of as wicked problems, where there is no right answer. Instead, there might be solutions that are neither right nor wrong, or solutions that are better or worse than others.

It was difficult to distil everything down to a group of neat topics, but here are some headings that captured some of the points that were discussed during the fishbowl session: resilience, care, sustainability, education, safety and security, and responsibility.

At the end of the session, all delegates were encouraged to join a group that reflected their research interests. I joined the sustainability group.

Group Work 1 - Expansion of themes from the fishbowl

After a coffee break it was time to do some work. The guidance from the agenda was “to develop some proposals for future research (problem; research objectives; research questions; methods; impact)”.

The sustainability group comprised four members: three from SEAD, one from LERO.

After broadly discussing the link between sustainability and software engineering, we produced a sketch of a poster that shared the following points:

  • How can we make connections and causal links between different (sub)systems explicit?
  • How can we engineer software to be holistically ‘resource aware’?
  • What is the meta-language for sustainable software systems?
  • What are the heuristics for sustainable software systems?

On the surface of it, all these points are pretty difficult to understand. 

The first point relates to the link between software, economics, and society. Put another way, what needs to be done to make sure that software systems can make a positive contribution to the various dimensions of our lives. By way of further context, the notion of Doughnut Economics was shared and discussed.

The second point relates to the practice of developing software. Engineers don’t only need to consider how to develop software systems that use resources in an efficient way, they also need to consider how software teams use and consume resources.

The third point sounds confusing, but it isn’t. Put another way: how do we talk about, describe, or even rate the efficiency or sustainability of software systems? Going even further, could it be possible to define an ISO standard that describes what elements a sustainable software system could or should contain?

The final point also sounds arcane, but when unpacked, begins to make a bit of sense. In other words: are there rules that software engineers could or should apply when evaluating the energy use, or overall sustainability, of software systems? There are, of course, some links from this topic to the topic of algorithms and data structures (which is explored in modules such as M269 Algorithms, data structures and computability, which considers efficiency in terms of time and memory). A simple practical rule might be, for example: “rather than continually polling to check the status of something, use signals between software elements”. There is also a link to the notion of software patterns and architecture (with patterns being taught in TM354 Software Engineering).
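That polling heuristic can be illustrated with a small Python sketch. The function and variable names here are my own invention, purely for illustration: one waiter repeatedly wakes up to check a shared flag (each wake-up costs CPU time and energy), whilst the other simply sleeps until it is signalled via a `threading.Event`.

```python
import threading
import time

def wait_by_polling(flag: dict, interval: float = 0.01) -> int:
    """Busy-wait: repeatedly check a shared flag, sleeping between checks.
    Returns the number of checks made; every check is a wasteful wake-up."""
    checks = 0
    while True:
        checks += 1
        if flag["ready"]:
            return checks
        time.sleep(interval)

def wait_by_signal(event: threading.Event) -> None:
    """Event-driven: sleep until another thread signals; no repeated checking."""
    event.wait()

def produce(flag: dict, event: threading.Event, delay: float = 0.05) -> None:
    """Simulate some work finishing after a short delay."""
    time.sleep(delay)
    flag["ready"] = True
    event.set()

flag = {"ready": False}
event = threading.Event()
threading.Thread(target=produce, args=(flag, event)).start()

checks = wait_by_polling(flag)   # wakes up again and again until the flag is set
wait_by_signal(event)            # returns as soon as the event is signalled
print(f"polled {checks} time(s) before the work was done")
```

The behavioural difference is the point: the signalling version makes exactly one blocking call, whereas the polling version's cost grows with how long the work takes and how short the polling interval is.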

Day 2: Ideate and prototype

The second day kicked off with summaries from the various groups. The responsibility team spoke about the role of individuals, values, and organisations. The care group highlighted motivation, engagement, older users and how to help people to develop their technical skills. The education group had been discussing computing at schools, education for informed choices, critical thinking, and making sure that the right problem is addressed. The resilience group discussed support through communities, and the safety and security group asked whether safety related to people, or to process.

A paraphrased point from Bashar: “look to the literature to make sure that the questions that are being considered haven’t been answered before”. Also, reflecting on the earlier keynote: “consider radical methods or approaches, and consider the context when trying to understand socio-economic systems”.

Group Work 2 - ideate and prototype

Back in our groups, our task was to try to operationalise (or to translate) some of our earlier points into clearer research questions with a view to coming up with a research agenda.

Discussing each of the points, we returned to the meaning of the term sustainability, along with what is meant by resource utilisation by code, also drawing upon the UN Sustainable Development Goals (https://sdgs.un.org/goals).

We eventually arrived at a rough agenda, which I have taken the liberty of describing in a bit more detail. The first point begins from a high level. Each subsequent point moves down into deeper levels of analysis, and the list concludes with a point about how to proactively influence change:

  1. What types of software systems or products consume the most energy?
  2. After identifying a high energy consuming product or system, use a case study approach to holistically understand how energy is used, also taking into account software development practices and processes.
  3. What are the current software engineering practices of developers who design, implement and build low energy computing devices, and to what extent can sharing knowledge about practice inform sustainable computing?
  4. What are the current attitudes, perceptions and motivations of the current generation of software engineers and developers, and how might these be systematically assessed?
  5. After uncovering practices and assessing attitudes, how might the university sector go about influencing organisations to enact change?

Relating to the earlier call to “draw on the literature”, a member of our team knew of some references that could be added to the reference section of our emerging research poster:

Lago, P. et al. (2015) Framing sustainability as a property of software quality. Communications of the ACM, Volume 58, Issue 10, pp.70–78. https://doi.org/10.1145/2714560

Lago, P. (2019) Architecture Design Decision Maps for Software Sustainability. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS), 25-31 May 2019, IEEE. https://doi.org/10.1109/ICSE-SEIS.2019.00015

Lago, P. et al. (2021). Designing for Sustainability: Lessons Learned from Four Industrial Projects. In: Kamilaris, A., Wohlgemuth, V., Karatzas, K., Athanasiadis, I.N. (eds) Advances and New Trends in Environmental Informatics. Progress in IS. Springer, Cham. https://doi.org/10.1007/978-3-030-61969-5_1 

Manotas, I. et al. (2018) An Empirical Study of Practitioners' Perspectives on Green Software Engineering. 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE). 14-22 May 2016. https://doi.org/10.1145/2884781.2884810

Wolfram, N. et al. (2018) Sustainability in software engineering. 2017 Sustainable Internet and ICT for Sustainability (SustainIT). 06-07 December 2017. https://doi.org/10.23919/SustainIT.2017.8379798

(A confession: I added the Manotas reference when I was writing up this blog, since it looked like a pretty interesting recommendation, especially having previously been interested in empirical studies of programmers.)

Conference visit: Bletchley Park

The second day concluded with a visit to Bletchley Park, which isn’t too far from the campus. It seemed appropriate to visit a place where socio-technical systems played such an important role. I had visited Bletchley Park a few times before (I also recommend the computing museum, which is situated on the same site), so I sloped off early to try to avoid the rush hour to London.

Day 3: Consolidate and plan next steps

This final day contained a workshop that had the title “consolidate and plan next steps” and also had a session about professional development. Unfortunately, due to my schedule, I wasn't able to attend these sessions.

Reflections

I really liked the overarching theme of the event: the connection between software and society. Whilst listening to the opening comments it struck me that there were some clear points of crossover between research carried out within the SEAD group, and the research aims of the OU Critical Information Studies research group.

It was great working with others in the sustainability group to try to develop a very rough and ready research agenda. It was also interesting to begin to discover how fellow researchers in other institutions had been thinking along similar lines and have already taken some of our ideas further. 

One of my next steps is to continue with reading and exploring with an aim of developing a more thorough understanding of the research domain.

It was interesting that I was the only staff tutor at the event. It is hard for us to do research, since our time is split three ways: academic leadership and management (of part-time associate lecturers), teaching, and whatever time remains can be dedicated to research. For the next few years, my teaching ‘bit’ of time will be put towards doing my best to support TM354 Software Engineering.

Looking forward, what I’m going to try to do is to integrate the different aspects of my work: the teaching bit, the research bit, and the tutor management bit. I’m also hoping (if everything goes to plan) to tutor software engineering for the first time.

As well as integrating everything together, another action is to begin to work with SEAD colleagues to attempt to put together a PhD project that relates to sustainable computing.

Update 20 July 23: After doing a couple of internet searches to find more about DevOps, I discovered a new book entitled Building Green Software (O'Reilly), which is due to be published in July 2024. I also found an interview with the lead author (YouTube), and learnt about something called the Green Software Foundation. I feel really encouraged by these discoveries.

Christopher Douce

Generative AI and the future of the OU

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 20 June 2023 at 10:24

On 15 June 2023 I attended a computing seminar about generative AI, presented by Michel Wermelinger.

In some ways, the title of his seminar was quite provocative. I did feel that his presentation related to the exploration of a very specific theme, namely, how generative AI can play a role in the future of programming education; a topic which is, of course, being explored by academics and students within the school.

What follows is a brief summary of Michel's talk. As well as sharing a number of really interesting points and accompanying resources, Michel did a lot of screensharing, where he demonstrated what I could only describe as witchcraft.

Generative AI tools

Michel showed us Copilot, which draws on code submitted to GitHub. Copilot is said to use something called OpenAI Codex. The witchcraft bit I mentioned was this: Michel provided a couple of comments in a development environment, which were parsed by Copilot, which then generated readable and understandable Python code. There was no messing about with internet searches or looking through instruction books to figure out how to do something. Copilot offered immediate and direct suggestions.
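To give a flavour of what this comment-to-code interaction looks like (this is my own invented example, not Michel’s actual demonstration; the prompt and the function are hypothetical), the developer writes a comment and the tool suggests something along these lines:

```python
from collections import Counter

# Prompt, typed as a comment in the editor:
# return the n most common words in a text, ignoring case

def most_common_words(text: str, n: int) -> list[tuple[str, int]]:
    """The sort of completion a tool such as Copilot might suggest."""
    words = text.lower().split()
    return Counter(words).most_common(n)

print(most_common_words("The cat and the hat", 2))
```

Whether or not a tool would suggest exactly this, the point stands: the suggestion still needs to be read, understood and tested by the developer who accepts it.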

Copilot isn’t, of course, the only tool that is out there. A taxonomy of different types of AI tools is emerging. There are tools where you pay for access. There are tools that are connected with integrated development environments (IDEs) available on the cloud, and there are tools where the AI becomes a pair-programmer chatbot. There are other tools, such as learning environments, that offer both documentation and the automated assessment of programming assignments.

The big tech companies are getting involved. Amazon has something called CodeWhisperer. Apparently Google has something called AlphaCode, which has participated in competitive programming competitions, leading to a paper in Nature which asks whether ChatGPT and AlphaCode are going to replace programmers. There’s also something called StarCoder, which has also been trained on GitHub sources.

AI can, of course, be used in other ways. It could be used to offer help and support to students who have additional requirements. AI could be used to transcribe lectures, and help students navigate across and through learning materials. The potential of AI being a useful learning companion has been a long held dream, and one that I can certainly remember from my undergraduate days, which were in the last century.

Implications

An important reflection is that Copilot and all these other AI tools are here to stay. It wouldn’t be appropriate to try to ban them from the classroom since they are already being used, and they already have a purpose. Michel also mentioned there is already a textbook which draws on generative AI: Learn AI-assisted Python programming.

Irrespective of what these tools are and what they do, everyone still needs to know the fundamentals. Copilot does not replace the need to understand language syntax and semantics and know the principles of algorithmic thinking. Developers and engineers need to know what is meant by thorough testing, how to debug software, and to write helpful documentation. They need to know how to set breakpoints, use command prompts, and also know things about version and configuration management.

An important question to ask is: how do we assess understanding? One approach is an increasing use of technical interviews, which can be used to assess understanding of technical concepts. This won’t mean an academic viva, but instead might mean some practical discussions which both help to assess students’ knowledge and help them to prepare for the inevitable technical interviews which take place in industry.

New AI tools may have a real impact on not only what is taught but how teaching is carried out, particularly when it comes to higher levels of study. This might mean the reformulation of assignments, perhaps developing less explicit requirements to expose learners to the challenge of working with ambiguity, which students must then intelligently resolve.

Since these tools have the potential to give programmers a performative boost, assignments may become bigger and more substantial. Irrespective of how assignments might change, there is an imperative that students must learn how to critically assess and evaluate whatever code these tools might suggest. It isn’t enough to accept what is suggested; it is important to ask the question: “does the code that I see here make sense, or offer any risks, given what I’m trying to do?”

A term that is new to me is: prompt engineering. This is the need to communicate in a succinct and precise way to an AI tool to get results that are practical and useful within a particular context. To get useful results, you need to be clear about what you want.

What is the university doing?

To respond to the emergence of these tools, the university has set up something called the Generative AI task and finish group. It will be producing some interim guidance for students and will be offering some guidance to staff, which will include the necessity to be clear about the ethical and transparent use of AI. It is also said to highlight capabilities and limitations. There will also be guidance for award boards and module results panels. The point here is that generative AI is being looked at.

Michel suggested the need for a working group within the school; a group to look at what papers are coming out, what the new tools are, and what is happening across the sector at other institutions. A thought was that it might be useful to widen it out to other schools, such as the School of Physical Sciences, and any others which make use of any aspect of coding and software development.

Reflections

Michel’s presentation was a very quick overview of a set of tools that I knew very little about. It is now pretty clear that I need to know a lot more about them, since there are direct implications for the practice of teaching and learning, implications for the school, and implications for the university. There is a fundamental imperative that must be emphasised: students must be helped to understand that a critical perspective about the use of AI is a necessity.

Although I described Michel’s demonstration of Copilot as witchcraft, all he did was demonstrate a new technology.

When I was a postgraduate student, a lecturer once told me that one of the most fundamental and important concepts in computing was abstraction. When developers are faced with a problem that becomes difficult, they can be said to ‘abstract up’ a level, to get themselves out of trouble, and towards another way of solving a problem. In some senses, AI tools represent a higher level of abstraction; it is another way of viewing things. This doesn’t, of course, solve the problem that code still needs to be written.

I have also heard that one of the fundamental characteristics of a good software developer or engineer is laziness. When a programmer finds a problem that requires solving time and time again, they invariably develop tools to do their work for them. In other words, why write more code than you need to, when you can develop a tool that solves the problem for you?

My view is that both abstraction and laziness are principles that are connected together.

Generative AI tools have the potential to make programmers lazy, but programmers must gain an appreciation of how and why things work. They also need to know how to make decisions about which bits of code to use, and when.

It takes a lot of effort to become someone who is effective at being lazy.

Christopher Douce

Preparing to chair TM354 Software Engineering

Visible to anyone in the world

Over the years I’ve had a connection with a number of Computing modules. I started as a tutor on M364, which then became TM356. When I became a staff tutor, I joined the TM352 module team for a short period of time, where I made a couple of very minor contributions, and TT284, where I offered some suggestions about web development frameworks. Most recently, I’ve been helping behind the scenes on TM112.

In the coming months, I’m going to be taking over the chairing of TM354 Software Engineering. This module closely aligns with some of my long-standing research interests. When I was a doctoral student, I studied the maintenance of object-oriented software, during which I looked at the subject of software metrics, where I made a very tiny contribution to the area. After completing all my studies, I worked in industry for a number of years, before returning to the university sector.

In September 2014, I attended a TM354 module briefing, where I wrote a quick summary of all the main components of the module. Since the briefing, I understand that the module has gradually changed and evolved over time.

From time to time, I shall be writing blog posts as an incoming module chair.

Figuring everything out

After a handover meeting, I have the following questions and the following tasks. 

I should add that I have already answered some of these questions:

  • Who do I need to speak to, to get things done? I know our curriculum manager, and fellow members of the module team, but there might well be other people who I need to know about.
  • What are the key dates and times by which things need to be done? I think I’ve seen a document that contains the title ‘schedule’.
  • What are the biggest issues and challenges that immediately need to be dealt with? There is a lot going on at the moment in the university; I need to know what to prioritise.
  • What bits of software do I need to know about, and where should I go to find everything out?

Here are my immediate tasks. I have started some of them, but I need to work on others:

  • Acquaint myself with the module guide, assessment guide and accessibility guide.
  • Read all the module materials carefully (there is a module mailing that is likely to come to me over the next couple of days)
  • Go through all the software engineering textbooks the outgoing module chair has left me.
  • Review all the assessment materials; the exams, the TMAs and the iCMAs.
  • Look at how the module makes use of Open Design Studio.
  • Listen to or watch any podcasts or videos that are used within the module.
  • Identify the file store or file areas that everyone uses to carry out assessment authoring.
  • Learn how much time every module team member has allocated to the module.

Reflections

I view TM354 as a really important level 3 module.

It is also a really interesting subject, since it links many different subjects together. On one hand, software engineering is quite a technical subject. On the other, it is about people and organisations; creating software is an intrinsically human activity. Software engineering processes and tools help to guide, manage and often magnify the creative contributions that people make to the development of software.

I would like to publicly acknowledge the contribution and efforts of our outgoing module chair, Leonor Barroca, who has worked on the module since the first presentation.

Christopher Douce

TM354 Software Engineering: briefing

Visible to anyone in the world
Edited by Christopher Douce, Monday 11 September 2023 at 16:27

On Saturday 27 September I went to a briefing for a new OU module, TM354 Software Engineering.   I have to secretly confess that I was quite looking forward to this event for a number of reasons: I haven’t studied software engineering with the OU (which meant that I was curious), I have good memories of my software engineering classes from my undergraduate days and I also used to do what was loosely called software engineering when I had a job in industry.  A big question that I had was: ‘to what extent is it different to the stuff that I studied as an undergrad?’  The answer was: ‘quite a bit was different, but then again, there was quite a bit that was the same too’.

I remember my old undergrad lecturer introducing software engineering by saying something like, ‘this module covers all the important computer stuff that isn’t in any of the other modules’.  It seemed like an incredibly simple description (and one that is also a bit controversial), but it is one that has stuck in my mind.  To my mind, software engineering is a whole lot more than just ‘other stuff’.

This blog post summary of the event is mostly intended for the tutors who came along to the day, but I hope it might be useful for anyone else who might be interested in either studying or tutoring the module.  There’s information about the module structure, something about the software that we use, and also something about the scheduling of the tutorials.

Module structure

TM354 has three blocks, which are also printed books.  These are: Block 1 – from domain to requirements, Block 2 – from analysis to design, and Block 3 – from architecture to product.  An important aspect to the module is a set of case studies.  The module is also supported by a module website and, interestingly, a software tool called ShareSpace that enables students to share different sketches or designs.  (This is a version of a tool that has been used in other modules such as U101, the undergraduate design module, and T174, an introduction to engineering module).

Block 1 : from domain to requirements

Each block contains a bunch of units.  The first unit is entitled ‘approaches to software development’, which, I believe, draws a distinction between plan driven software development and agile software development.  I’ve also noted down the phrase ‘modelling with software engineering’.  It’s great to see agile mentioned in this block, as well as modelling.  When I worked in industry as a developer, we used bits of both.

The second unit is called requirements concepts.  This covers functional requirements, non-functional (I’m guessing this is things like ‘compatibility with existing systems’ and ‘maintainability’ – but I could be wrong, since I’ve not been through the module materials yet), testing, and what and how to document.  Another note I’ve made is: ‘perspectives on agile documentation’.

Unit three is from domain modelling to requirements.  Apparently this is all about business rules and processes, and capturing requirements with use cases.  Prototyping is also mentioned.  (These are both terms that would be familiar to students who have taken the M364 Interaction Design module.)  Unit four is all about the case study (which I have to confess I don’t know anything about!)

Block 2: from analysis to design

Unit five is about structural modelling of domain versus the solution.  Unit six is about dynamic modelling, which includes design by contract.  Unfortunately, my notes were getting a bit weak at this point, but I seem to remember thinking, ‘ahh… I wonder if this relates to the way that I used to put assertions in my code when I was a coder’.  This introduction was piquing my interest.
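To make that memory a little more concrete, here is a minimal sketch of design by contract using plain assertions, in Python. The function and its conditions are entirely my own invention, for illustration only; they are not taken from the module materials:

```python
def withdraw(balance: float, amount: float) -> float:
    """Return the new balance after withdrawing amount.

    Precondition: amount is positive and does not exceed balance.
    Postcondition: the result is balance - amount, and non-negative.
    """
    # Preconditions: the caller must supply a valid request
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: insufficient funds"

    new_balance = balance - amount

    # Postconditions: the routine guarantees a consistent result
    assert new_balance == balance - amount
    assert new_balance >= 0, "postcondition: balance may not go negative"
    return new_balance
```

The division of blame is the heart of the contract: if a precondition fails, the fault lies with the caller; if a postcondition fails, the fault lies with the routine itself.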

Unit seven was entitled ‘more dynamic modelling’, specifically covering states and activities, and capturing complex interactions.  Apparently the black art of ‘state machines’ is also covered in this bit.  (In my undergrad days, state machines were only covered in the completely baffling programming languages course.)  Unit eight then moves onto the second part of the case study, which might contain domain modelling, analysis and design.
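State machines need not be a black art: at their simplest, they are a table mapping (current state, event) pairs to next states. A toy sketch in Python, using a door example that is my own and purely illustrative:

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state: str, event: str) -> str:
    """Apply one event; unrecognised events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive the machine with a sequence of events
state = "closed"
for event in ["lock", "unlock", "open"]:
    state = step(state, event)
# state is now "opened"
```

Everything else in the subject (guards, nested states, activities) builds on this basic idea of states and transitions.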

Block 3: from architecture to product

This block jumped out at me as being the most interesting (but this reflects my own interests).  Unit nine was about ‘architecture, patterns and reuse’.  Architecture and requirements, I’ve noted, ‘go hand in hand’.  In this section there’s something about architectural views and reuse in the small and reuse in the large.  During the briefing there was a discussion about architectural styles, frameworks and software design patterns.

When I was an undergrad, software patterns hadn’t been discovered yet.  It’s great to see them in this module, since they are a really important subject.  I used to tell people that patterns are like sets of abstractions that allow people to talk about software.  I think everyone who is a serious software developer should know something about patterns.
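To give one concrete example of what those shared abstractions look like, here is a minimal sketch of a classic pattern, Observer, in Python. The class and method names are mine; the module will have its own catalogue and vocabulary of patterns:

```python
class Subject:
    """Maintains a list of observers and notifies them of events."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)


class Logger:
    """A simple observer that records every event it is told about."""
    def __init__(self):
        self.seen = []

    def update(self, event):
        self.seen.append(event)


subject = Subject()
logger = Logger()
subject.attach(logger)
subject.notify("saved")
subject.notify("closed")
# logger.seen is now ["saved", "closed"]
```

The value lies less in the code than in the shared name: saying ‘this is an Observer’ conveys the whole structure in two words.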

Unit ten seems to take a wider perspective, talking about ‘building blocks and enterprise architectures’.  Other topics include component based development, services and service oriented architectures (which is a topic that is touched upon in another module, and also potentially the forthcoming TM352 module that covers cloud computing).

Unit eleven is about quality, verification, metrics and testing.  My undergrad module contained loads of material on metrics and reliability, and testing was covered only in a fairly theoretical way, but I understand that test-driven development is covered in this module (which is a topic that is linked to agile methods).  I’ll be interested to look at the metrics bit when this bit of the module is finalised.
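Test-driven development inverts the order I was taught: write a failing test first, then just enough code to make it pass. A tiny illustration using Python’s unittest module; the function under test is my own made-up example, not something from the module:

```python
import unittest

# Step 2: just enough implementation to make the tests pass
def slugify(title: str) -> str:
    """Turn a page title into a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

# Step 1: in TDD these tests are written first, and fail until
# slugify is implemented
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Software Engineering"),
                         "software-engineering")

    def test_already_lowercase(self):
        self.assertEqual(slugify("tm354"), "tm354")

if __name__ == "__main__":
    unittest.main(argv=["tdd-demo"], exit=False)
```

The red–green rhythm (failing test, passing code, refactor) is what links the practice to agile methods.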

The final unit takes us back to the case study.  Apparently we look at architectural views and patterns.  Apparently there is also a set of further topics that are looked at.  I’m guessing that students might well have to go digging for papers in the OU’s huge on-line library.

Software

I’ve mentioned ShareSpace, which is all about sharing of software models with other students (modelling is an important skill), to enable students to gain experience of group work and to see what other students are doing and creating: software development invariably happens in teams.  Another important bit of software is an open source IDE (integrated development environment) called NetBeans.  I’m not sure how NetBeans is going to be used in this module, but it is used across a number of different OU modules, so it should be familiar to some TM354 students.

Assessment

TM354 comprises three tutor marked assignments, a formative quiz at the end of every unit (that students are strongly encouraged to complete), and an end of module exam.  The exam comprises two parts: a part that has questions about concepts, and a second part that contains longer questions (I can’t say any more than this, since I don’t know what the exam looks like!)

Tutorials

Each tutor is required to deliver two hours of face to face tuition, and eight hours of on-line sessions through OU Live (as far as I understand).  In the London region, we have three tutors, so we’re having all the groups come to the same events, with each tutor delivering a face to face session to support students through every block and every TMA.

We’re also planning on explicitly scheduling six hours of OU Live time, leaving two hours that the tutor can use at his or her discretion throughout the module (so, if there are a group of students who struggle with concepts such as metrics, design by contract, or patterns, a couple of short ad-hoc sessions can be scheduled). 

All the OU Live sessions will be presented through a regional OU Live room.  This means that students in one tutor group can visit a session that is delivered by another London tutor.  The benefit of explicitly scheduling these sessions in advance is that all these events are presented within the student’s module calendar (so they can’t say that they didn’t know about them!)  All these plans are roughly in line with the new tuition strategy policy that is coming from the higher levels of the university.  A final thought regarding the on-line sessions is that it is recommended that tutors record them, so students can listen to the events (and potentially go through subjects that they find difficult) after an event has taken place.

A final note that I’ve made in my notebook is ‘tutorial resources sharing (thread to share)’.  This is connected to a tutor’s forum that all TM354 tutors should have access to.  I think there should be a thread somewhere that is all about the sharing of both on-line and off-line (face to face) tutorial resources.

