OU blog

Personal Blogs

Christopher Douce

Software Engineering Radio: Design and Designing

Visible to anyone in the world
Edited by Christopher Douce, Sunday 5 October 2025 at 10:09

A quick search of software engineering radio yields a number of themes that relate to design: experience design, design for security and privacy, design of APIs, and also architectural design.

This post shares some useful points from two episodes. The first asks questions about, and reflects on, the software design process. The second, which is complementary, relates to how engineers share their designs with others through the use of diagrams. An earlier blog post, TM354 Working with diagrams, addresses the same subject, but in a slightly different way, emphasising diagramming tools.

Before I get into what struck me as interesting in both of those podcasts, it is useful to have a quick look at Chapter 3 of SWEBOK, which contains a useful summary of some software design principles (3-4):

  • Abstraction: Identify the essential properties of a problem.
  • Separation of concerns: Identify what concerns are important to stakeholders.
  • Modularization: Break larger components into smaller elements to make the problem easier to understand.
  • Encapsulation: Hide unnecessary detail.
  • Separation of interface and implementation: Every component should have a clear interface.
  • Uniformity: Ensure consistency in terms of how the software is described.
  • Completeness: It does what it is supposed to do “and leaves nothing out”.
  • Verifiability: The characteristic of software that allows what has been built to be checked to make sure it meets requirements.

These principles seem very ‘code centric’. When we look at the titles of other Software Engineering Radio podcasts we can clearly see that we need to consider wider perspectives: we need to think about architecture, organisational structure, and what we need to do to realise important non-functional requirements such as accessibility and usability.

Software design

In SE Radio 333: Marian Petre and André van der Hoek on Software Design, the guests discuss not only software design, but also the practice of expert software engineers. It opens with the point that it is important to distinguish between the outcome of design and the process of designing. There was also a reflection that design (and designing) ‘is something that is done throughout [the software development lifecycle]’, beginning with eliciting (designing) requirements.

We are also introduced to an important term: design thinking. This is defined as the process of ‘understanding the relationship between how we understand the problem and how we understand the solution, and thinking reflectively about the relationship between the two’. Design thinking is mentioned in the SWEBOK, where it is defined in a similar (but slightly different) way: “design thinking comprises two essentials: (1) understanding the need or problem and (2) devising a solution” (3-2). The SWEBOK definition is, arguably, quite a narrow one, since the term can refer to both an approach and a mindset, and it is a mindset that can reside “inside all the individuals in the organisation”.

A related question is: how do experts go about design? Experts go deep (into a problem) as well as going broad (when looking for solutions). When experts go deep, they can dive into a problem quickly. Van der Hoek shared an interesting turn of phrase, suggesting that developers “talked on the whiteboards”. It is important and necessary to externalise ideas, since ideas can then be exposed to others, discussed and evaluated. An expert designer needs to be able to listen, to disagree, and to have the ability to postpone making decisions until they have gathered more information. Experts, it is said, borrow, collaborate, sketch, and take breaks.

Expert software designers also have an ability to identify which parts of a problem are most difficult, and then begin to solve those bits. They are able to see the essence of a problem. In turn, they know where to apply attention, investing effort ‘up front’. Rather than considering which database to choose, they might tackle the more fundamental question of what a database needs to do. Expert designers also have a mindset focussed toward understanding, and strive for elegance, since “an elegant solution also communicates how it works” (41:00).

Experts can also deliberately ‘break the rules’ to understand problem constraints or boundaries (43:50). Expert designers may also use design thinking practices to generate ideas and to reveal assumptions by applying seemingly odd or surprising activities. Doing something out of the ordinary and ‘using techniques to see differently’ may uncover new insights about problems or solutions. Designers are also able to step back and observe and reflect on the design process.

Organisational and cultural constraints can also play a role. The environment in which a software designer (a software engineer or architect) is employed can constrain them from working towards results and applying creative approaches. This cultural context can ‘suppress’ design and designers, especially if organisational imperatives are not aligned with development and design practices.

Marian Petre, an emeritus professor of computing at the OU, referred to a paper by David Walker which describes a ‘soup, bowl, and table’ metaphor. A concise description is shared in the abstract of the article: “The soup is the mix of factors that stimulates creativity with respect to a particular project. The bowl is the management structure that nurtures creative outcomes in general. And the table is the context of leadership and vision that focuses creative energies towards innovative but realizable objectives.” You could also argue that soup gives designers energy.

The podcast also asked: what do designers need to do to become better designers? The answer was simple, namely, experts find the time to look at the code and designs of other systems. Engineers ‘want to understand what they do and make it better’. An essential and important point is that ‘experts and high performing teams are reflective’; they think about what they do, and what they have done.

Diagrams in Software Engineering

An interesting phrase that was shared in Petre and van der Hoek’s podcast was that developers ‘talked using whiteboards’. The sketching and sharing of diagrams is an essential practice within software engineering. In SE Radio 566: Ashley Peacock on Diagramming in Software Engineering different aspects of the use and creation of diagrams are explored. Diagrams are useful because of “the ease in which information is digestible” (1:00). Diagrams and sketches can be short-lived or long-lived. They can be used to document completed software systems, summarise software that is undergoing change, and be used to share ideas before they are translated into architecture and code.

TM354 Software Engineering makes extensive use of a graphical language called the Unified Modelling Language (UML). UML has 14 types of diagrams, of which 2 or 3 types are most frequently used. Class diagrams are used to share key abstractions (ideas within the problem domain and a design) and their dependencies (how the bits relate to each other). Sequence diagrams can be used to show the interactions between different layers of software. Activity diagrams can be used to describe the connections between software and the wider world. UML is important since it provides a shared diagramming language that can be used and understood by software engineers.

Reflections

One of the aspects that I really appreciated from the first podcast was that it emphasises the importance and significance of the design process. One of my first duties after becoming a permanent staff tutor at the OU was to help to support the delivery of some of the design modules. I remember there were three of them. There was U101 Design thinking: creativity for the 21st century, T217 (an earlier version of T240 Design for Impact), and T317 Innovation: Designing for Change. Even though I was familiar with a sister module from the School of Computing and Communications, TM356 Interaction design and the user experience, being exposed to the design modules opened my eyes to a breadth of different approaches that I had never heard of before and could have applicability within computing.

U101 introduced me to the importance of play. T217 (and the module that came before it, T211) introduced me to the contrasting ideas of divergent and convergent thinking. The idea of divergent thinking relates to the idea mentioned in the first podcast of thinking beyond the constraints. I was also introduced to the double-diamond design process (Design Council, PDF). Design processes are different in character to software development processes since they concern exploring different ways to solve problems as opposed to distilling solutions into architectures and code.

A really important point from the first podcast is that design can (and should) happen across the entire software development lifecycle. Defining (and designing) requirements at the start of a project is as much a creative process as the creation and specification of tests and test cases.

It is important and necessary to highlight the importance of reflection. Thinking about what we have, how well our software has been created, and what we need all helps to refine not just our engineered artefacts, but also our engineering processes. Another point that resonates is the role that organisational structures may play in helping to foster design. To create good designs, we rely on the support of others, but our creativity may be attenuated if ‘play’ is viewed as frivolous or without value.

Effective designers will be aware of different sets of principles, why they are important, and how they might be applied. This post opened by sharing a set of software design principles that were featured in the SWEBOK. As suggested, these principles are viewed as very code centric. There are, of course, other design principles that can be applied to user interfaces (and everyday things), such as those by Donald Norman. Reflecting on these two sets of principles, I can’t help but feel that there is quite a gap in the middle, and a need for software architecture design principles. Bass et al. (2021) is a useful reference, but there are other resources, including those by service providers, such as Amazon’s Well-Architected guidance. Engineers should always work towards understanding. Reflecting on what we don’t yet fully understand is as important as what we do understand.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice, 4th edn, Upper Saddle River, NJ, Addison Wesley.

SWEBOK v.4 (2024) Software Engineering Body of Knowledge SWEBOK. Available at: https://www.computer.org/education/bodies-of-knowledge/software-engineering

Walker, D. (1993), The Soup, the Bowl, and the Place at the Table. Design Management Journal (Former Series), 4: 10-22. https://doi.org/10.1111/j.1948-7169.1993.tb00368.x

Christopher Douce

Software Engineering Radio: Testing

Edited by Christopher Douce, Thursday 2 October 2025 at 13:28

The term ‘software testing’ can be associated with a very simple yet essential question: ‘does it do what it is supposed to do?’

There is, of course, a clear and obvious link to the topic of requirements, which express what software should do from the perspective of different stakeholders. A complexity lies in the fact that different stakeholders can have requirements that can sometimes conflict with each other.

Ideally it should be possible to trace software requirements all the way through to software code. The extent to which formal traceability is required, and the types of tests you need to carry out will depend on the character of the software that you are building. The tests that you need for a real-time healthcare monitor will be quite different to the tests you need for a consumer website.

Due to the differences in the scale, type and character of software, software testing is a large topic in software engineering. Chapter 5 of SWEBOK v4, the software engineering body of knowledge, highlights different levels of test: unit testing, integration testing, system testing, and acceptance testing. It also highlights different types of test: conformance, compliance, installation, alpha and beta, regression, prioritization, non-functional, security, privacy, API, configuration, and usability.

In the article, The Practical Test Pyramid, Ham Vocke describes a simple model: a test pyramid. At the bottom of the pyramid are unit tests that test code. These unit tests run quickly. At the top, there are user interface tests, which can take time to complete. In the middle there are service tests (which can also be known as component tests). Vocke’s article is pretty long, and quickly gets into a lot of technical detail.

What follows are some highlights from some Software Engineering Radio episodes that are about testing. A couple of these podcasts mention this test pyramid. Although testing is a broad subject, the podcasts that I’ve chosen emphasise unit testing.

The first podcast concerns the history of unit testing. The last podcast featured in this article offers some thoughts about where the practice of ‘testing’ may be heading. Before sharing some personal reflections, some other types of test are briefly mentioned.

The History of JUnit and the Future of Testing

Returning to the opening question, how do you know your software does what it is supposed to do? A simple answer is: you get your software to do things, and then check to see if it has done what you expect. It is this principle that underpins a testing framework called JUnit, which is used with software written using the Java programming language.

The episode SE Radio 167: The History of JUnit and the Future of Testing with Kent Beck begins with a short history of the JUnit framework (3:20). The simple idea of JUnit is that you are able to write tests as code; one bit of code tests another. All tests are run by a test framework which tells you which tests pass and which tests fail. An important reflection by Beck is that when you read a test, it should tell you a story. Beck goes on to say that someone reading a test should understand something important about the software code. Tests are also about communication; “if you have a test and it doesn’t help your understanding … it is probably a useless test”.
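The idea of tests as code, and of test names that tell a story, can be sketched in plain Java. The Basket class and the test names below are hypothetical; the checks are written as ordinary methods (rather than JUnit’s @Test and assertEquals) so the sketch runs without the framework, but the shape is the same: one bit of code exercises another, and each test name says something about the code’s behaviour.

```java
// A tiny class under test (hypothetical), plus tests-as-code in the JUnit style.
// In JUnit, each test method below would carry an @Test annotation and use
// assertEquals; plain boolean checks are used here to keep the sketch self-contained.
public class BasketTests {
    static class Basket {
        private int total = 0;
        void add(int pence) { total += pence; }
        int total() { return total; }
    }

    // Each test name tells a small story about the code's behaviour.
    public static boolean emptyBasketHasZeroTotal() {
        return new Basket().total() == 0;
    }

    public static boolean addingTwoItemsSumsTheirPrices() {
        Basket b = new Basket();
        b.add(120);
        b.add(80);
        return b.total() == 200;
    }

    public static void main(String[] args) {
        // A hand-rolled 'framework': run each test and report pass or fail.
        System.out.println("emptyBasketHasZeroTotal: " + (emptyBasketHasZeroTotal() ? "PASS" : "FAIL"));
        System.out.println("addingTwoItemsSumsTheirPrices: " + (addingTwoItemsSumsTheirPrices() ? "PASS" : "FAIL"));
    }
}
```

Reading the two test names aloud is, arguably, enough to understand something important about how a Basket behaves, which is Beck’s point about tests as communication.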

Beck is asked to explain the concept of Test Driven Development (TDD) (14:00). He describes it as “a crazy idea that when you want to code, you write a test that fails”. The test only passes when you write code that does what the test expects. The podcast discussion suggests that a product might contain thousands of tiny tests, with the implication that there might be as much testing code as production code; the code that implements features and solves problems.
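The TDD rhythm can be illustrated with a deliberately small example. The fizzBuzz function and its test are hypothetical; the point is the ordering: the expectation existed first and failed, and the production code was then written to make it pass.

```java
// A minimal sketch of the TDD rhythm: a test written first defines the expected
// behaviour, and the production code is written afterwards to make it pass.
public class TddSketch {
    // Production code: written only after the expectation below existed (and failed).
    public static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    // The 'test first': before fizzBuzz existed this could not pass (red);
    // the implementation above was then written to turn it green.
    public static boolean testFizzBuzz() {
        return fizzBuzz(3).equals("Fizz")
            && fizzBuzz(5).equals("Buzz")
            && fizzBuzz(15).equals("FizzBuzz")
            && fizzBuzz(7).equals("7");
    }

    public static void main(String[] args) {
        System.out.println("testFizzBuzz: " + (testFizzBuzz() ? "PASS" : "FAIL"));
    }
}
```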

When considering the future of testing (45:20) there was the suggestion that “tests will become as important to programming as the compiler”. This implies that tests give engineers useful feedback. This may be especially significant during periods of maintenance, when code begins to adapt and change. There was also the notion that engineers could “design for testability”, which gives unit tests more value.

Although the podcast presents a helpful summary of unit testing, there is an obvious question which needs asking, which is: what unit tests should engineers be creating? One school of thought is that engineers should create tests that cover as much of the software code as possible, also known as code coverage. Chapter 5 of SWEBOK shares a large number of useful test techniques that can help with the creation of tests (5-10).

Since errors can sometimes creep into conditional statements and loops, a well known technique is boundary-value analysis. Put more simply, given a problem, such as choosing the number of an item from a menu, does the software do what it is supposed to do if the highest number is selected (say, 50)? Also, does it continue to work if the number just before that boundary is selected (say, 49)?
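The menu example above can be sketched as follows. The validation rule and the upper bound of 50 are assumptions taken from the example, not from any particular system; the tests probe either side of each boundary, which is where off-by-one errors tend to hide.

```java
// Boundary-value analysis sketch: a hypothetical menu-selection check with an
// upper bound of 50, probed at and around its boundaries.
public class MenuBoundary {
    static final int MAX_ITEM = 50; // hypothetical upper bound from the example

    // Returns true if the menu selection is accepted.
    public static boolean isValidSelection(int item) {
        return item >= 1 && item <= MAX_ITEM;
    }

    public static void main(String[] args) {
        // Probe just below, on, and just above each boundary.
        int[] probes = {0, 1, 2, 49, 50, 51};
        for (int p : probes) {
            System.out.println(p + " -> " + isValidSelection(p));
        }
    }
}
```

A typical fault this catches is writing `item < MAX_ITEM` instead of `item <= MAX_ITEM`: every probe would still pass except the one at exactly 50.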

Working Effectively with Unit Tests

Another podcast on unit testing is SE Radio 256: Jay Fields on Working Effectively with Unit Tests. Between 30:00 and 33:00, there is an interesting discussion that highlights some of the terms that feature within Vocke’s article. A test that doesn’t cross any boundaries and focuses on a single class could be termed a ‘solitary unit test’. This can be contrasted with a ‘sociable unit test’, where the class under test works together with its real collaborators; the behaviour of one class may influence another. Other terms are introduced, such as stubs and mocks, which are again mentioned by Vocke.
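A solitary test with a stub might look something like the following sketch. All the names (PriceService, RateSource) are hypothetical; the idea is that the class under test depends on a collaborator that would normally cross a boundary, and the test replaces it with a hand-written stub that returns a canned value.

```java
// Sketch of a 'solitary' unit test: PriceService depends on a rate source that,
// in a real system, might call a database or web service. The test substitutes
// a stub so no boundary is crossed and the test stays fast and isolated.
public class SolitaryTestSketch {
    interface RateSource {
        double vatRate(); // real implementations might fetch this remotely
    }

    static class PriceService {
        private final RateSource rates;
        PriceService(RateSource rates) { this.rates = rates; }
        double gross(double net) { return net * (1 + rates.vatRate()); }
    }

    // The stub returns a canned 20% rate, so the test exercises only PriceService.
    public static double grossWithStubbedRate(double net) {
        RateSource stub = () -> 0.20;
        return new PriceService(stub).gross(net);
    }

    public static void main(String[] args) {
        System.out.println("gross(100) with stubbed rate = " + grossWithStubbedRate(100.0));
    }
}
```

A mock goes one step further than this stub: as well as returning canned values, it records and verifies how it was called.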

Automated Testing with Generative AI

To deliberately mix a metaphor, a glimpse of the (potential) future can be heard within SE Radio 633: Itamar Friedman on Automated Testing with Generative AI. The big (and simple) idea is to have an AI helper look at your software, and to ask it to generate test cases for you. A tool called CoverAgent was mentioned, along with an article entitled Automated Unit Test Improvement using Large Language Models at Meta (2024). A key point is: you still need a software engineer to sense check what is created. AI tools will not solve your problems, since these automated code centric tools know nothing of your software requirements and your software engineering priorities.

Since we are beginning to consider artificial intelligence, this leads onto another obvious question, which is: how do we go about testing AI? Also, how do we make sure AI tools do not embody or perpetuate biases or security risks, especially if they are used to help solve software engineering problems?

Different types of testing

The SWEBOK states that “software testing is usually performed at different levels throughout development and maintenance” (p.5-6). The key levels are: unit, integration, system and acceptance.

Unit testing is carried out on individual “subprograms or components” by “typically, but not always, the person who wrote the code” (p.5-6). Integration testing “verifies the interaction among” the components of the system under test. This is testing where different parts of the system are brought together, and may need different test objectives to be completed. System testing goes even wider and “is usually considered appropriate for assessing non-functional system requirements, such as security, privacy, speed, accuracy, and reliability” (p.5-7). Acceptance testing is all about whether the software is accepted by key stakeholders, and relates back to key requirements. In other words, “it is run by or with the end-users to perform those functions and tasks for which the software was built”.

To complete a ‘test level’ a number of test objectives may need to be satisfied or completed. The SWEBOK presents 12 of these. I will have a quick look at two of them: regression tests, and usability testing.

Regression testing is defined as “selective retesting of a SUT to verify that modifications have not caused unintended effects and that the SUT still complies with its specified requirements” (5-8). SUT is, of course, an abbreviation for ‘system under test’. Put another way, a regression test checks to make sure that any change you have made hasn’t broken anything. One of the benefits of unit testing frameworks such as JUnit is that it is possible to quickly and easily run a series of unit tests to carry out a regression test.

Usability testing is defined as “testing the software functions that support user tasks, the documentation that aids users, and the system’s ability to recover from user errors” (5-10), and sits at the top of the test pyramid. User testing should involve real users. In addition to user testing there are, of course, automated tools that help software engineers to make sure that a product deployment works with different browsers and devices.

Reflections

When I worked as a software engineer, I used JUnit to solve a very particular problem. I needed to create a data structure that is known as a circular queue. I wouldn’t need to write it in the same way these days since Java now has more useful libraries. At the time, I needed to make sure that my queue code did what I expected it to. To give me confidence in the code I had created, I wrote a bunch of tests. I enjoyed seeing the tests pass whenever I recompiled my code.

I liked JUnit. I specifically liked the declarative nature of the tests that I created. My code did something, but my tests described what my code did. Creating a test was a bit like writing a specification. I remember applying a variety of techniques. I used boundary-value analysis to look at the status of my queue when it was in different states: when it was nearly full, and when it was full.
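A sketch of the kind of fixed-capacity circular queue described above, with checks at its boundary states (nearly full, and full), might look something like this. The details are illustrative rather than a reconstruction of my original code.

```java
// A fixed-capacity circular (ring-buffer) queue of ints, with boundary-state
// checks in main. Illustrative only; not the original module code.
public class CircularQueue {
    private final int[] items;
    private int head = 0, size = 0;

    public CircularQueue(int capacity) { items = new int[capacity]; }

    public boolean offer(int value) {
        if (size == items.length) return false;        // full: reject the item
        items[(head + size) % items.length] = value;   // wrap around the array
        size++;
        return true;
    }

    public int poll() {
        if (size == 0) throw new IllegalStateException("empty");
        int value = items[head];
        head = (head + 1) % items.length;
        size--;
        return value;
    }

    public int size() { return size; }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.offer(1); q.offer(2);                                    // nearly full
        System.out.println("offer at nearly full: " + q.offer(3)); // true: now full
        System.out.println("offer at full: " + q.offer(4));        // false: rejected
        System.out.println("poll: " + q.poll());                   // 1, FIFO order
    }
}
```

The boundary-value tests fall out naturally: offering at capacity minus one must succeed, offering at capacity must be rejected, and polling must still return items in arrival order after the indices have wrapped.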

Reflecting on Beck’s point, I appreciated that my tests also told a story. I also appreciated that these tests might not only be for me, but might be useful for other developers who might have the misfortune of working with my code in the future.

The other aspect of unit testing that I liked was that it proactively added friction to the code. If I started to maintain it, pulling apart functions and classes, the tests would begin to break. The tests became statements of ‘what should be’. I didn’t view tests in terms of their code coverage (to make sure that every single bit of software was evaluated) but as simple practical tools that gave alternative expressions of the purpose of my software. In turn, they helped me to move forward.

It is interesting and useful to reflect on the differences between the test pyramid and the SWEBOK test levels. In some respects, the UI testing of the pyramid can be aligned with the acceptance testing of the SWEBOK. I do consider the notions of integration and system testing to be helpful.

An important point that I haven’t discussed is the question of when a software engineer should carry out testing. A simple answer is, of course, as soon as practically possible. The longer it takes to identify an issue, the more significant the impact and the greater the economic cost. The ideal of early testing (or early problem detection) is reflected in the term ‘shift-left’ testing, which essentially means ‘try to carry out testing towards the left-hand side of your project plan’. Put even more simply: the earlier the better.

Returning to the overriding aim of software testing, testing isn’t just about figuring out whether your software does what it is supposed to do. It is also about managing risk. If there are significant societal, environmental, institutional and individual impacts if software doesn’t work, you or your organisation needs to do whatever it can to ensure that everything is as correct and as effective as possible. Another point is that sometimes the weak spot isn’t the code, but the spaces where people and technology intersect. Testing is socio-technical.

To conclude, it is worth asking a final question: where is software testing heading? Some of these podcasts suggest some pointers. In the recent past, we have seen the emergence of automation and the engineering of software development pipelines to facilitate continual deployment or delivery of software. I do expect that artificial intelligence, in one form or another, will influence testing practice, but AI tools can’t know everything about our requirements. There will be testing using artificial intelligence and testing of artificial intelligence. As software reaches into so many different areas of society, there will also be testing for sustainability.

Resources

JUnit is one of many bits of technology that can help to automate software testing. Two other tools I have heard of are Cucumber, which implements Gherkin, a formal but human-readable language that is used to describe test cases, and Selenium, which is “a suite of tools for automating web browsers”.

Since software testing is such an important specialism within software engineering, there are a series of industrial certifications that have been created by the International Software Testing Qualifications Board (ISTQB). As well as offering foundation level certifications, there are also certifications for specialisms such as agile, security and usability. Many of the topics mentioned in the certifications are also mentioned in Chapter 5 of SWEBOK v4.

I was alerted to a site called the Ministry of Testing which shares details of UK conferences and events about testing and software quality.

One of the points that I picked up from the podcasts was that, when working at the forefront of an engineering subject, a lot of sharing takes place through blogs. A name that was mentioned was Dan North, who has written two articles that resonate: We need to talk about testing (or how programmers and testers can work together for a happy and fulfilling life), and Introducing BDD (BDD being an abbreviation for Behaviour Driven Development).

Acknowledgements

Many thanks to Josh King, a fellow TM354 tutor, who was kind enough to share some useful resources about testing.

Christopher Douce

Software Engineering Radio: Technical Debt

Edited by Christopher Douce, Wednesday 1 October 2025 at 09:42

Imagine you’re taking a trip to catch up with one of your best friends. You also need to get a few things from the shop; let’s say, a couple of pints of milk. You have a choice. You could head directly to your friend’s house and be on time, and do the shopping later. Or alternatively, you could change your route, visit the shop, and arrive at your friend’s place a little bit later than agreed. This really simple dilemma encapsulates what technical debt is all about.

When it comes to software, software engineers may prioritise some development activities over others due to the need to either ship a product, or to get a service working. This prioritisation may have implications for the software engineers who have to take over the work at a later date.

As suggested earlier, software can have quality attributes: it can be efficient, it can be secure, or it can be maintainable. In some cases, a software engineer might prioritise getting something working over its maintainability or comprehensibility. This means that the software that is created might be more ‘brittle’, or harder to change later on. The ‘debt’ bit of technical debt means, for example, that it will be harder to migrate the software from one operating environment to another in the future. You might avoid ‘paying’ or investing time now to get something working earlier, but you may well need to ‘pay down’ the technical debt in the future when you need to migrate your software to a new environment.

On Managing Technical Debt

In SE Radio 481: Ipek Ozkaya on Managing Technical Debt, Ozkaya is asked a simple question: why should we care about technical debt? The answer is also quite simple: it gives us a term to help us to think about trade-offs. For example, “we’re referring to the shortcuts that software development teams make. … with the knowledge they will have to change it; rework it”. Technical debt is a term that is firmly related to the topic of software maintenance.

Another question is: why does it need to be managed (5:10)? A reflection is that “every system has technical debt. …  If you don’t manage it, it accumulates”. When the consequences of design decisions begin to be observed or become apparent, then it becomes technical debt, which needs to be understood, and the consequences need to be managed. In other words, carrying out software maintenance will mean ‘doing what should have been done earlier’ or ‘adapting the software so it more effectively solves the problems that it is intended to solve’. My understanding is that debt can also emerge, since the social and organisational contexts in which software exists naturally shift and change.

Interestingly, Ozkaya outlines nine principles of technical debt. The first one is: ‘Technical debt reifies an abstract concept’. This principle speaks to the whole concept. Reification is the ‘making physical’ of an abstract concept. Ultimately, it is a tool that helps us to understand the work that needs to be carried out. A note I made during the podcast was that there is a ‘difference between symptoms and defects’. Expressed in another way, your software might work okay, but it might not work as effectively or as efficiently (or be as maintainable) as you would like, due to technical debt.

It is worth listening to Ozkaya’s summary of the principles, which are also described in Kruchten et al. (2019). Out of all the principles, principle 5, ‘architecture technical debt has the highest cost of ownership’, strikes me as being very significant. This relates to the subject of architectural choices and architectural design.

Kruchten et al. suggest that “technical debt assessment and management are not one-time activities. They are strategic software management approaches that you should incorporate as ongoing activities” (2019). I see technical debt as a useful conceptual tool that can help engineers to make decisions about what work needs to be done, and to communicate to others about that work, and why it is important.

Reflections

I was introduced to the concept of technical debt a few years ago, and the concept instinctively made a lot of sense. Thinking back to my time as a software engineer, I was often faced with dilemmas and trade-offs. Did I spend time ‘right now’ to change how I gathered real-time data from a hardware device, or did I live with it and just ship the product?

The Kruchten text introduces the notion of ‘bankruptcy’. Just as external events can cause business bankruptcy, external events can bankrupt software. An example would be a software vendor ending support for a whole product line, creating the need to rewrite a software product using different languages and tools.

When looking through Software Engineering Radio I noticed that an earlier podcast, SE Radio 224: Sven Johann and Eberhard Wolff on Technical Debt, covers the same topic. Interestingly, they reference a blog post by Fowler, Technical Debt Quadrant, in which Fowler suggests a simple tool that can be used to think about technical debt, based on four quadrants. Decisions about technical debt should be ‘prudent and deliberate’.

Returning to the opening dilemma: do I go straight to my friend’s house, or do I go and get some milk on the way and end up being late? It depends on who the friend is and why I am meeting them. It depends on the consequences. If I’m going round there for a cup of tea, I’ll probably get the milk.

Resources

Fowler, M. (2009) Technical Debt Quadrant. Available at: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

Kruchten, P., Nord, R. and Ozkaya, I. (2019) Managing Technical Debt: Reducing Friction in Software Development. 1st edition. Addison-Wesley Professional.  Available at: https://library-search.open.ac.uk/permalink/44OPN_INST/la9sg5/alma9952700169402316

Christopher Douce

Software Engineering Radio: Software architecture


Software architecture is quite a big topic. I would argue that it ranges from software design patterns all the way up to the design and configuration of cloud infrastructures, and the development of software development and deployment pipelines.

Software architecture features in a number of useful Software Engineering Radio podcasts. What follows is a brief summary of two of them. I then share an article by a fellow TM354 tutor and practising software architect who shares his thoughts from 25 years of professional experience.

An important point is that there are, of course, links between requirements, non-functional requirements and architectural design. Architectures help us to ‘get stuff done’. There are, of course, implicit links and connections to other posts in this series, such as the one about Infrastructure as Code (IaC).

On the Role of the Software Architect

In SE Radio 616: Ori Saporta on the Role of the Software Architect Saporta suggests that design doesn’t just happen at the start of the software lifecycle. Since software is always subject to change, a software architect has a role across the entire software development lifecycle. Notably, an architect should be interested in the ‘connections between components, systems and people’. ‘You should go from 30,000ft to ground level’ (13:00), moving between the ‘what’ (the problem that needs to be solved) and the ‘how’ (the ways problems can be solved).

Soft skills are considered to be really important. Saporta was asked how engineers might go about ‘shoring up’ their soft skills. A notable quote was: “it takes confidence and self-assurance to listen”. Some specific soft skills were mentioned (29:20). As well as listening, there is a need for empathy and the ability to bridge, translate or mediate between technical and non-technical domains. Turning to the idea of quality, which was addressed in an earlier blog, quality can be understood both as a characteristic and as part of a process (which reflects how the term was broken down earlier by the SWEBOK).

Being a software architect means “being a facilitator for change, and being open for change”; in other words, helping people across the bridge. An interesting point was that “you should actively seek change”, to see how the software design could improve. An important reflection is that a good design accommodates change. In software, ‘the wind [of change] will come’, since the world is always moving around it.

Architecture and Organizational Design

The second podcast I would like to highlight is SE Radio 331: Kevin Goldsmith on Architecture and Organizational Design. Goldsmith is immediately asked about Conway’s Law (Wikipedia), which was summarised as “[o]rganizations … which design systems … produce designs which are copies of the communication structures of these organizations”. Put more simply, the structure of your software architecture is likely to reflect the structure of your organisation.

If there is an existing organisation where different teams do different things “you tend to think of microservices”; a microservice being a small defined service which is supported by a surrounding infrastructure.

If a new software start-up is created by a small group of engineers, a monolith application may well be created. When an organisation grows and more engineers are recruited, existing teams may split, which might lead to decisions about how to break up a monolith. This process of identifying and breaking apart services, and relating them to functionality, can be thought of as a form of refactoring (a fancy term for making structured changes to software code). This leads to interesting decisions: should the organisation run its own services, or should it use public cloud services? The answer, of course, relates back to requirements.

An interesting question was ‘which comes first: the organisational structure or the software structure?’ (13:05). Organisations could embrace Conway’s law, or they could perform a ‘reverse Conway manoeuvre’, which means that engineering teams are created to support a chosen software architecture.

A really interesting point is that communication pathways within organisations can play a role; organisations can have their tribes and networks (49:30). It is also important to understand how work moves through an organisation (54:50). This is where, in my understanding, the role of the business analyst and software architect can converge.

Towards the end of Goldsmith’s podcast, there was a fascinating reflection about how Conway’s law relates to our brain (57:00). Apparently, there’s something called the Entorhinal cortex “whose functions include being a widespread network hub for memory, navigation, and the perception of time” (Wikipedia). As well as being used for physical navigation, it can also be used to navigate social structures. In other words, ‘your brain fights you when you try to subvert Conway’s law’.

Reflections

In my eyes, the key point in Saporta’s podcast was the metaphor of the bridge, which can be understood in different ways. There could be a bridge from the technical to the non-technical, or a bridge from the detail of code and services to the 30,000ft view of everything.

Goldsmith’s podcast offers a nice contrast. I appreciated the discussion about the difference between monoliths and microservices (which is something that is discussed in the current version of TM354). An important point is that when organisations flex and change, microservices can help to move functionality away from a monolith (or other microservices). A microservice can be deployed in different ways, and realised using different technologies.

I found the discussion about the entorhinal cortex fascinating. Towards the end of my doctoral studies, I created a new generation of software metrics inspired by the understanding that software developers need to navigate their way across software code bases. It had never occurred to me that the same neural systems may be helping us to navigate our connections with others.

On a different (and final) note, I would like to highlight an article called Software Architecture Insights by Andrew Leigh. Andrew is a former doctoral student from the OU School of Computing and Communications, a current software engineering tutor, and a practising software engineer. He shares four findings which are worth a quick look, and suggests some further reading.

References

Leigh, A. (2024) Software Architecture Insights, ITNow, 66(3), pp. 60–61. Available at: https://doi.org/10.1093/itnow/bwae102.

Christopher Douce

Software Engineering Radio: Security and secure coding

Visible to anyone in the world
Edited by Christopher Douce, Sunday 5 October 2025 at 10:10

Digital security is an important specialism in computing. The OU offers a BSc (Hons) in Cyber Security which features TM359 Systems penetration testing. Security is clearly important within software engineering. The extent to which security is required should be made explicit within non-functional requirements. Any software product should be created and deployed with security in mind.

There are a number of podcasts in Software Engineering Radio that address security from different perspectives, such as SE Radio 640: Jonathan Horvath on Physical Security and SE Radio 467: Kim Carter on Dynamic Application Security Testing.

One of the podcasts that caught my attention was about secure coding.

Secure coding

SE Radio 658: Tanya Janca on Secure Coding discusses secure coding from a number of perspectives: code, tools and processes to help create robust software systems. She begins at 1:50 (until 2:11) by introducing the principle of least privilege. This led to a discussion of user security and the significance of trust. The CIA triad (Confidentiality, Integrity and Availability) is introduced between 10:00 and 11:45.

A notable section of this podcast is the discussion about secure coding guidelines, between 27:00 and 32:12. Some of the principles shared included the need to validate and sanitise all input, to sense-check data, and to always use parameterised [database] queries, since how and where you write your database queries is important. The software development lifecycle was mentioned between 41:32 and 50:18, which led to a discussion about different types of testing tools and approaches (static and dynamic testing, pen testing and QA testing).
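To make the point about parameterised queries concrete, here is a small sketch in Python, using an in-memory SQLite database. This is my own invented example (the table and data are made up), not something from the podcast:

```python
import sqlite3

# Illustrative only: an in-memory database with an invented 'users' table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic SQL injection payload

# Unsafe: building the query through string concatenation lets the
# payload rewrite the query, so every row in the table is returned.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterised query treats the input purely as data,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # both rows leak: [('alice',), ('bob',)]
print(safe)    # the injection fails: []
```

The point is not the specific database, but the habit: user input is data, and should never become query text.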

A really notable quote I noted down was the reflection that “software ages very poorly”. There are simple reasons for this. Requirements can change. They change because of changes to the social and technical contexts in which software is used.

Reflections

The podcast scratches the surface of a much bigger topic. One thing that I have picked up from other podcasts is that it is possible to embed code checking within a CI/CD software deployment process. Having a quick look around, I’ve discovered an article by OWASP called OWASP DevSecOps Guideline which discusses ‘linting code’.

The concept of ‘lint’ and ‘linting’ deserves a little bit of explanation. The ‘lint’ in software engineering is, of course, a metaphor. Lint (Wikipedia) is bits of fluff or material that can accumulate on your jumper or trousers. You can get rid of lint using a lint roller, or Sellotape.

There used to be a program called ‘lint’ which ‘went over’ any source code that you had written. Although your source code might compile and run without any problems, this extra program would identify bits of code that might potentially be problematic. Think of these bits of code as pieces of white tissue paper sitting on your black trousers. Your ‘lint’ software (also called static analysis software) will highlight potential problems that you might want to have a look at.
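As a toy illustration (my own, and nothing like the original lint program), here is a miniature static check in Python. The code it inspects runs without error, but uses a mutable default argument, a well-known Python pitfall that real linters such as pylint will flag:

```python
import ast

# Source code to inspect: it runs fine, but the default value []
# is shared between calls, which often surprises people.
SOURCE = """
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket
"""

def find_mutable_defaults(source):
    """A tiny 'lint' pass: warn about mutable default arguments."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    warnings.append(
                        f"line {default.lineno}: mutable default in {node.name}()"
                    )
    return warnings

print(find_mutable_defaults(SOURCE))
```

The inspected code compiles and runs, yet the checker still finds something worth a look, which is exactly the job of static analysis.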

Continuing looking at OWASP, I was recently alerted to the OWASP Top Ten list which is described as “a standard awareness document for developers and web application security. It represents a broad consensus about the most critical security risks to web applications”. It presents a summary of common security issues that software engineers need to be aware of.

Each of these items is described in a lot of detail, going a lot further than my simplistic knowledge of secure coding. A personal reflection is: software engineers need to know how to read these summaries. This also means: I need to know how to read these summaries.

Python is going to be used in TM113. I’ve been made aware of Six Python security best practices for developers, which comes from an organisation called Black Duck (thoroughly in keeping with the yellow rubber duck theme of this new module).

A bit more searching took me to the National Cyber Security Centre (NCSC) and the 8 principles of the Secure development and deployment guidance (2018). This set of principles takes a broad perspective, ranging from individual responsibility and learning, through effective and maintainable code, to the creation of a software deployment pipeline.

A final reflection is that none of this discussion about security is new. Just as there are some classic papers on modular decomposition within software engineering, I’ve been made aware of a 1975 paper entitled The Protection of Information in Computer Systems. I haven’t seen this paper before, and I’ve not yet read it; it requires a whole lot of dedicated reading that I need to find time for.

The geek in me is quite excited at the references to old (and influential) operating systems of times gone by. The set of eight principles (a bit like the NCSC guidelines) contains one of the most important principles in security I know of, namely, the principle of “Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error”.

I have some reading to do.

The abstract of this paper mentions “architectural structures - whether hardware or software - that are necessary to support information protection”. This takes me directly on to the next blog, which is all about software architecture.

Acknowledgements

Thank you to Lee Campbell for sharing those additional resources.

Christopher Douce

Software Engineering Radio: Software quality

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 16:19

In one way or another, all the previous blogs which draw on Software Engineering Radio podcasts have been moving towards this short post about software quality. In TM354 Software Engineering, software quality is defined as “the extent to which the customer is satisfied with the software product delivered at the end of the development process”. It offers a further definition, which is the “conformance to explicitly stated requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software”. The implicit characteristics can relate to non-functional requirements, or characteristics such as maintainability and readability.

The Software Engineering Body of Knowledge (SWEBOK) emphasises the importance of stakeholders: “the primary goal for all engineered products is to deliver maximum stakeholder value while balancing the constraints of development, maintenance, and operational cost, sometimes characterized as fitness for use” (SWEBOK v4, 12-2).

The SWEBOK also breaks ‘software quality’ into a number of subtopics: fundamentals, management processes, assurance processes, and tools. Software quality fundamentals relates to software engineering culture and ethics, notions of value and cost, models and certifications, and software dependability and integrity levels.

Software quality

After doing a trawl of Software Engineering Radio, I’ve discovered the following podcast: SE Radio 637: Steve Smith on Software Quality. This podcast is understandably quite wide ranging. It can be related to earlier posts (and podcasts) about requirements, testing and process (such as CI/CD). There are also connections to the forthcoming podcasts about software architecture, where software can be built with different layers. The point about layers relates to an earlier point about the power and importance of abstraction (which means ‘dealing with complexity to make things simpler’). For students who are studying TM354, there is a bit of chat in this podcast about the McCabe complexity metric, and the connection between testing and code coverage.
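For the curious, the essence of the McCabe (cyclomatic complexity) metric can be sketched in a few lines of Python: start at one and add one for every decision point. This is a rough, home-made approximation; real measurement tools are more thorough:

```python
import ast

def cyclomatic_complexity(source):
    """Roughly approximate McCabe's metric: 1 + the number of decision points."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes two extra decision points
            complexity += len(node.values) - 1
    return complexity

# An invented example: one 'if', one boolean 'and', and one 'for' loop
SNIPPET = """
def grade(score):
    if score >= 40 and score <= 100:
        return 'pass'
    for _ in range(3):
        pass
    return 'fail'
"""

print(cyclomatic_complexity(SNIPPET))  # 1 + if + and + for = 4
```

The connection to testing follows directly: the higher the number, the more paths through the code there are, and the more test cases are needed to cover them.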

Towards the end of the podcast (45:20) the connection between organisational culture and quality is highlighted. There is also a link between quality and lean manufacturing approaches, which have then inspired some agile practices, such as Scrum.

Reflections

Software quality is such an important topic, but it is something that is quite hard to pin down without using a lot of words. Its ethereal quality may explain why there are not as many podcasts on this topic when compared to more tangible subjects, such as requirements. Perhaps unsurprisingly, the podcasts that I have found appear to emphasise code quality over the broader perspective of ‘software quality’.

This reflection has led to another thought, which is: software quality exists across layers. It must lie within your user interface design, within your architectural choices, within source code, within your database designs, and within your processes.

One of the texts that I really like that addresses software quality is by Len Bass et al. In part II of Software Architecture in Practice, Bass et al. identify a number of useful (and practical) software quality attributes: availability, deployability, energy efficiency, integrability, modifiability, performance, safety, security, testability, and usability. They later go on to share some practical tactics (decisions) that can be made to help address those attributes.

As an aside, I’ve discovered a podcast which features Bass, which is quite good fun and worth a listen: Stories of Computer Science Past and Present (2014) (Hanselminutes.com). Bass talks about booting up a mainframe, punched card dust, and the benefit of having two offices.

References

Bass, L., Clements, P. and Kazman, R. (2021) Software Architecture in Practice [Online], 4th edn, Upper Saddle River, NJ, Addison-Wesley.

Christopher Douce

Software Engineering Radio: Code reviews

Visible to anyone in the world
Edited by Christopher Douce, Tuesday 30 September 2025 at 10:44

Although software engineering is what you would call a technical discipline, the people element is really important. Software engineering processes are as much about people as they are about product. Some software engineering processes apply a practice known as code review.

Code reviews

In SE Radio 400: Michaela Greiler on Code Reviews a code review is described as ‘a process where a peer offers feedback about software code that has been created’ (1:11). Greiler goes on to say that whilst testing can check the functionality of software, code reviews can help to understand ‘other characteristics of software code’, such as its readability and maintainability (2:00). The notion of these other characteristics relates to the notion of software quality, which is the subject of the next blog. Significantly, code reviews can facilitate the sharing of knowledge between peers. Since software is invisible, it is helpful to talk about it. Valuable feedback is described as feedback that leads to change.

Greiler’s podcast suggests that reviews can be carried out in different ways (8:00). They can be formal, or informal. Code can be emailed, or code could be ‘pulled’ from source code repositories. Feedback could be shared by having a chat, or could be mediated through a tool. The exact extent and nature of a review will be dictated by the characteristics of the software. The notion of risk plays a role in shaping the processes.

One of the parts of this podcast I really liked was the bit that emphasised the importance of people skills in software engineering. Tips were shared on giving (and receiving) feedback (19:20). I noted that a reviewer should aim to add value, and the engineer whose code is being reviewed should try to be humble and open minded. A practical tip was to give engineers a ‘heads up’ about what is going to happen, since this gives them a bit of time to prepare, and to be explicit about the goals of a review. The podcast also highlighted a blog post by Greiler with the title: 10 Tips for Code Review feedback.

Towards the end, there was a comment that code reviews are not typically taught in universities (34:25). I certainly don’t remember being involved in one when I was an undergraduate. In retrospect, I do feel as if it would have been helpful.

A final question was ‘how do you get better at code reviews?’ The answer was simple: show what happens during a review, and practice doing them.

Reflections

When I was a software engineer, I spent quite a bit of time reading through existing code. Although I was able to eventually figure out how everything worked, and why lines of code and functions were written, what would have really been useful was the opportunity to have a chat with the original developer. This only happened once.

Although this conversation wasn’t a code review (I was re-writing the software that he had originally written in a different programming language), I did feel that the opportunity to speak with the original developer was invaluable. I enjoyed the conversation. I gained confidence in what I was doing and understanding, and my colleague liked the fact that I had picked up on a piece of work he had done some years earlier.

I did feel that one of the benefits of having that chat is that we were able to appreciate characteristics of the code that were not immediately visible or apparent, such as decisions made that related to the performance of the software. The next blog is about the sometimes slippery notion of software quality (as expressed through the conversations in the podcasts).

Christopher Douce

Software Engineering Radio: Infrastructure as Code (IaC)

Visible to anyone in the world

In the last post of this series, I shared a link to a podcast that described CI/CD. This can be broadly described as a ‘software engineering pipeline where changes are made to software in a structured and organised way which are then made available for use’. I should add that this is my own definition!

The abbreviation CI/CD is sometimes used with the term DevOps, which is a combination of two words: development and operations. In the olden days of software engineering, there used to be two teams: a development team, and an operations team. One team built the software; the other team rolled it out and supported its delivery. To all intents and purposes, this division is artificial, and also unhelpful. The idea of DevOps is to combine the two together.

Looking at all these terms more broadly, DevOps can be thought of as a philosophical ideal about how work is done and organised, whereas CI/CD relates to specific practices. Put more simply, CI/CD makes DevOps possible.

A broader question is: how do we make CI/CD possible? The answer lies in the ability to run processes and to tie bits of infrastructure together. By infrastructure, we might mean servers. With cloud computing, we have choices about what servers and services we use.

All this takes us to the next podcast.

Infrastructure as Code (IaC)

In SE Radio 482: Luke Hoban on Infrastructure as Code Hoban is asked a simple question: what is IaC and why does it matter (2:00)? The paraphrased answer is that IaC can describe “a desired state of the [software] environment”, where that environment is created using cloud infrastructure. An important point in the podcast is that “when you move to the cloud, there is additional complexity that you need to deal with”. This software environment (or infrastructure) can also be an entire software architecture which comprises different components that do different things (and help to satisfy your different requirements). IaC matters, since creating an infrastructure by hand introduces risk: engineers may forget to carry out certain steps. A checklist, in some senses, becomes code.

When the challenge becomes “how to compose the connections between … thousands of elements”, infrastructure becomes a software problem. Different approaches to solving this problem are mentioned. There is a declarative approach (you declare stuff in code), and an imperative approach (you specify a set of instructions in code). There are also user interfaces as well as textual languages. Taking a declarative approach, there are models that make use of formalisms (or notations) such as JSON or YAML. A scripting approach, through the use of application programming interfaces, may make use of familiar programming languages, such as Python, which allow you to apply existing software engineering practices. When you start to use code to describe your infrastructure, you can then begin to use software engineering principles and tools on that code, such as version control.
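The declarative idea can be illustrated with a toy reconciler in Python. This is entirely my own sketch (the resource names are invented, and real tools such as Terraform or Pulumi do vastly more): you describe the state you want, and the tool works out what needs to change:

```python
# Desired state: what we want the infrastructure to look like.
desired = {"web-server": "small", "database": "large"}

# Actual state: what (we imagine) is currently running in the cloud.
actual = {"web-server": "small", "old-cache": "small"}

def plan(desired, actual):
    """Work out the actions needed to move 'actual' towards 'desired'."""
    actions = []
    for name, size in desired.items():
        if name not in actual:
            actions.append(("create", name, size))
        elif actual[name] != size:
            actions.append(("resize", name, size))
    for name in actual:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

print(plan(desired, actual))  # [('create', 'database', 'large'), ('destroy', 'old-cache')]
```

Because the desired state is just text, it can be version controlled, reviewed, and replayed, which is exactly the point made in the podcast.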

Towards the end of the podcast, a question was asked about testing (52:30), which is a topic that will be discussed in a later blog. Engineers may create unit tests to check what elements have been created and to validate characteristics of a deployed infrastructure. Integration testing may be carried out using a ‘production like’ staging environment before everything is deployed.

Reflections

A computing lecturer once gave a short talk during a tutorial that changed how I looked at things. He said that one of the most fundamental principles in computing and computer science is the principle of abstraction. Put in other words, if a problem becomes too difficult to solve in one go, break the problem down into the essential parts that make up that problem, and work with those bits instead.

A colleague (who works in the school) once expressed the same idea in another way: “if you get into trouble, abstract up a level”. In the context of the discussion, “getting into trouble” means everything becoming too complicated to control. The phrase “abstract up a level” means breaking the problem down into bits and putting them into procedures or functions, which you can then start to manage more easily.
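As a small, invented Python illustration of ‘abstracting up a level’: the top-level function reads as a description of the problem, with the detail pushed down into smaller functions that can be managed (and tested) separately:

```python
def mean(values):
    """One small piece of the problem: the average."""
    return sum(values) / len(values)

def above_threshold(values, threshold):
    """Another small piece: the values worth flagging."""
    return [v for v in values if v > threshold]

def summarise(values, threshold):
    # The top level now describes *what* is done, not *how*.
    return {"mean": mean(values), "flagged": above_threshold(values, threshold)}

print(summarise([2, 4, 6, 8], threshold=5))  # {'mean': 5.0, 'flagged': [6, 8]}
```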

Infrastructure as Code is a really good example of “abstracting up” to solve a problem that started to become too complicated.

IaC has facilitated the development of a CI/CD pipeline. Interestingly, a CI/CD pipeline can also facilitate the development of IaC.

Christopher Douce

Software Engineering Radio: Waterfall versus Agile

Visible to anyone in the world

The first Software Engineering Radio podcast featured in this blog speaks to a fundamental question within software projects, which is: how much do we know? If we know everything, we can plan everything. If we don’t know everything, then we need to go a bit more carefully, and figure things out as we go. This doesn’t necessarily mean that we don’t have a plan. Instead, we must be prepared to adjust and change what we do.

Waterfall versus Agile

In SE Radio 401: Jeremy Miller on Waterfall Versus Agile two different approaches are discussed; one is systematic and structured, whereas the other is sometimes viewed as being a bit ‘looser’. In this podcast, I bookmarked a couple of small clips. The first is between 16:20 and 19:00, where there is a question about when the idea of agile was first encountered. This then led to a discussion about eXtreme Programming (XP) and Scrum. The second fragment runs between 45:40 and 47:21, which returns to the point about people. This fragment highlights conflicts within teams, the significance of compromise and the importance of considering alternative perspectives. This not only emphasises the importance of people in the processes, but also the importance of people skills within software engineering practices.

Following on from this discussion, I recommend SE Radio 60: Roman Pichler on Scrum. Roman is asked ‘what is Scrum and where does it come from?’ An important inspiration was considered to be ‘lean thinking’ and an article called ‘the new product development game’. Scrum was later described as ‘an agile framework for developing software systems’ (47:50) which focuses on project and requirements management practices. Scrum can be thought of as a wrapper within which other software development practices can be used (such as eXtreme Programming, and continual integration and deployment).

It is worth highlighting some key Scrum principles and ideas, which are discussed from 2:50 onwards. An important principle is the use of a small, autonomous, multidisciplinary, self-organising team (21:10) of 7 (plus or minus 2) people, comprising developers, a product owner and a Scrum master. The Scrum master (24:15) is responsible for the ‘health’ of the team and for removing barriers to progress. The team is empowered to make its own decisions about how it works during each development increment, which is called a sprint. A sprint (7:20) is a mini project that has a goal, where something is built that ‘has value’ to the customer (such as an important requirement, or group of requirements), and is also ‘potentially shippable’.

Decisions about what is built during sprints are facilitated through something called a product backlog (28:50), a requirements management tool where requirements are prioritised. How requirements are represented depends on the project; user stories were mentioned as ‘fine grained’ requirements. In Scrum, meetings are important. There is a daily Scrum meeting (13:10), sprint reviews, and a retrospective (43:35). The retrospective is described as an important meeting in Scrum, which takes place at the end of each sprint to help the team reflect on what went well and what didn’t.

Reflections

When I was an undergraduate, we were all taught a methodology that went by natty abbreviation of SSADM. I later found out that SSADM found its way into a method called Prince, which is an approach used for the management of large projects. (Prince is featured in the OU’s postgraduate project management module).

I was working in industry when Beck’s book about XP came out. When I worked as a software engineer, I could say that we combined a somewhat ‘agile’ approach with a more traditional project management methodology. We used techniques from XP, such as pair programming, and continually kept a Gantt chart up to date.

At the time, none of us knew about Scrum. Our Gantt chart was Scrum’s burndown chart. We didn’t have a product backlog, but we did have an early form of ‘ticket system’ to keep track of what features we needed to add, and what bugs needed to be fixed.

One of the things that we did have was version control. Creating a production version of our software products was quite a labour-intensive process. We had to write release notes, which had to be given a number and a date, and saved in a directory. We built new installation routines, and manually copied them to a CD printing machine, which was asking for trouble. What we needed was something called CI/CD, which is the topic of the next post.

Christopher Douce

Software Engineering Radio: Software engineering processes

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 17:41

Software engineering is about the creation of large software systems, products or solutions in a systematic way with a team of people.

Since software can serve very different needs and has necessarily different requirements, there are a variety of ways that software can be built. These differences relate to the need to take account of different levels of risk. You would use different processes to create a video game, than you would for an engine management system. Software engineering processes are also about making the ‘invisible stuff’ of software visible to software engineers and other stakeholders.

One of the abbreviations that is sometimes used is SDLC, an abbreviation for Software Development Lifecycle. Software has a lifecycle which begins with requirements and ends with maintenance. Although software never wears out, it does age, since the context in which it sits changes. Processes can be applied to manage the stages of the software lifecycle, and the transitions between them.

Different terms are used to refer to the development of software systems. There can be greenfield systems, brownfield systems, or legacy systems. Legacy systems can be thought of as ‘old systems that continue to do useful stuff’. Legacy systems are also brownfield systems, which software engineers maintain to make sure they continue to work. Greenfield systems are completely new software products. In the spirit of honesty, more often than not, software engineers will typically be working on brownfield and legacy systems rather than greenfield systems; systems that are towards the end of the software lifecycle rather than at the start.

The blogs that follow highlight different elements of the software development process. The series begins with a discussion about the differences between waterfall and agile. It then goes on to say something about a technique known as Continual Integration/Continual Deployment (CI/CD), which has emerged through the development of cloud computing. CI/CD has been made possible through the development of something called ‘infrastructure as code’, which is worth spending a moment looking at (or listening to). Before we move on to the important subject of software quality, I share a link to a podcast that discusses a process that aims to enhance quality: code reviews.

Christopher Douce

Listening to Software Engineering Radio

Visible to anyone in the world
Edited by Christopher Douce, Monday 29 September 2025 at 15:39

From time to time, I dip into (and out of) a podcast series called Software Engineering Radio, which is produced by the IEEE. It’s a really useful resource, and one that I’ve previously mentioned in the blog post Software engineering podcasts.

This is the first of a series of blog posts that shares some notes I’ve made from a number of episodes that I have found especially interesting and useful. In some ways, these posts can be thought of as a mini course on software engineering, curated using the voices of respected experts.

Towards the end of each blog, I share some informal thoughts and reflections. I also share some links to both earlier posts and other relevant resources.

I hope this series is useful for students who are studying TM354 Software Engineering, or any other OU module that touches on the topic of software engineering or software development.

Software engineering as a discipline

When doing some background reading (or listening) for TM113, I found my way to SE Radio 149: Difference between Software Engineering and Computer Science with Chuck Connell.

In this episode, there are a couple of sections that I bookmarked. The first is 10:20 through to 12:20, where there is a discussion about differences between the two subjects. Another section runs between 24:10 and 25:25, where there is an interesting question: is software engineering a science, an art, or a craft? The speaker in the podcast shares an opinion which is worth taking a moment to listen to.

According to the software engineering body of knowledge (SWEBOK), engineering is defined as “the application of a systematic, disciplined, quantifiable approach to structures, machines, products, systems or processes” (SWEBOK v4.0, 18-1). Put in my own words, engineering is all about building things that solve a problem, in a systematic and repeatable way that enables you to evaluate the success of your actions and the success of what you have created.

An early point in chapter 18 of SWEBOK is the need to “understand the real problem”, which is expanded with the claim that “engineering begins when a need is recognized and no existing solution meets that need” (18-1). Software, it is argued, solves real world problems. This takes us to a related question, which is: how do we define what we are building? This takes us to the next post, which is all about requirements.

Before having a look at requirements, it is useful to break down ‘software engineering’ a little further. The SWEBOK is divided into chapters. The chapters that begin with the word ‘software’ are: requirements, architecture, design, construction, testing, maintenance, configuration management, engineering management, engineering process, models and methods, quality, security, professional practice, economics.

There are three others which speak to some of the foundations (and have the word ‘foundations’ in their title). They are: computing, mathematical, and engineering.

Reflections

Software is invisible. It is something that barely exists.

The only way to get a grasp on this ‘imaginary mind stuff’ of software is to measure it in some way. The following passage from the SWEBOK is helpful: “Knowing what to measure, how to measure it, what can be done with measurements and even why to measure is critical in engineering endeavors. Everyone involved in an engineering project must understand the measurement methods, the measurement results and how those results can and should be used.” (SWEBOK v4, 18-10). The second sentence is particularly interesting since it links to the other really important element in software engineering: people. Specifically, everyone must be able to understand the same thing.

The following bit from the SWEBOK is also helpful: “Measurements can be physical, environmental, economic, operational or another sort of measurement that is meaningful to the project”.

Next up is software requirements. By writing down our requirements, we can begin to count them. In turn, we can begin to understand and to control what we are building, or working with.
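To make the point about counting concrete, here is a small made-up example. The requirement list and its “[F]”/“[NF]” labels are an invented convention for this post, not any standard notation, but they show how requirements, once written down, become something we can count and classify:

```python
# A toy example of how written-down requirements become measurable.
# The requirements and the "[F]" (functional) / "[NF]" (non-functional)
# labels are an invented convention for illustration.

requirements = [
    "[F] The system shall allow a user to log in with an email address.",
    "[F] The system shall let a user reset a forgotten password.",
    "[NF] Pages shall load in under two seconds on a broadband connection.",
]

functional = [r for r in requirements if r.startswith("[F]")]
non_functional = [r for r in requirements if r.startswith("[NF]")]

print(f"Total requirements: {len(requirements)}")
print(f"Functional: {len(functional)}, non-functional: {len(non_functional)}")
```

Even a count this simple gives a project something to track: how many requirements exist, how many have been implemented, and how many have been tested.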

Christopher Douce

TM470 Considering software requirements

Visible to anyone in the world
Edited by Christopher Douce, Thursday 11 April 2024 at 09:30

If your TM470 project is all about developing software to solve a problem, requirements are really important. Requirements are all about specifying what needs to be built and what software needs to do. A good set of requirements will also enable you to decide whether or not your software development has been successful. They can help you to answer the question: “does it do what we expect it to do?” There is a direct link between requirements and testing.
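The link between requirements and testing can be made concrete with a small hypothetical example: a functional requirement restated directly as automated checks. The requirement, the constant, and the function below are all invented for illustration:

```python
# Hypothetical requirement: "The system shall reject passwords
# shorter than 8 characters." (The requirement and this function
# are invented for illustration.)

MIN_PASSWORD_LENGTH = 8

def password_is_acceptable(password: str) -> bool:
    """Return True if the password meets the minimum length requirement."""
    return len(password) >= MIN_PASSWORD_LENGTH

# Each check traces directly back to the written requirement,
# answering the question: "does it do what we expect it to do?"
assert not password_is_acceptable("short")
assert password_is_acceptable("longenough")
print("All requirement checks passed.")
```

Because each check maps back to a written requirement, a failing test points straight at the requirement that is not being met, which is exactly the traceability that makes requirements useful when judging whether a project has succeeded.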

The exact nature of your requirements will depend on the nature of your project. There are different types of requirements. Two high level types of requirements are: functional requirements and non-functional requirements. Modules such as TM354 Software Engineering provide some further information about the different types and categories, and different aspects you might want to consider. 

One thing that you need to decide on is: how do you write down your requirements? The decisions that you take will, of course, relate to what your project is all about. Some projects will need formal approaches, perhaps using Volere shells, whereas other projects may use something like use case diagrams. If your project is interaction design heavy, your requirements may be embodied in artefacts such as sketches, prototypes, scenarios and personas. To learn more about these different approaches, you need to refer back to the module materials for some of the modules you have studied. You should also consider having a look in the OU library to see what you can find.

There is also, of course, a link between your chosen project model and your choice of requirements. Stakeholders are also of fundamental importance: you need to know who to speak with to uncover what your requirements are. You need to make a decision about how to record your requirements, and justify why you have adopted a particular approach. Different people will, of course, understand requirements in different ways. How you speak to fellow software engineers will be different to how you speak to end users.

I recently listened to a really interesting podcast about requirements engineering from something called Software Engineering Radio, which is associated with the IEEE Software magazine. Here's a link to the podcast: Software Requirements Essentials: SE Radio 604 Karl Wiegers and Candase Hokanson.

Although this is just over an hour (and I know everyone is busy), it is worth a listen.

Some key themes and topics addressed in this podcast include:

  • What do requirements mean?
  • What is requirements elicitation?
  • How can requirements be presented? Or, what does a requirements specification look like?
  • Do users know what they need?
  • How much requirements analysis is needed?

The podcast concludes with a question which begins: what tips would you share for someone who is involved with an ongoing project? (The answer to this question is very pragmatic.)

Reflections

An interesting reflection (and comment that emerged from this podcast) is that the requirements approach that you adopt relates to the risks that are inherent within your project, and the implications of any potential software failures. This, in turn, is linked to the LSEP issues which are starting to be explored within your TM470 TMA 2.

When you are addressing requirements, you can highlight different requirements gathering approaches in your literature review. Do use module materials that you have previously studied as a jumping-off point to do some further reading about the subject by looking at resources you can find in the OU library, but do be mindful about getting sucked into various ‘rabbit holes’; requirements engineering is a subject all of its own. When it comes to your TM470 project, you need to make practical decisions, and justify your decisions.

