OU blog

Personal Blogs

Picture of James Sokolowski


Visible to anyone in the world

Tools stream in outline


A brief history of the five Tools stream approaches

Three characteristics of complex situations of change and uncertainty have shaped the way the systems concepts associated with the five systems approaches emerged as part of the historical development of the systems field since the mid-twentieth century:

  • A concern for interrelationships and interdependencies. During the 1960s and 1970s the focus was very much on interrelationships, so methods were developed that explored these in depth − for example system dynamics (SD) and the viable system model (VSM).
  • An appreciation of multiple perspectives. By the mid-1970s it was clear that interrelationships were not neutral concepts. The relative importance of a particular interrelationship depended on the different purposes you could ascribe to any single situation. Therefore methods were developed that helped explore the implications of applying different perspectives to the same situation − for example strategic options development and analysis (SODA) and soft systems methodology (SSM).
  • The critical acceptance that boundary judgements circumscribe systems design. During the mid-1980s it became clearer among systems practitioners that these perspectives were not neutral. Perspectives determined what was seen to be relevant and what was not. They determined what was ‘in’ someone’s assessment of the situation and what lay outside it. Whoever defined the dominant perspective controlled the boundary of the inquiry or intervention. Therefore a third key element of a systems approach became the importance of studying boundaries and of critiquing boundary decisions and who made them, in terms of the ethical and political questions they raised. Critical systems heuristics (CSH) was developed in the late 1970s to deal with this.

However, practitioners who were familiar with the older approaches have tended to modify them to incorporate the new ideas as they came along. So the newer approaches have enriched a growing field of options rather than displacing the older ones.

The key concepts associated with each systems approach that you have used are summarised over the next few screens, but remember that an important feature of each approach is its flexibility. The users of all five approaches have often been able to adapt them to cope with new requirements.

Part 2  System dynamics

System dynamics (founded in the late 1950s by Forrester) is an approach to understanding complex situations over time.

SD configures such situations as systems with internal feedback loops and time delays that affect the behaviour of the entire system. What makes using system dynamics different from other approaches to studying complexity is the use of feedback loops, and stocks and flows, in displaying nonlinearity.
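The stock-and-flow idea can be sketched in a few lines of code. This is a minimal illustration, not from the course materials: a single stock with a reinforcing inflow loop and a balancing outflow loop, stepped forward with simple Euler integration. All rates and values are invented.

```python
# Minimal stock-and-flow sketch: one stock, a reinforcing loop
# (inflow proportional to the stock) and a balancing loop (outflow
# proportional to the stock), integrated with simple Euler steps.

def simulate(stock=100.0, birth_rate=0.05, death_rate=0.03,
             dt=1.0, steps=50):
    history = [stock]
    for _ in range(steps):
        inflow = birth_rate * stock      # reinforcing feedback
        outflow = death_rate * stock     # balancing feedback
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

trajectory = simulate()
print(round(trajectory[-1], 1))  # net 2% growth compounds over 50 steps
```

Even this tiny model shows the nonlinearity SD is interested in: the stock's behaviour over time is exponential, not a straight line, because the flows depend on the stock itself.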

Forrester began his career in the 1940s working on feedback control systems and early digital computers, including devices to control radar. He then moved into the field of industrial dynamics, and from there into modelling global resource depletion as part of attempts to model sustainable development.

Figure P6.15 System dynamics model showing the launch of a low-cost airline (Systems Approaches reader, Figure 2.28)

Part 3  Viable system model

VSM is a model of system viability. A viable system is one able to maintain an independent existence; that is, to be autonomous.

To do so it needs to be organised in such a way as to meet the demands of surviving in a changing environment. One of the prime features of systems that survive is that they are adaptable.

The model was developed by the cybernetician Stafford Beer who effectively founded management cybernetics – now known as organisational cybernetics – a set of ideas that are being developed and used by cyberneticians worldwide.

VSM, like SD, is particularly significant in surfacing interrelationships and interdependencies among relevant factors – giving substance to the notion of joined up thinking − and hence addressing the conventional thinking trap associated with narrow-mindedness, sometimes referred to as silo thinking or reductionism.

Figure P6.16 Viable system model (Systems Approaches reader, Figure 3.1)

Part 4  Strategic options development and analysis

Cognitive mapping is a technique developed by Colin Eden for revealing and actively shaping the mental models, or belief systems (mind maps, cognitive models) that people use to perceive, contextualise, simplify, and make sense of otherwise complex situations.

The notion of cognitive mapping is based on a way of constructing meaning designed to facilitate negotiation and help participants agree on acceptable plans of action. Strategic options development and analysis (SODA) is used to cultivate organisational change through attention to, and valuing of, individual perspectives in a concerted manner.

Both facilitation (process) skills and conventional knowledge management (content) skills are involved.

It uses three, hierarchically organised levels (Figure P6.17):

  • goals
  • potential issues (strategic directions)
  • options.
Figure P6.17 Hierarchical sets (Systems Approaches reader, Figure 4.12)
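The three-level hierarchy can be sketched as a simple data structure. The goal, issues and options below are invented examples, purely to show the shape:

```python
# Hypothetical SODA-style hierarchy: a goal sits above strategic
# issues, which sit above the options that address them.
soda_map = {
    "goal": "Improve customer retention",
    "issues": [
        {
            "issue": "Service response is too slow",
            "options": ["Hire more support staff", "Automate triage"],
        },
        {
            "issue": "Pricing is seen as opaque",
            "options": ["Publish a simple tariff", "Offer fixed bundles"],
        },
    ],
}

def all_options(m):
    """Walk the hierarchy from goal down to the option level."""
    return [opt for issue in m["issues"] for opt in issue["options"]]

print(all_options(soda_map))
```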

Part 5  Soft systems methodology

Soft systems methodology (SSM) is an approach to organisational process modelling developed by Peter Checkland and others through a ten-year programme of action research.

The primary use of SSM is in the analysis of complex situations where there are divergent views or even active disagreement about the nature and definition of the problem.

To intervene in such situations the soft systems approach uses the notion of a system as an interrogative device that will enable conversation and learning among concerned parties.

To support this process, multiple systems are designed as conceptual models each based upon a particular worldview. These are then used for interrogating real-world situations of interest.

Both SODA and SSM are sometimes referred to as problem-structuring (as against problem-solving) approaches as they both centrally deal with issues of multiple perspectives.

They are both well placed to address the conventional thinking trap associated with dogmatism – the tendency to hold steadfastly to one perspective irrespective of the possible importance of other perspectives.

Figure P6.18 Iconic representation of SSM’s learning cycle (Systems Approaches reader, Figure 5.9)

Part 6  Critical systems heuristics

Critical systems heuristics (CSH) is a framework for reflective practice based on practical philosophy and systems thinking, developed originally by Werner Ulrich.

The basic idea of CSH is to support boundary critique – a systematic process for handling boundary judgments critically through boundary reflection and boundary discourse.

CSH, like SSM, emerged from an ethical systems tradition promoted through the works of the American systems philosopher C. West Churchman. CSH raises issues of ethics in systems design regarding what is good (and harmful) and what is right (and wrong) as well as issues of politics regarding who has control and to what effect.

As a framework for reflective practice, CSH is well placed to address the systems thinking traps associated with holism (supposing that all entities can be taken into account in systems design) and pluralism (supposing that all perspectives can be taken into account in systems design).

Figure P6.19 Boundary critique as boundary reflection and boundary discourse (Study Guide, Figure 6.5)

That is the end of Appendix 1. Either continue directly into Appendix 2 or return to the main text.

Appendix 2

People stream in outline

The Tools stream focused on teaching mainstream systems approaches and methods. However the People stream aimed to trawl beyond the systems mainstream, in the cognitive sciences in particular, in pursuit of other ideas and phenomena that might be useful to the systems practitioner.

The eleven headline issues you have met in the course of the People stream are reproduced below for convenience. There are, of course, plenty of other issues raised by the People stream, so please feel free to also make use of any others that seem relevant to you; the choice is yours.

  • Headline issue 1.1 – Mental models and rationality: Policy principles, even at the highest strategic level, draw on underlying mental models that shape perceptions and determine what is seen as rational.
  • Headline issue 2.1 – Limits to knowledge and predictability: The world seems predictable. But this is a human construction, and little is known about the complexities on which it depends. The ‘ignore-ance’ about situations is usually vastly greater than you might think.
  • Headline issue 2.2 – Multiple approaches to tackling problems: There are many different frameworks for thinking about problems and challenges. A change of conceptual framework may lead to a change of method.
  • Headline issue 3.1 – Individual differences in cognition: The importance of recognising the wide range of ways in which people can differ in their cognitive processes, preferences, strengths and weaknesses, and hence in how they communicate, make sense of situations and tackle problems.
  • Headline issue 3.2 – Modes of thinking other than the ‘rational’: We are all familiar with the kind of thinking that involves consciously working things out. But it is easy to demonstrate that the brain also has very different pattern-based ways of working which can, nevertheless, be very effective, though they may be wholly or partly out of consciousness.
  • Headline issue 4.1 – The nature of metaphor: The role metaphors and similar processes play: in communicating about complex subjective experiences and states; in category formation; and in their capacity, like any model, to highlight some aspects of a situation, marginalise others, and introduce distortions.
  • Headline issue 4.2 – Projection: Our perceived world is a result of sense-making activities in our brains ‘projected’ outwards. It can therefore be subject to various anomalies, particularly in ambiguous situations. Strategic thinking usually involves highly ambiguous situations.
  • Headline issue 5.1 – The intervention system: If you are designing a system for carrying out an intervention, what needs to be included?
  • Headline issue 5.2 – The nature of systems approaches: What are the family characteristics that distinguish systems approaches from other approaches?
  • Headline issue 5.3 – Evaluating interventions: How can you tell how successful an intervention has been?
  • Headline issue 6.1 – Systems thinking and wisdom: The work of the German psychologists Paul Baltes and Ursula Staudinger suggests useful parallels between systems thinking and the concept of wisdom. Further parallels can be made with ideas of triple-loop learning and the art of bricolage.

The five short sections that follow are memory joggers for the main landmarks in parts 1−5.

Part 1  Mental frameworks, metaphors, Gulf War

Lakoff, using the Gulf War as an example, argued that strategic decisions can’t avoid being based on metaphor. However, different underlying metaphors lead to different implicit rules about what is rational. It is therefore important to recognise the metaphors being used, and to try out a variety of contrasting metaphors.

Figure P6.20 Burning oil wells in Kuwait (People stream Part 1, Figure P1.1)

Part 2  Why is today so like yesterday?

The apparent predictability of the everyday world is embedded within an immensely complex web of factors that are mostly outside a person’s awareness, for example unconscious mental processing, network complexity, tipping point dynamics, evolving relationship patterns, and the ‘wickedness’ of many social issues. The familiar world is a relatively predictable construction within this unpredictable wilderness – a cognitive niche that is the human equivalent of an animal niche or territory.

It was suggested that you could think of a strategy as a way of creating a manageable garden within this unmanageable wilderness (Figure P6.21). There are usually many equally appropriate gardens that could be created. Hence there is a need for negotiation and decision making so that collective efforts can be focused on creating an agreed garden.

The experience of resolving problematic issues can take many forms. Ten different metaphors for problem-solving were suggested, each leading to a different problem-solving approach and hence a different rationality.

Figure P6.21 Strategy as the creation of a garden in a wilderness (People stream Part 2, Figure P2.24)

Part 3  The idiosyncratic world of thinking

Good strategic thinking is hugely demanding and needs to be able to draw on every thinking mode available. Limiting it to rational thinking based on explicit data is much too restrictive.

It was suggested that interpersonal differences in cognitive style and preference (MBTI, five senses) could lead to different strengths, weaknesses and needs in making sense of situations and tackling problems.

Two different intra-personal modes were explored: an evolutionarily old mode that is associative, automatic, unconscious, parallel and fast (intuitive thinking is one expression of this) versus a more evolutionarily recent, distinctively human, mode that is rule-based, controlled, conscious, serial and slow (rational thinking is one form of this).

Ways of working with metaphor landscapes showed that some problems can be solved entirely at the level of metaphor, without any of the usual trappings of rational analysis.

Figure P6.22 Amazing world of the imagination (illustration from Alice in Wonderland by Lewis Carroll) (People stream Part 3, Figure P3.11)

Part 4  Patterns in the mind

Metaphors are a kind of model, albeit in the mind rather than on paper. Like any model, they highlight some aspects, but also marginalise others and introduce distortions. You can reduce the risk of error by seeking out alternative metaphors or models, for example by inviting critique from others.

Metaphor-related concepts that were introduced included ‘conceptual metaphors’ versus ‘metaphoric linguistic expressions’; ‘target’ versus ‘source’; the links between comparison, analogy, metaphor and category; the notion of ‘logical level’; and the role of radial metaphors.

‘Projection’ is the result of perception mediated by models. What you perceive as ‘out there’ is due to a model in your brain being activated and projected out onto the world.

In an ambiguous situation several different patterns may be activated, which may result in your perception switching between them.

If you are aware of your projections, you may be able to use them as a useful indicator of your internal state, as a source of creative ideas, to let you imagine how the world might be different, or to help you appreciate what others are experiencing.

Projections you are unaware of may have unexpected consequences (positive or negative) for you and others.

Figure P6.23 A typical radial set of metaphors: descriptions of happiness (People stream Part 4, Figure P4.22)

Part 5  Some approaches from other practitioner communities

Two metaphor-based methods were explored: Vincent Nolan’s version of synectics and Caitlin Walker’s Metaphors@Work (some case studies of Caitlin’s work are provided as an optional extra resource).

The methods gave some insight into the features that any intervention might require, gave a different perspective on the Tools stream approaches, and looked at some of the issues around evaluating and choosing such methods.

Synectics raised issues such as the need to check that the basic conditions for problem solving are present, the difference between a powerful wish and a technical goal, the practical value of unpolished or even nonsensical ideas, and the requirements for constructive, rather than destructive, feedback.

Metaphors@Work shows how different subjective personal positions can be externalised and shared, and thereby used as a basis for negotiating shareable personal positions, which can then be mapped back onto real situations.

Part 5 is the end of the main People stream teaching, since Part 6 is used for integrating the two streams.

Figure P6.24 Big wish (People stream Part 5, Figure P5.10)

Permalink 1 comment (latest comment by Jan Pinfield, Sunday, 8 Sep 2019, 15:47)

Synectics

Edited by James Sokolowski, Sunday, 11 Aug 2019, 16:07



Useful Rich Picture Diagram Symbols




Dealing with Conflict

Edited by James Sokolowski, Wednesday, 7 Aug 2019, 03:29

Mary Parker Follett (1868-1935) classified three ways of dealing with conflict:

  1. Domination - The imposition of one party's view on all the other parties.
  2. Compromise - Often a degraded outcome, the best that can be agreed by all conflicting views.
  3. Integration - An outcome where all desires have found a place.  

Peter Checkland uses two terms:

  1. Consensus - Where all parties agree.
  2. Accommodation - Where all parties can live with the outcome, even if they do not fully agree.


I prefer to view the terms consensus and accommodation as meaning the following:

  1. Consensus - All parties vote on a set of options and the majority choice is selected. Subtly different from compromise, in that a consensus will leave a range of outcomes: some people may be dissatisfied, others satisfied.
  2. Accommodation - Slightly broader than integration, in that it seeks a 'good' outcome for all, rather than the optimal outcome. Integration is where everyone gets their way, whereas accommodation has everyone getting a result they can be happy with.
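As a toy illustration of these two interpretations (all parties, options and scores below are invented): consensus picks the majority choice, while accommodation picks the option whose least-satisfied party is happiest.

```python
from collections import Counter

# Invented parties and votes over three invented options.
votes = {"Alice": "A", "Bob": "B", "Carol": "A", "Dave": "A"}

def consensus(votes):
    """Majority choice: some parties may end up dissatisfied."""
    return Counter(votes.values()).most_common(1)[0][0]

# Each party scores each option from 0 (unhappy) to 10 (delighted).
scores = {
    "Alice": {"A": 9, "B": 2, "C": 6},
    "Bob":   {"A": 1, "B": 9, "C": 7},
    "Carol": {"A": 8, "B": 3, "C": 6},
}

def accommodation(scores):
    """Pick the option whose *least satisfied* party is happiest."""
    options = next(iter(scores.values())).keys()
    return max(options, key=lambda o: min(p[o] for p in scores.values()))

print(consensus(votes))       # → A (majority, but Bob is unhappy)
print(accommodation(scores))  # → C (a 'good' outcome for everyone)
```

Note how the two procedures can disagree: the majority's favourite option leaves one party with a very poor outcome, while the accommodated option is nobody's first choice but acceptable to all.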

Additional Conflict Resolution Outcomes

  1. Indoctrination - Where parties believe they have what they want, but are unaware that they would have chosen differently had their thinking not been influenced by indoctrination.

Dynamics of Conflict

Irrespective of which definitions are preferred, or how they are understood, the dynamics of conflict will always be present. The environment is always changing, and problematic situations will always be in flux. Consequently, conflict will never be resolved once and for all. Nature oscillates between bi-polar points on a spectrum, which means that even if a consensus is reached, humans will begin to forget the original problem their ancestors solved.

 "If forgotten, history is doomed to repeat itself."  


Free Your Organisation from Performance Excuses with Double-Loop Learning

Edited by James Sokolowski, Monday, 5 Aug 2019, 00:34

We all hold values and beliefs which govern how we choose to complete our day jobs.  However, sometimes the consequences of our actions do not produce the results we had expected.  In these situations, there has been a mismatch between our intended actions and our actual results.  When a mismatch occurs, we look for new ways of doing things so that the undesirable consequence doesn't repeat itself.  This type of learning behaviour is called 'single-loop' learning.

Single-loop learning changes how we execute our daily tasks, based on prior performance outcomes.  What single-loop learning doesn't do is alter the values and beliefs governing how we should act.  Only double-loop learning will shift these governing variables.  The power of double-loop learning is that it can free individuals and organisations from accepting excuses for less-than-desirable performance.  Below is a real-life example of double-loop learning in action.


    

Supermarket Merchandising: A Product Feature Problem

  • Governing Values
  1. Product features must always be full of merchandise and presentable at all times.
  2. No stock is to be left unaccounted for. (The store's inventory computers must have a record of where everything is.)
  3. No merchandise can be 'plugged'. (Plugging is when product is forced into a gap that belongs to another product.)
  4. No employee overtime is ever allowed.
  5. Associates are not allowed to order product. (Computers must automatically re-order when product is sold.)
  • Actions
  1. An Associate has only 40 minutes left in his shift.  He notices a product feature is nearly empty: there are lots of gaps on the shelving and there is not enough product left to make the feature look presentable.  He therefore decides to dismantle the feature and replace it with a different product that he has plenty of in the back storeroom.  However, the remaining off-coming merchandise now needs to be given a new location.  Unfortunately, by this point the Associate has run out of work hours and must clock off; he is not (under any circumstances) allowed to work overtime.  Consequently, he has no time to properly scan these off-coming items into the back storeroom.  He sees a space on a nearby shelf which normally holds a different product that the store has currently sold out of.  He therefore decides to 'plug' the off-coming product into that gap.  He knows plugging is not allowed, but he has run out of time and sees little alternative.
  • Consequence
  1. The Associate ran out of time to complete his tasks correctly (according to the store's governing values), so he plugged merchandise into a location temporarily.  When the Store Manager later questions why this product is in the wrong location, the Associate presents a defensive justification for his actions.  Despite the Store Manager being annoyed, the Associate quotes the store's policies on 'overtime' and 'unaccounted' stock and uses them in his defence.  Both the Associate and the Store Manager end the discussion equally frustrated with each other.
  • Single-loop Learning (Actual Outcome)
  1. To avoid the risk of upsetting his Store Manager with the same problem in the future, the Associate learns that he should not attempt to fix product features when he has less than an hour left in his shift.  The Associate realises that had he done nothing, and left the feature looking empty and unfilled, he would not have been blamed.  The Store Manager would still have noticed the feature being empty, but would not have been able to attribute blame to the Associate directly (as everyone is responsible for maintaining the product features).  The Associate would have been able to hide his strategy behind the guise of 'collective responsibility'.
  • Double-loop Learning (Lost Opportunity)

Had the Store Manager and Associate discussed the problem openly, from a double-loop learning perspective, they could have made systemic improvements to their 'governing values'.  Both could have altered their governing values towards ordering stock in advance.  Going forward, the Associate could adopt a new daily routine: each morning he would take five minutes to scan every feature and order replacement merchandise well in advance, and the Store Manager would agree to confirm the additional stock orders.  Ordering new merchandise in advance is a very quick process.  Checking daily would ensure the Associate is never placed in the same predicament again, and that all features remain full and presentable, in a much more time-efficient manner.

Had both the Associate and the Store Manager discussed this problem from a double-loop learning perspective, they may have found other solutions that involve altering their governing values.  Instead, the Associate adopted his new strategy of not caring about product features (deferring blame) when he has less than an hour left on the clock.

  • What's the Difference?

Why is "adopting the new product-ordering routine" a 'double-loop' rather than a 'single-loop' outcome?  Surely the Associate has just altered his 'actions' with a more effective 'single-loop' solution?

A key difference is that 'governing values' are socially accepted constructs, devised through collective agreement between multiple stakeholders and perspectives.  Ordering new product would not have been socially agreeable under the old governing values, as the Store Manager originally did not want Associates ordering product manually.  The 'double-loop' solution requires the entire store adopting a systemic change of viewpoint on its ordering strategy.  The 'single-loop' solution did not alter the systemic strategy; it only altered the execution of the existing systemic strategy.
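The distinction can be sketched in code. In this hypothetical model (all rules and names are invented), single-loop learning changes the action rule while the governing values stay fixed; double-loop learning revises a governing value itself:

```python
# Toy sketch: single-loop learning adjusts the action within fixed
# governing values; double-loop learning revises the values themselves.

governing_values = {"associates_may_order_stock": False,
                    "overtime_allowed": False}

def choose_action(feature_low, minutes_left, values):
    if values["associates_may_order_stock"]:
        # Double-loop outcome: the revised value enables a better routine.
        return "order replacement stock in advance"
    if feature_low and minutes_left < 60:
        # Single-loop outcome: avoid blame by doing nothing.
        return "leave feature empty"
    return "restock feature"

# Single-loop: same governing values, new action rule.
print(choose_action(True, 40, governing_values))

# Double-loop: the governing value itself is revised.
governing_values["associates_may_order_stock"] = True
print(choose_action(True, 40, governing_values))
```

The first call shows the defensive single-loop behaviour; only after the governing value changes does a genuinely better action become available.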


A Useful Exercise for Uncovering Bi-Polar Attributes





The Difference Between Cognitive Maps and Causal-loop Diagramming


An example of a cognitive map


An example of a causal-loop diagram

Cognitive maps and causal-loop diagrams may appear very similar; however, there are subtle differences, and these differences can affect the strategies adopted.

  • Cognitive mapping focuses on the thinker, whereas causal-loop diagramming focuses on the situation.
  • Causal-loop diagrams attempt to model actual causal interconnections within a situation.  Cognitive maps attempt to represent people's perceptions and feelings.
  • Causal-loop diagrams attempt to answer what is happening, whereas cognitive maps focus more on how the perceivers feel about what is happening (sometimes human perceptions of reality are more important than reality itself).
  • Causal-loop diagrams are often created by an individual.  Cognitive mapping requires the perspectives of many.


Sometimes being able to accurately construct a dynamic model of a system is not enough.  A good example of this is the 5-a-day campaign, a common national campaign in many countries that encourages people to eat at least five portions of fruit and veg per day.  The recommendation originated from the World Health Organisation, which suggested everyone should consume at least 400g of fruit or veg per day.  The human body is a very refined system, and scientists and academics have been able to map this system using causal-loop diagramming techniques.  However, these techniques would reveal that the human biological system demands more than 400g; in fact, some studies show that doubling the daily consumption of fruit and veg (to 800g) significantly increases protection against all forms of mortality.

The problem is that everybody views the world differently, and people have developed different perspectives on life and different beliefs about what they value most.  National governments seem to understand this phenomenon well.  In many Western countries the average consumption of fruit and veg is three portions; expecting society to increase its intake to ten portions would have been an unrealistic expectation.  Campaigning for five portions (an average increase of two) is viewed as more realistic and consequently more effective.  Causal-loop diagramming would not necessarily have revealed this strategy; cognitive mapping, however, would certainly have uncovered the complexity of the problem much better.  A rigorous and thorough cognitive mapping exercise would likely reveal the extent to which people value the need to eat fruit and veg.

A final point...

Some messy problems simply have no clear, calculable solution, and logical approaches do not necessarily help in these situations.  Anyone who has tried using causal-loop diagrams to model such instances will quickly discover how hard complexity is to map accurately.  This is where intuition comes to the rescue: cognitive maps can better model a group's opinions, perspectives and feelings in situations where no factual solution is available.




Rough Assumptions of How Myers-Briggs Temperament Types May Affect Systems Thinking


Research suggests that Myers-Briggs Type Indicators affect how we differ in our approach to strategic thinking and our use of systems thinking tools.  I've devised a grid matrix which attempts to plot MBTI personality types against their possible preferences for tackling strategic problems.  I've also provided possible recommendations for supporting different temperaments in their efforts at thinking strategically.

NB. Please don't anyone be offended.  I constructed this model in a two-hour spurt of creativity, and can't vouch for its academic rigour.


All strategic problems contain a level of risk and uncertainty to overcome.  Risk and uncertainty can produce anxiety in those faced with finding a solution.  I believe people develop coping strategies for managing this anxiety at a young age, which over time affects how they prefer to tackle strategic thinking later in life.  I further believe the underlying drivers behind these coping strategies are people's intrinsic tolerance levels for risk and uncertainty.  Anticipating someone's tolerance for risk and uncertainty may assist managers in better understanding how that person is likely to approach strategic thinking.  The grid matrix below contains my assumptions about the human biases which may exist when applying systems tools.  I have also attempted to assign the MBTI types to each quadrant of the matrix.  MBTI temperaments contain two MBTI letters, and the matrix allows for two traits to be circled.




Viable System Thinking: Don't ask what a thing is, ask what it does.


The Concept of "System"

It is not uncommon to think of a system as individual parts involved in dynamic interactions.  VSM thinking requires that you step away from thinking about the constituent parts and how they interact with one another, towards a focus on the process and the purpose of that process.  The system boundaries are thus drawn around the process and not the parts of the organisation.

Surely a lion in a zoo is the same as a lion in its natural habitat?  Not necessarily.  Yes, the object (in this case a lion) is technically identical, but the system is not.  The system of having a lion in a zoo has the purpose of attracting visitors, whereas the system of having a lion in its natural habitat has the purpose of being a predator at the top of a food chain.  Thinking about systems in this way highlights how the individual parts of the system are not all that relevant.  The process, and the purpose of the process, is more pertinent when trying to build a viable system model.

The Concept of "Variety"

In VSM thinking the concept of variety also plays an important role, and needs to be viewed in a specific way.  Take a classroom environment.  The class will consist of students who possess a variety of different backgrounds, home situations, prior knowledge, motivations and so on.  The role of the teacher is to somehow teach the lesson in a manner that is broadly viable across the variety of different learning requirements of the students present.

This way of thinking about variety has important implications for the VSM.  Firstly, the complexity of a system is now viewed in terms of the number of different possible variations.  Secondly, the Law of Requisite Variety (Ashby) states that the variety of a system must be equal to or less than the variety of options available to the regulating system to which it belongs.  If the higher-order system does not contain sufficient options to adapt to all the possible variations it needs to regulate, then the entire system is not suitable to meet all extremes.  There are therefore only two strategic options: either the variety of the regulating system is increased, or the variety of the sub-system is decreased.  So, going back to the classroom example: either the school increases the number of available classes so that the same lesson can be taught in different ways, or the school streams the students to reduce the variety of learning needs.
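Ashby's law as described here can be illustrated with a toy version of the classroom example (the specific needs and approaches below are invented):

```python
# Law of Requisite Variety, as a simple check: a regulator can only
# cope if its variety matches or exceeds the variety it must regulate.

def requisite_variety_ok(system_variety, regulator_variety):
    return regulator_variety >= system_variety

# Classroom sketch: four distinct learning needs, but only two
# teaching approaches available to absorb them.
learning_needs = {"visual", "auditory", "hands-on", "self-paced"}
teaching_approaches = {"lecture", "workshop"}

print(requisite_variety_ok(len(learning_needs), len(teaching_approaches)))

# Remedy 1: increase the regulator's variety (more teaching approaches).
teaching_approaches |= {"video", "tutorial"}
print(requisite_variety_ok(len(learning_needs), len(teaching_approaches)))
```

The alternative remedy, reducing the sub-system's variety (streaming students into more homogeneous classes), would shrink `learning_needs` instead of growing `teaching_approaches`.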



Stafford Beer - Father of the Viable System Model

Edited by James Sokolowski, Saturday, 8 Jun 2019, 18:59

"Any viable system contains and is contained in a viable system."

Stafford Beer devised the Viable System Model, based on his theory that a system is only viable by virtue of its sub-systems themselves being viable.

Overview of the Model

The model consists of five sub-systems and an environment.

  1. Operations - the set of activities through which the organisation provides value to the environment.
  2. Coordination - the set of protocols that coordinate operations so that different operations do not cause problems for each other.
  3. Delivery - the management activities associated with allocating resources to the operations.
  4. Development - the management activities associated with understanding the environment and future trends.
  5. Policy - the balancing activities that ensure the organisation works as a system, especially balancing decision-making between the Delivery and Development systems.


The two most critical tensions in the VSM are:

  • the tensions between the autonomy of the parts versus the cohesion of the whole.
  • the tensions between the current and future needs.

Two fundamental concepts in VSM are:

  • Wholeness - Attributes the system has as a whole which the sub-systems do not have as components.
  • Emergence - Attributes that emerge as necessary to manage immediate risks and opportunities in the environment.

Too much autonomy and no cohesion, and the system's 'wholeness' is lost.  Too much cohesion and no autonomy, and emergent attributes fail to capitalise on environmental risks and opportunities as they arise.

Using the VSM as a diagnostic tool involves assembling key features into an idealised model.  This 'ideal' is then compared with the perceived reality of the current structure.  The differences that emerge guide action to move the perceived situation towards the ideal.
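That diagnostic step can be expressed as a simple comparison.  A minimal sketch, with entirely hypothetical feature names, just to make the ideal-versus-perceived logic concrete:

```python
# Minimal sketch of the VSM diagnostic comparison: list the gaps
# between an 'ideal' feature set and the perceived reality.
# All feature names below are hypothetical examples.

ideal = {
    "Coordination": "shared timetabling protocol",
    "Development": "dedicated horizon-scanning team",
    "Policy": "regular Delivery/Development balancing meetings",
}

perceived = {
    "Coordination": "shared timetabling protocol",
    "Development": None,  # nobody currently scans the environment
    "Policy": None,       # no balancing forum exists yet
}

# The differences guide action towards the ideal.
gaps = {subsystem: feature for subsystem, feature in ideal.items()
        if perceived.get(subsystem) != feature}

for subsystem, feature in gaps.items():
    print(f"{subsystem}: missing '{feature}'")
```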


Systems Dynamic Modelling - And Top Tips for Naming

Edited by James Sokolowski, Saturday, 18 May 2019, 02:14

Picking and Naming Variables

The choice of words is vital.  Each variable must be a noun. Avoid the use of verbs or directional adjectives.  For any variable, always have in mind a specific unit of measure so that the variable can be quantified.  
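A few before-and-after renamings make these rules concrete.  The variable names below are invented purely for illustration:

```python
# Illustrative renamings following the rules above: each variable is a
# noun (not a verb or directional phrase) with a unit of measure in mind.
renamings = {
    "costs rising":      "Operating Costs (pounds/month)",
    "hire more staff":   "Staff Count (people)",
    "improving morale":  "Staff Morale (survey score)",
}

for bad, good in renamings.items():
    print(f"'{bad}'  ->  '{good}'")
```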

Basic Tips

Identify loop types using R or B to signify reinforcing or balancing.  Always circle the R or B with a small curved arrow travelling in the same direction as the feedback loop itself.
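The R/B classification itself follows a simple rule: multiply the polarities of the links around the loop; a positive product means reinforcing, a negative one means balancing.  A small sketch (variable names are illustrative):

```python
# Determine a feedback loop's type from its link polarities:
# Reinforcing (R) if the product of link signs is positive,
# Balancing (B) if it is negative.

def loop_type(link_signs):
    """link_signs: list of +1/-1, one per causal link around the loop."""
    product = 1
    for sign in link_signs:
        product *= sign
    return "R" if product > 0 else "B"

# 'Births -> Population (+)' and 'Population -> Births (+)':
print(loop_type([+1, +1]))  # R

# 'Population -> Deaths (+)' and 'Deaths -> Population (-)':
print(loop_type([+1, -1]))  # B
```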

Useful Images

[Images not reproduced here; their captions were: Signing Rules, Balancing, Puzzling Feedback, Reinforcing Loops]



How To Spot Systems Dynamic Effects

Edited by James Sokolowski, Thursday, 16 May 2019, 14:05

One way to identify the effects of a system is to recognise how situations are described.  A rich array of metaphors exists in common English.

  • "it only made things worse"
  • "no matter how hard I try"
  • "no good deed goes unpunished"
  • "it came back to bite me"
  • "the fix only made things worse"
  • "a stitch in time saves nine"
  • "it's quicker in the long-run"
  • "plan early, plan twice"

We've all heard these expressions before.  Their real value, however, is not in providing advice for an immediate choice of action; rather, they offer clues to an underlying problem within the system.  For example, the above list might be better interpreted as...

  • "we've just uncovered an unintended consequence in our system"
  • "even when working at full performance, this problem cannot be addressed by one person working hard."
  • "every time we do the job properly it has never caused a problem - maybe this is the standard we need to achieve every time?"
  • "we cannot cut-corners in this part of the process, as this has a great impact on the system."
  • "we've just uncovered an unintended consequence in our system"
  • "early intervention at this stage of the process improves our system"
  • "the entire system requires that we invest more time to this stage of the process, otherwise there's a knock-on effect afterwards"
  • "we are not getting accurate information early enough from the rest of the system to justify planning this early"

Expert Lessons from Expert System Thinkers

Edited by James Sokolowski, Thursday, 16 May 2019, 13:43

Jay Forrester

Portrait photo of Jay Forrester

The primary advantage of computer models over mental models lies in the way a computer model can reliably determine the future dynamic consequences of how the assumptions within the model interact with one another.  A secondary advantage is that interrelated assumptions are made explicit.  Unclear and hidden assumptions are exposed by the mathematical formulation, which forces them to be examined and debated.

"Because all models are wrong, we reject the notion that models can be validated in the dictionary definition sense of 'establishing truthfulness', instead focusing on creating models that are useful... we argue that focussing on the process of modelling rather than on the results of any particular model speeds learning and leads to better models, better policies, and a greater chance of implementation and system improvement."

Enter any troubled company and speak with its employees, and one will generally find that people perceive their immediate environments reasonably correctly.  They can tell you what problems they face, and can produce rational solutions to those problems.  Usually the problems are blamed on outside forces, but a dynamic analysis often shows how internal policies are causing the troubles.  In fact a downward spiral can develop in which the presumed solutions make the difficulties even worse.


Donella Meadows

Portrait Photo of Donella Meadows

  1.  Get The Beat.  Before you disturb the system in any way, watch how it behaves.  Ask people who have been around the system a long time.  If possible, graph actual data from the system.
  2.  Listen to The Wisdom of The System.  Aid and encourage the forces and structures that help the system run itself.  Remember the current system has often evolved naturally, so seek the value in it and reinforce its most successful practices.
  3.  Expose Your Mental Models to The Open Air.  Everything you know, hypothesise or assume is nothing more than a mental model.  Get your model out there and invite others to shoot it down.  Consider all other models plausible until you find evidence to the contrary.
  4.  Stay Humble.  Stay a Learner.  It is just as important to trust your intuition as it is to trust your rationality.  Lean on both equally.
  5.  Locate Responsibility in The System.  Look for ways in which the system creates its own behaviours.  Sometimes outside events can be controlled, but sometimes they can't.  Sometimes blaming or trying to control outside influences only blinds one to the easier task of increasing responsibility within the system.
  6.  Make Feedback Policies for Feedback Systems.  A dynamic, self-adjusting system cannot be governed by a static, unbending policy.  Design policies that change depending on the state of the system.
  7.  Honour and Protect Information.  A decision-maker can't respond to information he or she doesn't have, or react correctly to inaccurate information.
  8.  Pay Attention to What is Important, Not Just What is Quantifiable.  Our culture is often obsessed with numbers, and consequently what gets measured becomes more important than what we can't measure.  Decide which matters more, quantity or quality, and make sure the most important is the one discussed.
  9.  Go for The Good of The Whole.  Don't maximise parts of the system while ignoring the whole.  Aim to enhance the entire system.
  10.  Expand Time Horizons.  The proper time horizon extends beyond the payback period of the current investment, the next election, or even the next generation.  When you walk you must pay attention to the obstacles at your feet just as much as to those in the distance.
  11.  Expand The Boundary of Caring.  No system is separate from the world itself.  Real systems are interconnected, so caring beyond the immediate boundaries is necessary.
  12.  Expand Thought Horizons.  Seeing systems as wholes requires perspectives from many different disciplines.  All involved must be in learning mode to solve the problem together.

Peter Senge

Portrait of Peter Senge

A learning organisation is one where "people continually expand their capacity to create the results they truly desire." The five disciplines of a learning organisation are Systems Thinking, Personal Mastery, Mental Models, Shared Vision and Team Learning.

  1. Today's problems come from yesterday's 'solutions'
  2. The harder you push, the harder the system pushes back
  3. Behaviour grows better before it grows worse
  4. The easy way out usually leads back in.  The temptation to adopt the easiest solution tends to creep back.
  5. The cure can be worse than the disease.  Shifting the burden onto an intervenor to fix things means the underlying problem ultimately remains unfixed.
  6. Faster is slower.  Virtually all systems have intrinsically optimal rates of growth, which are far less than the fastest growth possible.  Excessive growth can result in the wider system seeking to compensate by slowing it down.
  7. Cause and effect are not closely related in time and space
  8. Small changes can produce big results - but the areas of highest leverage - are often the least obvious
  9. You can have your cake and eat it too - but not at once.  Many apparent dilemmas are by-products of static, snapshot thinking; they only exist in the current moment.  Often both goals can be achieved if you are willing to wait for one while focusing on the other.
  10. Dividing an elephant in half does not produce two small elephants.  A system boundary must include the most important issues at hand.
  11. There is no blame.  We tend to blame 'someone else'.  The cure lies in your relationship with your 'enemy'.

Two new books to help me understand Systems Dynamic Modelling

Edited by James Sokolowski, Wednesday, 15 May 2019, 18:21


A picture of environmental scientist and activist Donella Meadows

Today I discovered work by the environmental scientist and activist Donella Meadows.  She contributed to the construction of a System Dynamics model called World3, which attempted to simulate world population growth by computer.  I found a link to a version of the World3 model online (although I can't verify its authenticity): https://insightmaker.com/insight/1954/The-World3-Model-A-Detailed-World-Forecaster
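To get a feel for what a stock-and-flow simulation does, here is a toy model in the same spirit as (but vastly simpler than) World3: a single population stock whose growth is limited by a fixed carrying capacity.  All the parameter values are made up for illustration:

```python
# Toy stock-and-flow simulation: one 'population' stock, logistic-style
# growth limited by a carrying capacity. Parameters are illustrative only.

def simulate(population, birth_rate, carrying_capacity, years):
    history = [population]
    for _ in range(years):
        # Net growth slows as the stock approaches the carrying capacity.
        growth = birth_rate * population * (1 - population / carrying_capacity)
        population += growth
        history.append(population)
    return history

history = simulate(population=1.0, birth_rate=0.05,
                   carrying_capacity=10.0, years=200)
print(round(history[-1], 2))  # ends up close to the carrying capacity
```

Even this tiny model shows the characteristic S-shaped growth curve: fast early growth that flattens as a limit is approached, which is the behaviour at the heart of the limits-to-growth argument.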

As far as I can tell, Donella Meadows was convinced the population of the world would outgrow the world's capacity to support life within the next 100 years.  She co-wrote a book called Limits to Growth which sold over 10 million copies.  I've just purchased this book, because I'm fascinated by what she has to say.  She devoted the rest of her life to living in a completely sustainable "closed loop".

However, I just can't subscribe to apocalyptic views and doomsday theories.  Humankind has always lived with fear and dread for its own survival.  Jay Forrester (1997) is quoted as saying "that Systems Dynamics demonstrates how most of our own decision-making policies are the cause of the problems that we usually blame on others."  Surely this lies at the very heart of what it means to be "human."  When man designed the bicycle, he also created the problem of it not being fast.  When man put an engine in the bicycle, he also created the problem of it being so fast that people died.  The human solution was not to stop riding motorbikes and settle for slow bicycles!  No, the solution was to design faster bikes and better crash helmets.

Humans create solutions, which create new problems, which in turn demand new solutions.  Surely this is how evolution is meant to work.  I've just purchased a second book called Why Does E=mc²? (And Why Should We Care?).  I'm convinced that Einstein's theory of relativity is a missing link that debunks sustainability theories.  Nature exists in cycles of creation and destruction, but there still exists a "constant" in the equation.  Surely that "constant" is alternatively called growth, evolution or progress?

World3 SD model

