See how this way of modeling a system can help to explain how orcas, fishermen, seals, otters, and sea urchins can combine to seriously threaten the Bering Sea ecosystem:
This page maintained by Jeff Dooley
Explore the subjects listed below for systems papers, references, theory, and practice for building and sharing a language for effective learning and action.
People sometimes ask, "what is cybernetics?" Many give Norbert Wiener's definition: communication and control in the animal and in the machine. But this phrase fails to carry the sophisticated meanings the term has come to embody, and fails to excite listeners to imagine the implications of cybernetics for their lives.
Let's press the question further: how might an explanation of cybernetics help us get through the day and prepare for tomorrow? I'll try to give my perspective on this question in what follows, but I don't offer it as a "definition," exhaustive or otherwise; just a pragmatic perspective I'm willing to risk.
Cybernetics is a science of purposeful behavior. It helps us explain behavior as the continuous action of someone (or something) in the process, as we see it, of maintaining certain conditions near a goal state, or purpose. This process includes perceiving, organizing perceptions, comparing organized perceptions against a desired (remembered) goal state, and enacting corrective behavior whenever the perception varies too widely from the desired perceptual state. A function of memory, in some systems, aside from storing the goal state, is to store past patterns relevant to the kinds of error-corrections the process is called upon to make. Observing these aggregate functions, connected by appropriate information channels over time, we may distinguish a purposeful whole--a system. Being able to perceive, compare, decide, and act purposefully makes the thing cybernetic.
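The perceive-compare-act cycle described above can be sketched as a short program. This is a minimal illustration, not anyone's canonical model; the thermostat-style framing and all numbers are assumptions chosen to make the loop visible.

```python
# A minimal sketch of the cybernetic loop: perceive a condition, compare it
# against a remembered goal state, and act to correct the error.
# All values (goal, tolerance, step size) are illustrative assumptions.

def regulate(perceived, goal, tolerance=0.5):
    """Compare a perception against the goal state; choose a corrective action."""
    error = goal - perceived
    if abs(error) <= tolerance:
        return "no action"          # perception is close enough to the goal
    return "heat" if error > 0 else "cool"

def simulate(temperature, goal=20.0, steps=10):
    """Run the perceive-compare-act cycle, nudging temperature toward the goal."""
    history = []
    for _ in range(steps):
        action = regulate(temperature, goal)
        if action == "heat":
            temperature += 1.0
        elif action == "cool":
            temperature -= 1.0
        history.append((round(temperature, 1), action))
    return history

print(simulate(15.0))   # temperature climbs to the goal, then action ceases
```

Note that the loop never "knows" the room; it only compares a perception against a stored goal and corrects the difference, which is the whole cybernetic point.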
The process of things being cybernetic together may look to an observer like a dynamic dance, like the action of flocking birds all trying to be in the center of the group as it moves.
Considering the implications of seeing the world through cybernetic lenses can have a devastating effect on our traditional view of knowledge and the nature of things--two very philosophical issues. At the very least (there is certainly more), cybernetics implies a new philosophy about (1) what we can know, (2) what it means for something to exist, and (3) how to get things done. Cybernetics implies that knowledge is to be built up through effective goal-seeking processes, and not necessarily by uncovering timeless, absolute attributes of things irrespective of our purposes and needs. John Dewey gave us that bit of systems epistemology back in the 1920's. Cybernetics implies that what we take for reality may be no more than an elaborate construction of ours, useful to us in maintaining our ability to get along. This notion suggests that the "systems" we distinguish may be useful as mere hypotheses, rather than things required to be out there, waiting to be discovered and catalogued by erstwhile cyberneticians. These ideas lead us to design ways of doing things--solving problems--that recognize the likelihood that indefinitely many solution paths may be available, and that indefinitely many perspectives on problems will be found.
Because systems can be considered hypotheses and not actually existing things it is relevant to ask, "what are the needs of the observer of the system that lead the observer to hypothesize this particular system and not some other system?"
Cybernetics strongly implies an ethical position: People (and other autonomous life forms) can be classified as a peculiar sort of cybernetic system which works best when it sets its own goal states. These are autonomous cybernetic systems, as contrasted with other purposeful systems, like thermostats, which do not set the goal states their actions seek to achieve. The ethical position is that we are led to avoid on principle actions that purposefully (on our part) disregard, frustrate, or replace the goal states held by other autonomous systems. At least it leads us to review our own purposes to the extent that they may overrule the purposes of other autonomous systems.
The relevance of all this to organizational design, diagnosis, and team problem-solving is huge. It results in viewing organizations as purposeful systems containing purposeful parts--people, and groups of people, things, and technology. These parts can act together purposefully, producing systemic behavior that the parts cannot produce on their own but for which they are mutually causally responsible.
Finally, cybernetics can help us to see how the behaviors of parts can help or hinder the effectiveness of a larger system. It can also help us to understand what is happening when purposeful parts interact to create systemic causal patterns whose eventual consequences are not good for the parts. An example of this is the Tragedy of the Commons, in which the combined result of the parts' short-run selfish actions is sufficient to kill them all off later.
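The Tragedy of the Commons pattern can be made concrete with a toy simulation: each herder's locally rational choice to graze on a shared pasture, repeated by everyone, exhausts a resource that regenerates too slowly. Every parameter below (herder count, regeneration rate, demand) is an illustrative assumption, not data.

```python
# A toy Tragedy of the Commons: short-run selfish consumption by each part
# outpaces the slow, bounded regrowth of the shared resource, and the
# combined result eventually destroys the commons for all the parts.

def run_commons(herders=5, pasture=100.0, capacity=100.0,
                regen_rate=0.05, demand=1.5, steps=40):
    """Each step: every herder consumes; the pasture regrows a little, up to capacity."""
    for step in range(steps):
        consumed = min(pasture, herders * demand)            # short-run selfish action
        pasture -= consumed
        pasture = min(capacity, pasture * (1 + regen_rate))  # slow, bounded regrowth
        if pasture <= 0:
            return step + 1, 0.0                             # the commons collapses
    return steps, round(pasture, 1)

print(run_commons())           # five herders: the pasture collapses mid-run
print(run_commons(herders=1))  # one herder: the pasture survives indefinitely
```

No single herder's demand is fatal on its own; the collapse is a systemic consequence for which the parts are mutually causally responsible, exactly the pattern described above.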
We have all seen decisions taken that seemed unmistakably correct, only to result in unintended, and undesirable, side-effects later on, in a different part of the system. These decisions are costly, we feel, viewing them in hindsight.
Now there is a way by which teams within an organization can create a picture of the causal influences and feedback loops created by alternate decisions, and they can set these decisions in motion to see what their results may be, locally, and systemically.
Using one of the most powerful tools of systems practice--system dynamics modeling--these teams can diagram the structures driving the behavior of a system, and then, as a team, construct a visual map that is also a computer program capable of simulating the behavior of key interdependent components of the system over time. This computer map is valuable as a shared "mental model" of the system in question, and it is valuable as an explicit story of everyone's assumptions about key interrelationships among the parts of the system. This model can be developed in a way that anyone who wishes can "try out" a set of decisions during a simulation run.
The simulation program is one of the last steps in a methodology which involves several important steps, from brainstorming the components of a problem, to mapping and linking those components as a map of feedback loop structures, to converting the causal loop map to a computer flow diagram, and then on to simulation, distribution, use, and improvement of the model.
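The kind of stock-and-flow simulation these tools perform can be sketched in a few lines: a stock is integrated step by step, and a flow is computed from the gap between the stock and a goal, closing a feedback loop. The inventory framing, variable names, and numbers below are illustrative assumptions, not a model from any particular tool.

```python
# A minimal stock-and-flow sketch of the kind of feedback-loop model that
# DYNAMO-style system dynamics tools simulate: one stock (inventory),
# one flow (orders) driven by the gap between the stock and its goal.

def simulate_inventory(inventory=20.0, desired=100.0,
                       adjustment_time=4.0, dt=1.0, steps=30):
    """Euler-integrate one balancing loop: orders close the inventory gap."""
    trajectory = [inventory]
    for _ in range(steps):
        orders = (desired - inventory) / adjustment_time  # flow set by the goal gap
        inventory += orders * dt                          # integrate the stock
        trajectory.append(round(inventory, 2))
    return trajectory

print(simulate_inventory())  # the stock rises smoothly toward the desired level
```

Running alternate decisions then amounts to re-running the simulation with different parameters--a longer adjustment time, say, or a changed goal--and comparing the trajectories the team gets back.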
Originally developed during the 1950's as a mainframe program named DYNAMO, the System Dynamics simulation application has evolved in breadth and power on desktop computers, both Mac and Windows. Versions of the application are available from a number of vendors. A popular choice is iThink, from High Performance Systems Inc. of Hanover, NH.
Thanks to both TQM and Business Process Re-engineering, the notion of designing organizations around their production processes has tantalized many decision makers. It has led to large-scale change initiatives, some of which have succeeded, some of which have not. When we look at what has worked and what has not, some patterns seem to emerge.
Successful process design and implementation has at least two key features sometimes missing in less successful initiatives. These are:
Another key aspect of successful process design, taken from systems science, is the notion of designing a solution that accommodates both social and technical systems. This dual cultural and structural focus has emerged in the work of Peter Checkland, a systems researcher and consultant. His contribution, Soft Systems Methodology, has become the template for self-correcting process design among many systems consultancies. A link to Peter Checkland will soon be available.
Organizational culture and behavior are among the most powerful, and ambiguous, forces within an organization. Many change initiatives have stalled when existing values and behavioral patterns have become threatened. Yet there is rarely discussion about these potential factors in the success or failure of expensive change initiatives. Part of the reason these issues are bypassed has been that we have lacked an appropriate approach to understanding the systemic structures maintaining these patterns. However, in recent decades at least one approach has emerged, from the work of Chris Argyris, Donald Schon, Diana Smith, and Robert Putnam, that produces remarkable, enduring results. This approach has acquired the name: Action Science.
Action science advances the notion that our behavior is an automatic device for error-correction, driven by our mental models and the values they contain. The situations we bring forth through behavior reflect the values on which we act. For instance, if I wish to win at poker I will lie, distort, cover up, and obstruct the flow of accurate information (about my hand). If I do this cleverly enough, I win the pot. I hold the value "it is good to win" and I act effectively to bring forth a situation that reflects this value realized: my winnings. This works very well at the poker table, yet it has drawbacks in organizational problem-solving.
To complicate further, we appear to be largely unaware of our mental models and the values they contain. Moreover, when we act in ways that conflict with our espoused values, we are rarely aware of the disconnect, though it seems clearly apparent to others around us.
A result of this disconnect between our espoused values and the values we enact is that we may, upon making an error, act to protect ourselves from embarrassment or threat by distorting or covering up key information about the error. This usually saves us, but it has the unfortunate consequence of also inhibiting just the kind of organizational learning that would be required to examine the conditions that gave rise to the error in the first place. We have acted in a way that has probably hurt the organization, yet we don't usually think of ourselves as saboteurs; we simply do what seems to make sense given the dilemmas of organizational life.
The practice of doing action science is to join in a group process of examining the consequences of our behavior, and sometimes of guessing the values that we actually used in difficult situations. Through role playing alternate behaviors and discussing the results in groups we are able to invent and produce new behavioral patterns less likely to distort information or to evoke a defensive response in another.
Once these new habits take root at the top of an organization, it is not uncommon to see a renaissance of creativity and passion among people throughout.
Some downloadable Systems-oriented papers and simulation models.
References to Aikido and through-the-body learning for enhancing systemic leadership:
Other Systems-Oriented Sites:
About Adaptive Learning Design
Contact: dooley at well.com
Jeff Dooley's Home page
© 1995 Adaptive Learning Design