Normal Accidents LO12141

K SANDROCK (KSAND@hertz.mech.wits.ac.za)
Thu, 23 Jan 1997 10:39:57 GMT+2

Replying to LO12070 --

Lon Wrote

"Two amplifications on human error
1. System reliability is thetotal of the multiplied reliability of
each component. Thus the system reliability of a machine with three
parts a,b,and c, is (reliability a) x (reliability b) x (reliability
c). Most people, however, will intuitively attempt to add the chances
of failure of each component or each step in a process to establish
the overall odds of failure or accident potential. (reliability a) +
(reliability b) + (reliability c).
2. It is normally very easy to find the reliability of each
component in a machine. It is normally very difficult to find all
the components of an interactive process between groups of people. It
is not math that stands in our way but awareness."

Lon, you have used a fundamental probability law in your illustration,
built on the AND (multiplicative) relationship. You have said that the
system functions IF a AND b AND c all work. IF the three components
are in parallel (your point 2 regarding interactive processes, maybe),
then computing reliability is more complex. In this case we need
another law, the additive relationship described by OR: we add the
probabilities of the mutually exclusive ways the system can work.
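
To make the two laws concrete, here is a small Python sketch (the
function names and figures are my own, purely for illustration):

  from math import prod

  def series(reliabilities):
      # AND law: the system works only if every component works,
      # so the component reliabilities multiply.
      return prod(reliabilities)

  def parallel(reliabilities):
      # OR law: the system works if at least one component works,
      # i.e. it fails only when ALL the components fail.
      return 1.0 - prod(1.0 - r for r in reliabilities)

  print(series([0.9, 0.9, 0.9]))    # about 0.729
  print(parallel([0.9, 0.9, 0.9]))  # about 0.999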

To cut a long story short, let's simplify the circuit to a and b,
leaving out c. The parallel circuit works if: (a AND b work) OR (a
works AND b fails) OR (a fails AND b works). Let R(a) = 0.9 and R(b)
= 0.9. Then your series circuit gives a reliability of (0.9)(0.9) =
0.81, and the parallel circuit (0.9 x 0.9) + (0.9 x 0.1) + (0.1 x 0.9)
= 0.99. However, if we compute the "unreliability" instead of the
reliability of the parallel circuit, we have simply that the system
fails if a AND b both fail: (0.1)(0.1) = 0.01, so reliability = 1 -
0.01 = 0.99 as before.
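
A quick way to check that arithmetic is to enumerate the three
mutually exclusive "works" cases and then apply the complement
shortcut (a throwaway Python sketch, my own notation):

  a, b = 0.9, 0.9               # component reliabilities
  both   = a * b                # a AND b work:     0.81
  a_only = a * (1 - b)          # a works, b fails: 0.09
  b_only = (1 - a) * b          # a fails, b works: 0.09
  print(both + a_only + b_only)     # 0.99

  # Shortcut: the parallel pair fails only if a AND b both fail.
  print(1 - (1 - a) * (1 - b))      # 0.99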

This may be a clue to finding the reliability of an interactive
process between people (series-parallel circuitry?). IMHO it would be
simpler to try to minimize failure than to try to maximize reliability
(success). For example, it would be difficult (impossible, maybe) to
optimize the communication processes involved, but not so difficult to
minimize bad news by eliminating KNOWN barriers to communication.
However, we run into one of Ackoff's 'laws' here: eliminating what you
don't want does not necessarily give you what you do want. But
minimizing bad news (known bugs) at least points us in the right
direction, and we can move from there in continuous-improvement (CI)
steps toward a possible maximization of good news.
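
On the series-parallel thought, one could model an interactive process
as a chain of stages in series, where a stage with redundant channels
(say, two people who can each pass a message on) is itself a parallel
block. A hypothetical sketch with made-up numbers:

  from math import prod

  def series(rs):
      # All stages in the chain must succeed.
      return prod(rs)

  def parallel(rs):
      # A stage fails only if every redundant channel fails.
      return 1.0 - prod(1.0 - r for r in rs)

  sender = 0.9                      # a single sender
  relays = parallel([0.9, 0.9])     # two redundant listeners: 0.99
  print(series([sender, relays]))   # 0.9 x 0.99 = 0.891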

Regards
Keith
Keith Sandrock Systems/Johannesburg Technology (JOHANTEC)
FAX 27-11-339-7997 KSAND@hertz.mech.wits.ac.za
