Normal Accidents LO12048 - Joe's Jottings #68
Fri, 17 Jan 97 16:53:42 -0800

Humpty-Dumpty sat on a wall.

Humpty-Dumpty had a great fall.

All the king's horses and all the king's men,

Couldn't put Humpty-Dumpty together again. (Nursery rhyme)


Things should be as simple as possible, but not simpler.

(Albert Einstein)


For a year now, I've been carrying around an article from the January 22,
1996 issue of _The New Yorker_ called "Blowup" by Malcolm Gladwell.
Gladwell writes about some well-known major disasters such as Three Mile
Island, the Challenger tragedy, and various airplane crashes. After each
event, we go through intensive investigations, trying to determine what
happened and then to eliminate that cause so that we can prevent the
disaster from happening again, at least for that reason.

This process sounds pretty reasonable to my quality ears, but Gladwell
offers a different view. "Over the past few years," he writes, "a group
of scholars has begun making the unsettling argument that the rituals that
follow things like plane crashes or the Three Mile Island crisis are as
much exercises in self-deception as they are genuine opportunities for
reassurance. For these revisionists, high-technology accidents may not
have clear causes at all. They may be inherent in the complexity of the
technological systems we have created."

Remember, this was written six months before TWA flight 800 went down.
All the king's people are still trying to put that one together again.

The scholars that Gladwell mentions are sociologists. They are finding
evidence that "the potential for catastrophe is ... found in the normal
functioning of complex systems..." and that "... accidents are not easily
preventable."  Yale University sociologist Charles Perrow calls these
"normal accidents."

Gladwell also discusses the phenomenon of "risk homeostasis." He refers
to work done by Canadian psychologist Gerald Wilde who says that human
beings seem to compensate for lower risks in one area by taking larger
risks in others. Gladwell quotes studies that show that equipping cars
with better braking systems actually increased accidents because the
drivers went faster and tailgated more. Likewise, more pedestrians are
hit in crosswalks than in unmarked areas because pedestrians tend to be
less careful in crosswalks.

And, of course, there is one factor that Gladwell doesn't mention: the
issue of unintended consequences. The classic case here is that of the
air bag. There are risk homeostasis effects because some people don't
wear their seatbelts in air-bag-equipped cars. And there are unintended
consequences when the deployment speed of the air bags saves the lives of
normal-sized people but injures small people and kills babies. We solve
one problem and cause others. Not nice.

I've been carrying around this article for a year because I didn't know
what to do with it. I see its message as being fatalistic, discouraging.
I certainly see fulfillment of the premise every day. Information systems
are arguably the most complex systems ever devised, and they fail
regularly in all sorts of "interesting" ways. I preach information
systems quality and strong project management as ways of making the
systems less "interesting" and more boringly reliable. But they still
fail despite the significant efforts of very smart and caring engineers.
But I'm not ready yet to give in to a "what will be, will be" sort of
fatalism.

A few weeks ago, my daughter Joni gave me a copy of the February/March
1996 issue of MIT's _Technology Review_. It contains an article by
political scientist Eric Brende entitled "Technology Amish Style." Brende
lived among the Amish for over a year and a half, and, while he isn't at
all ready to adopt their religion, he has come to really appreciate the
way they fully incorporate technology into their individual and communal
lives.

I always had the uneducated belief that the Amish simply froze their level
of technology someplace in the 18th or 19th century. That's not at all
true, according to Brende. The Amish in fact do accept new technology,
but they do it according to a few basic rules.

First, the technology must be understandable by ALL members of
the group. They refuse to have specialists because they believe that
widely-shared knowledge helps keep their society stable and gives everyone
the ability to contribute fully informed opinions on all decisions. They
become jacks of all trades and masters of all of them because the trades
are few, are well-valued by everyone, and are completely integrated with
all aspects of their work and personal lives.

Second, they test all technological proposals against the basic values of
their society to see how the technologies might fit into their way of
life. Brende says, for example, that "they have made an active decision
not to avail themselves of (electricity). Installing electricity would
only permit them to plug in clever contraptions that could, at the push of
a button, shift much skilled work away from them, reducing the need for
shared know-how and the opportunities for community-building."

Innovations, when proposed, are carefully tested for social impact.
Brende gives the example of the decision to use a nearby pay telephone to
facilitate sales of produce to customers. During a six-month test, the
community debated two sides of the issue: some people urged the use of the
telephone to maintain competitive advantage, while others warned of the
loss of the face-to-face intimacy and contact inherent in transactions
conducted without the telephone. After the test, the community decided to
use the telephone, but only in specified, limited situations.

As I sit here facing an upgrade from Windows 3.1 to Windows NT and having
to install another gigabyte of memory to do that, and having to learn a
whole bunch of new software so that I can do tomorrow what I can do today,
I appreciate Amish wisdom.

OK, so I've looked at the two poles: complexity that causes random
disasters on the one hand, versus technology tightly constrained within
the framework of community values on the other. I'm not ready to accept
either extreme.

Usually, when logic runs out of answers, the decisions come back to
people. We may someday have a reductionist logical understanding of the
chemical and electrical processes that make up human biology, but, for the
foreseeable future, we have the pleasure of having to deal with the
messiness of emotions, of laughter and tears, of fear and peace, of anger
and delight. Decisions of risk are made in the gut, not in the brain.

Most of us are blessed with the inability to calculate probabilities on
the fly. We don't have to be burdened with the exact data about how
driving is more dangerous than flying, or about how stupid it is to drive
faster if we wear a seat belt. We do what we do because we enjoy it. Our
definitions of pleasure are learned as children. We are now simply
exploring those well-worn patterns.

Complexity, to me at least, is pleasure. I marvel at what computers can
do, even if I can't logically justify all of it. I am willing to
take the risk of flying on a 747, partially because I am thrilled by the
intricate technological ballet of plane, pilot, and air traffic control
that hurtles 400 people through the sky at 600 mph.

But the question the Amish ask is the right one: how does a specific
technology affect the values of the community? We usually don't ask
ourselves this question when we build information systems. We don't
question the risks of adding complexity. We rarely are willing to skip a
generation of software or hardware for fear of instant obsolescence and
career failure. We make our personal complexity catastrophes inevitable.

Where have you seen values applied to information systems? What examples
do you have of the failures of complexity or of the successes of
simplicity?

I hope you are off to a wonderful 1997.



Learning-org -- An Internet Dialog on Learning Organizations For info: <> -or- <>