Normal Accidents LO12136 -Comments
Wed, 22 Jan 97 16:36:20 -0800

Replying to LO12048 --

More great comments on the Jottings #68.


______________________________ Forward Header __________________________________
Subject: Re: Comments on Joe's Jottings #68
Author: HUDI PODOLSKY at HP-PaloAlto,om13
Date: 1/21/97 10:09 AM

All of the comments on this jot, along with the jot itself have been
interesting. It got me thinking about why we are all so intrigued by
failure. For one thing, it's clear to me that failure is where I learn.
When I'm only doing things I can already do successfully, I'm not in a
position to learn. So there is this personal experiance, that every
infant has, of the value and fascination of failure. And then opposed to
it, there is the common institutional attitude towards failure. It's bad.
Prevent it. Hide it. Etc.

Watch the kindergarteners. They are still, many of them, cheerful and
excited about their failures in September. But by the end of the year,
the great majority of them have become timid, afraid to fail, ashamed of
failure. Perhaps this is our society's way of regulating the pace of
change. If we had schools that nurtured innovation and risk taking by
supporting failure, perhaps we'd have a society that created more
innovation than we could digest. Maybe our institutions are a needed
counterbalance for Homo sapiens' incredible inventiveness. Perhaps at
some level we know that if we don't create social taboos about failures
and institutions that support those taboos, we'd innovate ourselves to
death as a species.

Or perhaps we'd innovate ourselves into some astounding, unimaginably
interesting future...



As usual, you've put out some very thought-provoking material. I
especially resonate with Phillip Capper's response. However, I (with
my analytic hat on today) have another direction for these few
comments:
>Gladwell also discusses the phenomenon of "risk homeostasis." He
>refers to work done by Canadian psychologist Gerald Wilde who says
>that human beings seem to compensate for lower risks in one area by
>taking larger risks in others. Gladwell quotes studies that show
>that equipping cars with better braking systems actually increased
>accidents because the drivers went faster and tailgated more.
>Likewise, more pedestrians are hit in crosswalks than in unmarked
>areas because pedestrians tend to be less careful in crosswalks.

>And, of course, there is one factor that Gladwell doesn't mention,
>the issue of unintended consequences. The classic case here is that
>of the air bag. There are risk homeostasis effects because some
>people don't wear their seatbelts in air bag equipped cars. And,
>there are unintended consequences when the deployment speed of the
>air bags saves the lives of normal-sized people but injures small
>people and kills babies. We solve one problem and cause others. Not

I'd like to know more about these studies before I draw too many
conclusions from them. For example, I could also envision more
pedestrians being hit in crosswalks because more pedestrians actually
use crosswalks than unmarked areas (this certainly seems dependent
upon the local culture). So, I'd be curious as to rates of accidents
instead of counts of accidents, and then I'd be curious as to how the
causality was inferred.
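
The counts-versus-rates point can be made concrete with a toy
calculation. The numbers below are invented purely for illustration
(they come from no study); they simply show how a location with more
total accidents can still be the safer one per crossing:

```python
# Hypothetical, illustrative figures only -- not data from any study.
crossings_marked = 9000      # assumed crossings made at marked crosswalks
crossings_unmarked = 1000    # assumed crossings made at unmarked locations
hits_marked = 18             # assumed pedestrian accidents at crosswalks
hits_unmarked = 4            # assumed pedestrian accidents elsewhere

rate_marked = hits_marked / crossings_marked        # accidents per crossing
rate_unmarked = hits_unmarked / crossings_unmarked  # accidents per crossing

# Marked crosswalks show more accidents in total...
assert hits_marked > hits_unmarked
# ...yet a lower accident rate per crossing, because far more people use them.
assert rate_marked < rate_unmarked
```

The raw counts alone would suggest crosswalks are more dangerous; the
rates, given these assumed exposures, say the opposite.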

As for air bags, I just saw a statement from a certain car
manufacturer claiming that one of the problems with air bags was that
the US government required manufacturers to make them able to restrain
unbelted passengers. That increased the required charge and
contributed to the problems with smaller passengers. If that be true,
then perhaps people (in general) haven't increased their risks by not
wearing belts due to the presence of an air bag; indeed, I thought I
had seen statistics which indicated increased use of seat belts
(perhaps due to increased legal obligation to do so).

Even if the circumstances are as conjectured, perhaps the people
haven't increased their overall risks but 'merely' offset some of the
gains. That certainly _seems_ reasonable.

I'm probably especially picky today, because I was listening to a
piece on NPR about the Dupont murder case this morning. Apparently
Dupont's defense will be based on an insanity plea. The commentator
noted that, should this succeed, it would likely fuel the argument
that the insanity plea is a rich person's
defense and that it is freeing hordes of criminals, when, in fact, it
is used in about 0.25% of US criminal cases (as I recall) with about a
33% success rate (which works out to roughly one successful insanity
defense per 1,200 cases), and the overwhelming majority of such
defenses are attempted by, as I recall, people who get public
defenders, not rich people.

This doesn't necessarily change the implications of your posting.
Things are getting more complex. We are, in general, seemingly less
able to understand or control all of the ramifications of our designs
(even though we've gotten better at them). Our immediate reaction
after catastrophes may not be the appropriate one. I just want to
make sure we pay suitable attention to analyzing the situation
rather than going with what "seems obvious".

On a related note, at a conference on computers in London back in 1972
or 1973, I was speaking with a person from the British National
Physical Laboratory (~ the US NIST) regarding AI and "intelligent"
applications of computing. I conjectured that a key difference
between human and automated work was that we were willing to let
humans fail at tasks, but we expected computerized solutions to be
correct always. He replied that he fully expected people to begin
allowing computers the same freedom to fail at tasks (implying, among
other things, that we would need and have the same defensive
mechanisms in place that we do --- or perhaps should --- for people).



Bill Harris
R&D Productivity Department, Lake Stevens Division
Hewlett-Packard Co., M/S 330, 8600 Soper Hill Road, Everett, WA 98205-1298
domain:
phone: (206) 335-2200   fax: (206) 335-2828


Learning-org -- An Internet Dialog on Learning Organizations For info: <> -or- <>