Judging from my mail, I'm afraid my recent Intro post to the list left
too much unsaid, and I have been asked to elaborate.
The research I referred to is being performed by a (currently)
loose-knit group known as the Autognomics Institute, a recently formed
non-profit research organization to which I am a part-time contributor.
Although I don't have the space to fully explain the science of
"Autognomics", I can address a point of recent interest from an Autognomic
point of view. I have also recently posted responses to the "Data
Selection" thread along the same lines.
The term "mental models" is often interpreted to mean internal
representations of an external world used by a living system (organism or
organization) as a basis for thought and action. To my knowledge, it
originates in a psychological paradigm that still prevails today, and has
been adopted for use by various fields such as human-computer interaction,
information sciences, artificial intelligence, and apparently by
management consultants as well. Autognomics, along with certain groups
within Artificial Intelligence (e.g., see Rod Brooks and Animats),
psychology (e.g., work often referenced back to J.J. Gibson), and biology
(e.g., the autopoiesis of Maturana and Varela), disputes (to various
degrees) the value of the mental model concept.
One problem with theories of learning and action that presume mental
models ("representationalism") is that they often have little to say about
*the process* by which such models come to exist and are modified,
particularly without requiring a "privileged" or omniscient view of the
"real" world on which a particular person's or organization's mental model
is based. Autognomics (and others), on the other hand, begins with the
presumption that there is no *objective* world to be modeled, but rather
that the system learns by "constructing" its world (not just a model of
it), a world captured in and distributed throughout the network of acts
that compose its knowledge (hence the term "constructivism" is sometimes
used).
Many of the methods and concepts that have been used to elicit
organizational "mental models" are implicitly derived from a
representational point of view. As a result, they inherit the same
difficulties, namely the problems of choosing what to include in the
"model" and of understanding how the model is modified through experience.
The *ladder of inference*, which seems to be getting a lot of use
on the list lately, is a good example. One problem with this ladder is
that like any other ladder, it has a top and bottom. There is no
*process*. How does one know when the selected data is "enough"? What
process leads to previously unknown (i.e., unnamed) data eventually being
known and selected? While I don't think Senge proposed anything like the
ladder, I still think he may have unwittingly suggested a concept of
mental models that often leads to contradiction (or at least, lack of
synergy) with his otherwise laudable emphasis on process. While the term
"mental model" is not *necessarily* bad, it opens the door (through which
many enter) to a "realist" epistemology that has never been very
successful in explaining, let alone recognizing, the *process* by which
the "objects" it takes for granted (the "components" from which "models"
are constructed) come to exist in the mind, and how they are instantiated
and used in formulating action.
The science of Autognomics distinguishes itself within the
constructivist domain by the *process* it proposes for how learning occurs.
It is strictly anti-representational in that all knowledge (i.e., that
which is learned) is in the form of *acts*. Unlike the ladder of
inference, which begins with perception and ends with action, we propose
that there is *no* perception except as preceded by (and as a result of)
action. I suppose that if I had to give an alternative to "mental model",
I would suggest "habitual acts", both physical and intellectual. While it
may seem like a silly play on words (and nearly is, when "mental model" is
used by the more insightful (IMHO) practitioners of systems thinking), it
forces one to tie knowledge to action and undercuts any tendency to take
certain objects/concepts as "given".
The Autognomics view of learning depends largely on the theory of
semiosis proposed around the turn of the 20th century by the
American philosopher and logician Charles S. Peirce. Semiosis, or the
sign process, describes how "signs" (which, roughly, are the objects and
concepts thought to make up mental models) are created, modified, and
potentially abandoned through use. Peircean theory is enjoying renewed
interest after being largely ignored for the better part of the century,
and I can only hope this will lead to greater interest in lines of
thinking like Autognomics.
Well, I suspect that this is far more elaboration than most on the
list care to read. I encourage anyone interested in more detail to
contact me or the Institute directly if you don't want to bother the whole
list, although I intend to hang around on the list and get in a few words
when I have the time. Since the Institute is just getting started as an
"entity", our current emphasis is on developing relationships with people
who are interested in contributing to or using the results of our
research.
email@example.com (203) 599-3910x2291
Norm Hirst, Autognomics Institute, Mystic CT (203) 536-8585
Host's Note: I reformatted this message to 75 cols wide; wider messages
can be hard to read in some email programs.
-- Rick Karash, firstname.lastname@example.org, host for learning-org