I'd like to add a little something into the collective knowledge soup here.
(Apologies if it's occurred somewhere in Doug E's work or elsewhere on this
list and I just haven't seen it before.)
I've been thinking about organizations, knowledge and CODIAK.
All of what Eric has written below is lovely, provided folks know what it is
they need to know at a given point in their work process. And there are two
important questions in there that I think unrev tools need to answer very
well:
1) What is knowledge?
2) How do folks know that they know what they need to know in order to make
decision D at point P and be as confident as they can that they are making
the right call?
I think these two questions are important because answering them defines how
helpful an augmenting system can be.
To show what I mean, let me ask two more questions, corollaries of the
first two:
i) If I haven't defined what knowledge means for the system, then I might
just fill my Dynamic Knowledge Repository with everything that passes my
eyes and then some, but how useful is it if all that stuff just sits there?
ii) Let's assume that you've cottoned on that a passive system is, um, dull.
How do you teach the system to bring all the relevant knowledge it can to
the right person/place at the right time, at the right level, etc.?
And in order to answer those questions positively I'm going to introduce
three concepts:
a) Knowledge as Awareness
b) Task/Process Patterns
c) Self-reforming Systems
I'm going to suggest that human knowledge is really all about awareness.
When you talk about what you know it falls into roughly four categories:
A) Your current awareness of your environment (present perception).
B) Facts in your memory that you are pretty sure you can recall accurately
C) Awareness of processes (patterns of praxis) in your memory ...ditto (how,
why, when).
D) Your present range of capability within the environment of which you are
aware; your 'sphere of influence'.
(To say a little bit more about (C): here I will regard a pattern of praxis
as consisting of an ordering of facts and actions/implications in a
particular sequence (how). Facts can operate either as inputs or constraints
to a sequence step.
I then suggest that 'why' is just an inversion of 'how' sequencing.
'When' is the locating of a sequence step/point relative to a more general
sequence (ultimately the top level sequence is time itself).)
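To make that reading of a praxis pattern concrete, here is a minimal sketch in code. Everything in it (the class names, the tea-making example) is my own invention, purely illustrative: 'how' is the forward sequence of actions, 'why' reads the sequence backwards from a step towards the goal, and 'when' locates a step relative to the enclosing sequence.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    facts: list = field(default_factory=list)  # inputs/constraints for this step

@dataclass
class PraxisPattern:
    name: str
    steps: list = field(default_factory=list)

    def how(self):
        """Forward ordering: the sequence of actions."""
        return [s.action for s in self.steps]

    def why(self, action):
        """'Why' as an inversion of 'how': the later steps this one serves,
        read goal-first."""
        idx = next(i for i, s in enumerate(self.steps) if s.action == action)
        return [s.action for s in reversed(self.steps[idx + 1:])]

    def when(self, action):
        """'When' locates a step relative to the enclosing sequence."""
        return next(i for i, s in enumerate(self.steps) if s.action == action)

brew = PraxisPattern("make tea", [
    Step("boil water", facts=["kettle available"]),
    Step("steep leaves", facts=["water near boiling"]),
    Step("pour cup"),
])
print(brew.how())               # the 'how' sequence
print(brew.why("boil water"))   # what boiling the water ultimately serves
print(brew.when("steep leaves"))  # position within the sequence
```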
It seems to me that the systems unrev is seeking to produce are all about
one thing: the extension of human awareness. If you think of the Internet as
a vast DKR right now, it is not difficult to see that it is pretty much
completely passive. It doesn't pre-empt your knowledge needs or wants, and
there's no real extension of your awareness beyond its existing boundaries.
Now, it occurs to me that a great many human enterprises are systematic
processes and that there are frequently effective models of such. These
models are process patterns.
It also occurs to me that assuming you can identify an individual's process
pattern(s), then you should also be able to define not just the constraints
on what he is allowed to see, but also how far afield any active
knowledge-seeking system should go to fulfil the knowledge needs of a
particular part of his process in advance. One should be able to have the
DKR prepare in advance for the exam, if you like. Think how such a system
would enhance (A), (B), (C), and (D) from above. It would effectively move a
lot of the burden on your brain 'out' into the computer system.
This in turn would make the system even more transparent to itself, enabling
greater unburdening in the future as, say, robotics technology advances.
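To make the "prepare in advance for the exam" idea concrete, here is a minimal sketch of a DKR agent that pre-fetches material for the *next* step of a declared process, rather than waiting passively for a query. The class, the step names, and the topics are all hypothetical:

```python
from collections import defaultdict

class ProactiveDKR:
    def __init__(self):
        self.store = defaultdict(list)  # topic -> documents gathered so far
        self.needs = {}                 # process step -> topics it requires

    def declare_step(self, step, topics):
        """Register the knowledge needs of one step of a process pattern."""
        self.needs[step] = topics

    def ingest(self, topic, doc):
        self.store[topic].append(doc)

    def prepare(self, current_step, plan):
        """Gather material for the step *after* the current one, before
        the user reaches it."""
        i = plan.index(current_step)
        if i + 1 >= len(plan):
            return {}
        nxt = plan[i + 1]
        return {t: self.store[t] for t in self.needs.get(nxt, [])}

dkr = ProactiveDKR()
dkr.declare_step("choose chipset", ["chipset compatibility"])
dkr.ingest("chipset compatibility", "Board X supports chipset rev 2.1")
plan = ["draft requirements", "choose chipset"]
briefing = dkr.prepare("draft requirements", plan)
print(briefing)  # the briefing is ready before the chipset step begins
```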
Now in the terms of present computational capabilities such active knowledge
seeking might only consist in actively maintaining the most up-to-date set
of information relative to some part of a process - e.g. the latest range of
compatibilities for some new chipset in some existing range of systems. But
it is also not that difficult to see that once you have specified what it is
that needs to be known *relative to a defined process* for achieving a
specific goal, that the nature of the system's 'active seeking' can be
defined too, and that the formatting of the 'knowledge' can be driven by
this process. For example, relational databases are a well-understood
paradigm for information storage, and the information becomes knowledge when
combined into a process in some way at some point.
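As a small concrete instance of that last point, here is a sketch using an in-memory SQLite database: 'active seeking' reduced to keeping a compatibility table current and handing a process step only the freshest row per system. The chipset and model names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE compat (chipset TEXT, system TEXT, checked TEXT)")
rows = [
    ("NF-100", "Model A", "2001-07-01"),
    ("NF-100", "Model A", "2001-08-15"),  # a newer check supersedes the old
]
con.executemany("INSERT INTO compat VALUES (?, ?, ?)", rows)

def latest_compat(chipset):
    """Return, for each system, only the most recently checked
    compatibility record for the given chipset."""
    cur = con.execute(
        "SELECT system, MAX(checked) FROM compat "
        "WHERE chipset = ? GROUP BY system",
        (chipset,))
    return cur.fetchall()

print(latest_compat("NF-100"))  # only the freshest row survives
```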
Now, it is also not too difficult to see that Design Patterns (a concept
oft touted in software development in recent years) actually hold a great
deal
of the information that a Process Pattern as I have defined it might have.
But it is also clear that approaches to representation or ordering of
relations in many data sets that might be used with Design Patterned
software are not often consistent across the variety of systems out there,
making the 'active seeking' part problematic.
The Semantic Web effort is an attempt to overcome this issue. By providing a
layer of description over system data you get part of the way towards having
the computer take over some of the responsibility for relevance of retrieved
data in response to queries. It can at least ask you whether type T is the
right datatype for operation O in your system and try something else if it
isn't. Thus the computer tacitly extends your range of capability.
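A toy sketch of that exchange, with invented names throughout: a retrieval layer that checks a datum's declared type against what the operation expects, and tries another source when it doesn't fit:

```python
# operation -> the type it requires (a stand-in for a semantic description)
EXPECTED = {"compute_mean": "number"}

sources = [
    {"value": "n/a", "type": "string"},  # first hit has the wrong type
    {"value": 42.0,  "type": "number"},
]

def retrieve_for(operation):
    """Return the first datum whose declared type fits the operation,
    skipping candidates that don't ("try something else if it isn't")."""
    want = EXPECTED[operation]
    for datum in sources:
        if datum["type"] == want:
            return datum["value"]
    return None

print(retrieve_for("compute_mean"))  # the string is skipped in favour of the number
```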
We know from Doug Engelbart's thinking, and other works out there, that
human social and commercial organizations are vastly interwoven complexes of
activity. We also know that most presently operate around a central
capitalist ideology. Governments today are in the unenviable position of
having to balance state economic interests in the world (with underlying
interests from geographically located portions of trans-national
corporations) against the welfare of the people of the world (constraining
trans-national corporate activity).
We also know that this complex tapestry is ever-changing, and now more
rapidly than before.
So it stands to reason that the shape of processes and their components also
needs to shift with these changes.
It also stands to reason that if we want to have some control over our
futures then we need to get some 20,000ft overview of any system and its
environment in order to steer effectively.
At present we do this with rather cumbersome system process re-engineering
methods dependent on current human epistemic limits whose results are often
obsolete before they are applied. So I want to put a strange vision on the
block in respect of what I've written above and see how it runs:
Let's say that like Doug Engelbart we have seen the organization as a system
of interrelated parts. Change or remove a part and the effects ripple
through the system some distance (for good or bad). Let's also assume that
any system can be modelled using some logic-based verification system.
Perhaps if we hook enough models together and make the whole thing
non-monotonic, then not only could we simulate system changes rapidly but we
could also use a chosen simulation (in effect a decision to take some
positive course of action) to drive changes in the organizations' systems
and also to monitor the real efficacy of such a change (Dear Jack P, This
reminds me of something...:-). In effect you could make re-organization of
an organization's system a much more adaptive, tactical and gradual affair
instead of being a vast top-down strategic prolonged organizational
earthquake. Not only that, but the modelling could be used to rapidly
implement remedial system re-organization in respect of some unforeseen
event.
Employees might have to be more flexible over time but they might be able to
avoid dropping in and out of employment so drastically.
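The ripple idea at the heart of this vision can be sketched very simply: model the organization as a directed graph of parts and walk the edges to see what a change touches. The parts and dependencies below are invented for illustration; a real model would of course need the logic-based, non-monotonic machinery described above rather than a bare graph walk:

```python
from collections import deque

depends_on = {                # part -> parts that feed it
    "shipping": ["warehouse"],
    "warehouse": ["procurement"],
    "billing": ["sales"],
}

# invert the map: part -> parts directly affected by it
affects = {}
for part, feeds in depends_on.items():
    for f in feeds:
        affects.setdefault(f, []).append(part)

def ripple(changed):
    """Breadth-first walk of everything downstream of a changed part."""
    seen, queue = set(), deque([changed])
    while queue:
        p = queue.popleft()
        for nxt in affects.get(p, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(ripple("procurement"))  # a change here ripples through two parts
print(ripple("sales"))        # a change here touches only billing
```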
It does require organizations to hook up together to a degree they won't
have done previously.
Now overall, is this a good thing, because people have less time-consuming
research to deal with, or a bad thing, because I have somewhat unwittingly
de-humanized them and let the system take control?
Comments welcomed, as always.
----- Original Message -----
From: "Eric Armstrong" <email@example.com>
Sent: Thursday, August 30, 2001 12:22 AM
Subject: Re: [unrev-II] Ratings and Maleability
> "Garold (Gary) L. Johnson" wrote:
> > ...
> > I went through inability to reason, lack of scientific knowledge,
> > evil people, evil ideas, graft, the corruption of power, and
> > several others searching for the nature of the difficulty.
> > ... the question I am working on now boils down to "how does it
> > happen that a group can make decisions that are worse than the
> > decisions that would be made by nearly anyone in the group?"
> There is a group decision-making experiment that should prove
> instructive. It's where you put several people in a room and give them a
> scenario: you're marooned in a desert. You have a compass, a life raft,
> a bottle of water, salt tablets, a flare, etc. What do you do?
> The people who run the experiments monitor the process to
> see how group decisions are reached. Sometimes a strong
> personality takes over and creates an autocracy. Sometimes
> it's a democratic process. Basically every kind of government
> we know gets represented at one time or another, by some group.
> The lesson that was most intriguing for me was relayed by
> a friend who had either taken part, or monitored, or read
> about the experiments (I don't know which). The moral of
> the story, apparently, was this:
> The groups that had the best chance of survival all
> had one thing in common. It wasn't the group's organization
> that predicted success, but rather this: they excelled
> at IDENTIFYING THE INDIVIDUAL WITH THE MOST RELEVANT
> KNOWLEDGE.
> If they needed to tie a knot in a rope, they found among
> themselves the person best qualified to do it. If they
> needed to decide whether to take the salt tablets (they
> shouldn't), they were able to identify the person with
> the most useful knowledge on the subject, and follow his
> or her advice.
> This principle is reflected in two of my dictums for a
> knowledge-accreting system:
> a) Ratings
> b) Malleability
> Ratings make it possible for the most useful information
> to "float to the top". Malleability makes it possible to
> change one's rating, as one becomes convinced by subsequent
> arguments.
> I recall arguing for one point of view in a philosophy
> class for the duration of the class. I even spent one
> class lecturing for that point of view. The night before
> the finals, I actually read the papers. The first one
> argued persuasively for my point of view. The next two
> papers took that perspective, point by point, and
> destructed it utterly. I was overwhelmingly convinced.
> (And since it was all fresh on my mind, I was able to
> quote paragraphs from my memory on the final.)
> The point, really, is that all the arguing I did for
> one point of view really turned me into an expert on
> why that view was wrong. But up until my epiphany, I
> could never be argued out of it.
> Imagine a similar result in a group decision-making
> scenario. 5 out of 6 people agree that X is right.
> #6 argues persuasively that it isn't and convinces one
> other. Together they convince a 3rd. Eventually, the
> thing snowballs, and everyone agrees.
> Or perhaps #6 has the information, but it is #2 who
> excels at spotting people with authoritative info,
> and others listen to #6 because #2 says that #6 is
> making sense.
> However it works, the end result is the product of
> ratings and malleability.
> One further observation on the subject of malleability
> is that, from the standpoint of *using* the information,
> it is the *result* that is important. All of the
> arguments that led to the result become background.
> So, where the initial series of arguments is a
> hierarchy that proceeds from an initial question down
> through a series of options, with relevant arguments,
> the product of all that is an inverted hierarchy that
> has the ANSWER at the root.
> Under the "answer" comes "what questions does this
> answer respond to" (which ties together those elegant
> options that satisfy more than one criterion). Under
> each question comes, "what other alternatives are
> possible" (which keeps track of options that may be
> of greater use in other circumstances).
> Community email addresses:
> Post message: unrev-II@onelist.com
> Subscribe: unrev-IIfirstname.lastname@example.org
> Unsubscribe: unrev-IIemail@example.com
> List owner: unrev-IIfirstname.lastname@example.org
> Shortcut URL to this page:
> Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/
This archive was generated by hypermail 2.0.0 : Sat Sep 01 2001 - 09:28:41 PDT