Great stuff, Peter. It is far-reaching, intellectually. I think the
semantic web *is* the first step, in that it categorizes information in a
reflexive way. I suspect that "reflexive categorization" may well be a
component of, or possibly a deconstruction of, what we call "knowledge".
Once information exists in knowledge-form, it becomes usable in a
variety of ways. Since I suspect the semantic web will take 10-12 years
to come into being (with luck), I expect the procedures you are
talking about to be in widespread use by around 2020 or so.
One of my own favorites is the identification of "isomorphic" systems.
If an agent can run around looking for topic maps that are similar
to the one we're working on -- especially if they have one or two
subject references in common -- then possibly the system can bring
to our attention work that we should be aware of, but would never
have seen as complementary to our own efforts, due to surface differences.
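The agent described here could start with something as simple as set overlap. A minimal sketch, assuming topic maps are reduced to sets of subject-reference identifiers (the function names and the two-subject threshold are illustrative, not from any topic map standard):

```python
# Sketch: flag "isomorphic" topic maps by overlap of subject references.
# Topic maps are modeled simply as sets of subject-reference URIs.

def similarity(map_a: set, map_b: set) -> float:
    """Jaccard similarity of two topic maps' subject references."""
    if not map_a and not map_b:
        return 0.0
    return len(map_a & map_b) / len(map_a | map_b)

def find_related(ours: set, candidates: dict, min_shared: int = 2) -> list:
    """Return candidate maps sharing at least min_shared subjects with
    ours, ranked by overall similarity."""
    hits = [(name, similarity(ours, subjects))
            for name, subjects in candidates.items()
            if len(ours & subjects) >= min_shared]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

Real topic maps would need subject identity resolution before the set operations make sense; this only illustrates the "one or two subject references in common" trigger.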
Peter Jones wrote:
> I'd like to add a little something into the collective knowledge soup
> (Apologies if it's occurred somewhere in Doug E's work or elsewhere on the
> list and I just haven't seen it before.)
> I've been thinking about organizations, knowledge and CODIAK.
> All of what Eric has written below is lovely, provided folks know what it
> is they need to know at a given point in their work process. And there are
> two important questions in there that I think unrev tools need to answer
> effectively.
> 1) What is knowledge?
> 2) How do folks know that they know what they need to know in order to make
> decision D at point P, and be as confident as they can that they are making
> the right call?
> I think these two questions are important because answering them
> defines how
> helpful an augmenting system can be.
> To show what I mean, let me ask two more questions, corollaries of the first:
> i) If I haven't defined what knowledge means for the system, then I can
> just fill my Dynamic Knowledge Repository with everything that passes
> before my eyes and then some, but how useful is it if all that stuff just
> sits there?
> ii) Let's assume that you've cottoned on that a passive system is, um,
> limited. How do you teach the system to bring all the relevant knowledge it
> can to the right person/place at the right time at the right level, etc.?
> And in order to answer those questions positively I'm going to introduce
> some concepts:
> a) Awareness
> b) Task/Process Patterns
> c) Self-reforming Systems
> I'm going to suggest that human knowledge is really all about awareness.
> When you talk about what you know, it falls into roughly four categories:
> A) Your current awareness of your environment (present perception).
> B) Facts in your memory that you are pretty sure you can recall
> (what, where).
> C) Awareness of processes (patterns of praxis) in your memory ...ditto
> D) Your present range of capability within the environment of which
> you are
> aware; your 'sphere of influence'.
> (To say a little bit more about (C): Here I will regard a pattern of praxis
> as consisting of an ordering of facts and actions/implications in a
> sequence (how). Facts can operate either as inputs or constraints to a
> sequence.
> I then suggest that 'why' is just an inversion of 'how' sequencing.
> 'When' is the locating of a sequence step/point relative to a more
> encompassing sequence (ultimately the top-level sequence is time itself).)
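The pattern-of-praxis definition above can be sketched as a small data structure: an ordered sequence of steps (the 'how'), each with facts as inputs or constraints, with 'why' read off by inverting the sequence and 'when' as a step's position within it. All names here are illustrative inventions, not anything from the thread:

```python
# Sketch of a "pattern of praxis": steps in order (how), facts as
# inputs or constraints, 'why' as an inversion of 'how', 'when' as a
# step's location within the sequence.

from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    inputs: list = field(default_factory=list)       # facts consumed
    constraints: list = field(default_factory=list)  # facts that bound it

@dataclass
class ProcessPattern:
    steps: list                                      # ordered: the 'how'

    def why(self, index: int) -> list:
        """'Why' as an inversion of 'how': what a step leads toward,
        i.e. the actions that follow it in the sequence."""
        return [s.action for s in self.steps[index + 1:]]

    def when(self, action: str) -> int:
        """'When' as the step's position within the enclosing sequence."""
        return next(i for i, s in enumerate(self.steps)
                    if s.action == action)
```

This is only one reading of "'why' is just an inversion of 'how' sequencing"; other readings (e.g. tracing preconditions backwards) would be equally defensible.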
> It seems to me that the systems unrev is seeking to produce are really
> about the extension of human awareness. If you think of the Internet as a
> vast DKR right now, it is not difficult to see that it is pretty much
> completely passive. It doesn't pre-empt your knowledge needs or wants;
> there's no real extension of your awareness beyond its existing bounds.
> Now, it occurs to me that a great many human enterprises are
> processes, and that there are frequently effective models of such. These
> models are process patterns.
> It also occurs to me that assuming you can identify an individual's process
> pattern(s), then you should also be able to define not just the restrictions
> on what he is allowed to see, but also how far abroad any active
> knowledge-seeking system should go to fulfil the knowledge needs of a
> particular part of his process in advance. One should be able to have the
> DKR prepare in advance for the exam, if you like. Think how such a system
> would enhance (A), (B), (C), and (D) from above. It would effectively
> move a lot of the burden on your brain 'out' into the computer system.
> This in turn would make the system even more transparent to itself,
> allowing greater unburdening in the future as, say, robotics technology
> matures.
> Now, in terms of present computational capabilities, such active
> seeking might only consist in actively maintaining the most up-to-date set
> of information relative to some part of a process - e.g. the latest
> range of
> compatibilities for some new chipset in some existing range of
> systems. But
> it is also not that difficult to see that once you have specified what
> it is
> that needs to be known *relative to a defined process* for achieving a
> specific goal, the nature of the system's 'active seeking' can be
> defined too, and that the formatting of the 'knowledge' can be driven by
> this process. For example, relational databases are a well-understood
> paradigm for information storage, and the information becomes
> knowledge when
> combined into a process in some way at some point.
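Peter's idea of active seeking driven by a defined process can be sketched concretely: each process step declares the knowledge it needs, and the system keeps those slots of the repository current before the step is reached. Everything here (class names, the chipset example topic) is an invented illustration of that idea, not an actual unrev design:

```python
# Sketch of process-driven "active seeking": steps declare knowledge
# needs; the seeker fetches the freshest answer for each need in
# advance, so the DKR "prepares for the exam".

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ProcessStep:
    name: str
    needs: List[str]                 # knowledge topics this step requires

@dataclass
class ActiveSeeker:
    sources: Dict[str, Callable[[], str]]   # topic -> lookup function
    repository: Dict[str, str] = field(default_factory=dict)

    def prepare(self, step: ProcessStep) -> Dict[str, str]:
        """Gather the latest answer for each declared need in advance."""
        for topic in step.needs:
            if topic in self.sources:
                self.repository[topic] = self.sources[topic]()
        return {t: self.repository.get(t, "<unknown>") for t in step.needs}
```

A real system would replace the lookup lambdas with queries against described (semantic-web-style) data; the point is only that the *process* specification tells the seeker what to keep fresh.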
> Now, it is also not too difficult to see that Design Patterns (an
> oft-discussed concept in software development in recent years) actually
> hold a great deal of the information that a Process Pattern as I have
> defined it might need.
> But it is also clear that approaches to representation or ordering of
> relations in many data sets that might be used with Design Patterned
> software are not often consistent across the variety of systems out there,
> making the 'active seeking' part problematic.
> The Semantic Web effort is an attempt to overcome this issue. By
> providing a
> layer of description over system data, you get part of the way towards
> letting the computer take over some of the responsibility for the relevance
> of data in response to queries. It can at least ask you whether type T is
> the right datatype for operation O in your system and try something else
> if it
> isn't. Thus the computer tacitly extends your range of capability.
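The datatype check described here is easy to make concrete. A toy sketch (the operation names and accepted-type table are invented for illustration; a real semantic layer would read such constraints from schema descriptions rather than a hard-coded dict):

```python
# Sketch: a thin semantic layer that knows which datatypes each
# operation accepts, and can say whether type T fits operation O,
# suggesting alternatives when it does not.

OPERATION_TYPES = {
    "sum": {"integer", "decimal"},
    "concat": {"string"},
}

def check_type(operation: str, datatype: str):
    """Return (ok, alternatives): whether datatype is valid for the
    operation, and which accepted types remain if it is not."""
    accepted = OPERATION_TYPES.get(operation, set())
    return (datatype in accepted, sorted(accepted - {datatype}))
```

So instead of failing silently, the system can come back with "T isn't right for O; would one of these do?" - the tacit extension of capability Peter mentions.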
> We know from Doug Engelbart's thinking, and other works out there, that
> human social and commercial organizations are vastly interwoven complexes
> of activity. We also know that most presently operate around a central
> capitalist ideology. Governments today are in the unenviable position of
> having to balance state economic interests in the world (with competing
> interests from geographically located portions of trans-national
> corporations) against the welfare of the people of the world (affected by
> trans-national corporate activity).
> We also know that this complex tapestry is ever-changing, and now more
> rapidly than before.
> So it stands to reason that the shape of processes and their
> components also
> needs to shift with these changes.
> It also stands to reason that if we want to have some control over our
> futures then we need to get some 20,000ft overview of any system and its
> environment in order to steer effectively.
> At present we do this with rather cumbersome system process analysis
> methods, dependent on current human epistemic limits, whose results are
> obsolete before they are applied. So I want to put a strange vision on the
> block in respect of what I've written above and see how it runs:
> Self-reforming systems.
> Let's say that like Doug Engelbart we have seen the organization as a
> system of interrelated parts. Change or remove a part and the effects
> ripple through the system some distance (for good or bad). Let's also
> assume that any system can be modelled using some logic-based verification
> system.
> Perhaps if we hook enough models together and make the whole thing
> non-monotonic, then not only could we simulate system changes rapidly
> but we
> could also use a chosen simulation (in effect a decision to take some
> positive course of action) to drive changes in the organization's
> processes, and also to monitor the real efficacy of such a change (Dear
> Jack P, this reminds me of something...:-). In effect you could make
> re-organization of an organization's system a much more adaptive, tactical
> and gradual affair, instead of being a vast top-down strategic prolonged
> organizational earthquake. Not only that, but the modelling could be used
> to rapidly implement remedial system re-organization in respect of some
> crisis.
> Employees might have to be more flexible over time but they might be
> able to
> avoid dropping in and out of employment so drastically.
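The ripple metaphor at the heart of this vision has a very simple computational core: model the organization as a graph of dependent parts and ask which parts a proposed change would reach before making it. A toy sketch (the graph representation is an assumption; real organizational models would be far richer):

```python
# Toy sketch of the "ripple": given which parts depend on which,
# compute everything transitively affected by changing one part.

def ripple(dependents, changed):
    """Return every part reachable from `changed`. `dependents` maps
    a part to the list of parts that depend on it."""
    affected, frontier = set(), [changed]
    while frontier:
        part = frontier.pop()
        for dep in dependents.get(part, []):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)
    return affected
```

Running such a reachability check over linked models of several organizations is one very small piece of what "simulate system changes rapidly" would require, but it shows where simulation-before-reorganization starts.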
> It does require organizations to hook up together to a degree they have
> not done previously.
> Now overall, is this a good thing, because people have less tedious
> research to deal with, or a bad thing because I have somewhat
> de-humanized them and let the system take control?
> Comments welcomed, as always.
> ----- Original Message -----
> From: "Eric Armstrong" <email@example.com>
> To: <unrev-II@yahoogroups.com>
> Sent: Thursday, August 30, 2001 12:22 AM
> Subject: Re: [unrev-II] Ratings and Maleability
> > "Garold (Gary) L. Johnson" wrote:
> > >
> > > ...
> > > I went through inability to reason, lack of scientific knowledge,
> > > evil people, evil ideas, graft, the corruption of power, and
> > > several others searching for the nature of the difficulty.
> > >
> > > ... the question I am working on now boils down to "how does it
> > > happen that a group can make decisions that are worse than the
> > > decisions that would be made by nearly anyone in the group?"
> > >
> > There is a group decision-making experiment that should prove
> > instructive.
> > It's where you put several people in a room and give them a
> > scenario:
> > You're marooned in a desert. You have a compass, a life raft,
> > a bottle of water, salt tablets, a flare, etc. What do you
> > do?
> > The people who run the experiments monitor the process to
> > see how group decisions are reached. Sometimes a strong
> > personality takes over and creates an autocracy. Sometimes
> > it's a democratic process. Basically every kind of government
> > we know gets represented at one time or another, by some
> > group.
> > The lesson that was most intriguing for me was relayed by
> > a friend who had either taken part, or monitored, or read
> > about the experiments (I don't know which). The moral of
> > the story, apparently, was this:
> > The groups that had the best chance of survival all
> > had one thing in common. It wasn't the group's organization
> > that predicted success, but rather this: They excelled
> > at IDENTIFYING THE INDIVIDUAL WITH THE MOST RELEVANT
> > EXPERTISE.
> > If they needed to tie a knot in a rope, they found among
> > themselves the person best qualified to do it. If they
> > needed to decide whether to take the salt tablets (they
> > shouldn't), they were able to identify the person with
> > the most useful knowledge on the subject, and follow his
> > or her advice.
> > This principle is reflected in two of my dictums for a
> > knowledge-accreting system:
> > a) Ratings
> > b) Malleability
> > Ratings make it possible for the most useful information
> > to "float to the top". Malleability makes it possible to
> > change one's rating, as one becomes convinced by subsequent
> > arguments.
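The two dictums Eric quotes can be sketched in a few lines: ratings are kept per reader so a later rating overwrites an earlier one (malleability), and items are ordered by their current average (the most useful information floats to the top). The class names and averaging scheme are illustrative assumptions, not anything from the thread:

```python
# Sketch of ratings with malleability: revisable per-reader scores,
# items ordered by current average.

class RatedItem:
    def __init__(self, text: str):
        self.text = text
        self.ratings = {}              # reader -> current score

    def rate(self, reader: str, score: int):
        self.ratings[reader] = score   # a later call overwrites: malleability

    def average(self) -> float:
        if not self.ratings:
            return 0.0
        return sum(self.ratings.values()) / len(self.ratings)

def ranked(items):
    """Most useful information 'floats to the top'."""
    return sorted(items, key=RatedItem.average, reverse=True)
```

Keeping one revisable score per reader, rather than appending votes, is exactly what lets a reader convinced by later arguments change their mind without their earlier rating lingering in the tally.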
> > I recall arguing for one point of view in a philosophy
> > class for the duration of the class. I even spent one
> > class lecturing for that point of view. The night before
> > the finals, I actually read the papers. The first one
> > argued persuasively for my point of view. The next two
> > papers took that perspective, point by point, and
> > demolished it utterly. I was overwhelmingly convinced.
> > (And since it was all fresh on my mind, I was able to
> > quote paragraphs from my memory on the final.)
> > The point, really, is that all the arguing I did for
> > one point of view really turned me into an expert on
> > why that view was wrong. But up until my epiphany, I
> > could never be argued out of it.
> > Imagine a similar result in a group decision-making
> > scenario. 5 out of 6 people agree that X is right.
> > #6 argues persuasively that it isn't and convinces one
> > other. Together they convince a 3rd. Eventually, the
> > thing snowballs, and everyone agrees.
> > Or perhaps #6 has the information, but it is #2 who
> > excels at spotting people with authoritative info,
> > and others listen to #6 because #2 says that #6 is
> > making sense.
> > However it works, the end result is the product of
> > ratings and malleability.
> > One further observation on the subject of malleability
> > is that, from the standpoint of *using* the information,
> > it is the *result* that is important. All of the
> > arguments that led to the result become background.
> > So, where the initial series of arguments is a
> > hierarchy that proceeds from an initial question down
> > through a series of options, with relevant arguments,
> > the product of all that is an inverted hierarchy that
> > has the ANSWER at the root.
> > Under the "answer" comes "what questions does this
> > answer respond to" (which ties together those elegant
> > options that satisfy more than one criterion). Under
> > each question comes, "what other alternatives are
> > possible" (which keeps track of options that may be
> > of greater use in other circumstances).
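Eric's inverted hierarchy, with the ANSWER at the root, the questions it responds to beneath it, and the rejected alternatives kept under each question, might be modeled as a small tree of records. The field names are illustrative guesses at his structure:

```python
# Sketch of the inverted hierarchy: ANSWER at the root, then the
# questions it answers, then the alternatives retained under each
# question for other circumstances.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    alternatives: List[str] = field(default_factory=list)  # options not chosen

@dataclass
class Answer:
    text: str
    questions: List[Question] = field(default_factory=list)  # what it responds to
```

For example, the salt-tablet decision from the survival scenario would sit at the root as "don't take them", with the question beneath it and "take them" preserved as the alternative, so that the background arguments stay reachable but out of the way.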
This archive was generated by hypermail 2.0.0 : Wed Sep 12 2001 - 22:54:32 PDT