Painfully longish. Sorry.
From: Gil Regev
The problem with categories is that they tend to rigidify the knowledge repository, in which case I wonder how dynamic it will be. I was at a conference on trans-disciplinarity a few months ago, and one of the presenters said that the real threat to trans-disciplinarity was the hardening of categories. In a collaborative software project (not Knoware) we did in our lab, we moved from relying on categories to find information to using a good search tool. We figured that whatever categorization scheme we could come up with would be obsolete pretty fast. Having said that, I totally agree that categories are essential, but they should be implemented in a way that preserves the dynamics of the system.
In Knoware, relationships have no meaning for the software but they do have meaning to the users. The search tool searches for text in relationships as well as in concepts.
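To make the Knoware point concrete, here is a minimal sketch of a search that matches free text against both concepts and the labels on relationships between them. The data and function names are my own illustrative assumptions, not Knoware's actual implementation.

```python
# Hypothetical sketch: search covers relationship labels as well as concepts.
# The sample concepts and relationships below are invented for illustration.

concepts = {
    "OHS": "open hyperdocument system",
    "DKR": "dynamic knowledge repository",
}

# (source concept, relationship label, target concept)
relationships = [("OHS", "window into", "DKR")]

def search(term):
    """Return concept names and relationship descriptions matching `term`."""
    hits = [name for name, desc in concepts.items()
            if term in desc or term in name]
    hits += [f"{a} -[{label}]-> {b}"
             for a, label, b in relationships if term in label]
    return hits
```

A query like `search("window")` finds nothing in the concepts themselves but still surfaces the OHS/DKR relationship, which is exactly the behavior Gil describes: the relationships carry meaning for the users even though they carry none for the software.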
Gil has a good point here. In fact, George Lakoff wrote a whole book on this (_Women, Fire, and Dangerous Things_). Gil's point, combined with the meeting yesterday, led me to ponder a couple of issues, which I shall do out loud, even as I type...
My take on the meeting yesterday was this: a lot of back-and-forth without a clearly defined ontology on which the banter was founded. Ultimately, even the definition of "document" was up for grabs, not to mention "node." I believe that we waste an enormous amount of human intellectual energy doing battle while not even on the same page. If that sounds like a criticism of one aspect of yesterday's meeting, it is meant to be. OTOH, the meeting was indeed valuable, largely because Eugene did a masterful job of summarizing the Use Case issue and presenting it -- something that needed (and continues to need) to be done. I respectfully submit that all discussions be preceded by the development of a consensus ontology. <side note>achievement of a consensus ontology should be a goal of this list</side note>
Gil points out what Lakoff and others have been saying: once you get to the ontological level of "category", consensus begins to fall apart. Gil uses the term "rigidify." That works for me, but there are other points of view as well. At issue is the fact that we all categorize the world in our own way. Production-line education tends to enforce standardization in that arena, but we are still individuals with our own non-linearities and so forth.
So, just what IS a mother to do? An OHS/DKR is, at root, a vision of a universal tool for collaborative evolutionary epistemology (that's my take on it, your mileage may vary). To be universal, the implication is that everybody has the chance to contribute (both give and take) with the "appearance" of being on the same page as everyone else. Nice trick, if you can do it.
As it turns out, Adam, I, Howard, and Peter Yim all work for a company that is working to render this very capability in the B2B space. VerticalNet uses a carefully crafted ontology (on which Howard works) to serve as an "interlingua" or, shall I say, "page renderer", so that enterprises that have their own individual ontology can be mapped onto the playing field.
When Mary Keeler and I spoke at one of the meetings recently, we sketched on the board a three-layered architecture, the whole of which comprises the DKR and its gateway to the OHS (which I define here as a desktop, palmtop, whatever, window into the DKR). Let me now sketch (in words) that three-layer architecture and try to show how it has the opportunity to do precisely what Doug asks for, and allows us to build an ontology that serves as an interlingua to all possible users no matter what they make of women, fire, and/or dangerous things.
Peirce's theory of categories has it that there are, fundamentally, three categories:
Possibilities -- all the "noise" out there, raw data, written/spoken discourse
Actualities -- a mapping of the possibilities; Mary calls this layer a "lens."
Probabilities -- what you and I do with the actualities.
Possibilities resides at the bottom layer of the architecture. This layer is nothing more or less than a database (archive) of human discourse, recorded experience.
<side note>if up-down imagery doesn't work for you, substitute left-right, or whatever</side note>
Actualities resides in the middle. This layer serves as a lens, mapping the possibilities into structures (an ontology) that can be viewed, inferenced, debated, and so forth. This layer, IMHO, is the crucial one. To get it right, it must consist of a kind of structure that, at once, serves as a universal ontology (tongue ensconced firmly in cheek on that one), a platform for reasoning and debate, and a permanent record of the evolving human knowledge base. Whoever builds this layer wins.
Probabilities is the top layer in the architecture. It, in fact, is the gateway to the users "out there." Users will have their own mapping tools, perhaps what Doug calls the transcoder. Transcoding can, of course, be accomplished anywhere in the world: at the server (good for wireless), somewhere else in the network, or at the user's client computer. The purpose of transcoding is to allow the user to get or otherwise construct a view that suits his/her tastes/needs/desires. The user should have the ability to directly query actualities, and, through that layer, ask a question like "where did you get that?" and have read-only access directly to the possibilities layer. This capability suggests that each "node" (don't go there, we shall define it eventually) contains pointers into the "document(s)" (hey! I said don't go there) from which it (the node) was derived. <side note>I believe that transcoding now takes on a larger role; originally it was conceived as a view generation tool. Now, I suspect it also takes the role of ontological mapping</side note>
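The three layers and their pointers back to source documents can be sketched in code. This is a minimal sketch under my own assumptions; every class and method name here is hypothetical, chosen only to mirror the description above.

```python
# Hypothetical sketch of the three-layer architecture. Names are illustrative.

class Possibilities:
    """Bottom layer: an append-only archive of original documents."""
    def __init__(self):
        self._docs = {}
        self._next_id = 0

    def add(self, text):
        doc_id = self._next_id
        self._next_id += 1
        self._docs[doc_id] = text        # originals are stored, never modified
        return doc_id

    def get(self, doc_id):
        return self._docs[doc_id]        # users get read-only access

class Actualities:
    """Middle layer: a 'lens' mapping documents into ontology nodes."""
    def __init__(self, archive):
        self.archive = archive
        self.nodes = []                  # each node keeps pointers to its sources

    def map_document(self, doc_id, concept):
        node = {"concept": concept, "sources": [doc_id]}
        self.nodes.append(node)
        return node

class Probabilities:
    """Top layer: user-facing views (where the transcoder would live)."""
    def __init__(self, actualities):
        self.actualities = actualities

    def view(self, concept):
        # "Where did you get that?" -- follow node pointers back to possibilities
        return [(n["concept"],
                 [self.actualities.archive.get(d) for d in n["sources"]])
                for n in self.actualities.nodes
                if n["concept"] == concept]
```

The point of the sketch is the direction of the pointers: views query actualities, and actualities point down into possibilities, so a user can always trace a node to the unmodified discourse it came from.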
How is this architecture used? Here's a sketch of the appropriate scenario that traces document origination, actualities generation, and user experience.
1- Documents (e.g. articles, news items, books, papers, speeches, etc.) are entered into the archive.
2- An engine is turned loose on the archive to perform the task of mapping everything into the actualities layer. (as I said, whoever does this wins).
3- User constructs a view into actualities, perhaps as a query, perhaps as a simple mapping of the knowledge structures contained in actualities to a topic map, for which templates may be available.
It doesn't really stop there. Let's pretend user takes exception with something discovered in actualities.
4- User opens a "debate" view after selecting a particular actuality item (node?)
5- DKR creates a new document in possibilities to record the nature of the criticism.
6- DKR alerts subscribers to the debate.
7- DKR maps new document into actuality.
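The debate scenario in steps 4-7 can be traced in code. A minimal sketch, assuming a toy in-memory DKR; the class, its fields, and the subscriber mechanism are all my own invented stand-ins, not a proposed design.

```python
# Hypothetical sketch of the debate workflow (steps 4-7). Names are assumptions.

class DebateDKR:
    def __init__(self):
        self.possibilities = []      # append-only document archive
        self.actualities = []        # nodes derived from documents
        self.subscribers = []        # callbacks notified when a debate opens

    def add_document(self, text):
        """Steps 1-2 / 7: archive a document and map it into actualities."""
        self.possibilities.append(text)
        doc_id = len(self.possibilities) - 1
        self.actualities.append({"doc": doc_id, "text": text})
        return doc_id

    def open_debate(self, node_index, criticism):
        """Steps 4-7: the user never writes to actualities directly."""
        # step 5: the criticism becomes a new, immutable document
        doc_id = self.add_document(criticism)
        # step 6: alert subscribers to the debate
        for notify in self.subscribers:
            notify(node_index, doc_id)
        # step 7 happened inside add_document: the system, not the user,
        # maps the new document into actualities
        return doc_id
```

Note that `open_debate` only ever calls `add_document`: the user's criticism enters possibilities as a new document and is mapped upward by the system, which is exactly the point made next about write access.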
From that, we can see that user does NOT have write access to actuality. Only the system does -- and that, of course, is the big issue here. Hesse's Glass Bead Game suggested that there is a Bead Master, one individual that has the ability to do such mappings and control the flow of the epistemological evolution within the system. I tend to think that will not happen, at least in my lifetime. There needs to be a "machine" that does this work, and there is an enormous body of scholarly work being generated that hints of the emergence of this capability.
But, given that this capability remains the great "anal sphincter" in our project, the entire architecture I have sketched cannot, by definition, be our Version 1.0. So, we must re-sketch it as something we can do today. Largely, the overall architecture remains the same. We simply do not set out to construct the universal ontology as a middle layer. Rather, we scale it back to some kind of human-generated (perhaps with machine augmentation as that evolves) middle layer, one that represents a consensus ontology for today, but one that is mutable as the consensus evolves (conceptual drift). By constructing the software as a pluggable architecture, we simply plug in software modules as they emerge to enhance the system. <side note>I have a hunch that some activity of the UN, say, the UN/SPSC, will ultimately become the basis for the "universal mapping engine"</side note>
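The pluggable middle layer amounts to an interface that mapping engines implement, so today's human-generated consensus engine can later be swapped for a machine one without touching the rest of the system. A sketch under those assumptions; the interface and engine names are hypothetical.

```python
# Hypothetical sketch of the pluggable architecture. All names are invented.

class MappingEngine:
    """Interface every plug-in mapping engine implements."""
    def map(self, document):
        raise NotImplementedError

class HumanConsensusEngine(MappingEngine):
    """Stand-in for today's human-generated consensus ontology: here it
    naively files a document under its first word."""
    def map(self, document):
        return {"concept": document.split()[0].lower(), "source": document}

class PluggableDKR:
    def __init__(self, engine):
        self.engine = engine

    def set_engine(self, engine):
        # plug in a better (eventually machine) engine as one emerges
        self.engine = engine

    def ingest(self, document):
        return self.engine.map(document)
```

The design choice is simply that `PluggableDKR` depends only on the `MappingEngine` interface, so conceptual drift is handled by replacing the engine, not rebuilding the repository.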
Which brings me back (yes, Martha, non-linear types can find their way back) to the original space on which this diatribe is based. The fundamental architecture being espoused within the meeting was that of an engine that mutates original documents by adding links to them. The fundamental approach taken in the architecture I present here is one in which absolutely no modifications are ever performed on original documents. All linkages are formed "above" the permanent record of human discourse and experience. I strongly believe that the extra effort required to avoid building a system that simply plays with original documents will prove to be of enormous value in the larger picture.
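The "linkage above the documents" idea can be made concrete with an external link store: links are kept in their own structure, keyed by document id and span, so the originals are never rewritten. A minimal sketch; the class and its record layout are my own assumptions.

```python
# Hypothetical sketch: links live outside the documents they connect.

class LinkStore:
    def __init__(self):
        self.links = []

    def add_link(self, src_doc, src_span, dst_doc, dst_span):
        """Record a link between two document spans (character offsets)
        without modifying either document."""
        self.links.append({
            "src": (src_doc, src_span),
            "dst": (dst_doc, dst_span),
        })

    def links_from(self, doc_id):
        """All links originating in the given document."""
        return [link for link in self.links if link["src"][0] == doc_id]
```

Contrast this with the engine discussed in the meeting, which would inject links into the documents themselves: here a document's bytes never change, and the link layer can evolve, be debated, or be discarded without disturbing the permanent record.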
Thus ends the diatribe. The non-linear one is now leaving the building. While leaving, he wishes to acknowledge that the architecture sketched here has been strongly influenced by Doug (for the big picture), Mary Keeler (for the Peircian vision), Kathleen Fisher (for the knowledge mapping structures, along with John Sowa and others), Eric (for his introduction to IBIS), and Rod (for his web site that tries to keep all this together).
This archive was generated by hypermail 2b29 : Fri Jun 23 2000 - 09:13:26 PDT