Re: [unrev-II] Augment + categories = OHS v0.1

From: Jack Park (
Date: Sat Jun 24 2000 - 13:03:16 PDT


    Ultimately, Eric, if one is to model human discourse, one must resort to all
    three: crisp, fuzzy, and probabilistic categories. I have in mind that the
    middle layer will do all three. Eventually, of course. As a ferinstance,
    consider that the bottom layer is also a database of stuff other than
    papers, books, and so forth. It's not out of line to imagine, say, the
    bottom layer holding all the data on the stock market, and mapping that data
    to probabilistic statements about the market up in the middle layer. Of
    course, those probabilistic statements would have semantics generated by
    links to other nodes.

    You might say this would all consume a lot of compute power.
    I can imagine a lot of compute power.
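    [A minimal sketch of the three category styles described above, using the
    stock-market example. All names, thresholds, and numbers here are
    illustrative assumptions, not part of any actual OHS design.]

    ```python
    # Three styles of category the middle layer might support, applied to
    # a daily percentage price change from the bottom-layer data store.

    def crisp_member(price_change):
        # Crisp: a value either is or is not in the category ("up day").
        return price_change > 0

    def fuzzy_member(price_change):
        # Fuzzy: membership is a degree in [0, 1]; here "strong rally"
        # ramps from 0 at a 0% change to 1 at a 5% change.
        return max(0.0, min(1.0, price_change / 5.0))

    def probabilistic_statement(up_days, total_days):
        # Probabilistic: an assertion carrying a probability, estimated
        # here from historical frequency in the bottom-layer data.
        return {"claim": "market closes up", "p": up_days / total_days}

    print(crisp_member(1.2))                  # True
    print(fuzzy_member(1.2))                  # 0.24
    print(probabilistic_statement(130, 250))  # p = 0.52
    ```

    [The semantics Jack mentions would come from linking each such statement
    to other nodes, which the last example below sketches.]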


    From: Eric Armstrong <>

    > Jack Park wrote:
    > > ... Gil uses the term "rigidify." That works for me, but there
    > > are other points of view as well. At issue is the fact that we
    > > all categorize the world in our own way. Production-line education
    > > tends to enforce standardization in that arena, but we are still
    > > individuals with our own non-linearities and so forth.
    > >
    > Ah... Now I understand the point that Gil was trying to make.
    > Yes, this is a system usage issue. The larger the system gets,
    > the more rigid the categories become -- to the degree that they
    > become standards. To the degree they don't, similar and redundant
    > categories are continually added to the system.
    > On the other hand, categories with various "shades of meaning"
    > might even be useful. If someone develops a formulation for
    > defining near-equivalences, of the form:
    > "hyper" = 90% match with "intense"
    > = 80% match with "over the top"
    > = xx% match with conceptX
    > ....
    > Then some interesting fuzzy search capabilities begin to be
    > possible. I don't intend to work on that layer of the system,
    > but it is interesting that the foundation we are building may
    > just enable it.
    > --As you point out, there is still the problem of mapping from
    > *my* concepts to some "shared" conceptual framework out there.
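    [A minimal sketch of the near-equivalence formulation Eric quotes above: a
    table of graded matches enabling a fuzzy search expansion. The table
    entries, scores, and threshold are illustrative assumptions.]

    ```python
    # Graded near-equivalences of the form "hyper" = 90% match with
    # "intense", 80% match with "over the top", etc.
    NEAR_EQUIV = {
        "hyper": {"intense": 0.90, "over the top": 0.80},
        "intense": {"hyper": 0.90, "fierce": 0.75},
    }

    def fuzzy_expand(term, threshold=0.7):
        # Return the term plus every near-equivalent whose match score
        # meets the threshold, best matches first.
        matches = [(term, 1.0)]
        for other, score in NEAR_EQUIV.get(term, {}).items():
            if score >= threshold:
                matches.append((other, score))
        return sorted(matches, key=lambda m: -m[1])

    print(fuzzy_expand("hyper"))
    # [('hyper', 1.0), ('intense', 0.9), ('over the top', 0.8)]
    ```

    [A fuzzy search would then query for every expanded term, weighting each
    hit by its match score.]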
    > > The fundamental architecture being espoused within the meeting
    > > was that of an engine that mutates original documents by adding
    > > links to them. The fundamental approach taken in the architecture
    > > I present here is one in which absolutely no modifications are
    > > ever performed on original documents. All linkages are formed
    > > "above" the permanent record of human discourse and experience.
    > > I strongly believe that the extra effort required to avoid
    > > building a system that simply plays with original documents will
    > > prove to be of enormous value in the larger picture.
    > >
    > This idea deserves careful consideration. I have a suspicion you
    > may be right about that. Our talks about how to use Wiki effectively
    > have really centered on how we control modifications to underlying
    > documents. I haven't come at things from the perspective you
    > suggest. It's time to take a detailed look at that approach, I think.
    > Also: I'm delighted that we're not going for a full ontology in
    > version 1. Yay! But I am equally delighted that the system we seem to
    > be zeroing in on may help provide a basis for that work. Life should
    > be interesting.
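    [A minimal sketch of the "no modification" architecture discussed above:
    original documents are immutable, and all linkages live in a separate
    layer that points into them by document id and character offsets. Every
    class and field name here is a hypothetical illustration.]

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Document:
        doc_id: str
        text: str  # the permanent record: never modified once stored

    @dataclass
    class Link:
        src: tuple  # (doc_id, start_offset, end_offset)
        dst: tuple
        relation: str

    class LinkLayer:
        """Holds all linkages 'above' the documents themselves."""
        def __init__(self):
            self.links = []

        def add_link(self, src, dst, relation):
            # Linking touches only this layer, never the documents.
            self.links.append(Link(src, dst, relation))

        def anchors_in(self, doc_id):
            return [lk for lk in self.links if lk.src[0] == doc_id]

    doc = Document("d1", "All linkages are formed above the record.")
    layer = LinkLayer()
    layer.add_link(("d1", 4, 12), ("d2", 0, 10), "supports")
    print(doc.text[4:12])               # "linkages" -- the anchored span
    print(len(layer.anchors_in("d1")))  # 1
    ```

    [Because anchors are offsets into immutable text, they can never be
    invalidated by edits to the original, which is the value Jack claims for
    the extra effort.]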


    This archive was generated by hypermail 2b29 : Sat Jun 24 2000 - 13:08:33 PDT