Re: [unrev-II] Upcoming Agenda Items

From: Jack Park
Date: Tue Apr 25 2000 - 12:57:29 PDT

  • Next message: Eric Armstrong: "Re: [unrev-II] Eric's Summary [edited]"

    Great summary, John.

    The learning described is a kind of Hebbian neural-net learning algorithm.
    The primary mechanism is feedback. In order to fully exploit feedback, the
    system will need another mechanism -- forgetting, or decay. Otherwise, it
    will simply saturate. The text hints that a link can cross some threshold
    and be made permanent, at which point learning stops for that link.
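
    The feedback-plus-decay idea above can be sketched in a few lines. This is
    a hypothetical illustration, not anything from the original discussion: the
    learning rate, decay rate, and freeze threshold are all assumed values.

```python
# Hypothetical sketch of Hebbian link learning with decay, as described
# above: correlated activity strengthens a link, decay prevents
# saturation, and a link crossing a threshold becomes permanent.

FREEZE_THRESHOLD = 0.9  # assumed value, not from the original text

def update_link(weight, pre_active, post_active,
                rate=0.1, decay=0.01, frozen=False):
    """Return (new_weight, frozen) after one time step."""
    if frozen:
        return weight, True               # learning has stopped for this link
    if pre_active and post_active:
        weight += rate * (1.0 - weight)   # Hebbian reinforcement
    weight -= decay * weight              # forgetting / decay
    if weight >= FREEZE_THRESHOLD:
        frozen = True                     # link made permanent
    return weight, frozen
```

    Without the decay term, every co-active pair of nodes would drift toward
    the maximum strength and the net would saturate, which is the failure mode
    the paragraph above warns about.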

    BTW: Harry Klopf and others have added a temporal feature to Hebbian
    learning, calling it "differential Hebbian." The idea is that a link is
    rewarded only within a certain time limit, and the reward is based on a
    subsequent event. Think of this as a kind of behaviorist issue: giving a
    child candy a week after some desired behavior is not going to reward that
    behavior. In some animal trials, the reward window is open for less than a
    second.
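
    The reward-window idea reduces to a simple timing check. The sketch below
    is purely illustrative; the window length and learning rate are assumptions
    chosen for the example, not values from Klopf's work.

```python
# Hypothetical sketch of the "differential Hebbian" idea described above:
# a link between an action and an outcome is rewarded only if the outcome
# follows within a fixed time window. Constants are assumptions.

REWARD_WINDOW = 1.0  # seconds; the animal trials cited can be under a second

def reward(weight, t_action, t_outcome, rate=0.2):
    """Strengthen the link only if the outcome falls inside the window."""
    delay = t_outcome - t_action
    if 0.0 <= delay <= REWARD_WINDOW:
        return weight + rate * (1.0 - weight)   # timely reward reinforces
    return weight                               # too late (or too early): no learning
```

    The candy-a-week-later case is just a delay far outside the window, so the
    link strength is left unchanged.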

    An open question, one that I am not able to answer by mining Doug's
    writing, is whether what we intend to build in a DKR is, indeed, a public
    knowledge acquisition system (as opposed to, say, a searchable database of
    factoids entered by users).

    Knowledge can be acquired in many ways, including those ensconced in
    instructivist and constructivist theories. As a public system, one
    involved presumably in the activities of a broad range of cultures, the DKR
    will have to present views to individual users that most likely cannot be
    uniform in nature. Those views, however, will have to be derived from a
    central repository. This leads me to suspect that issues involved with
    transcoding will be of greatest importance to the project.

    Pangaro brings Gordon Pask's Conversation Theory (CT) to the table. I am
    happy to see it surface here. In essence, the whole OHS user experience can
    be viewed as a conversation with the DKR, and the DKR grows out of
    conversations with others. We can imagine a scenario where the DKR is
    seeded with numerous factoids (instructivist teaching). We can also imagine
    a scenario where the DKR bootstraps itself (constructivist learning).
    Perhaps a middle ground is one in which we seed the DKR with CT, then begin
    "talking" to it.

    My main point here is that at some time soon, the use cases should instruct
    us as to which of the three scenarios will guide DKR design. It seems to me
    that if we are going to build from the constructivist viewpoint, then it is
    likely way too early to decide what the atomic structures might look like; I
    suspect they will look completely different from a design that would satisfy
    an instructivist project. Formalizing use cases implies agreement on a
    larger picture of the DKR.

    BTW: I have built many "learning" neural nets. One, in fact, was a 9-node
    processor that controlled an autoclave for polymer curing. It was designed
    to replace an enormous qualitative reasoning expert system I had built for
    the same client. My take on the learning network approach is that it is the
    "holy grail" of software design, but those nets can be horribly cranky,
    often learning things you would rather they ignored. Just imagine
    conversations with the dark side of the force...

    From: John J. Deneen <>

    > In essence, it proposes using an OHS for "bootstrapping structuration of
    > the web," since a knowledge-web (aka DKR) would travel along with the
    > content itself. As it passed through its DKR server, the new DKR piece
    > would be integrated with the existing DKR.
    > Overall, Mr. Pangaro views knowledge as a collective construction
    > striving to achieve coherence, rather than a mapping of external objects
    > that typically results in "spaghetti-like" meshes of interconnected
    > nodes, so that the user quickly gets lost in hyperspace. In other words,
    > it allows an associative hypertext network to "self-organize" into a
    > simpler, more meaningful, and more easily usable multidimensional network
    > (aka ZigZag by Ted Nelson). "The ZigZag space may be thought of as a
    > multidimensional generalization of rows and columns, without any shape or
    > structure" .... "The term "self-organization" is appropriate to the
    > degree that there is no external programmer or designer deciding which
    > node to link to which other node: better linking patterns emerge
    > spontaneously. The existing links "bootstrap" new links into existence,
    > which in turn change the existing patterns. The information used to
    > create new links is not internal to the network, though: it derives from
    > the collective actions of the different users. In that sense one might
    > say that the network "learns" from the way it is used." ...
    > ... "The algorithms for such a learning web are very simple. Every
    > link is assigned a certain "strength". For a given node a, only the links
    > with the highest strength are actualized, i.e. are visible to the user.
    > Within the node, these links are ordered by strength, so that the user
    > will encounter the strongest link first. There are three separate
    > learning rules for adapting the strengths.
    > 1) Each time an existing link, say a -> b, is chosen by the user, its
    > strength is increased. Thus, the strength of a link becomes a reflection
    > of the frequency with which it is used by hypertext navigators. This
    > rather obvious rule can only consolidate links that are already available
    > within the network. In that sense, it functions as a selector of strong
    > connections. However, it cannot actualize new links, since these are not
    > accessible to the user.
    > Therefore we need complementary rules that generate novelty or variation.
    > 2) A user might follow an indirect connection between two nodes, say
    > a -> b -> c. In that case the potential link a -> c increases its
    > strength. This is a weak form of transitivity. It opens up an unlimited
    > realm of new links. Indeed, one or several increases in strength of
    > a -> c may be sufficient to make the potential link actual. The user can
    > now directly select a -> c, and from there perhaps c -> d. This increases
    > the strength of the potential link a -> d, which may in turn become
    > actual, and so on. Eventually, an indefinitely extended path may thus be
    > replaced by a single link a -> z. Of course, this assumes that a
    > sufficient number of users effectively follow that path. Otherwise it
    > will not be able to overcome the competition from paths chosen by other
    > users, which will also increase their strengths. The underlying principle
    > is that the paths that are most popular, i.e. followed most often, will
    > eventually be replaced by direct links, thus minimizing the average
    > number of links a user must follow in order to reach his or her preferred
    > destination.
    > 3) A similar rule can be used to implement a weak form of symmetry. When
    > a user chooses a link a -> b, implying that there exists some association
    > between the nodes a and b, we may assume that this also implies some
    > association between b and a. Therefore, the reverse link b -> a gets a
    > strength increase. This symmetry rule on its own is much more limited
    > than transitivity, since it can only generate a single new link for each
    > existing link." ....
    > Therefore, by abandoning the correspondence epistemology and its reliance
    > on fixed primitives, bootstrapping approaches open the way to a truly
    > adaptive and creative knowledge system.
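
    The three learning rules quoted above are simple enough to sketch
    directly. This is a hypothetical illustration of the quoted algorithm, not
    code from the paper; the reward sizes and the visibility threshold are
    assumed values.

```python
# Hypothetical sketch of the three learning-web rules quoted above:
# (1) a chosen link a -> b gains strength (frequency), (2) an indirect
# path a -> b -> c strengthens the potential link a -> c (weak
# transitivity), (3) the reverse link b -> a also gains a little (weak
# symmetry). All constants are assumptions; only links at or above
# VISIBLE_THRESHOLD would actually be shown to the user.

from collections import defaultdict

VISIBLE_THRESHOLD = 1.0
DIRECT, TRANSITIVE, SYMMETRIC = 1.0, 0.3, 0.1  # assumed reward sizes

strength = defaultdict(float)            # (source, target) -> strength

def follow(path):
    """Update link strengths after a user navigates the given node path."""
    for a, b in zip(path, path[1:]):
        strength[(a, b)] += DIRECT       # rule 1: frequency of use
        strength[(b, a)] += SYMMETRIC    # rule 3: weak symmetry
    for a, b, c in zip(path, path[1:], path[2:]):
        strength[(a, c)] += TRANSITIVE   # rule 2: weak transitivity

def visible_links(node):
    """Links from `node` ordered strongest first, as a user would see them."""
    links = [(target, s) for (src, target), s in strength.items()
             if src == node and s >= VISIBLE_THRESHOLD]
    return sorted(links, key=lambda ts: -ts[1])
```

    After enough users follow the path a -> b -> c, the potential link a -> c
    crosses the visibility threshold and becomes a direct, selectable link,
    which is exactly the path-shortening behavior the quoted text describes.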

    This archive was generated by hypermail 2b29 : Tue Apr 25 2000 - 13:07:53 PDT