I have been a lurker on the unrev-11 listserv, after realizing that my
administrative duties would prevent me from participating fully in the
on-line colloquium. Today, however, I could not resist responding to the
posting by Henry. I've never before seen such tautology, and as I read it
aloud to my husband, we roared with laughter. Henry's was truly an
excellent "tongue-in-cheek" treatise, reinforcing the stereotype of
academics with too much time on their hands who overanalyze and obscure the
meaning of even the simplest concepts. Thank you for providing such
levity!!
unrev-II@egroups.com on 04/28/2000 10:28:52 AM
Please respond to unrev-II@egroups.com
To: unrev-II@egroups.com
cc:
Subject: [unrev-II] Digest Number 108
------------------------------------------------------------------------
Community email addresses:
Post message: unrev-II@onelist.com
Subscribe: unrev-II-subscribe@onelist.com
Unsubscribe: unrev-II-unsubscribe@onelist.com
List owner: unrev-II-owner@onelist.com
Shortcut URL to this page:
http://www.onelist.com/community/unrev-II
------------------------------------------------------------------------
There are 13 messages in this issue.
Topics in this digest:
1. Knowledge Representation (wasRe: Jack Park's "10 Step" Program)
From: "Jack Park" <jackpark@verticalnet.com>
2. Re: GOOD: Traction, by Twisted Systems (on browser neutrality)
From: cjn@twisted-systems.com
3. Re: Towards an atomic data structure.
From: Henry van Eyken <vaneyken@sympatico.ca>
4. Use case scenarios for OSS development
From: Lee Iverson <leei@ai.sri.com>
5. Re: Re: Towards an atomic data structure
From: "Sandy Klausner" <klausner@cubicon.com>
6. Re: GOOD: Traction, by Twisted Systems (on browser neutrality)
From: Eric Armstrong <eric.armstrong@eng.sun.com>
7. Re: Re: Towards an atomic data structure
From: Eric Armstrong <eric.armstrong@eng.sun.com>
8. Re: Towards an atomic data structure.
From: "Sandy Klausner" <klausner@cubicon.com>
9. ResearchIndex http://www.researchindex.com/
From: NABETH Thierry <thierry.nabeth@insead.fr>
10. Re: Use case scenarios for OSS development
From: Eric Armstrong <eric.armstrong@eng.sun.com>
11. Re: Re: Towards an atomic data structure
From: "Sandy Klausner" <klausner@cubicon.com>
12. All Colloquium transcripts available
From: "Henry van Eyken" <vaneyken@sympatico.ca>
13. A small experiment to help students
From: "Henry van Eyken" <vaneyken@sympatico.ca>
________________________________________________________________________
________________________________________________________________________
Message: 1
Date: Thu, 27 Apr 2000 09:04:12 -0700
From: "Jack Park" <jackpark@verticalnet.com>
Subject: Knowledge Representation (wasRe: Jack Park's "10 Step" Program)
From: Eric Armstrong <eric.armstrong@eng.sun.com>
<snippage/>
> There is an interesting possibility that there is. Looking over the
> Traction offering and comparing that with IBIS concepts led to the minor
> epiphany that the simple act of categorizing information nodes according
> to some (agreed upon) schema is in essence a knowledge-abstraction
> process. I'm convinced that Traction is absolutely on the right track
> with respect to categorization -- IBIS is an easily-definable subset of
> their system. Where they fall down is with respect to document hierarchy,
> but they've made a contribution (to my thinking, at least) with respect
> to categories.
>
Kindly take the time to elucidate that which convinces you.
> What is interesting, here, is the concept that the whole "knowledge
> management" domain exists in the realm of the categories, where
> "documents" are found among the information nodes. If it makes sense
> to think of knowledge management in those terms, then we can conceivably
> apply some interesting abstract manipulations to "knowledge", where
> knowledge means a common (or possibly standard) set of categories, and
> where the underlying information is unique to each domain.
>
Standard categories: Ha!
Lakoff wrote a book called "Women, Fire, and Dangerous Things," which, as I
recall, takes its title from the primary categories of some aboriginal
people. The Maoris include reproduction in Earth Science. They do this
because they see the cycle of life and always bury a placenta next to a
tree. So much for logical categories.
> For example, the category "argument for" can be applied to information
> nodes in a biological sciences domain, or to one in an art analysis
> domain. The category is a form of meta-data that is independent of the
> information content.
>
> Now, given a standard set of categories, it might be possible to begin
> describing category-relationships. That would produce a form of abstract
> reasoning that is independent of the problem domain.
>
> I keep thinking in terms of "implies". If there is some way to add the
> meta-data "implies" in the category space, then automated reasoning
> becomes possible.
>
> Example: at the initial writing, node A is written, as well as node B,
> with the "implies" attribute linking the two. Later, someone adds C as
> an implication of B. The system can now deduce that A implies C --
> regardless of the information content contained in the nodes.
>
> Perhaps the "category" for such a system is "implication". Categorizing
> B as an implication then requires pointing to A, to identify the node
> from which B was derived. The symmetric relationship can then also be
> added -- call it "motivator", or some such. (If there is a logic term
> for it, I've forgotten it.)
>
> There might also be categories for preconditions, requirements, and
> what have you, all of which would allow for fairly sophisticated
> reasoning engines to be built on top of the fundamental structures.
> [There are also evaluations -- of node content as well as the logic
> employed...]
>
> The interesting point to all this is that the "DKR" becomes a layer
> of abstraction built on the OHS, where the categorization-capability
> is already built into the OHS.
>
Actually, as I see it, the DKR is an API that the OHS supports. Thus,
most of the use cases should discuss what one wants to do at the OHS, and
the DKR use cases will define that API.
In all of the above, reference is made to categories, relations,
implication, and so forth. It seems to me that mankind has been discussing
this since Aristotle, maybe before. It also seems to me that nobody has
achieved "the solution tres grande." It further seems to me that nobody
ever will. Therefore, we must take pains to define just what the DKR is
intended to represent and manipulate, then give it our best shot. Lenat has
made an enormous effort along these lines with CYC. There is even a public
domain version of CYC evolving. This all stems from the fact that Eurisko,
powerful as it was, never went very far simply because it lacked common
sense. Guha, McCarthy, and lots of others are making careers trying to
figure out how to represent common sense. Guha is not quite the champion of
RDF; McCarthy remains the champion of a variety of logical formalisms.
Automated reasoning is well documented. It does work for limited domains.
In fact, just about anything will work when operating in a sufficiently
constrained domain. The DKR, however, wants to take on the universe, and
everything. Not particularly constrained, IMHO.
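Still, to make the quoted "implies" example concrete in a suitably tiny
domain, something like the following will do (a sketch only; the classes
and names below are mine, not part of any actual OHS or DKR design):

    # Toy sketch: typed links between information nodes, with transitive
    # chaining over the "implies" category. Names are illustrative only.
    from collections import defaultdict

    class Node:
        def __init__(self, node_id, text, author):
            self.id = node_id
            self.text = text
            self.author = author   # attribution, as in Eric's text-node notes

    class NodeStore:
        def __init__(self):
            self.nodes = {}
            self.links = defaultdict(set)   # category -> set of (src, dst) pairs

        def add(self, node):
            self.nodes[node.id] = node

        def link(self, category, src, dst):
            self.links[category].add((src, dst))

        def implications_of(self, start):
            """Follow 'implies' links transitively from a starting node."""
            seen, frontier = set(), [start]
            while frontier:
                current = frontier.pop()
                for src, dst in self.links["implies"]:
                    if src == current and dst not in seen:
                        seen.add(dst)
                        frontier.append(dst)
            return seen

    store = NodeStore()
    store.add(Node("A", "Premise ...", "eric"))
    store.add(Node("B", "Consequence of A ...", "eric"))
    store.add(Node("C", "Consequence of B ...", "someone else"))
    store.link("implies", "A", "B")
    store.link("implies", "B", "C")     # added later, by someone else
    print(store.implications_of("A"))   # {'B', 'C'} -- A implies C is deduced

Within a domain that small, of course, anything works; the hard part is
deciding whether the categories themselves are the right ones.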
Humans talk in qualitative terms in normal conversation. We use
qualitative descriptions of probabilistic issues (often, seldom, etc.), we
use qualitative descriptions of fuzzy issues (tall, fat, ...), and we use
crisp terms to describe other things (absolutely, never, ...). IMHO, the
game is to invent a KR scheme that lets us partially automate the handling
of all kinds of representations. Zadeh has recently (at KR 2000) proposed
something he calls "precisified natural language" (PNL). This appears to be
a highly constrained natural language. Telling jokes with PNL would not be
easy, but describing the evolution of an HIV infection would.
Daphne Koller at Stanford has developed what appears to be a seamless
integration of Bayesian and description logics. One of her students, now at
Harvard, is adding linguistics to that mix. Maybe, just maybe, they are on
to something we need to understand better.
I think that I am saying that the DKR warrants a deeper look at KR than is
suggested by appeals to categories, relations, and logic. We are trying to
represent things which are complex. Newtonian mechanics and reductionist
thinking will not get us there. Indeed, there may not exist an atomic
structure capable of supporting our dream.
________________________________________________________________________
________________________________________________________________________
Message: 2
Date: Thu, 27 Apr 2000 16:07:58 -0000
From: cjn@twisted-systems.com
Subject: Re: GOOD: Traction, by Twisted Systems (on browser neutrality)
> I didn't say it to everyone, but I'm also bothered that it's not
> browser neutral. Designing for one browser is just lazy, stupid, or both.
I realize this reply is fairly belated, for which I apologize.
I agree with Jon entirely. But we regularly use Traction with
Netscape, Lynx, Proxiweb and Avant Go (on Palm), Opera and Internet
Explorer. And occasionally with w3 mode in Emacs. It supports all
these browsers.
However, we saw an opportunity to improve responsiveness and
information density using DHTML features (such as layers and
JavaScript); Traction uses the client identified in the HTTP headers to
determine which interface to serve. For browsers that support DHTML
features, such as Netscape and IE, Traction defaults to using those
features. Like many options within Traction, this can be disabled as a
preference.
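The general idea is nothing more exotic than picking an interface variant
off the User-Agent header. A rough sketch of the principle (illustration
only, not Traction's actual code):

    # Illustration only -- not Traction's implementation. Chooses an
    # interface variant from the User-Agent request header.
    DHTML_CAPABLE = ("Mozilla/4", "MSIE")   # crude, era-appropriate check

    def choose_interface(user_agent, dhtml_preference=True):
        """Return 'dhtml' or 'plain' for a given User-Agent string."""
        if dhtml_preference and any(tag in user_agent for tag in DHTML_CAPABLE):
            return "dhtml"    # layers, JavaScript, denser layout
        return "plain"        # Lynx, Palm browsers, w3 mode in Emacs, ...

    print(choose_interface("Mozilla/4.7 [en] (X11; U; Linux 2.2.14 i686)"))  # dhtml
    print(choose_interface("Lynx/2.8.3rel.1 libwww-FM/2.14"))                # plain

The preference flag in the sketch stands in for the per-user option
mentioned above.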
During the demo I gave, I used IE because its full-screen feature
lets people concentrate on the web interface, and lets me use larger
fonts to display on the projector. I didn't realize I was conveying
the impression that IE was the only browser Traction supports.
-Chris
________________________________________________________________________
________________________________________________________________________
Message: 3
Date: Thu, 27 Apr 2000 14:33:56 -0400
From: Henry van Eyken <vaneyken@sympatico.ca>
Subject: Re: Towards an atomic data structure.
Or "Can a DKR bridle unbridable thought?"
Or "Loom of frustration."
Or "Re: Knowledge representation."
Or "Should I really inflict this piece on anyone?"
Wednesday morning, April 26. -- I just began reading this thread and almost
immediately something began to revolt in me. And yet, I must (as indeed I do)
respect the opinions of people who have spent much of their lives informing
themselves in the best tradition of educated society.
Before moving on to the next paragraph, let's share the observation that my
opening sentence contains, contrary to the dogma of "one paragraph, one
notion," at least eight nodes of information, i.q. I; just; "began"; "to
read"; "almost immediately"; "something"; "to revolt"; "in me" -- a breakdown
which, to be sure, is just one way of splitting the atomic ideal. And I
haven't yet completed the paragraph. Also observe that this very last
sentence, just written, implies a node of information that is hidden right
after the word "completed." That implied node of information is the action
to which the noun "paragraph" was subjected. (Tiresome, am I not?)
Eric's first post on the subject matter of atomization of language, or
thought .... Oh, I must stop again. There mostly does not seem to be a
one-to-one relationship between the components (free radicals, atoms,
molecules, crystals, etc.) of thought and the components of language.
Language might be seen as a conduit of thought, and, like my back-country
telephone line, a most important, but far from a perfect conduit. Take, for
example, the word "mankind" which, when taken out of context, may well
signal an antifeminist attitude. "Mankind," therefore, is an instance where
we find at least two meanings within a single word. Isotopes, anybody?
What upset me immediately is the top-down approach embodied by the
class-based, object-oriented terminology. As applied to text (and maybe to
the potential of computing as well), I question the usefulness of a
Simon-pure object-oriented approach. A top-down approach, it seems to me, is
bridling the unbridable, a tool that communicates a hard-to-discipline
melange of logical order and emotions, of what wells from the conscious and
the levels of the less-than-conscious. Let me show my concern by trying to
recapture some of the fleeting thoughts that went through my mind when
reading the paragraphs under the heading "Text Nodes."
I quote: The fundamental unit of a DKR is an item of information. Since the
ideal in writing is to have "one idea per paragraph", an "information node"
can be thought of as a paragraph of text. Headings stand apart from other
text, as well, so a heading is a special (short) paragraph, or information
node.
The first sentence quoted is a postulate that probably won't stand the test
of scrutiny. The second sentence turns the tables on the postulate; and that
quite aside from the stated ideal in writing. Baudelaire, Joyce, Conrad,
Schlink are but some of the people whose celebrated works are what they are
because their ideal is to stuff in a little extra. As do children and
salesmen, and, well, don't we all? As for headings being nodes of
information, may I invite you back to the smorgasbord of headings at the
beginning of this piece. Which one best conveys what I am writing about?
At this point, I ought to realize that I am reading Eric's post out of (his)
context and putting it into mine. In other words, the meaning of text is
subject to environmental influence. (Geez, I think I could expand that last
sentence into a book.) I also understand that precisely because of this
problem, language must contain something that is not just "purely
informational." It must contain a funnel of words to guide the reader or
listener coming in from the cold as quickly as possible to the point the
emitter is trying to make. That funnel of words has been called redundancy.
I understand from having read a couple of atoms from Shannon and Wheeler
that English is about 30 percent redundant; redundant, that is, from the
point of view of its central messages, but an essential redundancy to guide
the innocent to the nectar of an attempted communication.
How then, with these notions in my mind, may I feel compelled to keep on
reading? But I continued anyway, forcing myself.
Quoting: Node behaviors are defined in a class (object template). Every text
node must contain an attribution -- a pointer to the author, or an
identifying string. A copy of that node may be edited, which suggests the
need for a split operation, for example. After a node is split into one or
more fragments, an edit operation could replace some fragments or insert new
ones that have a different author. Some of the operations appropriate to a
node might therefore include split, delete, replace, and insert.
My immediate problem, after blindly sliding through the first sentence
(because it lacks the, for me, prerequisite redundancy), is "a pointer to
the author." Individual authorship, of course, is a concept that belongs to
the class "culture." Ancient Greek culture did not recognize this kind of
authorship. Homer rhapsodized, literally meaning that he stitched together.
He stitched descriptions that were fragments of other tales to create his
tale. He then added rhyme to reason for staunching his memory. (No art of
poetry for him; just plain craftsmanship.) A good thing that Eric added "or
an identifying string."
Luck has it that I stayed the course, for at this point I find Eric
introducing ideas that capture my attention. I might quickly add to his list
of editing operations on a node of information: re-emote and recontextualize
(ain't she sweet?). It is well to remember that re-emoting may change
"objective meaning" (??) totally. A simple re-ordering of words may effect
this. "Just so," you may think, whereas the editor conscientiously meant to
be oh "so just."
At this point, my mind fleetingly dwells on translation. How simple would it
be to translate from one language into another if language could be clearly
atomized. ("How simple would it be," I wrote. Not "How simple it would be.")
At this point, I must realize, I think, that the kind of text Eric writes
about (no, I didn't say wrote!) is not natural language, not even a
transcript of natural thoughts. He is writing about formalized transcripts
of some sort of culturally bridled thoughts. Among these are the supposedly
redundancy-free languages of mathematics and computer programming. And,
perhaps, of zealots, who tend to consider their ethics so purified from
redundancy as to justify an attitude of "my way or no way."
Languages of scientific and technological cultures might be less redundant
than natural language -- and easier to translate. The disciplined listener
needs only half the words required by an undisciplined one, which brings up
a concern one should have with public DKRs.
Quoting: Note that when the node is split, two objects exist where one did
before. Every node must therefore be capable of being the root of a subtree.
Although it may start out life as a simple node that contains or points to
an item of text, it must also be capable of pointing to a list of text
elements. (That list might also include markup elements, like HTML bold
tags: <b>.) Since each item in that list may itself point to a list of
subitems, the resulting structure is a tree.
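Rendered as a sketch of my own devising (Eric offered no code, so every name
below is mine and doubtless misrepresents him somewhat), the node he
describes might look like this:

    # My own sketch -- not Eric's code. A node carries an attribution and
    # either holds text (a leaf) or points to a list of child elements, so
    # splitting a node turns it into the root of a small subtree.
    class TextNode:
        def __init__(self, text, author):
            self.author = author   # "a pointer to the author, or an identifying string"
            self.text = text       # leaf content
            self.children = []     # filled in once the node is split

        def split(self, index, new_author=None):
            """Split the text at index; this node becomes a subtree root."""
            left = TextNode(self.text[:index], self.author)
            right = TextNode(self.text[index:], new_author or self.author)
            self.children = [left, right]
            self.text = None
            return left, right

    node = TextNode("One idea per paragraph, or so the dogma goes.", "Eric")
    node.split(23, new_author="Henry")
    print([(child.author, child.text) for child in node.children])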
Interesting indeed. At one time, I dwelled on the use of adjectives as a
means of splitting a node and on adverbs as means of further modulation. I
mentally compared their use to the words we have for color. The little red
engine that could. Blue moon. But primary colors don't suffice beyond
childhood. Orangy, brown-gray become necessary additions. For the artist we
have special chromatograms. For scientists we have that concretized
abstraction of vibrational frequencies. Serving roles not unlike adjectives
are hyphens, and I have wondered whether we might introduce for greater
expressive precision a more potent hyphen by borrowing the equal sign, e.g.
"a brown=gray-colored object" would make a somewhat more precise statement
than "a brown-grayish object" once propagated by the discerning.
And at this point, I must ask myself, what is the best use of my time and
how much may I impose on others? And how just am I to the author who began
this thread, unquestionably a man one cannot but hold in high esteem?
I feel so frustrated.
Better go outdoors for some fresh air. With oxygen of a particularly
refreshing molecular form.
Ever so diatomic in vibrational embrace.
Henry
P.S. I wrote this yesterday morning and decided not to put it on the forum.
Until I read Jack Park's piece a moment ago. Funny how different our two
pieces are, and still so very much the same. That's language for you.
Jack Park wrote about
Knowledge Representation (wasRe: [unrev-II] Jack Park's "10 Step" Program):
________________________________________________________________________
________________________________________________________________________
Message: 4
Date: Thu, 27 Apr 2000 11:58:44 -0700
From: Lee Iverson <leei@ai.sri.com>
Subject: Use case scenarios for OSS development
Slow as usual to get this stuff to everybody, but here it is:
http://www.ai.sri.com/~leei/OHS/ossusecases.html
-------------------------------------------------------------------------------
Lee Iverson SRI International
leei@ai.sri.com 333 Ravenswood Ave., Menlo Park CA 94025
http://www.ai.sri.com/~leei/ (650) 859-3307
________________________________________________________________________
________________________________________________________________________
Message: 5
Date: Thu, 27 Apr 2000 13:32:38 -0700
From: "Sandy Klausner" <klausner@cubicon.com>
Subject: Re: Re: Towards an atomic data structure
Henry van Eyken wrote:
"I question the usefulness of a Simon-pure object-oriented approach. A
top-down approach, it seems to me, is bridling the unbridable, a tool that
communicates a hard-to-discipline melange of logical order and emotions, of
what wells from the conscious and the levels of the less-than-conscious."
"I think that the kind of text Eric writes about is not natural language,
not even a transcript of natural thoughts. He is writing about formalized
transcripts of some sort of culturally bridled thoughts. Among these are
the supposedly redundancy-free languages of mathematics and computer
programming."
"Quoting: Note that when the node is split, two objects exist where one did
before. Every node must therefore be capable of being the root of a subtree.
Although it may start out life as a simple node that contains or points to
an item of text, it must also be capable of pointing to a list of text
elements. (That list might also include markup elements, like HTML bold
tags: <b>.) Since each item in that list may itself point to a list of
subitems, the resulting structure is a tree."
The following is a repeat of my Sunday, April 24 posting:
The DKR team has identified two distinct levels of information abstraction
that require development to achieve the group's goals. The underlying
abstraction appears to be based upon a general system cognitive model based
upon deterministic behavior that a machine can execute. This technology
model could be used as a foundation to design and implement "An interactive
tool for discussion and deliberation* that records decisions and their
rationales in a way that allows the knowledge gained in the process to be
applied to future projects."
To fully achieve the interactive tool goal, the fundamental capabilities in
the underlying technology model must include a robust way to traverse, edit,
read, and write untyped text. In addition, there needs to be a way to
intelligently analyze the text in interesting ways to determine fundamental
semantics in the symbolic patterns and link these patterns to other passages
anywhere in the web. This text may be linked to typed atomic data that may
itself be composed into typed molecular data representing pictures, sounds,
and other rich multimedia information. All this information may itself exist
as part of a data structure within a domain object within a system.
The SGML community has a long history of developing ways to mark up
documents to capture semantic knowledge embedded in strings. As processing
requirements become more sophisticated, new ways of managing this complexity
need to be developed. One possible solution is to move to a "clear document
model." This model separates concerns by parsing the clear text from the
markup information. The clear text is parsed into a collection of linked
character nodes, while one or more composite structure processors maintain
position and range links into the clear text collection. Each processor may
have specialized behavior to analyze and hold semantic information on
format, organization, navigation, narrative, reference, graphic control,
publication, and filters. The model must be able to allow clear text editing
while automatically maintaining the processor links into the clear text
collection. Such a model would be able to manage the requirements for a
robust DKR environment.
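To make the shape of such a model a little more concrete, here is a rough
sketch (illustration only -- this is not Cubicon code, and the names are
invented for the example):

    # Rough illustration of a "clear document model" -- not Cubicon's
    # implementation. The clear text is kept free of markup; each processor
    # holds standoff (start, length) links into it, and those links are
    # adjusted automatically when the clear text is edited.
    class ClearDocument:
        def __init__(self, text):
            self.text = list(text)    # the clear text collection
            self.processors = {}      # "format", "navigation", ...

        def annotate(self, processor, start, length, info):
            self.processors.setdefault(processor, []).append(
                {"start": start, "length": length, "info": info})

        def insert(self, position, fragment):
            """Edit the clear text, shifting every standoff link that follows."""
            self.text[position:position] = list(fragment)
            for spans in self.processors.values():
                for span in spans:
                    if span["start"] >= position:
                        span["start"] += len(fragment)
                    elif span["start"] + span["length"] > position:
                        span["length"] += len(fragment)  # edit landed inside the span

    doc = ClearDocument("A robust DKR environment.")
    doc.annotate("format", 2, 6, {"style": "bold"})   # marks the word "robust"
    doc.insert(2, "very ")
    print("".join(doc.text))          # A very robust DKR environment.
    print(doc.processors["format"])   # the "robust" span has shifted to start=7

The point of the standoff links is that markup never touches the clear text;
an edit to the text is a matter of shifting the links that follow it.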
New Posting
Conventional markup language technology cannot effectively cope with the
demands required for a robust DKR environment. The clear document model that
we have developed has been successfully applied to several domain-specific
applications by others. We have generalized the mechanisms behind this model
into a novel Graphical Language technology that can effectively manage this
level of system complexity. You're right, Henry: the conventional "Simon-pure
object-oriented approach" will not be effective. But any technology that
can augment human intelligence must be based upon a concrete and
deterministic computational model. Perhaps unrev-II should take a good look
at what Cubicon has to offer the community.
Sandy Klausner
klausner@cubicon.com
[This message contained attachments]
________________________________________________________________________
________________________________________________________________________
Message: 6
Date: Thu, 27 Apr 2000 14:22:51 -0700
From: Eric Armstrong <eric.armstrong@eng.sun.com>
Subject: Re: GOOD: Traction, by Twisted Systems (on browser neutrality)
Thanks, Chris.
"We can use anything" probably makes sense as a prominent
bullet point in the presentation. You never said anything
to indicate that it *was* a one-browser system, but not
hearing that it wasn't led one to suspect...
________________________________________________________________________
________________________________________________________________________
Message: 7
Date: Thu, 27 Apr 2000 14:32:34 -0700
From: Eric Armstrong <eric.armstrong@eng.sun.com>
Subject: Re: Re: Towards an atomic data structure
Sandy Klausner wrote:
>
> ... As processing requirements become more sophisticated, new ways of
> managing this complexity need to be developed. One possible solution
> is to move to a "clear document model." This model separates concerns
> by parsing the clear text from the markup information.
> ^^^^^^^
I think you mean "separating".
> The clear text
> is parsed into a collection of linked character nodes, while one or
> more composite structure processors maintain position and range links
> into the clear text collection. Each processor may have specialized
> behavior to analyze and hold semantic information on format,
> organization, navigation, narrative, reference, graphic control,
> publication, and filters.
>
Can you give a short example that shows one or more of these, and how
they would work together? What are narrative, reference, and graphic
control semantic information, anyway? What happens when you change one
set of external links? For example, if you change the organization, what
happens to the others?
> The model must be able to allow clear text editing while
> automatically maintaining the processor links!! into
> the clear text collection.
>
How can that be done? How does the system know when I add a new word
whether it is part of a heading or the paragraph that follows it? Or
whether it is inside or outside of a bolded section? There must be
dozens of "end points" for which the proper location for an insertion is
indeterminable.
________________________________________________________________________
________________________________________________________________________
Message: 8
Date: Thu, 27 Apr 2000 12:48:38 -0700
From: "Sandy Klausner" <klausner@cubicon.com>
Subject: Re: Towards an atomic data structure.
Henry van Eyken wrote:
"I question the usefulness of a Simon-pure object-oriented approach. A
top-down approach, it seems to me, is bridling the unbridable, a tool that
communicates a hard-to-discipline melange of logical order and emotions, of
what wells from the conscious and the levels of the less-than-conscious."
"I think that the kind of text Eric writes about is not natural language,
not even a transcript of natural thoughts. He is writing about formalized
transcripts of some sort of culturally bridled thoughts. Among these are
the supposedly redundancy-free languages of mathematics and computer
programming."
"Quoting: Note that when the node is split, two objects exist where one did
before. Every node must therefore be capable of being the root of a subtree.
Although it may start out life as a simple node that contains or points to
an item of text, it must also be capable of pointing to a list of text
elements. (That list might also include markup elements, like HTML bold
tags: <b>.) Since each item in that list may itself point to a list of
subitems, the resulting structure is a tree."
The following is a repeat of my Sunday, April 24 posting:
The DKR team has identified two distinct levels of information abstraction
that require development to achieve the group's goals. The underlying
abstraction appears to be based upon a general system cognitive model based
upon deterministic behavior that a machine can execute. This technology
model could be used as a foundation to design and implement "An interactive
tool for discussion and deliberation* that records decisions and their
rationales in a way that allows the knowledge gained in the process to be
applied to future projects."
To fully achieve the interactive tool goal, the fundamental capabilities in
the underlying technology model must include a robust way to traverse, edit,
read, and write untyped text. In addition, there needs to be a way to
intelligently analyze the text in interesting ways to determine fundamental
semantics in the symbolic patterns and link these patterns to other passages
anywhere in the web. This text may be linked to typed atomic data that may
itself be composed into typed molecular data representing pictures, sounds,
and other rich multimedia information. All this information may itself exist
as part of a data structure within a domain object within a system.
The SGML community has a long history of developing ways to mark up
documents to capture semantic knowledge embedded in strings. As processing
requirements become more sophisticated, new ways of managing this complexity
need to be developed. One possible solution is to move to a "clear document
model." This model separates concerns by parsing the clear text from the
markup information. The clear text is parsed into a collection of linked
character nodes, while one or more composite structure processors maintain
position and range links into the clear text collection. Each processor may
have specialized behavior to analyze and hold semantic information on
format, organization, navigation, narrative, reference, graphic control,
publication, and filters. The model must be able to allow clear text editing
while automatically maintaining the processor links into the clear text
collection. Such a model would be able to manage the requirements for a
robust DKR environment.
New Posting
Conventional markup language technology cannot effectively cope with the
demands required for a robust DKR environment. The clear document model that
we have developed has been successfully applied to several domain-specific
applications by others. We have generalized the mechanisms behind this model
into a novel Graphical Language technology that can effectively manage this
level of system complexity. You're right, Henry: the conventional "Simon-pure
object-oriented approach" will not be effective. But any technology that
can augment human intelligence must be based upon a concrete and
deterministic computational model. Perhaps unrev-II should take a good look
at what Cubicon has to offer the community.
Sandy Klausner
klausner@cubicon.com
[This message contained attachments]
________________________________________________________________________
________________________________________________________________________
Message: 9
Date: Fri, 28 Apr 2000 09:51:28 +0200
From: NABETH Thierry <thierry.nabeth@insead.fr>
Subject: ResearchIndex http://www.researchindex.com/
Hello,
I wanted to bring to your attention what appears to be a very interesting
active document search system, with features such as:
Autonomous Citation Indexing (ACI): automates the construction of citation
indexes.
Awareness and tracking: ResearchIndex provides automatic notification of new
citations to given papers, and of new papers matching a user profile.
Autonomous location of articles: ResearchIndex uses search engines and
crawling to efficiently locate papers on the Web.
ResearchIndex (formerly CiteSeer)
http://www.researchindex.com/
The ResearchIndex engine is freely available.
(The full source code of ResearchIndex is available at no cost for
non-commercial use.)
Thierry Nabeth
Research Fellow
INSEAD CALT (the Centre for Advanced Learning Technologies)
http://www.insead.fr/CALT/
PS:
For your information, I am currently working on an active
"collaborative web referencing system" using the Zope technology.
I want to transform my Encyclopedia of links
http://www.insead.fr/CALT/Encyclopedia/
into a more active, personalized, and collaborative system.
However, I am still at a very early stage of the project,
and I do not currently have a lot of time to dedicate to it.
Found on the UMBC agentslist
<http://www.cs.umbc.edu/agentslist>
------------------------------
Date: Thu, 4 May 2000 09:00:26 -0700
From: "Ahmad Abdollahzadeh Barforoush" <ahmad@ce.aku.ac.ir>
Subject: Re: Paper available: Indexing and Retrieval of Scientific
Literature
Dear Karla,
A good place to search for agent-related papers is ResearchIndex (CiteSeer)
at www.researchindex.com. You can search for "agent" and find a good summary
of published articles (about 8800).
Best,
R. Shirazi.
>Hi,
>I am doing my PhD and I need a recent bibliography on intelligent
>agents (learning), if possible published in a periodical of the IEEE
>Transactions type.
>Can anybody help?
>Thanks,
>Karla Figueiredo
--
See <http://www.cs.umbc.edu/agentslist> for list info & archives.
------------------------------
[This message contained attachments]
________________________________________________________________________
________________________________________________________________________
Message: 10
Date: Thu, 27 Apr 2000 14:23:31 -0700
From: Eric Armstrong <eric.armstrong@eng.sun.com>
Subject: Re: Use case scenarios for OSS development
Thanks, Lee. I think this really begins to get us where
we need to go.
Lee Iverson wrote:
>
> Slow as usual to get this stuff to everybody, but here it is:
>
> http://www.ai.sri.com/~leei/OHS/ossusecases.html
>
>
-------------------------------------------------------------------------------
> Lee Iverson SRI International
> leei@ai.sri.com 333 Ravenswood Ave., Menlo Park CA 94025
> http://www.ai.sri.com/~leei/ (650) 859-3307
________________________________________________________________________
________________________________________________________________________
Message: 11
Date: Thu, 27 Apr 2000 15:04:23 -0700
From: "Sandy Klausner" <klausner@cubicon.com>
Subject: Re: Re: Towards an atomic data structure
> The clear text
> is parsed into a collection of linked character nodes, while one or
> more composite structure processors maintain position and range links
> into the clear text collection. Each processor may have specialized
> behavior to analyze and hold semantic information on format,
> organization, navigation, narrative, reference, graphic control,
> publication, and filters.
>
> Can you give a short example that shows one or more of these, and how
> they would work together? What are narrative, reference, and graphic
> control semantic information, anyway? What happens when you change one
> set of external links? For example, if you change the organization, what
> happens to the others?
Let me clarify this text processor classification a little further. Any
particular unit of content can have one of a number of characteristics that
we can group into several categories: format, organization, navigation,
narrative, reference, and metadata:
· Format, obviously, is the way the text looks.
· Organization is the way content is grouped into coherent units.
· Navigation is text that acts as a finding aid -- tables of contents,
indices, and the like.
· Narrative is content-bearing structures: paragraphs, lists, tables, etc.
· Reference is text that acts as a gateway to other, related content.
· Metadata is like the Z-axis of your documents. It is the information in
front of or behind the words on display that makes it possible for you to
exploit the content in novel and sophisticated ways. For example, metadata
about a publication can be used to automate assembly and publishing.
Metadata about graphics can allow the print product to use full-page TIFF
images while the Web site uses thumbnail JPEGs linked to full-sized images.
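As a purely hypothetical sketch of the metadata point (not Cubicon code; the
node layout is invented for the example):

    # Hypothetical sketch only. A graphic carries metadata about its
    # renditions, and the publication target decides which one is used.
    graphic = {
        "id": "fig-12",
        "renditions": {
            "print": {"format": "TIFF", "file": "fig-12-fullpage.tif"},
            "web":   {"format": "JPEG", "file": "fig-12-thumb.jpg",
                      "links_to": "fig-12-full.jpg"},
        },
    }

    def rendition_for(node, target):
        """Pick the rendition appropriate to a publication target."""
        return node["renditions"][target]

    print(rendition_for(graphic, "print"))   # full-page TIFF for the print product
    print(rendition_for(graphic, "web"))     # thumbnail JPEG linked to the full image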
The issues of changes to a set of external links and internal clear text
edits are related. I will take more time to explain clearly (perhaps even by
using some pictures) how the Cubicon architecture manages these types of
dynamic changes to a composite data structure.
Sandy Klausner
klausner@cubicon.com
[This message contained attachments]
________________________________________________________________________
________________________________________________________________________
Message: 12
Date: Fri, 28 Apr 2000 11:26:48 -0000
From: "Henry van Eyken" <vaneyken@sympatico.ca>
Subject: All Colloquium transcripts available
All Colloquium transcripts are now available via
http://www.bootstrap.org/colloquium
Henry
________________________________________________________________________
________________________________________________________________________
Message: 13
Date: Fri, 28 Apr 2000 11:56:09 -0000
From: "Henry van Eyken" <vaneyken@sympatico.ca>
Subject: A small experiment to help students
The "no-margins" version of the transcript for the Colloquium's
Session 1 contains an experimental feature that permits a student to
quickly step through all the graphics or, independently, through all
the tables without losing his place in the document. It is found at:
http://www.bootstrap.org/colloquium/session_01/session_01-.html
Find underneath the heading the words "quick-step." A click on the
asterisk leads to a footnote that explains what quick-step does. I
perceive it as an attempt towards easing the studying of large
documents. College texts, for example.
I wonder whether there are other, better ways of achieving that
objective.
Henry
________________________________________________________________________
________________________________________________________________________