Re: [unrev-II] WILL SPIRITUAL ROBOTS REPLACE HUMANITY

From: Henry van Eyken (vaneyken@sympatico.ca)
Date: Sat Apr 08 2000 - 05:49:59 PDT

    Eric:

    A word of thanks for your reportage on the April-1 events at Stanford U.,
    and also for your earlier mention that the video will be available
    sometime soon. My interest in the subject stems less from Kurzweil's
    promise that people may one day fall in love with spiritual machines than
    from the fact that I have children and grandchildren. For anybody in my
    position, the future ought to be a fascinating topic.

    It seems to me that a bit of editorializing is not out of order. What I
    really shall be looking for in the video, once I have it, is to what extent
    the arguments take the neural part of cognitive science into account, and
    to what extent they realistically assess the human ability to stave off
    disaster.

    As things stand, it is my strong impression that people do not wish to face
    potential facts and prepare for them. In my own lifetime, many heard
    Hitler's rantings, learned about Kristallnacht, saw troop movements along
    the fabulous Autobahns and, like Chamberlain, believed in peace forever
    after. In my lifetime, many have for years observed the rapid melting of the
    polar ice caps, perhaps discussing that calamity over drinks that heat up
    rapidly once the ice in them has melted. In my lifetime, the deteriorating
    effect of free radicals on the ozone layer has been known for many decades
    (very early in the seventies, I happened to see an English translation of a
    Russian paper on the subject). Etc.

    It seems to me that computer science is not complete without understanding
    the minds it is supposed to augment (or replace, as some will have it). But
    then again, I have no big inventions to my name, I have no honorary degrees,
    and I eke out my existence without any US president ever having noticed me.
    So, what the deuce is my opinion worth?!

    Henry

    P.S. But what little I tentatively wrote on the subject of "augmentation"
    DOES take into account, in some small way, the writings of some of the
    biggies working in psychology. For my less brilliant and rather mundane
    stuff -- rather more reporting than personal opinion -- see, e.g.:
    http://www.fleabyte.org/archives-computing_to_a_purpose-1.html#The bias that
    got us places
    http://www.fleabyte.org/archives-computing_to_a_purpose-1.html#An
    interpreter that knows its priority
    http://www.fleabyte.org/archives-computing_to_a_purpose-2.html#The connected
    brain

    Then, not unrelated to the DKR effort:
    http://www.fleabyte.org/archives-computing_to_a_purpose-2.html#Promoting
    more meaningful learning
    http://www.fleabyte.org/archives-computing_to_a_purpose-2.html#On the
    integrity of truly lifelong, personal computing

    And for getting in touch with Kurzweil:
    http://www.fleabyte.org/archives-computing_to_a_purpose-3.html#Kurzweil
    previews thirty years of education
    (the devil is in the footnotes).

    They will give you a totally different, and perhaps even more rational,
    basis for examining things. And a rationale for co-evolution as an
    important feedback stream in bootstrapping.

    I guess ...

    Eric Armstrong wrote:

    > Frode Hegland wrote:
    > >
    > > >> WILL SPIRITUAL ROBOTS REPLACE HUMANITY BY 2100? A SYMPOSIUM AT
    > > >> STANFORD
    > >
    > > So, how was it? Was anybody there from this group? Have I missed
    > > reports?
    > >
    > You had to ask. Now I need to write up the summary I've been planning
    > for a while.
    >
    > Bill Joy pointed to the old enemies we have so far faced down since the
    > middle of the last century: atomic power, germ warfare, and ______. But
    > he noted that those technologies required big, expensive programs and
    > were the province of a few governments (although the list is growing).
    > The additional issues that face us in the next century, though --
    > biotech, nanotech, and robotics -- differ in two fundamental ways:
    > 1) They are potentially self-replicating
    > Rather than being confined to a single instance, therefore,
    > harmful effects can multiply exponentially.
    >
    > 2) They do not require big, expensive programs.
    > Especially with the growth of computing power and information
    > access, we are "democratizing the capacity for evil", such that
    > a Kacinski in our midst could bring major portions of our
    > selves, our civilization, or even the biosphere. (Biotech, for
    > example, holds the ability to build "designer viruses" that
    > lethally attack a given race.)
    >
    > The alternative he presented was "relinquishment" -- giving up the
    > pursuit of knowledge in those areas.
    >
    > The panel provided a number of interesting counter-arguments and
    > counter-proposals, most of which were buried beneath a flurry of bad
    > logic and specious arguments. I was doing a fair amount of tongue-biting
    > during those moments when arguments were presented that "missed the
    > point," but between those moments there were some well-reasoned
    > hypotheses.
    >
    > To summarize:
    >
    > Ralph Merkle made the best-reasoned defense of nanotechnology,
    > pointing out that "replication capacity" does not necessarily imply
    > "self-replication". If the DNA-like information is stored on board, as
    > in a cell, then the device is self-replicating. But if the instructions
    > are broadcast from afar, then any component that is cut off from that
    > stream can no longer reproduce. So the technology is controllable.
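
    To make that broadcast idea concrete, here is a minimal sketch, in
    Python, of a toy model of my own: replicators that carry no blueprint
    on board and can copy themselves only while instructions keep arriving
    from a central transmitter. Every name and number in it is my
    illustrative assumption, not Merkle's actual design.

        class Replicator:
            def step(self, instructions):
                # No on-board blueprint: a replicator can only act on
                # instructions received this cycle. Cut off, it stalls.
                if instructions is None:
                    return []
                return [Replicator()]  # one copy per broadcast cycle

        def simulate(cycles, broadcast_until):
            population = [Replicator()]
            for t in range(cycles):
                stream = "build-copy" if t < broadcast_until else None
                offspring = []
                for r in population:
                    offspring.extend(r.step(stream))
                population.extend(offspring)
                print("cycle %d: population %d" % (t, len(population)))

        # The population doubles while the transmitter runs and freezes
        # the moment it stops -- the controllability Merkle is after.
        simulate(cycles=6, broadcast_until=3)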
    >
    > [He also made the point that business does not want to kill off its
    > customer base, so it will never intentionally do harmful things.
    > Unfortunately, that argument misses the problem. It is not what business
    > intends that is the problem, but rather what they might do in a
    > shortsighted quest for profit (witness MTBE) and, more importantly, what
    > some badly misguided individual might do.]
    >
    > On the other hand, Ralph also made the point that if we run from this
    > technology, and some malevolent person or government *does* pursue it,
    > we will be left without any means to defend ourselves. So if
    > there is a problem, we want to know about it as far ahead of time as
    > possible. And if there is no problem, we'd like to know about that, too.
    >
    > Another counter-argument made by John Holland (I think) was that if a
    > problem were unleashed by an individual, massive resources of government
    > and industry would be immediately brought to bear to find a solution.
    > Although that approach has not been particularly effective with AIDS, the
    > reasoning is that the massive computing power that is coming into our
    > hands (the power of one thinking person in a single affordable system by
    > 2010, of *all* thinking people by 2030) will make it possible to solve
    > such problems before they do egregious harm. [On the other hand, the
    > equivalent of putting oil back into the Valdez always seems like a much
    > easier problem to solve when you haven't been faced with it.]
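
    For scale, a quick back-of-envelope check of that projection, using
    assumed Kurzweil-style figures: roughly 10^16 operations per second
    for one brain, and about 10^10 brains for all of humanity. The figures
    are my illustrative assumptions, not numbers from the talk.

        import math

        one_brain  = 1e16          # ops/sec for one brain (assumed)
        all_brains = 1e16 * 1e10   # ~10 billion brains (assumed)

        doublings = math.log2(all_brains / one_brain)  # ~33 doublings
        years = 2030 - 2010
        print("%.0f doublings in %d years: one every %.1f months"
              % (doublings, years, 12.0 * years / doublings))
        # ~33 doublings in 20 years means a doubling every ~7 months,
        # noticeably faster than the classic 18-month Moore pace.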
    >
    > As for robotics and thinking machines replacing us, Holland pointed out
    > that there are serious discrepancies between what we can get a computer
    > to do and what humans do. He mentioned Herbert Simon, who estimated that
    > it would take 10 years to develop a machine capable of beating the best
    > human chess players -- in 1950. He also pointed out that for Deep Blue
    > to play a great game of chess after analysing millions of combinations
    > every *second* was not very amazing -- what *was* astonishing was that a
    > human could do so. [This is more astonishing given the background
    > information that humans -- including Grandmasters -- look at 35
    > positions, on average, before selecting a move. I have long felt that
    > chess programs should be restricted to evaluating 50 positions. To play
    > well under that restraint, they will have to do the same kind of pattern
    > recognition that humans do.]
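
    That 50-position restriction can be sketched in a few lines: a negamax
    search over a toy game tree with a hard budget on positions evaluated,
    so that any playing strength has to come from move ordering -- the
    stand-in for human pattern recognition. The tree and its numbers are
    toy assumptions of mine, not a real chess engine.

        def negamax(node, budget):
            # node is a leaf score (from the mover's point of view) or a
            # pair (static_estimate, children). Returns (score, positions
            # evaluated), never evaluating more than `budget` positions.
            if not isinstance(node, tuple):
                return node, 1               # leaf: exact score
            estimate, children = node
            if budget <= 1:
                return estimate, 1           # out of budget: static guess
            best, used = float("-inf"), 1
            for child in children:
                if used >= budget:
                    break                    # budget spent: skip the rest
                score, n = negamax(child, budget - used)
                best = max(best, -score)
                used += n
            return best, used

        # The same position under two move orderings; its true value is 4.
        best_first  = (0, [(0, [4, 5]), (0, [0, 2]), (0, [-9, -1])])
        worst_first = (0, [(0, [-9, -1]), (0, [0, 2]), (0, [4, 5])])
        print(negamax(best_first, budget=5))   # (4, 5): finds the best move
        print(negamax(worst_first, budget=5))  # (0, 5): never reaches it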
    >
    > He also gave some insight into the human pattern-recognition process
    > -- patterns like the ones players see on a chessboard. It turns out
    > the human eye darts from place to place, absorbing bits of the picture,
    > and that the "darting" action is governed by deep cognitive processes we
    > don't understand. The darting actions themselves are called "saccades"
    > (sah-cahds) and they are integral to human pattern matching. [Another
    > note on chess: Given a chess position in which the best move found by
    > analysis was reasonably obscure, two out of seven Grandmasters considered
    > the move in their deliberations, while none of the five or six Masters in
    > the study considered it -- another indication of the degree to which the
    > limited moves considered are controlled by pattern recognition.]
    >
    > Anyway, the point was made that many of the things we simulate with
    > computers today don't really come near to performing in ways that we
    > could consider "intelligent" must less "self-aware", "conscious", or
    > "spiritual". So we probably don't have to worry about robots for a while
    > yet.
    >
    > [There were other predictions about glorious futures, like biotech
    > having the capability to eliminate world hunger. It strikes me that this
    > would be a good idea because, if there *is* a possibility that a small
    > government or even an individual could wreak massive harm, then it seems
    > to me that we really *want* everyone on the planet to be just as totally
    > happy and comfortable as possible, and we better get busy thinking about
    > how to get them that way just as soon as we can. However, I note with
    > chagrin that no business is funding genetic research for a wild tomato
    > that grows like crazy in adverse conditions and produces abundant fruit.
    > Instead, I see funded research for a tomato that can withstand stronger
    > pesticides -- so we can pump even *more* pesticides into the ecosystem
    > and keep profits up! So it seems to me that even with a magic ray gun
    > that produces powerful orgasms, the so-and-so's that run our businesses
    > wouldn't have the sense to know which way to point it...]
    >
    > Thus endeth the sermons, diatribes, and soapbox standing, cleverly (?)
    > disguised as a summary...
    >


    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II



    This archive was generated by hypermail 2b29 : Sat Apr 08 2000 - 05:54:02 PDT