Re: [unrev-II] "As We May Think", etc.

From: Henry van Eyken (vaneyken@sympatico.ca)
Date: Sun Jul 15 2001 - 14:17:40 PDT


    Hi Jack.

    Fascinating, if not unsettling, stuff that calls for detailed attention.

    I don't wish to go off on a tangent, but here is one of those lines of reasoning that
    flaw debates about evolution:

    "Many millions of years ago, the first living cells evolved. These ancient
    unicellular organisms, swimming about in the primordial soup, had a sole
    function--survival in order to reproduce."
    (First paragraph under "Multicellularity" in Danny Belkin's "Evolution and the Internet,"
    found on the Kurzweil site.) The word "function" is used here synonymously with
    "purpose."

    But as I understand it, evolution is a game of chance - nature's opportunism. It is
    not endowed with purpose. Evolution is a coming about, not a going after. In
    evolutionary terms it is wrong to ask what, for example, eyes are for. Our eyes are
    not for seeing. It so happens that evolution produced organisms that see.
    "Purpose" is one outcome of evolutionary developments, a happenstance that we
    seem to employ to turn the tables on evolution.

    I believe we do well to bear this firmly in mind when we read materials like these.
    When humans "purposefully" create machines to do this and that and the other thing,
    they are unwittingly moved by evolution. And we often take credit (or blame)
    for what the forces of chance have painted us with.

    All of which is not to say we should abandon the human view of purpose and,
    consequently, being held accountable for our pursuits. Mores ex natura; mores ex
    machina - morals from nature, morals from the machine. Go figure ...

    Henry

    Jack Park wrote:

    > At 07:54 AM 7/15/2001 -0400, you wrote:
    > >Interesting. Like deferring to the authority of churches.
    > >
    > >Henry
    > >
    > >Peter Jones wrote:
    > >
    > > > Yes. I am also interested in what might happen if ethical value systems were
    > > > somehow made part of the augmenting system. Would people start deferring to
    > > > the system excessively? In fact, that aspect concerns me for augmentation as
    > > > a whole.
    >
    > Apropos of this line of thinking are a couple of posts from the global
    > brain list, which I copy here (Start by reading the paper at kurzweilai.net):
    >
    > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
    > This is an excellent statement of one view of future
    > evolution, in which human individuality is sacrificed so
    > that humans may become components of a larger brain. The
    > Internet and organizational networks already give us a
    > taste of this, in which we must process a constant stream
    > of email. For most people, it is work that they would
    > rather avoid. For everyone, at least some of their email
    > traffic is work that would be nice to avoid.
    > People in industrial societies have been happy to let
    > machines do most of the physical labor, as soon as
    > technology produced machines that could do that labor.
    > Similarly, as soon as technology produces machines that
    > can relieve people of mental labor, people will be happy
    > to let them.
    > People will be intimately connected to intelligent
    > machines, but that connection will exist to serve and
    > please people rather than for people's brains to serve
    > the network.
    > This is where ethics must come into our thinking about
    > the global network of machines and people. Learning and
    > the values that define positive and negative reinforcement
    > for learning will be an essential part of intelligent
    > machines. Those values must be human happiness, both
    > short term and long term, rather than any sort of self-
    > interest of the machines. I think the humans who build
    > intelligent machines would be crazy to build them with
    > selfish values.
    > Such values will of course produce machines that do not
    > fit the Darwinian logic of self-interest. These machines
    > will be hobbled by being tied to human happiness. They
    > will continue to evolve in the sense of developing ever
    > better minds, but always in the interests of the humans
    > they serve.
    > In human and animal brains, learning values are called
    > emotions: the things we want. Rather than the global
    > brain being a large intellectual collaboration of
    > human and machine minds, interactions among human and
    > machine minds will heavily involve emotional values.
    > Current interactions among humans heavily involve
    > emotions: humans have guilt and gratitude to promote
    > cooperation, but natural selection has made humans
    > primarily selfish, which creates competition. Societies
    > that have tried to reprogram their citizens for too
    > great a level of altruism have failed.
    > But adding to human society intelligent machines that
    > have greater-than-human intelligence and are designed
    > with altruistic values will change society deeply.
    > A good measure of machine intelligence will be the
    > number of people they can know well and converse with
    > simultaneously. Humans are "designed" to be able to know
    > about 200 other people well. There should be no reason
    > why intelligent machines cannot know billions of people
    > well. Such machines will significantly decrease the
    > diameter of the human acquaintanceship network. I think
    > this, and the machines' altruistic values, are the keys
    > to understanding the nature of the global brain.
    > As reflected by Bill Joy's article, people are frightened
    > by the possibility of intelligent machines. The key to
    > answering these fears is public understanding that they
    > can control the values of intelligent machines, and that
    > those values can serve human happiness rather than
    > machine interests. Educating the public to these issues
    > is a useful role for the Global Brain Group.
    > This is discussed in more detail in my book:
    > http://www.ssec.wisc.edu/~billh/gotterdammerung.html
    > in a column summarizing the book:
    > http://www.ssec.wisc.edu/~billh/visfiles.html
    > and in my paper to the recent Global Brain Workshop:
    > http://www.ssec.wisc.edu/~billh/gbrain0.html
    > Cheers,
    > Bill
    > ----------------------------------------------------------
    > Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
    > hibbard@facstaff.wisc.edu 608-263-4427 fax: 608-263-6738
    > http://www.ssec.wisc.edu/~billh/vis.html
    >
    > and
    > Bill Hibbard wrote:
    > > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
    > >
    > > This is an excellent statement of one view of future
    > > evolution, in which human individuality is sacrificed so
    > > that humans may become components of a larger brain.
    > I'm not sure I would phrase it this way, as it is not only bound to
    > alarm the paranoid but is, in fact, not true. I would say that as a
    > person's connectivity rises, his/her individuality also increases. As an
    > analogy, a person in a rural setting, interacting with two hundred people,
    > has only a limited number of socially acceptable roles to fulfill. In
    > contrast, a person in a city, who interacts with thousands of people every
    > day, not only has a wider variety of possible roles, or jobs, but also will
    > perforce adopt a slightly different persona vis-à-vis every person she/he
    > comes into contact with.
    > They might be subservient to their boss, overbearing to the doorman,
    > amicable to the woman at the news stand, jovial at the club, raucous at the
    > concert, aggressive on the basketball court, and submissive to their sex
    > partner. How can this not become more elaborate when we deal with millions
    > of people?
    > I think that as the global brain develops, every person will realize that
    > their identity is a matter of choice, much as people adopt variant personas
    > in different chat rooms or email lists. I don't see people lessening their
    > mental interactions, or mental activities when their horizons expand.
    > Indeed, the concept of a horizon - two-dimensional space - is obsolete.
    > Cyberspace is multi-dimensional... wish
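
    Hibbard's point that machines able to know billions of people would shrink the
    diameter of the acquaintanceship network is, at bottom, a graph-theoretic claim,
    and a small sketch makes it concrete. The snippet below is an illustration added
    here, not anything from the quoted posts; it assumes Python with the networkx
    library, and the population size and acquaintance counts are invented. One hub
    node connected to everyone caps the distance between any two people at two hops.

        # Illustrative sketch (assumes networkx; all parameters are invented).
        import networkx as nx

        # A population of 1,000 people, each knowing roughly 10 others, with a
        # little random rewiring - a small-world stand-in for limited human
        # acquaintanceship.
        people = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=42)
        print("diameter before:", nx.diameter(people))  # several hops

        # Add one "machine" node that knows every person.
        hub = "machine"
        people.add_node(hub)
        people.add_edges_from((hub, person) for person in range(1000))

        # Any two people are now at most two hops apart (person -> machine ->
        # person), so the diameter of the whole network drops to 2.
        print("diameter after:", nx.diameter(people))

    The same bound holds at any scale: however sparse human acquaintanceship is, a
    node connected to everyone keeps the network's diameter at 2.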

    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II

    Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/



    This archive was generated by hypermail 2b29 : Sun Jul 15 2001 - 15:07:08 PDT