Re: [unrev-II] "As We May Think", etc.

From: Peter Jones (ppj@concept67.fsnet.co.uk)
Date: Sun Jul 15 2001 - 13:29:24 PDT

  • Next message: John J. Deneen: "Re: [unrev-II] "As We May Think", etc."

    Hmm. Well, I say, "Lord spare me from people who can't think straight."
    Disclaimer: This is just an opinion I pulled out of thin air for no decent
    reason whatsoever.

    A bit like:
    "Note: This article represents my opinion, based on both evolutionary
    patterns as they have occurred during the history of life on this planet and
    the direction in which, in my view, the development of humans and our
    technology has been moving during the last few centuries. It is,
    nonetheless, only a personal view. Some of the ideas expressed here
    represent a scientific consensus, while most are pure conjecture--science
    fiction if you like."
    Taken from
    http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html

    There are some foul eugenicist views in there, notably:
    "These primitive multicellular organisms developed complex intra- and
    inter-cellular signaling networks, some of which were aimed at regulating
    cell death. These increased their chances of survival by getting rid of the
    cells least likely to cope with the environment and thus minimising energy
    expenditure by the organism. Dead cells could also be used to shield the
    multicellular organisms from the environment. Some of these mechanisms, such
    as the protective dead outer layers of the skin, are still evident in
    humans."

    And its development into:
    "At some point after the integration of humans and machines, an additional
    step will have to be taken: incorporation of PCD, resulting in disconnection
    of the weaker links (or individual constituents) from the collective
    network. Only once PCD, or the principle underlying it, has been
    incorporated will it be possible to accomplish the leap to a higher state of
    consciousness and intelligence--an intelligence which is the sum of all the
    minds connected to the network, and which lies beyond what any of us can
    imagine."

    Note the "will *have* to be taken." [my emphasis]
    Why will it have to be taken? He's just said that the normal patterns of
    evolution aren't being followed by humans any more, so that's just begging
    to be a total non sequitur.

    And:
    "Those not joining will sentence themselves to being the lesser life forms
    of this planet, lower on the evolutionary scale. Ponder for one moment the
    difference between a human and a bacterium."

    Lower?! In what way? Just because a farmer doesn't surf doesn't mean that I
    don't praise the heavens for every second that that person labours to put
    food on the supermarket shelf for me.

    Grrr!

    Peter

    ----- Original Message -----
    From: "Jack Park" <jackpark@thinkalong.com>
    To: <unrev-II@yahoogroups.com>
    Sent: Sunday, July 15, 2001 8:36 PM
    Subject: Re: [unrev-II] "As We May Think", etc.

    > At 07:54 AM 7/15/2001 -0400, you wrote:
    > >Interesting. Like deferring to the authority of churches.
    > >
    > >Henry
    > >
    > >Peter Jones wrote:
    > >
    > > > Yes. I am also interested in what might happen if ethical value
    > > > systems were somehow made part of the augmenting system. Would people
    > > > start deferring to the system excessively? In fact, that aspect
    > > > concerns me for augmentation as a whole.
    >
    > Apropos of this line of thinking are a couple of posts from the global
    > brain list, which I copy here (start by reading the paper at
    > kurzweilai.net):
    >
    > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
    > This is an excellent statement of one view of future
    > evolution, in which human individuality is sacrificed so
    > that humans may become components of a larger brain. The
    > Internet and organizational networks already give us a
    > taste of this, in which we must process a constant stream
    > of email. For most people, it is work that they would
    > rather avoid. For everyone, at least some of their email
    > traffic is work that would be nice to avoid.
    > People in industrial societies have been happy to let
    > machines do most of the physical labor, as soon as
    > technology produced machines that could do that labor.
    > Similarly, as soon as technology produces machines that
    > can relieve people of mental labor, people will be happy
    > to let them.
    > People will be intimately connected to intelligent
    > machines, but that connection will exist to serve and
    > please people rather than for people's brains to serve
    > the network.
    > This is where ethics must come into our thinking about
    > the global network of machines and people. Learning and
    > the values that define positive and negative reinforcement
    > for learning will be an essential part of intelligent
    > machines. Those values must be human happiness, both
    > short term and long term, rather than any sort of self-
    > interest of the machines. I think the humans who build
    > intelligent machines would be crazy to build them with
    > selfish values.
    > Such values will of course produce machines that do not
    > fit the Darwinian logic of self-interest. These machines
    > will be hobbled by being tied to human happiness. They
    > will continue to evolve in the sense of developing ever
    > better minds, but always in the interests of the humans
    > they serve.
    > In human and animal brains, learning values are called
    > emotions: the things we want. Rather than the global
    > brain being a large intellectual collaboration of
    > human and machine minds, interactions among human and
    > machine minds will heavily involve emotional values.
    > Current interactions among humans heavily involve
    > emotions: humans have guilt and gratitude to promote
    > cooperation, but natural selection has made humans
    > primarily selfish, which creates competition. Societies
    > that have tried to reprogram their citizens for too
    > great a level of altruism have failed.
    > But adding to human society intelligent machines that
    > have greater-than-human intelligence and are designed
    > with altruistic values will change society deeply.
    > A good measure of machine intelligence will be the
    > number of people a machine can know well and converse with
    > simultaneously. Humans are "designed" to be able to know
    > about 200 other people well. There should be no reason
    > why intelligent machines cannot know billions of people
    > well. Such machines will significantly decrease the
    > diameter of the human acquaintanceship network. I think
    > this, and the machines' altruistic values, are the keys
    > to understanding the nature of the global brain.
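
    [An aside on the diameter claim above, my illustration rather than
    anything from Bill's post: a minimal sketch, assuming people sit on a
    ring where each knows only the k nearest neighbours (a toy stand-in for
    the "about 200 people" figure), and then adding one hypothetical machine
    node 'm' that knows everyone. The ring size and neighbourhood width are
    arbitrary choices, and the diameter routine is plain breadth-first
    search.]

        from collections import deque

        def diameter(adj):
            """Longest shortest-path distance over all pairs (graph assumed connected)."""
            best = 0
            for src in adj:
                dist = {src: 0}
                queue = deque([src])
                while queue:
                    node = queue.popleft()
                    for nbr in adj[node]:
                        if nbr not in dist:
                            dist[nbr] = dist[node] + 1
                            queue.append(nbr)
                best = max(best, max(dist.values()))
            return best

        # Ring of n people, each knowing the k nearest neighbours on either side.
        n, k = 1000, 5
        ring = {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
                for i in range(n)}
        print(diameter(ring))   # large: about n / (2 * k) = 100 hops

        # One machine node 'm' that "knows" everybody collapses the diameter to 2.
        hub = {i: nbrs + ['m'] for i, nbrs in ring.items()}
        hub['m'] = list(range(n))
        print(diameter(hub))    # 2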
    > As reflected by Bill Joy's article, people are frightened
    > by the possibility of intelligent machines. The key to
    > answering these fears is public understanding that they
    > can control the values of intelligent machines, and that
    > those values can serve human happiness rather than
    > machine interests. Educating the public to these issues
    > is a useful role for the Global Brain Group.
    > This is discussed in more detail in my book:
    > http://www.ssec.wisc.edu/~billh/gotterdammerung.html
    > in a column summarizing the book:
    > http://www.ssec.wisc.edu/~billh/visfiles.html
    > and in my paper to the recent Global Brain Workshop:
    > http://www.ssec.wisc.edu/~billh/gbrain0.html
    > Cheers,
    > Bill
    > ----------------------------------------------------------
    > Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
    > hibbard@facstaff.wisc.edu 608-263-4427 fax: 608-263-6738
    > http://www.ssec.wisc.edu/~billh/vis.html
    >
    > and
    > Bill Hibbard wrote:
    > > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
    > >
    > > This is an excellent statement of one view of future
    > > evolution, in which human individuality is sacrificed so
    > > that humans may become components of a larger brain.
    > I'm not sure I would phrase it this way, as it is not only bound to
    > alarm the paranoid but is, in fact, not true. I would say that as a
    > person's connectivity rises, his/her individuality also increases. As an
    > analogy, a person in a rural setting, interacting with two hundred people,
    > has only a limited number of socially acceptable roles they can fulfill. In
    > contrast, a person in a city, who interacts with thousands of people every
    > day, not only has a wider variety of possible roles, or jobs, but will also
    > perforce adopt a slightly different persona vis-à-vis every person she/he
    > comes into contact with.
    > They might be subservient to their boss, overbearing to the doorman,
    > amicable to the woman at the news stand, jovial at the club, raucous at
    > the concert, aggressive on the basketball court, and submissive to their
    > sex partner. How can this not become more elaborate when we deal with
    > millions of people?
    > I think that as the global brain develops, every person will realize that
    > their identity is a matter of choice, much as people adopt variant
    > personas in different chat rooms or email lists. I don't see people
    > lessening their mental interactions or mental activities when their
    > horizons expand.
    > Indeed, the very concept of a horizon, a two-dimensional notion, is
    > obsolete. Cyberspace is multi-dimensional... wish

    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II

    Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/



    This archive was generated by hypermail 2b29 : Sun Jul 15 2001 - 13:43:29 PDT