Well said, Henry.
The idea that nature is not survival of the fittest but actually survival
of the luckiest needs to be heeded.
And then, as you say, humans have been lucky enough to develop some
interesting faculties that enable **the species as a whole** to create a
bit more luck for itself.
The point in that Kurzweil article of most positive interest (one he
chose not to develop) was that this chance creation of faculties that
enable us to steer our luck to some degree has created a situation where a
much greater diversity of genetic traits survives and is considered
valuable.
That for me is the message I would want to carry into any Global Brain
effort: That some global knowledge space accessible for all not only extends
our luck-making capacity as a species, but also exists as a space where open
democratic diversity is promoted free from physical encumbrances.
That certainly does not mean we should let sloppy thinking pass
unchallenged, though, especially where those thoughts would have
implications for our physical existence should they be implemented.
A few notes from the record of history:
"I've come up with this great idea for some stuff that goes 'bang!'"
"Won't that be a bit dangerous if it's misused?"
"Well, we'll worry about that later."
Gunpowder.
"I've found some new stuff that makes my engine run much more powerfully."
"What about all that smoke and gas coming out of the pipe at the back?"
"Well, we'll worry about that later."
Petrol engines.
"I've come up with this great idea for splitting atoms so that we can hold
the power of the sun in our hands."
"Won't that be a bit dangerous if it's misused?"
"Well, yes, I am rather worried about that. We'll write a memo to the
government on the dangers of developing weapons with this technology."
Nuclear weapons.
"I've come up with this great idea for a pesticide that kills just about
every bug imaginable, and then some."
"What happens after it has killed all the bugs?"
"Well, let's worry about that later."
DDT.
And on and on and on...
... A global knowledge space accessible for all not only extends our
luck-making capacity as a species, but also exists as a space where open
democratic diversity is promoted free from physical encumbrances.
This I am in favour of as long as it doesn't promote crime in the physical
world.
A vast artificial intelligence capable of controlling all the machines in
the world, able to reproduce itself and its drones at will?
I believe any project heading in that direction requires us to think things
through with a level of discipline beyond any yet seen.
Peter
----- Original Message -----
From: "Henry van Eyken" <vaneyken@sympatico.ca>
To: <unrev-II@yahoogroups.com>
Sent: Sunday, July 15, 2001 10:17 PM
Subject: Re: [unrev-II] "As We May Think", etc.
> Hi Jack.
>
> Fascinating, if not unsettling stuff that calls for detailed attention.
>
> I don't wish to go off on a tangent, but here is one of those reasonings
> that flaw debates of evolution:
>
> "Many millions of years ago, the first living cells evolved. These ancient
> unicellular organisms, swimming about in the primordial soup, had a sole
> function--survival in order to reproduce."
> (First par. under "Multicellularity" in Danny Belkin's "Evolution and the
> Internet" found on the Kurzweil site.) The word "function" is used here
> synonymously with "purpose."
>
> But as I understand it, evolution is a game of chance - nature's
> opportunism. It is not endowed with purpose. Evolution is a coming about,
> not a going after. In evolutionary terms it is wrong to ask what, for
> example, eyes are for. Our eyes are not for seeing. It so happens that
> evolution produced organisms that permit us to see. "Purpose" is one
> outcome of evolutionary developments, a happenstance, that we seem to
> employ to turn the tables on evolution.
>
> I believe we do well to bear this firmly in mind when we read materials
> like these. When humans "purposefully" create machines to do this and
> that and the other thing, they are unwittingly moved by evolution, etc.
> And we often take credit (or blame) for what forces of chance painted us
> with.
>
> All of which is not to say we should abandon the human view of purpose
> and, consequently, being held accountable for our pursuits. Mores ex
> natura; mores ex machina. Go figure ...
>
> Henry
>
>
>
>
>
> Jack Park wrote:
>
> > At 07:54 AM 7/15/2001 -0400, you wrote:
> > >Interesting. Like deferring to the authority of churches.
> > >
> > >Henry
> > >
> > >Peter Jones wrote:
> > >
> > > > Yes. I am also interested in what might happen if ethical value
> > > > systems were somehow made part of the augmenting system. Would people
> > > > start deferring to the system excessively? In fact, that aspect
> > > > concerns me for augmentation as a whole.
> >
> > Apropos to this line of thinking are a couple of posts from the global
> > brain list, which I copy here (start by reading the paper at
> > kurzweilai.net):
> >
> > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
> > This is an excellent statement of one view of future
> > evolution, in which human individuality is sacrificed so
> > that humans may become components of a larger brain. The
> > Internet and organizational networks already give us a
> > taste of this, in which we must process a constant stream
> > of email. For most people, it is work that they would
> > rather avoid. For everyone, at least some of their email
> > traffic is work that would be nice to avoid.
> > People in industrial societies have been happy to let
> > machines do most of the physical labor, as soon as
> > technology produced machines that could do that labor.
> > Similarly, as soon as technology produces machines that
> > can relieve people of mental labor, people will be happy
> > to let them.
> > People will be intimately connected to intelligent
> > machines, but that connection will exist to serve and
> > please people rather than for people's brains to serve
> > the network.
> > This is where ethics must come into our thinking about
> > the global network of machines and people. Learning and
> > the values that define positive and negative reinforcement
> > for learning will be an essential part of intelligent
> > machines. Those values must be human happiness, both
> > short term and long term, rather than any sort of self-
> > interest of the machines. I think the humans who build
> > intelligent machines would be crazy to build them with
> > selfish values.
> > Such values will of course produce machines that do not
> > fit the Darwinian logic of self-interest. These machines
> > will be hobbled by being tied to human happiness. They
> > will continue to evolve in the sense of developing ever
> > better minds, but always in the interests of the humans
> > they serve.
> > In human and animal brains, learning values are called
> > emotions: the things we want. Rather than seeing the
> > global brain as a large intellectual collaboration of
> > human and machine minds, interactions among human and
> > machine minds will heavily involve emotional values.
> > Current interactions among humans heavily involve
> > emotions: humans have guilt and gratitude to promote
> > cooperation, but natural selection has made humans
> > primarily selfish, which creates competition. Societies
> > that have tried to reprogram their citizens for too
> > great a level of altruism have failed.
> > But adding intelligent machines to human society, that
> > have greater than human intelligence and are designed
> > with altruistic values, will change society deeply.
> > A good measure of machine intelligence will be the
> > number of people they can know well and converse with
> > simultaneously. Humans are "designed" to be able to know
> > about 200 other people well. There should be no reason
> > why intelligent machines cannot know billions of people
> > well. Such machines will significantly decrease the
> > diameter of the human acquaintanceship network. I think
> > this, and the machines' altruistic values, are the keys
> > to understanding the nature of the global brain.
> > As reflected by Bill Joy's article, people are frightened
> > by the possibility of intelligent machines. The key to
> > answering these fears is public understanding that they
> > can control the values of intelligent machines, and that
> > those values can serve human happiness rather than
> > machine interests. Educating the public to these issues
> > is a useful role for the Global Brain Group.
> > This is discussed in more detail in my book:
> > http://www.ssec.wisc.edu/~billh/gotterdammerung.html
> > in a column summarizing the book:
> > http://www.ssec.wisc.edu/~billh/visfiles.html
> > and in my paper to the recent Global Brain Workshop:
> > http://www.ssec.wisc.edu/~billh/gbrain0.html
> > Cheers,
> > Bill
> > ----------------------------------------------------------
> > Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
> > hibbard@facstaff.wisc.edu  608-263-4427  fax: 608-263-6738
> > http://www.ssec.wisc.edu/~billh/vis.html
> >
> > and
> > Bill Hibbard wrote:
> > > > http://www.kurzweilai.net/meme/frame.html?main=/articles/art0132.html
> > >
> > > This is an excellent statement of one view of future
> > > evolution, in which human individuality is sacrificed so
> > > that humans may become components of a larger brain.
> > I'm not sure I would phrase this this way, as it is not only bound to
> > alarm the paranoid, but is, in fact, not true. I would say that as a
> > person's connectivity rises, his/her individuality also increases. As an
> > analogy, a person in a rural setting, interacting with two hundred
> > people, has only a limited number of socially acceptable roles they can
> > fulfill. In contrast, a person in a city, who interacts with thousands
> > of people every day, not only has a wider variety of possible roles, or
> > jobs, but also will perforce adopt a slightly different persona
> > vis-a-vis every person she/he comes into contact with.
> > They might be subservient to their boss, overbearing to the doorman,
> > amicable to the woman at the news stand, jovial at the club, raucous at
> > the concert, aggressive at the basketball court, and submissive to their
> > sex partner. How can this not elaborate when we deal with millions of
> > people?
> > I think that as the global brain develops, every person will realize
> > that their identity is a matter of choice, much as people adopt variant
> > personas in different chat rooms or email lists. I don't see people
> > lessening their mental interactions, or mental activities, when their
> > horizons expand.
> > Indeed, the concept of a horizon, two-dimensional space, is obsolete.
> > Cyberspace is multi-dimensional... wish
> >
> >
> > Community email addresses:
> > Post message: unrev-II@onelist.com
> > Subscribe: unrev-II-subscribe@onelist.com
> > Unsubscribe: unrev-II-unsubscribe@onelist.com
> > List owner: unrev-II-owner@onelist.com
> >
> > Shortcut URL to this page:
> > http://www.onelist.com/community/unrev-II
> >
> > Your use of Yahoo! Groups is subject to
> > http://docs.yahoo.com/info/terms/
>
>
This archive was generated by hypermail 2b29 : Mon Jul 16 2001 - 02:47:08 PDT