Re: [unrev-II] The perils of high technology... (fwd)

From: Paul Fernhout (pdfernhout@kurtz-fernhout.com)
Date: Tue Mar 14 2000 - 16:06:57 PST

    Jon -

    I was in an Ecology and Evolution graduate program for a while.
    To sum up two years of studies: Evolution happens.
    The best one can do is delay it slightly.

    People often make major errors in underestimating self-replicating
    technology -- for example, Tom Ray himself thought that by evolving
    code in a "virtual" environment it could never get out. All one needs
    is for the virtual environment to be hooked to a real one somehow, and
    for all relevant purposes the code can evolve and interact with the
    real world. Such a link could happen through malicious intent, a
    design mistake, or a VM coding bug (the same way problems with some
    versions of Internet Explorer can be exploited to modify files on your
    hard drive). More subtly, such a link can arise when an evolving
    system creates fascinating patterns that are interpreted by the
    observing scientists (on a view screen, or with a code viewer) and
    cause them to act.

    Nano-tech, Robo-tech, Corporate-Tech, Computer-Tech, AI-Tech,
    Money-Tech, Meme-tech, Habitat-tech, BioTech -- it makes no difference.
    All these systems can evolve if they can replicate something with
    variations that are selected for or against somehow.
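    As a minimal sketch of that condition (the names and the toy fitness
    function below are illustrative, not from any actual ALife system),
    the replicate/vary/select loop fits in a few lines of Python:

```python
import random

random.seed(0)  # make this toy run reproducible

def evolve(population, fitness, generations=100, mutation_rate=0.1):
    """Minimal replicate-vary-select loop over bit-string 'genomes'."""
    for _ in range(generations):
        # Replicate with variation: copy each genome, occasionally
        # flipping bits (mutation).
        offspring = [
            [(bit ^ 1) if random.random() < mutation_rate else bit
             for bit in genome]
            for genome in population
        ]
        # Select: keep only the fitter half of parents plus offspring.
        survivors = sorted(population + offspring, key=fitness, reverse=True)
        population = survivors[:len(population)]
    return population

# Toy fitness: number of 1-bits. Any replicator with heritable
# variation under selection will climb such a gradient.
start = [[0] * 16 for _ in range(8)]
final = evolve(start, fitness=sum)
```

    The loop's shape is indifferent to its substrate -- swap in a
    different genome representation and fitness function and it describes
    nanites, memes, or trading strategies equally well, which is the
    point of the paragraph above.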

    Technology is an amplifier -- possibly of the tiniest, most
    distasteful voice.
    Self-replicating technology is an even greater amplifier.
    There will always be disturbed people with significant power -- for
    example, imagine what would happen if Bill Gates had a minor stroke in
    a part of the brain related to emotional control, or if that money
    were somehow left to dysfunctional relations in his will. [He actually
    says he will give the bulk to charity.] We might be looking at $100
    billion out of control. (On the other hand, Bill Gates could obviously
    also invest his money in helping save humanity.)

    Given the rising magnitude of the money, information, and energy flows
    on this planet, I don't see any way to stop a major singularity or
    even greatly impede it. Gaia is about to give birth to something new.
    I don't see any other hope than to surf the evolutionary wave and hope
    for the best.

    That said, I stopped doing artificial life over a decade ago, in part
    out of fear of the consequences and not wanting to contribute to such
    disasters. Now that such knowledge has become mainstream and there are
    hundreds or thousands of serious ALife practitioners in every field
    from finance to scheduling to art, I no longer have the same level of
    qualms about my own efforts significantly speeding the problem along.

    Real-world living systems survive by dispersing, varying, and having
    refugia (places to hide from predators). We must do the same if we
    care about ensuring that humans and other biological creatures
    survive.

    I think our best hope for human survival in the next few decades is to
    develop a wide variety of self-replicating systems at a macroscopic
    level (like space habitats or underwater habitats) capable of
    sheltering human and other biological life within. That is my
    long-term interest in an OHS/DKR:
      http://www.kurtz-fernhout.com/oscomak

    If we worry about fantastic futuristic problems, then we must admit
    the possibility of working towards fantastic futuristic solutions.

    -Paul Fernhout
    Kurtz-Fernhout Software
    =========================================================
    Developers of custom software and educational simulations
    Creators of the Garden with Insight(TM) garden simulator
    http://www.kurtz-fernhout.com

    Jon Winters wrote:
    > Checkit:
    > http://washingtonpost.com/wp-srv/WPlate/2000-03/12/215l-031200-idx.html
    >
    > Now is the time to consider all these issues so we can prevent an
    > accident in the future.
    >
    > A few suggestions that were made in our class were to make the
    > self-replicating machines in such a way that they can only replicate
    > in certain controlled environments that do not exist in nature
    > (while being flooded by X-rays, for example). If replicators get
    > released into the wild, they will die off in one generation.
    >
    > Mutants have me worried. Researchers are considering mutation and natural
    > selection so that nano machines can evolve and self optimize. Care must
    > be taken to prevent them from mutating the security measures mentioned
    > above.
    >
    > I remember reading some stuff written by Tom Ray that talked about
    > letting software mutate in the lab and then disabling its ability to
    > reproduce when it is time to harvest the program and put it to use.
    >
    > Can the same concepts be applied to Nanotech?


    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II



    This archive was generated by hypermail 2b29 : Tue Mar 14 2000 - 16:13:17 PST