Re: [unrev-II] Is "bootstrapping" part of the problem?

From: Rod Welch (rowelch@attglobal.net)
Date: Mon Dec 18 2000 - 21:05:09 PST

    Paul,

    As usual, I am impressed by the depth of your analysis. In this case, however,
    your point is not clear.

    Can you sum up by stating the two or three things you advocate should be done,
    that are not being done, or that should be done differently? How does your
    analysis today impact the big picture of moving from IT to KM?

    For example, I am advocating a culture of knowledge as the big objective: a
    way to accommodate a new world order of faster information, resulting from IT,
    that increasingly overwhelms the human span of attention and causes continual
    bumbling due to meaning drift. (See, for example, the discussion of the high
    cost of medical mistakes...

    http://www.welchco.com/00601.HTM#0644

    ...explaining that communication, nominally the strongest asset of any
    community, is rapidly becoming the biggest risk of enterprise.) Maybe that is
    a mistake, maybe not. In any case, it seems to address Doug's objective to
    improve competency for solving complex problems, since communication is by far
    the most complex problem of the ages; see, for example, Drucker, reviewed on
    900303...

    http://www.welchco.com/sd/08/00101/02/93/11/30/002549.HTM#L311983

    ...and further at...

    http://www.welchco.com/sd/08/00101/02/93/11/30/002549.HTM#1855

    I propose a single breakthrough solution: enhancing alphabet technology with a
    continual "intelligence" process that turns information into knowledge, and
    thus moves us up a notch on the cognitive scale from IT to KM. You seem to
    suggest today that bootstrapping, while intended to solve complexity, might in
    some respects be said to compound the problem it seeks to solve. I think there
    is another way to explain bootstrapping that avoids this conflict, but you
    seem to be arguing against it. Can you clarify?

    Thanks.

    Rod

    Paul Fernhout wrote:
    >
    > Now that I've got your attention, this isn't a slam at the OHS/DKR
    > project, which I still think worthwhile, nor at Doug's life/work in
    > general, which I consider admirable.
    >
    > The current issue of Technology Review includes an article with an
    > exchange between Michael Dertouzos and Ray Kurzweil, arising out of
    > commentary on Bill Joy's statements (on out-of-control technology).
    >
    > http://www.technologyreview.com/articles/jan01/dertouzoskurzweil.html
    >
    > The issue at stake in all this is whether bootstrapping machine
    > intelligence (and nanotechnology) is a good idea, whether it will happen
    > regardless of anyone's intent, and what the outcome will likely be of
    > this probably unstoppable bootstrapping process over the next few
    > decades.
    >
    > In the related discussion forum
    > http://www.technologyreview.com/forums/list.php3?num=13
    >
    > is a rather scathing criticism of Dertouzos's comments (by Jon Taylor):
    >
    > http://www.technologyreview.com/forums/read.php3?num=13&id=3&loc=0&thread=3
    >
    > I think the discussion gets muddied as there are three types of
    > technology being discussed (but not clearly):
    > 1) Better "hand" tools (w/ dog level intelligence and loyalty?)
    > 2) Augmented humans (or "transhumans")
    > 3) Machine intelligence / independent nanotech
    >
    > Dertouzos' comments are most applicable to understanding the outcome of
    > (1) and maybe (2) [in terms of tools for a humane society], whereas
    > Kurzweil and the other commentary address more the issues of (2) and (3)
    > [autonomous intelligent tools with their own agendas]. Dertouzos more or
    > less discounts (3) as something worth worrying much about now (i.e., it
    > is something to let PhD students do dissertations on :-). And the
    > commenter takes him to task for this, saying in effect that it is the
    > only major issue in the next few decades really worth worrying about.
    >
    > On this issue of bootstrapping and exponential growth:
    >
    > Kurzweil writes:
    > > Many long-range forecasts of technical feasibility
    > > in future time periods dramatically
    > > underestimate the power of future technology
    > > because they are based on what I call the
    > > "intuitive linear" view of technological progress
    > > rather than the "historical exponential"
    > > view. When people think of a future period,
    > > they intuitively assume that the current rate
    > > of progress will continue for the period being
    > > considered. However, careful
    > > consideration of the pace of technology shows
    > > that the rate of progress is not constant,
    > > but it is human nature to adapt to the changing pace,
    > > so the intuitive view is that the pace
    > > will continue at the current rate. It is typical,
    > > therefore, that even sophisticated
    > > commentators, when considering the future,
    > > extrapolate the current pace of change over
    > > the next 10 years or 100 years to determine
    > > their expectations. This is why I call this way
    > > of looking at the future the "intuitive linear" view.
    > >
    > > But any serious consideration of the history of technology
    > > shows that technological
    > > change is at least exponential, not linear.
    > > There are a great many examples of this,
    > > including exponential trends in computation,
    > > communication, brain scanning,
    > > miniaturization and multiple aspects of biotechnology.
    > > One can examine this data in many
    > > different ways, on many different time
    > > scales and for a wide variety of different
    > > phenomena, and we find (at least) double
    > > exponential growth, a phenomenon I call the
    > > "law of accelerating returns." The law
    > > of accelerating returns does not rely on an
    > > assumption of the continuation of Moore's law,
    > > but is based on a rich model of diverse
    > > technological processes. What it
    > > clearly shows is that technology, particularly the pace
    > > of technological change, advances (at least)
    > > exponentially, not linearly, and has been
    > > doing so since the advent of technology.
    > > That is why people tend to overestimate what
    > > can be achieved in the short term
    > > (because we tend to leave out necessary details) but
    > > underestimate what can be achieved in
    > > the long term (because exponential growth is ignored).
    > >
    > > This observation also applies to paradigm shift rates,
    > > which are currently doubling (approximately) every decade.
    > > So the technological progress in the 21st century will be
    > > equivalent to what would require (in the linear view)
    > > on the order of 20,000 years.
    >
    > So, this discussion takes "bootstrapping" as a technological given, and
    > in fact, as really the defining quality of the early 21st century, as
    > exponential curves begin to show their teeth.
    >
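    > As a rough back-of-the-envelope illustration of that claim (a sketch
    > only, not Kurzweil's exact model): suppose the rate of progress simply
    > doubles once per decade. Then one calendar century holds about
    > 10 * (1 + 2 + 4 + ... + 512), i.e. on the order of 10,000
    > "year-2000-equivalent" years of progress -- the same order of magnitude
    > as the 20,000-year figure quoted above. In Python:
    >
    > # Sketch only: assumes the rate of progress doubles once per decade.
    > rate_today = 1  # "years of progress" delivered per calendar year now
    > equivalent_years = sum(10 * rate_today * 2 ** decade for decade in range(10))
    > print(equivalent_years)  # 10230 -- roughly ten thousand years' worth
    >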
    > Again, reprising an earlier post: a problem like running out of oil just
    > isn't of major significance if, over the next hundred calendar years, we
    > will see what would appear to the average person to be 20,000 years of
    > technological progress at today's pace, all compressed together by
    > exponential growth.
    >
    > What is of major significance is what this IMHO runaway and unstoppable
    > bootstrap process means both for humanity and us as individuals. What
    > does this mean in terms of culture shock? Effectively, this is Alvin
    > Toffler's "Future Shock"
    > http://www.bergtraum.k12.ny.us/user/t6573/br3.htm
    > magnified to the extreme!
    >
    > This is one reason why I think just stating the Bootstrap Institute's
    > (or the colloquium's) goal of "bootstrapping" human or organizational
    > ability is not adequate. It has to be a question of bootstrapping
    > towards what end? There has to be an accompanying statement of human
    > value.
    >
    > If that end is human survival in some style, then another question has
    > to be how to cope with all the other bootstrapping processes going on
    > (like development of nanotechnology and machine intelligence) which may
    > interact with that goal.
    >
    > Effectively, we are seeing this even now as the Bootstrap Institute's
    > OHS/DKR effort carries on amidst product releases and developments by
    > many other organizations. This makes it very hard to keep up -- the
    > other efforts (whether for-profit or non-profit) are competing for the
    > same attention and funds that the Bootstrap effort might wish to see go
    > instead to the general betterment of humanity. The rich get richer, just
    > as in a garden the bigger plants shade out the smaller plants (and,
    > below ground, crowd them out for water and nutrients).
    >
    > Perhaps this is like how a machine intelligence powered by solar cells
    > might shade out humanity on the earth's surface, bearing it no direct
    > malice -- just as big corporations and big government machine
    > intelligences effectively "shade out" 840 million humans
    > http://www.thehungersite.com/
    > by drawing in the best minds and assets for "economically feasible"
    > ventures. Closer to home, on "Meet the Press" this Sunday, retiring
    > Senator Moynihan
    > http://www.senate.gov/~moynihan/
    > pointed out that the five-year limit for welfare in the U.S.A. will soon
    > kick in, and since the welfare program "Aid to Dependent Children" was
    > eliminated,
    > http://www.clasp.org/pubs/claspupdate/CU_10-98-21.html
    > many children in the U.S. may start to go without. As he points out,
    > something has happened to our culture when, in the Depression of the
    > 1930s, we could feed every American child, and now, in the longest
    > economic expansion, we cut programs for dependent children. So, in the
    > U.S., DOD contractors, savings and loan bailouts, etc. "shade out" poor
    > children. As Moynihan put it of children, "they don't vote, and it
    > shows." I use this as an example of how even in the U.S.A. a
    > "bootstrapping" economy (in terms of compound economic growth) can leave
    > some people behind, and even take from them what they had. [Obviously,
    > welfare reform is a complex issue -- but the point I want to make is that
    > there should be certainty that children will not fall through the cracks
    > of the welfare system, and there is not...]
    >
    > I understand the desire to be neutral on the ends to which
    > "bootstrapping" is applied to attract broad support, but ultimately (in
    > my opinion) many organizations (large corporations or other
    > bureaucracies) in today's world effectively are already machine
    > intelligences (somewhat like ant colonies) working towards their own
    > exponential ends (in an economic framework). Langdon Winner's
    > "Autonomous Technology" brings up this in part, since effectively people
    > in a "role" in a corporation have limited choices as to what they can do
    > in that role (or they are dismissed if they go beyond that role in other
    > than subtle ways). So in this sense, I see the machine intelligences
    > already to an extent "shading out" efforts like the Bootstrap Institute
    > or the Humanity Libraries project.
    > http://www.humanitylibraries.net/
    >
    > This is meant to be realistic, not fatalistic. Obviously in a garden
    > many types of plants can grow, and there are various unoccupied niches
    > and refugia one can try to survive in. Some organisms, like pine trees,
    > bide their time waiting for a patch of light to open up so they can
    > grow. So too, hopefully Doug will find a funding niche for the OHS/DKR
    > effort, and hopefully the human spirit will find a way to continue to
    > blossom amidst these larger machine intelligences, just as the small
    > mammals who were our forebears survived at the feet of the dinosaurs.
    > And so too, hopefully people will individually be willing to make the
    > sacrifices needed to build efforts for the good of humanity. For
    > inspiration, see "The Skills of Xanadu" by Theodore Sturgeon, available
    > in his book "The Golden Helix".
    >
    > I think humanity can survive the rise of the machine intelligences that
    > began in the late 1800s, when corporations were effectively first
    > granted equal status with humans in the U.S.A. But it will take a major
    > directed effort -- and if it is done by corporations (as in groups of
    > people), they may well be organized differently from the conventional
    > ones, possibly "chaordically".
    > http://www.chaordic.org/
    >
    > So given all that, I suggest people associated with the "Bootstrap
    > Institute"
    > http://www.bootstrap.org/
    > think deeply about their mission statement, more in terms of
    > understanding the bootstrapping process our civilization is enmeshed in
    > and directing it to specific, defined, positive ends. Unfortunately, that
    > may mean losing participation of some who don't agree with the chosen
    > ends. But one thing I hope we all can agree on after reading the
    > Technology Review article and related materials -- "bootstrapping" is
    > happening right now in many ways, and the implications are both wondrous
    > and threatening.
    >
    > My own mission statement is effectively:
    > http://www.kurtz-fernhout.com/oscomak/
    > or, in short, learning to survive in style without depending on "supply
    > chains" (chains == slavery?) other than ones specifically chosen, and
    > learning how to give that ability to choose to others.
    >
    > -Paul Fernhout
    > Kurtz-Fernhout Software
    > =========================================================
    > Developers of custom software and educational simulations
    > Creators of the Garden with Insight(TM) garden simulator
    > http://www.kurtz-fernhout.com

    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II


