Re: [unrev-II] Is "bootstrapping" part of the problem?

From: Paul Fernhout
Date: Wed Dec 20 2000 - 07:07:19 PST



    Thanks for all the great comments!

    > [Garold L. Johnson] While what you say is true, I believe that there are some points being missed.


    > The expansion of technological ability continues to outstrip our ability to make use of it, to reason about it, to deal
    > with the values and desires issues. While this is true, it is nothing new. This has been going on for a very long time.


    > It does so because those who can think have allowed those who don’t to set the values agenda. Since science
    > agreed to stay out of certain aspects of knowledge in order to keep from being destroyed by the church, science
    > has refused to deal with any of the “soft” issues. The result is a strong tendency for those in the soft issues (hardly
    > sciences) to be unqualified in science and for those who are qualified in science to avoid the soft sciences.

    Another issue is that competence in science (as it is classically
    practiced in academia, as a certain type of inquiry based intellectual
    pursuit within a single domain) does not translate to competence in
    human affairs. The two are not necessarily exclusive -- just different
    skills. (Howard Gardner's book "Multiple Intelligences" touches on
    this.) People are often drawn to things that reflect their personality,
    and various personality traits are more useful in various roles. For
    example, being a suspicious person will make you an excellent software
    debugger, but may make relationships with clients difficult.

    There is a wonderful paper by the late Diana E. Forsythe,
    "Engineering Knowledge: The Construction of Knowledge in Artificial
    Intelligence" [Social Studies of Science 23(3), 1993, pages
    445-477]. As an anthropologist studying AI workers, she concludes
    more or less that they in general have a very narrow view of
    intelligence (and mind), which in part reflects their personalities.
    It's actually rather humorous (in an ironic way), as you see someone
    trained to study what people do commenting on highly technical (and
    arrogant?) AI researchers who claim to know from naive experience
    what human experts do when they think on the job and how to capture
    that in a machine. I don't have an online reference for the paper
    itself, but it is cited quite a bit. However this link is a start
    into this world of commentary:
    > Ability to think better empowers those who think and does very little for those who won’t.

    Good point.

    > If there is a promise for the future, IMO, it lies in the fact that continuing growth in computing capability makes it
    > possible for small teams to tackle and accomplish feats which only a few years ago were possible only to major
    > corporations or governments.

    I think this is the crux of why what we are doing might make sense. You
    put it very succinctly here. And as Margaret Mead said something like
    "Don't underestimate the power of small groups of committed individuals
    to change the world, in fact, that is the only thing that ever has."

    > As we develop the tools and techniques for organizing knowledge into accessible information and increase the
    > possibility of learning supported by better information tools, we begin to break the stranglehold that governments
    > have on education, and the dependence on large organizations of all sorts.

    Good point.
    > When a small group of individuals can perform the research required to bring about some of the goals that you
    > consider important, there is a chance of it getting done. If the future relies on our ability to convert bureaucracies or
    > mass humanity to any better way of doing things, we are indeed doomed.

    Another good point.
    > 1) Value Affirmation. There should be an affirmation of core human
    > values and humane purposes in a statement of purpose for "bootstrapping"
    > as defined by the Bootstrap Institute.
    > [Garold L. Johnson] Unfortunately, we can start that debate and expend all of our energy on it and get nothing
    > accomplished.

    You have another good point. I won't say I completely agree with it, but
    it would be good to see how we could turn what might otherwise be an
    energy absorbing thing into something productive. Perhaps creating the
    infrastructure to have such discussions?

    I do agree it is a minefield. It almost comes down to a religious belief
    (values and desires).

    However, my purpose in writing the original email was to prod again on a
    topic I brought up before (basically "bootstrapping for what") because I
    think it important for individuals to keep this in mind even without a
    joint statement of purpose.

    I am not a practicing Unitarian, and this is not an attempt to convert
    anyone on this list, but as one of the world's most non-dogmatic
    religions, the UU statement of faith might be of interest as a starting
    point. From that page:
    ] We, the member congregations of the Unitarian Universalist
    ] Association, covenant to affirm and promote
    ] The inherent worth and dignity of every person;
    ] Justice, equity and compassion in human relations;
    ] Acceptance of one another and encouragement to spiritual growth in
    ] our congregations;
    ] A free and responsible search for truth and meaning;
    ] The right of conscience and the use of the democratic process within
    ] our congregations and in society at large;
    ] The goal of world community with peace, liberty, and justice for all;
    ] Respect for the interdependent web of all existence of which we are a
    ] part.
    Again, this is not an attempt to undermine anyone's specific faith, but
    to point out that often one can make a set of affirmations that are
    general enough to be inclusive, while still being a positive statement.

    I'm sure one can find similar statements in various other religious
    traditions which are statements of core value apart from specific dogma.
    My point is that when we ask "bootstrapping for what?" we should at
    least have a nebulous positive answer involving the worth of the human
    experience, rather than "just to make things go faster".

    Perhaps too big a can of worms to open...

    > I would prefer that we create a set of tools that make it possible for those who will to investigate the
    > mammoth amount of knowledge required to investigate the major issues that you raise. Part of the reason for
    > staying out of the soft areas is that the amount of information that has to be understood and manipulated to deal
    > with even the simplest of social issues continues to outstrip the abilities of those who would do so. Until we can
    > begin to understand and model how we work together to achieve any goal, it seems unlikely that we will have much
    > impact on it.

    Good point. However, at least in terms of what I want to do (related to
    letting people pick the size of organization needed for life support
    from village to planet) presumably one might be able to reduce the scale
    of some problems by addressing them in the context of smaller groups.
    (Eric's Cohousing-like example, for instance).

    > I believe that we are at a point where we need to begin to take charge of our own intellectual evolution or perish. If
    > we allow our next set of institutions to develop with no more thought than the current set, we are indeed headed for
    > trouble.


    > However, the view that all of our problems would be solved if only others saw the issues as clearly as we do is a
    > self-defeating viewpoint. All utopian ideas are basically “all that has to happen is for human nature to change to the
    > way I would like it to be”. It isn’t going to happen.


    > If social goals are going to be met, it will be done by people who: already have such goals, develop the necessary
    > tools and abilities to accomplish those goals, and set about getting it done.

    Well put. I like this statement.
    > As a consequence, developing the tools that make it possible and providing them to the small groups that have the
    > values and the desire seems to me to be the only realistic road out.

    Perhaps the issue in a nutshell. (Even if what constitutes a way out may
    be different for different groups...)

    > I submit that our problem isn’t so much too much technology as an inability to martial the knowledge necessary to
    > apply it well. As we get more information on how natural systems work, for example, such things as organic farming
    > which works with natural systems to produce more food better and without massive amounts of chemicals provide
    > the possibility of bypassing the large dinosaur systems that currently have to provide the chemicals. If there is going
    > to be a $5 box that will power a village it will far more likely be the result of a small group working to solve that
    > problem than it will because the existing system decided to build such a device. This is knowledge and research
    > which is just now becoming available to groups small enough to care.

    Well, I agree with the sentiment, but information on organic agriculture
    (and the problems with pesticides) has been available for a long, long
    time. Rodale Press goes back to the 1940s.
    In the 1930s biocontrols were becoming widely used (before being
    replaced by petrochemicals). What perhaps is newer is seeing additional
    drawbacks to conventional agriculture (which never had the burden of
    proof of safety), such as breakdown components acting as estrogen mimics
    leading to developmental problems. [I was program administrator about
    ten years ago for the NOFA-NJ organic farm certification program.]
    However, if what you mean is widespread interest, then yes, interest in
    organic farming does seem to be following an exponential growth curve
    and is now becoming noticeable.
    > [Garold L. Johnson] I agree that there needs to be some energy devoted to the problem of exponential growth if only
    > to address the technical issues of data inflow overwhelming all attempts to organize it with whatever tools and for
    > the answer to be obsolete by the time you discover what they are. Addressing exponential growth with any view that
    > any efforts we take are going to change it is wasted effort – it isn’t going to happen. The best we can hope for is to
    > empower those willing to make a difference in the face of the growth.

    I agree it likely can't be stopped. The issue is a matter of either
    directing it to positive ends, or if that can't be accomplished, using
    some fraction of it for positive ends and surviving the rest of its
    effects.
    > 3) Accepting the Politics of Meeting Human Needs. Addressing human needs
    > (beyond designing an OHS/DKR) was one of Doug's major goals and
    > something that occupied many presenters in the Colloquium. The
    > colloquium needs to accept that there are effectively no technical
    > issues requiring extensive innovation related to supporting contemporary
    > society that are of any significant importance.
    > [Garold L. Johnson] This is true, but not terribly relevant, I am afraid. We have had the technological ability to carry
    > out nearly any set of goals that we could get sufficiently widespread agreement to tackle for years. To the extent that
    > there is hunger in the world, for example, it is held in place by governments and those in power to whom their power
    > is all that is of importance. This is lamentable, but it is a fact. Continuing to lament it isn’t going to change it. What
    > will change it is empowering those with the will to do something more than talk about it. This is where the efforts we
    > are discussing can have value.

    I thought it important to once again bring up this issue. Some of this
    is a legacy of my reactions to the earliest Colloquium speakers. But
    nonetheless, if we start from the premise that there is enough to go
    around now for basic needs (even if only some people will have more
    than that), then the focus of our efforts might shift. In the same way,
    people keep talking about how genetically-altered crops will benefit
    the developing world, when land reform or other political issues may be
    more important... Nonetheless, the needs of the developing world are
    often part of the politics of getting funding for ever advancing
    genetic engineering of crops like "terminator" seeds or "Round-up
    ready" crops. My point is simply that meeting core human needs is often
    cited as a reason for increasing (bootstrapping) the rate of
    technological progress, so clearly delineating the two seems important.

    > We need the ability to manage knowledge in much greater volume much faster than we can today before we can
    > even think meaningfully about why it is that the conditions we decry exist and what can be done about them in
    > human terms.

    Well, I think this point is subject to some debate, given the above. If
    we have created an ever more complex set of processes, twisted supply
    chains, and so forth, justified by claims that this is to meet core
    human needs, perhaps part of the solution is to find a way to simplify
    all this so core human needs can be met. I'm not saying that is
    necessarily possible without further innovation in organizing
    manufacturing technology (say, providing each village with a flexible
    machining center, or a $5 self-replicating food box).

    I think the point you raise here is interesting, and gets at the core of
    justifications for "bootstrapping" as the Bootstrap Institute defines
    it. Still, one may question which problems are the ones requiring that
    level of knowledge management. For addressing the issue of people
    starving in Africa (or the US) I think such a technology would be nice
    but is not required. For addressing the issue of dealing with
    self-replicating machine intelligence, perhaps such tools are required.

    > Very few people think in any measure. Even fewer think clearly to any degree. We have yet to devise the tools and
    > techniques for dealing with human values and motivations in any meaningful way. There is no agreement about how
    > to reason about issues of values, since reasoning about values has almost never been done in human history. It is a
    > new area of discovery. We don’t have any rules of evidence, nor any concept of what proof means in this context.

    Excellent concept of a tool to help reason about values.

    > Additionally, when we enter the realm of social interactions, group dynamics, social mechanics, evolution of
    > organizations, etc. there is no way to model the massive problems that arise. This is the entire area of “wicked”
    > problems – problems where what we think of as independent variables are in fact mutually dependent. This is an area
    > for philosophical inquiry, certainly, but it is also an area in which the ability to model systems and make the
    > information available is of utmost importance. This was Buckminster Fuller’s focus, and that effort has yet to
    > succeed.

    Hopefully, one of the areas to be addressed is to make the problems more
    manageable. For me, this at a start comes down to asking, for example,
    how self-reliant can a group of 10,000 people be. I think this issue
    is worth addressing not because people ideally may want to live like
    that, but because the problem is more tractable than solving "world
    hunger" in general.
    > <SNIP>
    > So to the
    > extent the Colloquium wants to focus on current issues (world hunger,
    > California electricity crisis) it needs to support tools more related to
    > dealing with politics or social consensus.
    > [Garold L. Johnson] That is consistent with what I have been saying, but I believe
    > that the issues for this forum are of the nature of “what factors involved in
    > problems of the scale of human social and political interactions impact the
    > requirements and design of the knowledge tools that we propose to build to
    > assist in solving these problems?” That brings the effort into one of
    > requirements elicitation in order to build an information management technology
    > of sufficient power and scope to allow it to be used to address such problems.

    Fascinatingly complex sentence starting with "what factors...". I agree
    with the sentiment, although as above, I may question just how large
    scale the systems modeled have to be (or what simplifying assumption can
    be made regarding things outside the system...)

    > In my youth, I believed that what was needed to improve the world was a way to
    > allow those who make decisions that impact the rest of us to have the relevant
    > information and knowledge to make those decisions in an informed manner. It took
    > several years for me to realize that until we did something about the
    > unwillingness and the inability of those decision makers to think, and to think
    > about the value systems they used to address the problems, just more
    > information or better organized information wasn’t going to solve the problem.

    OK. Although, I'd go beyond this. It has been said "never attribute to
    malice what stupidity or incompetence can explain". That seems close to
    your point. But still, we must accept that decisions are made based on
    values. If the decision makers have values (e.g. staying in office)
    different from those of the people decisions are made for, then the
    results may not be desirable even if they are made intelligently. This
    is the flaw of "cost/benefit" analysis: the issue is who pays the
    costs and who gets the benefits.

    > I now believe that this is a task that can only be accomplished without asking
    > for or expecting support from the existing institutions.

    It has been said never apply to the government for a grant to become
    self-reliant... :-)

    > That happens by making
    > it increasingly possible for individuals and small groups to live independently
    > (or at least more so) of the existing institutions.

    I'm with you here. Although for me the issue is choice and fallback
    positions. I would have more confidence and acceptance of living in a
    complex society if I knew there were reliable (and pleasant)
    alternatives if that society collapsed (from economics, war, plague,
    etc.).
    > For example, IMO, our
    > entire educational system is beyond redemption as the basic underlying belief
    > structure is mistaken. Trying to get education to fix itself is not going to
    > solve this problem. It won’t even be solved directly by private or home
    > schooling, since the technology for both comes from the same pool that has
    > caused the problem. If we can make it possible for individuals and families to
    > learn, to educate themselves, and to discover things for themselves, then the
    > educational system can be bypassed and allowed to wither. The vast majority of
    > people wouldn’t use such a system if we had it, but some would, and they might
    > be able to make a difference.

    Well, obviously many educators are very dedicated, but as you point out,
    the system (and lack of resources) makes it hard for anyone to do a good
    job beyond baby-sitting and readying students for a 1950s time-card work
    world.

    But your deeper point is that sometimes the fix for a thing is to make
    it irrelevant.

    > The only
    > hope to resist this is some form of government intervention or worker
    > (individual or union) resistance. These decisions will all be made in
    > bits and pieces, each one seemingly sensible at the time.
    > [Garold L. Johnson] How can we seriously expect governments to provide the solution when they are the major
    > source of the problem?

    I do think it a proper role for government to enforce the rules of a
    playing field. I think for example enforcing environmental regulations
    is a proper role of government. So too, historically, government has
    been involved in income redistribution and public infrastructure. I
    think addressing this issue of ownership, control, and equity in machine
    intelligences and their output is a fair role for government.

    However, it is difficult to police immortal beings (corporate machine
    intelligences) with powers vastly beyond those of most individual
    people. And likewise, the governmental process can easily become part of
    the problem, especially as it is subverted by powerful (machine
    intelligence) interests.

    > The problem is not so much with the organizations as with the way that we as humans think
    > or fail to think – organizations reflect that failure magnified. “The mind set that got us into this mess is not the mind
    > set that will get us out of it.” (loosely) Einstein.

    Good point. And also to quote Einstein on the dropping of the first
    atomic bomb: "everything has changed but our thinking..."

    > Expecting the organizations and thought processes that brought about the current situation to resolve that situation
    > is simply not reasonable. We need better ways of approaching knowledge and thinking. We need ways to model
    > human behavior and human social systems far better than we can currently. Just decrying the fact that the current
    > reality isn’t to our liking doesn’t move us closer to changing that reality. Complaining about corporations pursuing
    > profit is pointless, as it is an essential of what they have to do to survive. The way they pursue it is possibly open to
    > change, but the fact is that the organization that doesn’t survive doesn’t have any chance of making a difference, as
    > well meaning but non-functional organizations demonstrate repeatedly. Example, we once had major problems with
    > corporations spilling all sorts of smoke related pollutants into the atmosphere. The places where I saw it resolved
    > most quickly were those that discovered that the minerals and materials that could be extracted from that smoke
    > were of more value than the cost of installing equipment to clean the smoke. If you want corporate behavior to
    > change, change the profit picture. It is here that small scale research has some real potential.

    Very good practical advice.

    Actually, I think many corporations may become less and less relevant.
    Most of them are involved with producing goods and services which
    someday (soon) might be unneeded or might come from a "replicator". Not
    to be too Star Trek, but if nanotechnology or similar larger scale
    processes are capable of flexibly making on the spot most items from
    basic raw materials, the need for a supply chain of organizations goes
    away. Yet, most corporations exist to make certain goods (or related
    services) that fit into this supply chain.

    > The corporate social form has had little time to evolve (a few hundred
    > years?) so there is no guarantee that contemporary corporate
    > organization forms will be capable of doing more than exhausting
    > convenient resources (passing on external costs when possible) and then
    > collapsing.
    > [Garold L. Johnson] We should be so lucky that they will just collapse. They will get a lot worse before that happens.

    Hope not. But it might happen. Worse in what ways do you think?

    > Worse, we have no better options to offer as a replacement.

    One possibility is:

    > The problem is that the organization of the corporate
    > social form as well as all our other social forms was completely undirected by any coherent human thought. Our
    > problem remains that the evolution of social forms is far too slow to handle the expected rates of change, and that
    > all attempts to devise a better scheme than “just let it happen” have been such uniform disasters – social planning
    > has been a major disaster nearly every time it has been tried.

    One must distinguish between "social planning" and "dictatorship" and
    "how things are produced". If things are mainly produced locally
    (replicators, supplied with little more effort than indoor plumbing for
    water) then social planning will be done on a very different landscape.

    [Note this isn't to say we need nanotechnology, I think (hope?)
    reasonably efficient community level general purpose production of most
    things through flexible manufacturing is possible without that.]

    > Is this failure because we lack the tools to model
    > systems of this complexity, because we lack any way of thinking about the problems in the first place, or because
    > we haven’t yet stepped up to expend the effort, energy, and thought necessary to address these problems.

    Good question.

    > [Garold L. Johnson] KM will be useful whether GE uses it as you would like them to or not, it will be more useful to
    > others who cannot currently offer any viable alternative to the GE’s of the world.

    Good point.

    > There is one obvious exception to saying KM won't change the direction
    > of organizations, which is to the extent humans as individuals in
    > corporations have access to KM tools and might see the bigger picture
    > and act as individuals. The only other hope is that a general increase
    > in organizational capacity in large corporations or governments will let
    > some small amount leak through for unsanctioned human ends (but the cost
    > in human suffering to that approach is high).
    > [Garold L. Johnson] The human cost is high, yes. The problem remains that we haven’t yet demonstrated any
    > system that can accomplish “unsanctioned human ends” with a lower human cost.

    Whatever happened in the past, we must ask ourselves what makes sense in
    the future given where we are now. (Personally, I think much of the
    success of the U.S.A. is due to the value of the land taken from the
    indigenous people, ocean barriers from major wars, and the stimulating
    environment of a mixing of cultures and immigrants, but those are other
    issues.)
    > The human cost of all other such
    > efforts has been incredibly higher than the one of markets and corporations.

    Again, past history.

    > While it is true that markets and
    > corporations appear to be inefficient in many ways, we haven’t yet devised any system that works better.

    There is no (or not much) market for air. The market for tap water is
    fairly specialized. The market for love is unusual. The point -- there
    are essential things not managed by conventional markets. We must ask
    ourselves specifically what markets are now getting us, as opposed to,
    say, local on-demand production from raw materials.

    > I think that
    > devising and modeling such a system would be a great thing to do. We need useful KM at the individual level even
    > to attempt that.


    > [Garold L. Johnson] I have heard this endless times. If you think that capitalism is inefficient at distribution, try any
    > other competing system and see how efficient it is at either production or distribution.

    We don't live in a pure capitalistic society in the U.S. That's one
    reason we pay taxes -- for the public good. Trillions of dollars of
    taxes each year. They should be spent efficiently for the public good.

    > Capitalism needs
    > improvement, to be sure, but it is still the best that man has ever done in terms of the general well being of the
    > population. Don’t be too quick to discard it.

    Good point. However, we must accept that the supply chains on which
    capitalism is based may become irrelevant. Look at the (usually) much
    more local economy of nature. A tree lives, dies, decays, and the
    nutrients are recycled into other trees. (Yes, there are global material
    flows too of course.)

    > [Garold L. Johnson] If you want certainty, you are in the wrong universe, sorry. What stands a chance is ways that
    > improve the individual’s ability to cope with the world as it is evolving and to assist individuals in creating successful
    > groups that can survive while accomplishing other worthwhile goals.

    Good point.

    > The growth of computing has come closer to
    > offering that than ever before in human history. What we could use is a way to leverage that development for
    > worthwhile goals.

    Agreed. However, what we call "computing" is subject to debate and
    generalization. When viewed as thinking, or language, or tool use, or
    directing others, "computing" has been going on for a long time.

    > The technology and material resources to feed and educate all children
    > (and adults) exists right now.

    > [Garold L. Johnson] True, but not relevant.

    Sorry. This is coming from historical issues and a tone set early on in
    the colloquium, so the relevance really is more in relation to that
    context.
    > As you say, they are (somewhat) different problems. However, I don’t see
    > that there is any way to solve these problems with the mind set that created them. It seems that you are advocating
    > dropping everything and solving these basic human needs.

    Not quite. I am advocating distinguishing between meeting basic needs
    and dealing with exponential growth of technology -- really to an extent
    two separate things, even though the first is often used to justify the
    second.
    > Not only will that not happen, I think that it is exactly the
    > wrong direction. Well meaning individuals and groups have been pushing for this for decades and the situation
    > remains. Provide those people with better tools, and maybe they can build a door in the wall instead of continuing to
    > beat their heads against the wall.

    I see your point, and generally agree. There is a story (forget the
    author) of a village by a river where they keep finding babies floating
    in baskets to them. They set up an efficient way to care for the babies
    and are proud of it, but never feel they have enough resources left over
    to go upriver and see where those babies are coming from or why.

    Hopefully my previous comment (current needs vs. growth issues) makes
    clearer though why I brought this up.

    Still, for specific problems we need to be very careful not to say
    "because the political problem is so hard, I will hide my head in the
    technical sand". However, obviously there are situations where a
    political problem can be resolved by a technical innovation (need
    examples here -- anyone got one?)

    > [Garold L. Johnson] How would you suggest that the development of knowledge tools can be accomplished in such
    > a way that only those of good social conscience can make use of them?

    A good question. I don't have a good answer.

    Most humans have circuitry in their brains that helps them function as
    social organisms. It has been selected over many tens of thousands of
    years for some basic level of cooperation and values. Most corporations
    have few such built-in limits, except to the extent humans are in them,
    and in that case, we are talking about human group behavior, which is
    different from human individual behavior.

    > Since I know of no way to do that, the best
    > that I see that we can do is to aim our requirements at the scale of these major social problems.

    General agreement, although as I said above, what scale needs to be
    addressed (given the possibility of decentralization) is an issue.

    > If we develop
    > anything less, it will help those with lesser goals without providing what is needed by those who would tackle
    > problems of this scale. We would provide those we oppose with tools that they can use without gaining the tools
    > that we need. It seems to me that if we are ever to tackle problems of the scale of human social systems, we are
    > going to need tools and techniques that are far beyond what we currently have. That is what this effort seems to me
    > to be all about.

    Good point. Generally, perhaps the hope is that people will do more good
    things than bad with these tools. Or perhaps, evolutionarily, the small
    enclaves of humans who hit on the right good things to do with them
    will survive, whereas the masses who continue business as usual,
    creating ever better profit-maximizing machine intelligence, won't
    survive their Machiavellian progeny.

    > In my thinking, it is the arms race itself that is
    > the potential enemy of humankind, and the issue is transcending the arms
    > race
    > [Garold L. Johnson] Perhaps so, but without tools that can handle problems of the level of social systems, we aren’t
    > going to fix it either.

    Again, general agreement, with the caveat that scale is an issue.

    > For over a decade I have wanted to build a library of human knowledge
    > related to sustainable development. I as a small mammal am using the
    > crumbs left over by the dinosaurs to try to do so (not with great
    > success, but a little, like our garden simulator intended to help people
    > learn to grow their own food).
    > [Garold L. Johnson] This is exactly the sort of approach that I think has merit. The better the tools that you can have
    > to build such a library, the more useful the result can be because of the design of the system, and the degree to
    > which it is possible for you to accomplish this without government or corporate support.

    Nice to hear that.

    > The way to put it is that "bootstrapping" has linked itself conceptually
    > to an exponential growth process happening right now in our
    > civilization. Almost all explosions entail some level of exponential
    > growth. So, in effect, our civilization is exploding. The meaning of
    > that as regards human survival is unclear, but it is clear people are
    > only slowly coming to take this seriously.
    > [Garold L. Johnson] The first step is to take it seriously. The second is to investigate what can be done about it.
    > That is what I see going on here.

    Well, to an extent. Hopefully more so now.

    > As one example, lots of trends:
    > Lou Gerstner(IBM's Chairman) was recently quoted as talking about a near
    > term e-commerce future of 10X users, 100X bandwidth, 1000X devices, and
    > 1,000,000X data. Obviously, IBM wants to sell the infrastructure to
    > support that. But I think the bigger picture is lost.

    Note: a link for that Gerstner quote is:

    > [Garold L. Johnson] I think that it is important to discuss how bootstrapping can support the goals and values that
    > we bring to it, but primarily as a source of requirements for the technology itself. These issues are all part of the
    > reality into which we wish to introduce bootstrapping, and they need to be taken into account. The issues
    > themselves are part of the motivation of the effort.

    Good points.

    > Attempting to solve these problems directly rather than
    > developing the tools with which to address them is an invitation to more pointless debate and no accomplishment.

    Well, perhaps one issue as mentioned above is to create a way to have a
    meaningful debate on this topic (and related tools)?

    > Either you will meet with agreement regarding your social views, in which case you are preaching to the choir, or
    > you won’t, in which case we will just expend more effort on the debate and still not have what we might develop that
    > might make a difference.

    Good points.

    > This is the reason that I haven’t commented on any of the social views you express – my views or yours on any of
    > this is, IMO, relevant only to the extent that we need to strive to evolve tools that will allow us to investigate the true
    > nature of the problems and to model proposed solutions to see that they do what we intend rather than have some
    > dramatically other result because we try to solve the problems (again) with inadequate tools and techniques.

    Good points. Still, I think they remain, at the very least, issues each
    of us must keep in the backs of our minds as we build tools. As
    Langdon Winner says in "Autonomous Technology", the greatest individual
    influence an innovator has is in the choice of what to innovate. Ben
    Franklin chose to innovate bifocals, the (American) public library, and
    a better stove, and we are forever blessed because of those things.

    > Thanks,
    > Garold (Gary) L. Johnson
    > DYNAMIC Alternatives

    Thanks again for the great comments.

    -Paul Fernhout
    Kurtz-Fernhout Software
    Developers of custom software and educational simulations
    Creators of the Garden with Insight(TM) garden simulator

    This archive was generated by hypermail 2b29 : Wed Dec 20 2000 - 07:17:39 PST