[unrev-II] Values and Competence

From: Rod Welch (rowelch@attglobal.net)
Date: Wed Dec 20 2000 - 00:36:12 PST

    Paul,

    Here is a lazy response to a truly extraordinary effort on your part in drawing
    attention to critical values as one component, balanced by competence, of the
    core binary force of existence. I'm not sure I know what that means, but it
    sounds good to start off. Others today have made compelling points exploring
    your ideas.

    Your explanation of affirming human values, and your proposal for a politics of
    meeting human needs, relate to values, the "what" question of existence. I
    believe this is beyond the scope of the project, though it is important to
    constantly address values, as seen in the tradition of going to church once a
    week. The binary pair in values is life or death, out of which, for humans,
    evolves the notion of community, and by extension communication, which you
    exhibit with great flair and effect.

    In the US a tradition has evolved to sustain community through a representative
    government that stands for election. This regular process addresses, to be sure
    imperfectly, your concern about values, e.g., environment, human needs, crime,
    markets, taxes, and so on. However, it is well to recall that, within the realm
    of humanity, variability in genetics and experience means there is no single
    prescription for values and desires beyond life and death. There are, though,
    two primary perspectives, masculine and feminine, which generally play out as
    emphasizing process or need. In a democracy where men and women both exercise
    the franchise, these two perspectives compete for ascendancy. Within that
    competition, there are honest competitors, who recognize their prescriptions
    could be wrong, and there are those who merely seek power for its own sake.

    The US strives to provide a formula, or process, where power is diffused, so
    that no single person or group can hold all the power under the rule that none
    of us are smart enough to wield that responsibility. Thus, a continual fight
    over values is carried out through a balance of power doctrine. This results in
    the proverbial pendulum swinging back and forth between emphasis on process and
    emphasis on need as the dominant policy guidance, where moderation prevents
    overcompensating.

    So, in sum, on the issue of values, we beat our brains out every day all across
    the land wrestling with this critical question.

    An example is environmental policy that seeks to address values of eco-balance
    and bio-diversity. This leads to a concern that building a power plant will
    destroy the environment because a snail darter may become extinct, with a
    cascading effect on existence. A lot of us lack sensitivity to the environment,
    and lack consideration for others less fortunate, with the result that we use
    too much power, for example by leaving the PC on overnight so we can get an
    early start the next day on completing a treatise on solving world hunger, or
    just because we are lazy and insensitive; and so now there is a power shortage.
    There is not enough power to run the PC and keep the lights on, so we have to
    abandon these opulent power consumption practices, and use environmentally
    appropriate renewable power, like walking and writing our ideas with pen and
    paper, except these also take power, so we are led to use more personal
    communication methods like dialog. Doubtless one can argue that being for the
    environment can lead to stronger communication. That's good, and doing good is
    Godlike, i.e., it's a religion, and we don't have to go to church.

    However, without power to run the PC, it is harder to get the word out about
    being environmentally sensitive, so possibly a balance of values needs to be
    struck. Maybe we need to build the power plant, and try to get by without a
    snail darter.

    People make their arguments, and we vote on values when we elect folks to the
    government. It is not perfect, but with strong advocacy evident in your
    presentation today, there is a better chance the right choices will emerge than
    if your views were not available due to a power outage that shut down the PC.

    Competence, on the other hand, is neutral to values, as you point out, except
    that whatever value we adopt, it helps to be able to implement or achieve it.
    Thus, education is a way of increasing competence to understand the environment
    and human needs, and to formulate effective arrangements for accomplishing those
    values. We send the kids to school because long experience has shown that
    learning alphabet technology and other skills makes it easier to achieve our
    values, i.e., to perform the tasks that sustain life.

    The DKR project and Communication Metrics seek to buttress the competence
    component of the binary force composed of values and competence. If we believe
    in education, then continual learning might be a logical extension, particularly
    since continual learning occurs whether we do anything proactive
    to enhance it or not. The only real question is whether we stand pat and just
    let things happen, or take proactive action to lift the capacity to think,
    remember and communicate.

    The nexus between values and competence is your second point about complexity,
    which inhibits accomplishment of values, except to the extent competence can be
    raised. That is the rationale for moving from IT to KM, at least it seems so at
    this time.

    Rod

    Paul Fernhout wrote:
    >
    > Rod Welch wrote:
    > >
    > > Paul,
    > >
    > > As usual, I am impressed by the depth of your analysis. In this case,
    > > however, your point is not clear.
    > >
    >
    > Rod-
    >
    > First let me summarize: there is more to living than "intelligence".
    > Intelligence doesn't call one to act, "desire" does that. "Intelligence"
    > doesn't define why one should do one thing rather than another, unless
    > one already has "values". One can make a rational choice, but the desire
    > and values that cause that choice to be made and acted on are to a large
    > extent outside of the realm of "intelligence". As an outgrowth of
    > "intelligence", knowledge management will neither lead to choices or
    > cause actions in the absense of "values" or "desire". We are talking
    > about putting ever more powerful "intelligence" in the hands of
    > organizations that have already shown themselves capable of building
    > 50,000 nuclear warheads, letting close to a billion people starve, and
    > dumping PCBs in water bodies and resisting attempts to clean them up.
    > One must question the desires and values of such organizations, even if
    > to an extent some of those decisions may have also been due to faulty
    > reasoning or lack of knowledge (i.e. nukes=MAD, starvation=racism,
    > PCBs=ignorance).
    >
    > To clarify my point (if I have one beyond rambling :-) in the context of
    > your questions:
    >
    > > Can you sum up by stating the two or three things you advocate should be
    > > done, that are not being done, or that should be done differently?
    >
    > 1) Value Affirmation. There should be an affirmation of core human
    > values and humane purposes in a statement of purpose for "bootstrapping"
    > as defined by the Bootstrap Institute. In elaboration, it is not enough
    > to say we will teach everyone how to do what they do better, as this is
    > in effect a small mammal sixty million years ago saying "we will teach
    > dinosaurs to be better dinosaurs" or "we will teach sharks to be better
    > sharks". The point is that to isolate competence from purpose and values
    > invites trouble.
    >
    > 2) Understanding Exponential Growth. To the extent the colloquium still
    > operates and desires to discuss issues that will have great (possibly
    > negative) impact over the next few decades, the colloquium needs to have
    > a focus on dealing with this problem of rapid exponential change itself
    > and what it is leading towards. This is especially true in considering
    > the implications of machine intelligence. It is also true in considering
    > the implications of the increase of destructive -- and constructive --
    > capacity via nanotechnology and biotechnology.
    >
    > 3) Accepting the Politics of Meeting Human Needs. Addressing human needs
    > (beyond designing an OHS/DKR) was one of Doug's major goals and
    > something that occupied many presenters in the Colloquium. The
    > colloquium needs to accept that there are effectively no technical
    > issues of any significant importance, requiring extensive innovation,
    > related to supporting contemporary society. This is in part due to
    > an abundance of material resources, as well as to the fact that our
    > technological
    > infrastructure is effectively obsolete compared to what is in the labs
    > or in limited deployment. (The only exception to that is the need for
    > organizing and distributing what we already know...) That is, the
    > presentations in the colloquiums on imminent world problems (energy
    > crisis especially) are effectively already out of date. It is true
    > California is short of electricity, we will run out of oil in 100+
    > years, 840 million people are starving now, but these are not
    > directly technology problems since the technology and material abundance
    > exists to solve them all right now, but what is lacking is the political
    > will (or social consensus). One might call these organizational
    > problems, requiring perhaps innovation in a practical (deployment)
    > context (which an OHS/DKR might help with). I think improved technology
    > could help with these issues in the sense of making the cost of a
    > solution even smaller (i.e. a $5 box that feeds a village forever) and so
    > lowering the bar for political action, but the deeper issues are ones of
    > fairness, compassion, and so on (which includes the fact people don't
    > get research funding to make that $5 food box even if it was feasible).
    > If resource distribution is grossly unfair, even the $5 to keep a
    > village alive forever will be spent on lipstick instead. So to the
    > extent the Colloquium wants to focus on current issues (world hunger,
    > California electricity crisis) it needs to support tools more related to
    > dealing with politics or social consensus.
    >
    > Incidentally, my wife and I support the Heifer project, which is the
    > closest we know of to any organization delivering self-replicating
    > (exponential) technology at a low cost to make impoverished people's
    > lives better.
    > http://www.heifer.org/
    > http://www.heifer.org/about_hpi/index.htm
    >
    > > How does your
    > > analysis today impact the big picture of moving from IT to KM?
    >
    > It is orthogonal to that.
    >
    > If corporations now doing IT have the major goal of profit as opposed to
    > "meeting unmet social needs" (to quote William C. Norris)
    > http://www.digitalcentury.com/encyclo/update/william_norris.html
    > then corporations, whether they do IT or KM, are irrelevant to human
    > survival. They are effectively machine intelligences with their own ends
    > (the ethic of profit maximization, or "bucks is beautiful") to which
    > humans are only relevant in well defined "roles" to the extent they are
    > currently required for service or markets. If they can be replaced at
    > less cost by automation, they will be -- nay, by the corporation's rules
    > in a competitive landscape, they must be (except union jobs?). The only
    > hope to resist this is some form of government intervention or worker
    > (individual or union) resistance. These decisions will all be made in
    > bits and pieces, each one seemingly sensible at the time. Consider the
    > replacement, now starting, of telephone support people by voice recognition
    > systems.
    > http://www.nuance.com/
    > The corporate social form has had little time to evolve (a few hundred
    > years?), so there is no guarantee that contemporary corporate
    > organization forms will be capable of doing more than exhausting
    > convenient resources (passing on external costs when possible) and then
    > collapsing.
    >
    > Obviously, to the extent KM could transform an organization like GE into
    > one that makes good on their corporate slogan "if we can dream it we can
    > do it" and deliver on their implied promises in their 1986 Disney Epcot
    > center pavilion (underwater cities, space habitats) then KM will be
    > useful. The question is always "Knowledge about what?" For an alternative to a
    > world view producing organizations that refuse to clean up PCBs they
    > dumped in the Hudson, consider The Venus project's world view:
    > http://www.thevenusproject.com/vp_economy/resource.htm
    >
    > There is one obvious exception to saying KM won't change the direction
    > of organizations, which is to the extent humans as individuals in
    > corporations have access to KM tools and might see the bigger picture
    > and act as individuals. The only other hope is that a general increase
    > in organizational capacity in large corporations or governments will let
    > some small amount leak through for unsanctioned human ends (but the cost
    > in human suffering to that approach is high -- witness as one example
    > the 840 million people now in hunger.) But be very clear, this secondary
    > effect is not the reason organizations will adopt KM. They will adopt
    > KM for competitive advantage in business as usual (barring a cultural
    > shift for other reasons.)
    >
    > As I saw this weekend on "DebatesDebates" with a debate on "Is the Good
    > Corporation Dead?"
    > http://www.debatesdebates.com/programs/program517.html
    > one of the debaters made the point that even if capitalism is good at
    > generating wealth, it is not good at distributing it. That is why I say
    > capitalism without charity is evil. Taken to an extreme when machine
    > intelligence is possible on a human level, capitalism as we now know it may
    > leave (most) people behind, while at the same time owning or controlling
    > all the resources, preventing most people from earning a living
    > ("shading them out"). Historically, this has happened many times before
    > -- for example, the enclosure acts driving the English peasantry
    > (initially) into poverty and starvation.
    > http://frost.ca.uky.edu/agripedia/gen100/popbeea.htm
    > Or, as was the case in Africa or North America, where in both places an
    > indigenous population with ways of life related to the land was
    > displaced to make way for corporate activities (plantations, farms, and
    > ranches).
    >
    > I hope the situation does not come down to this, and that in the end
    > charity will win out over avarice and a mentally disturbed need for
    > excessive power. But it is by no means certain charity will win out,
    > given the power of technology to amplify both the best and worst in
    > people.
    >
    > > For example, I am advocating a culture of knowledge, as the big objective to
    > > accommodate a new world order of faster information resulting from IT that
    > > increasingly overwhelms human span of attention,
    >
    > Rod, what you are doing is worthwhile, as is what Doug is doing. But the
    > deeper point is simply that dealing with overwhelming complexity due to
    > rapid change is a different issue than meeting basic human needs right
    > now. Both are important, but they are different issues.
    >
    > The technology and material resources to feed and educate all children
    > (and adults) exists right now. There is enough to go around right now.
    > The reason this does not happen is for political and social reasons --
    > not technological. Technology could and will make some of the choices
    > less hard (i.e. when $5 can feed a village forever instead of a few
    > people for a few days) but still the issue is not primarily a
    > technological one.
    >
    > On the other hand, the "new world order of faster information" issue has
    > more in common with the implications of the rise of machine intelligence
    > and nanotechnology and the arms race. In effect that "new world order"
    > is arising out of a corporate arms race involving infotech.
    >
    > > [snipped details]
    >
    > > I propose a single, breakthrough, solution by enhancing alphabet technology
    > > using a continual "intelligence" process that turns information into
    > > knowledge, thus the goal to move up a notch on the cognitive scale from
    > > IT to KM. You seem to suggest today that bootstrapping, while intending
    > > to solve complexity, might, in some respects, be said to compound the
    > > problem it seeks to solve.
    >
    > I am not leveling this criticism directly at "bootstrapping" as the
    > Bootstrap Institute and Doug try to define it. What I am trying to say
    > is that "bootstrapping" in terms of exponential growth of technology
    > (which enables more technology etc.) is already happening. Bootstrapping
    > is the given. So the issue is, how do we use related exponential growth
    > processes to deal with this? To the extent Doug's techniques are used
    > just to drive the technological innovation process faster, in no
    > specific direction, they are potentially just making things worse. To
    > the extent such techniques are used for specific human ends (example,
    > dealing with world hunger, making medical care more accessible, ensuring
    > children don't grow up in ignorance and poverty, reducing conflicts and
    > arms races) they make things better. The thing is, in a world where
    > competition (the arms race) has moved from physical weapons to infotech
    > (both corporate and military), simply saying you will speed the arms
    > race is not enough. In my thinking, it is the arms race itself that is
    > the potential enemy of humankind, and the issue is transcending the arms
    > race (whatever grounds it is fought on -- nuclear, biological,
    > infotech).
    >
    > > I think there is another way to explain bootstrapping that avoids this
    > > conflict, but you seem to be arguing against it. Can you clarify?
    >
    > I don't have a conflict in thinking about an OHS/DKR or working towards
    > one. I accept the possibility that this bootstrap process may end badly
    > for most of humanity. It is a shame, and humanity should try to avoid
    > this looming disaster, and may well, but I have accepted that one
    > cannot save everyone.
    >
    > For over a decade I have wanted to build a library of human knowledge
    > related to sustainable development. I as a small mammal am using the
    > crumbs left over by the dinosaurs to try to do so (not with great
    > success, but a little, like our garden simulator intended to help people
    > learn to grow their own food). I spent a year hanging around Hans
    > Moravec's Mobile Robot Lab at CMU, and I turned my back on
    > self-replicating robotics work -- not because I thought it was sci-fi,
    > but because I saw it was quite feasible, and wanted to do something else
    > that was more likely to ensure human survival (self-replicating
    > habitats, for space, water, and land). I also did not want to speed the
    > process along. Now fifteen years later, this process is effectively
    > unstoppable, so I have fewer qualms about doing a little that might
    > hasten it if the payoff might be some type of refugia for humans.
    >
    > The way to put it is that "bootstrapping" has linked itself conceptually
    > to an exponential growth process happening right now in our
    > civilization. Almost all explosions entail some level of exponential
    > growth. So, in effect, our civilization is exploding. The meaning of
    > that as regards human survival is unclear, but it is clear people are
    > only slowly coming to take this seriously.
    >
    > As one example, lots of trends:
    > http://www.duke.edu/~mccann/q-tech.htm
    > Lou Gerstner (IBM's Chairman) was recently quoted as talking about a near
    > term e-commerce future of 10X users, 100X bandwidth, 1000X devices, and
    > 1,000,000X data. Obviously, IBM wants to sell the infrastructure to
    > support that. But I think the bigger picture is lost.
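    >
    > (A rough back-of-envelope reading of that last multiplier, my
    > arithmetic rather than Gerstner's: 1,000,000X is about 2^20, i.e.
    > roughly twenty doublings, so at one doubling per year that is only
    > two decades of compounding.)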
    >
    > Even for seeing the "trees" of individual quantitative changes, the
    > "forest" that these quantitative changes would have a qualitative change
    > on the business or human landscape is ignored. Or if people see it, it
    > is the "elephant in the living room" no one talks about (well obviously
    > a few like Kurzweil or Moravec or Joy). More of everything yes, but
    > always business as usual.
    >
    > To be relevant and do good for humanity, Bootstrapping must address how
    > this quantitative exponential growth will lead to qualitative changes,
    > at what point if any an "S-curve" effect will set in, and how
    > "bootstrapping" as an intellectual concept will do good amidst this
    > setting.
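    >
    > To make the "S-curve" point concrete, here is a minimal sketch (in
    > Python; the growth rate r and ceiling k are arbitrary illustrative
    > numbers, not a model of any real trend) of how a logistic curve tracks
    > an exponential early on and then saturates:
    >
    >     import math
    >
    >     def exponential(t, n0=1.0, r=0.7):
    >         # unconstrained growth: doubles about every ln(2)/r time units
    >         return n0 * math.exp(r * t)
    >
    >     def logistic(t, n0=1.0, r=0.7, k=1000.0):
    >         # same early growth, but saturates at carrying capacity k
    >         return k / (1.0 + ((k - n0) / n0) * math.exp(-r * t))
    >
    >     for t in range(0, 21, 4):
    >         print(t, round(exponential(t), 1), round(logistic(t), 1))
    >
    > Early on the two columns are nearly identical; later the exponential
    > keeps exploding while the logistic flattens out. The open question for
    > bootstrapping is where, if anywhere, the flattening sets in.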
    >
    > >
    > > Thanks.
    > >
    > > Rod
    >
    > Thanks for the comments.
    >
    > -Paul Fernhout
    > Kurtz-Fernhout Software
    > =========================================================
    > Developers of custom software and educational simulations
    > Creators of the Garden with Insight(TM) garden simulator
    > http://www.kurtz-fernhout.com
    >

    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II



    This archive was generated by hypermail 2b29 : Wed Dec 20 2000 - 01:33:19 PST