[unrev-II] Fwd: RE: Bill's Post -- Values for the Global Brain

From: Jack Park (jackpark@thinkalong.com)
Date: Mon Sep 17 2001 - 07:38:37 PDT

    The Global Brain folks appear to be thinking along similar lines to the
    Unrev folks.
    Should you choose to join their list, go to
    http://pespmc1.vub.ac.be/TOC.html and scroll down to the list subscription
    area (and notice the enormous breadth of information coverage while you
    scroll).

    Cheers
    Jack

    >From: Francis Heylighen <fheyligh@vub.ac.be>
    >Reply-To: gbrain@listserv.vub.ac.be
    >
    >Craig:
    >>Perhaps the best way, initially, to ensure that the primary value of the
    >>Global Brain is love for all humans is to design an architecture that
    >>requires the participation of human beings in order for the Global Brain to
    >>operate.
    >>
    >>A vision of the Global Brain in which humans are the information-processing
    >>"nodes" connected by telecommunications/intelligent-network technology might
    >>work. As the power of collective human intelligence increases, more and
    >>more of the human nodes in the network might be replaced by computationally
    >>superior AI nodes, until eventually the vast majority of the intelligence
    >>would come from AI processing rather than from human brainpower.
    >>However, even if the human brains ended up contributing only a tiny fraction
    >>of the overall intelligence of the Global Brain, perhaps they could still
    >>retain the function of serving as the Global Brain's conscience.
    >>[...]
    >>There is no rational way to derive values (see Herbert Simon's short book
    >>Reason in Human Affairs for the complete argument). That is, no matter how
    >>intelligent the Global Brain becomes, it must still work from fundamental
    >>premises about what is right and wrong. Even a super-intelligent machine
    >>whose intellect is beyond the comprehension of any human mind must still
    >>assume fundamental values.
    >>
    >>Could the role of human beings in the future be to provide these core values
    >>to super-intelligent machines?
    >
    >Actually, I am at the moment working on a paper (an elaboration of the
    >argument I gave in my introductory talk at the GB workshop) that follows
    >a similar line of reasoning. As the world is getting ever more complex and
    >interconnected, and we are getting bombarded with ever more information,
    >individual people are no longer capable of making the best judgments.
    >Therefore they need help from the GB. But the GB is essentially an
    >extension of our own capacities: it supports the filtering and processing
    >of information without making decisions on its own.
    >
    >The reason is, as Craig points out, that computers cannot make value
    >judgments that are not programmed into them by humans: there is no
    >rational mechanism or algorithm to deduce a value from something that
    >isn't a value already (cf. http://pespmc1.vub.ac.be/SCIVAL.html). Thus,
    >ultimately, it is humans who make the judgments; the GB merely helps
    >them digest the information and points out the implications of their
    >potential decisions (see the first sketch appended after this message).
    >
    >Moreover, even if we could agree on a set of basic values explicit
    >enough to be programmed into a computer, practical decisions in the
    >real world would remain much too ambiguous and context-dependent to be
    >left to a stand-alone, "rational" inference engine. It is only people
    >interacting with and experiencing that real world, with all its
    >subtleties and ramifications, who at the moment are capable of making
    >reliable judgments. This lack of "intuition" or "common sense" has
    >always been the major shortcoming of AI systems.
    >
    >The solution is simply to use the existing intuition of human users, but
    >to augment it with better collection and processing of information and by
    >adding up the intuitions of thousands of people. The latter is the basic
    >support for collective intelligence, as demonstrated by Craig's
    >presentation at the workshop, and as analysed in a paper of mine:
    >http://pespmc1.vub.ac.be/papers/CollectiveWebIntelligence.pdf
    >
    >An interesting aspect of this procedure of "adding" people's values or
    >preferences is that various forms of extremism are simply averaged out
    >(see the second sketch appended after this message). Add together the
    >preferences of a Hindu fundamentalist, a Christian fundamentalist and a
    >Muslim fundamentalist, and their various fanatical opinions will cancel
    >each other out, leaving only the common-sense values that everybody
    >agrees on, such as that you should avoid killing innocent people. What
    >the GB should help us with is to make such universal values more
    >explicit, and to support their attainment more systematically.
    >--
    >
    >_________________________________________________________________________
    >Dr. Francis Heylighen <fheyligh@vub.ac.be> -- Center "Leo Apostel"
    >Free University of Brussels, Krijgskundestr. 33, 1160 Brussels, Belgium
    >tel +32-2-6442677; fax +32-2-6440744; http://pespmc1.vub.ac.be/HEYL.html
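
    To make the decision-support point concrete, here is a minimal sketch
    (in Python) of the human-in-the-loop pattern Francis describes: the
    machine filters and ranks the flood of information, while the
    value-laden final choice stays with a person. The function names, the
    scoring rule and the example data are illustrative assumptions, not
    anything from the message itself.

        from typing import Callable

        def decision_support(options: list[str],
                             relevance: Callable[[str], float],
                             human_choice: Callable[[list[str]], str],
                             top_n: int = 3) -> str:
            """Machine ranks and filters; a human makes the value judgment."""
            ranked = sorted(options, key=relevance, reverse=True)
            shortlist = ranked[:top_n]      # machine: digest the information
            return human_choice(shortlist)  # human: supply the final choice

        # Example: rank (hypothetical) items by a stand-in relevance score,
        # then hand the top three to a person for the actual decision.
        picked = decision_support(
            options=["flood warning", "celebrity gossip", "aid appeal", "ad"],
            relevance=len,                        # placeholder scoring rule
            human_choice=lambda short: short[0])  # placeholder for a person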
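
    And a toy illustration of the "averaging out" of extremist preferences:
    three hypothetical raters score two propositions on a scale from -1 to
    +1. The propositions and the numbers are invented purely for
    illustration.

        def average_values(ratings: dict[str, list[float]]) -> dict[str, float]:
            """Mean rating per proposition across all raters."""
            return {prop: sum(scores) / len(scores)
                    for prop, scores in ratings.items()}

        ratings = {
            # opposed fanatical positions cancel out in the mean...
            "my doctrine should be law for everyone": [+1.0, -1.0, 0.0],
            # ...while a value everybody shares keeps its full weight
            "killing innocent people is wrong":       [+1.0, +1.0, +1.0],
        }
        print(average_values(ratings))
        # {'my doctrine should be law for everyone': 0.0,
        #  'killing innocent people is wrong': 1.0}

    Opposed extremes sum to zero while the shared value survives the
    average intact, which is exactly the cancellation described above.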


    Community email addresses:
      Post message: unrev-II@onelist.com
      Subscribe: unrev-II-subscribe@onelist.com
      Unsubscribe: unrev-II-unsubscribe@onelist.com
      List owner: unrev-II-owner@onelist.com

    Shortcut URL to this page:
      http://www.onelist.com/community/unrev-II




    This archive was generated by hypermail 2.0.0 : Mon Sep 17 2001 - 10:58:42 PDT