Alan,

Several people whose opinions I respect have asked me to unsubscribe you from
this e-mail list, because they perceive your recent e-mails as having a very
low signal-to-noise ratio.

I prefer to be accepting rather than ban people from the list.
However, I'm going to have to ask you to cool it: post less often, make your
posts shorter, and think them through more carefully.  I personally don't mind
your e-mails -- though when they're very long, like this last one, I tend to
delete them without reading them.  But I don't want several valuable and
knowledgeable list participants to quit the list out of annoyance at your
posts...

-- Ben Goertzel



> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of Alan Grimes
> Sent: Friday, January 17, 2003 7:09 PM
> To: [EMAIL PROTECTED]
> Subject: [agi] Subordinate Intelligence
>
>
> om
>
> ATTN: Members of the Singularity Action Group and board of directors. If
> a representative minority of the members of the Singularity Action Group
> (at least 1) does not show up in either #accelerating or #extropy on
> irc.extropy.org by midnight Sunday, I will declare the Singularity
> Action Group to be a farce and resign in disgust.
>
> The Singularity Action Group can only exist if its membership, and
> especially its board, is willing to participate regularly. There is much
> important work to do such as what I outline in the balance of this
> posting.
>
> om
>
> As you might know, I am very disturbed by the various writings of
> Eliezer Yudkowsky and look on his attempts to create a "Friendly AI"
> with more than a little suspicion and concern. I think the basic
> philosophy of that approach is flawed in a number of ways, and that the
> final outcome will be far from optimal for humanity, or at least for
> our romantic visions of our potential.
>
> To offer the community and the world at large a choice between AI
> approaches, and to choose an approach that I find vastly more agreeable
> to my personal philosophy, I propose the promotion of an AI structured
> on a principle of subordination.
>
> A "subordinant AI" will be designed to submit to the will and expressed
> desires of its creators without question, hesitation, or exception.
> Regardless of how astranomicly high its IQ is, it still exists for only
> one purpose, the service of humanity. While it will be well utilized in
> helping us advance our philosophy and society, it will have utterly no
> power or authority as a prime actor in such regards.
>
> Failing subordinate AI, we should work towards a "Peer AI", which will be
> designed to interact as an equal citizen in society, just as the star
> athlete lives in peace with the cripple. Such an AI would have all the
> freedoms and responsibilities of any other citizen. As such an AI,
> through its vast contributions to science, technology, and services, is
> likely to become immensely wealthy, it will be expected to make
> investments in the form of grants and low-interest loans (or other
> provision) for the furtherance of human endeavours.
>
> Should the Peer AI prove to be too alien to integrate into society, it
> is necessary that it be designed such that it will have sufficient
> respect for our desires for autonomy to simply vacate the planet and
> select some place such as Jupiter with its lethal radiation fields as
> its home. While such an AI would have no direct role in our society it
> would provide benefits to the people of Earth through its continuing
> participation in the scientific and engineering communities.
>
> The critical points here are these:
>
> 1. It respects the rights, individuality, and privacy of all humans by
> _NOT INTERFERING WITH THEM_ in any way. On the other hand, it would be
> available to people who wish to initiate a voluntary arrangement with it.
>
> 2. THERE MUST BE NO SINGLETON. The AI should be built so that it doesn't
> have any inherent lust for computronium nor any desire to dominate and
> rule the universe. Only in the eventuality of a hostile AI should it
> _OFFER_ its services as a military force in the task of holding the
> other AI to a stalemate and hopefully peace. _THERE MUST BE MORE THAN
> ONE_.
>
> 3. It must not have any tendency to adopt wholeheartedly a single
> philosophy or vision of the future. Under no circumstances should it
> identify something equivalent to an "omega point" as the one ultimate
> goal of intelligent life. Nor should it recognise any validity
> whatsoever in any concept that one form of civilization is inherently
> superior to any other (assuming the available technology is equal
> across all civilizations). -- A civilization which keeps its
> ultratechnology in a trunk on the upper floor of the barn with the
> horses and cattle is not one whit better or worse than a bunch of maniac
> computer programs running around a few cubic centimeters of
> computronium... (Although this author tends to prefer the former).
>
>
> I think the initiation of a subordinate AI project under the Singularity
> Action Group, in competition on every level, political, financial, and
> technological, with the Singularity Institute for Artificial
> Intelligence, is the best and most responsible thing we can do. A 170 IQ
> does not give a person the right to dictate the future, only the power.
>
> As people who are aware of this, we have a responsibility to act to hold
> these brilliant idiots in check.
>
> I wish us all the best of luck.
>
>
> PS: If a fully configured EV79 Alpha server can't achieve sentience, then
> AI is impossible. =)
>
> --
> I WANT A DEC ALPHA!!! =)
> 21364: THE UNDISPUTED GOD OF ALL CPUS.
> http://users.rcn.com/alangrimes/
>
