Bob Mottram wrote:
> On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Linas Vepstas wrote:
>>> Um, why, exactly, are you assuming that the first one will be friendly?
>>> The desire for self-preservation, by e.g. rooting out and exterminating
>>> all (potentially unfriendly) competing AGI, would not be what I'd call
>>> "friendly" behavior.
>> What I mean is that ASSUMING the first one is friendly (that assumption
>> being based on a completely separate line of argument), THEN it will be
>> obliged, because of its commitment to friendliness, to immediately
>> search the world for dangerous AGI projects and quietly ensure that none
>> of them are going to become a danger to humanity.
> Whether you call it "extermination" or "ensuring they won't be a
> danger", the end result seems like the same thing. In the world of
> realistic software development, how is it proposed that this kind of
> neutralisation (or "termination" if you prefer) should occur? Are we
> talking about black-hat activity here, or agents of the state
> breaking down doors and seizing computers?
Well, forgive me, but do you notice that you are always trying to bring
it back to language that implies malevolence?
It is this very implication of malevolent intent that I am saying is
unjustified, because it makes the scenario seem like something it is not.
As to exactly how it would do this, I don't know; but since the AGI is,
by assumption, peaceful, friendly and non-violent, it will do it in a
peaceful, friendly and non-violent manner.
Richard Loosemore