Eliezer is certainly correct here -- your analogy ignores probabilistic
dependency, which is crucial.
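
To put rough numbers on the dependency point, here is a minimal Python
sketch (my own illustration, not something from either message; the values
of p and n are arbitrary assumptions): if the AIs fail independently, the
chance that all of them go feral together collapses as p**n, but if the
failure comes from a flaw they all share, it stays at p no matter how many
copies you run.

def p_all_fail_independent(p: float, n_agents: int) -> float:
    """Chance that every agent fails, assuming fully independent failures."""
    return p ** n_agents

def p_all_fail_common_cause(p: float, n_agents: int) -> float:
    """Chance that every agent fails when one shared flaw breaks all of them."""
    return p  # adding more copies does not help

p, n = 0.01, 100
print(p_all_fail_independent(p, n))   # ~1e-200: effectively zero
print(p_all_fail_common_cause(p, n))  # 0.01: unchanged by the head count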

Ben

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Eliezer S. Yudkowsky
> Sent: Tuesday, March 04, 2003 1:33 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Why is multiple superintelligent AGI's safer than a
> single AGI?
>
>
> Philip Sutton wrote:
> > Hi Eliezer,
> >
> >>  This does not follow.  If an AI has a P chance of going feral, then a
> >>  society of AIs may have P chance of all simultaneously going feral
> >
> > I can see your point but I don't agree with it.
> >
> > If General Motors churns out 100,000 identical cars with all the same
> > characteristics and potential flaws, they will *not* all fail at the
> > same instant in time.  Each of them will be placed in a different
> > operating environment and the failures will probably spread over a
> > bell-curve style distribution.
>
> That's because your view of this problem has automatically factored out
> all the common variables.  All GM cars fail when dropped off a cliff.
> All GM cars fail when crashed at 120 mph.  All GM cars fail on the moon,
> in space, underwater, in a five-dimensional universe.  All GM cars are,
> under certain circumstances, inferior to telecommuting.
>
> How much of the risk factor in AI morality is concentrated into such
> universals?  As far as I can tell, practically all of it.  Every AI
> morality failure I have ever spotted has been of a kind where a society
> of such AIs would fail in the same way.
>
> The bell-curve failures to which you refer stem from GM making a
> cost-performance tradeoff.  The bell-curve distributed failures, like the
> fuel filter being clogged or whatever, are *acceptable* failures, not
> existential risks.  It therefore makes sense to accept a probability X of
> failure, for component Q, which can be repaired at cost C when it fails;
> and when you add up all those probability factors you end up with a bell
> curve.  But if the car absolutely had to work, you would be minimizing X
> like hell, to the greatest degree allowed by your *design ability and
> imagination*.  You'd use a diamondoid fuel filter.  You'd use three of
> them.  You wouldn't design a car that had a single point of failure at
> the fuel filter.  You would start seriously questioning whether what you
> really wanted should be described as a "car".  Which in turn would shift
> the most probable cause of catastrophic failure away from bell-curve
> probabilistic failures and into outside-context failures of imagination.
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
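
The fuel-filter point lends itself to the same arithmetic.  Here is a rough
Monte Carlo sketch (mine, with made-up probabilities P_FILTER and P_DESIGN):
each car carries three redundant filters that clog independently, plus a
shared design flaw that bypasses the redundancy entirely.  The simulated
failure rate ends up dominated by the common-mode term, which is exactly
the shift Eliezer describes.

import random

P_FILTER = 0.05   # per-filter clog probability (assumed, independent)
P_DESIGN = 0.01   # probability the shared design flaw bites (assumed)
N_CARS = 1_000_000

def car_fails() -> bool:
    # Redundancy: the car only stalls if all three filters clog...
    filters_dead = all(random.random() < P_FILTER for _ in range(3))
    # ...but the shared flaw takes the car down regardless of redundancy.
    return filters_dead or random.random() < P_DESIGN

failures = sum(car_fails() for _ in range(N_CARS))
print(failures / N_CARS)  # ~0.010: the 0.05**3 = 0.000125 term barely registers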

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
