--- Alan Grimes <[EMAIL PROTECTED]> wrote:
> Today, I'm going to attempt to present an argument in favor of a theory
> that has resulted from my studies relating to AI. While this is one of
> the only things I have to show for my time spent on AI, I am reasonably
> confident in its validity and hope to show why that is the case here.
> Unfortun
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> On 7/12/07, Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
> >
> > It is my understanding that the basic problem in Friendly AI is that it is
> > possible for the AI to interpret the command "help humanity" etc. wrong,
> > and then destroy humanity