On 10/28/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>
> On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> >
> > Thanks Ben.
> >
> > As the foundation of my AI friendliness theory, I tried to figure out
> > why we believe what is good or bad, and came to the conclusion that
> > humans, animals and even plants have evolved to perceive as good what
> > is encoded into their genome/memome, having been retained in the course
> > of chance mutation...
To move the chat in a different direction, here is Stefan Pernar's
articulation of a self-improving AGI supergoal, drawn from his paper
"Benevolence -- A Materialist Philosophy of Goodness", which is linked
from http://www.jame5.com/
Definitions:
Suffering = negative subjective experience equiva