Re: [singularity] Pernar's supergoal

2007-10-28 Thread Stefan Pernar
On 10/28/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote: > > > > Thanks Ben. > > > > As the foundation of my AI friendliness theory I tried to figure out why we > > believe what is good or bad, and came to the conclusion that humans, animals > > …

Re: [singularity] Pernar's supergoal

2007-10-28 Thread Benjamin Goertzel
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote: > > Thanks Ben. > > As the foundation of my AI friendliness theory I tried to figure out why we > believe what is good or bad, and came to the conclusion that humans, animals > and even plants have evolved to perceive as good what is encoded into the …

Re: [singularity] Pernar's supergoal

2007-10-27 Thread Stefan Pernar
Thanks Ben. As the foundation of my AI friendliness theory I tried to figure out why we believe what is good or bad, and came to the conclusion that humans, animals and even plants have evolved to perceive as good what is encoded into their genome/memome, having been retained in the course of chance mut…

[singularity] Pernar's supergoal

2007-10-27 Thread Benjamin Goertzel
To move the chat in a different direction, here is Stefan Pernar's articulation of a self-improving AGI supergoal, drawn from his paper "Benevolence -- A Materialist Philosophy of Goodness", which is linked from http://www.jame5.com/ Definitions: Suffering = negative subjective experience equiva…