Eliezer,

Allowing goals to change in a coupled way with thoughts and memories is
not simply "adding entropy".
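
A minimal toy sketch, purely illustrative (not Novamente code; every name
and update rule below is made up), of the difference between goals that
drift by random noise and goals that are revised as a function of the same
memories the rest of the system is learning from:

import random

def entropy_update(goals):
    # Unstructured change: random drift, uncorrelated with anything learned.
    return {g: w + random.gauss(0.0, 0.1) for g, w in goals.items()}

def coupled_update(goals, memories):
    # Structured change: each goal's weight is revised using the same
    # memories the rest of the system learns from, so the change carries
    # information about experience rather than pure noise.
    new_goals = {}
    for g, w in goals.items():
        evidence = sum(1 if m["succeeded"] else -1
                       for m in memories if m["goal"] == g)
        new_goals[g] = w + 0.05 * evidence
    return new_goals

goals = {"learn": 1.0, "explore": 0.5}
memories = [{"goal": "learn", "succeeded": True},
            {"goal": "explore", "succeeded": False}]

print(entropy_update(goals))            # moves randomly
print(coupled_update(goals, memories))  # moves in a direction set by experience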

-- Ben



> Ben Goertzel wrote:
> >>
> >>I always thought that the biggest problem with the AIXI model is that it
> >>assumes that something in the environment is evaluating the AI and giving
> >>it rewards, so the easiest way for the AI to obtain its rewards would be
> >>to coerce or subvert the evaluator rather than to accomplish any real
> >>goals. I wrote a bit more about this problem at
> >>http://www.mail-archive.com/everything-list@eskimo.com/msg03620.html.
> >
> > I agree, this is a weakness of AIXI/AIXItl as a practical AI design.  In
> > humans, and in a more pragmatic AI design like Novamente, one has a
> > situation where the system's goals adapt and change along with the rest
> > of the system, beginning from (and sometimes but not always straying far
> > from) a set of initial goals.
>
> How does adding entropy help?
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
