Ben Goertzel wrote:
> I don't think that preventing an AI from tinkering with its
> reward system is the only solution, or even the best one...
>
> It will in many cases be appropriate for an AI to tinker with its goal
> system...

I don't think I was being clear there. I don't mean the AI should be
prevented from adjusting the content of its goal system, but rather that it
should be sophisticated enough that it doesn't want to wirehead in the first place.

> I would recommend Eliezer's excellent writings on this topic if you don't
> know them, chiefly www.singinst.org/CFAI.html .  Also, I have a brief
> informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm,
> although my thoughts on the topic have progressed a fair bit since I wrote
> that.

Yes, I've been following Eliezer's work since around '98. I'll have to take
a look at your essay.

Billy Brown
