Josh,

You said - If you have a fixed-priority utility function, you can't even THINK ABOUT the choice. Your pre-choice function will always say "Nope, that's bad" and you'll be unable to change. (This effect is intended in all the RSI stability arguments.)

I replied - Doesn't that depend upon your architecture and exactly *when* the pre-choice function executes? If the pre-choice function operates immediately pre-choice and only then, it doesn't necessarily interfere with option exploration.
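To make the distinction concrete, here is a minimal sketch of the architecture I'm describing (all names are illustrative, not from any real system): candidate actions can be freely explored and modeled during deliberation, and the fixed-priority "veto" runs only at the moment of commitment, not during exploration.

```python
# Hypothetical sketch: the pre-choice veto executes immediately pre-choice
# and only then, so it does not interfere with option exploration.

def pre_choice_veto(action):
    # Fixed-priority check: reject anything that would lower utility.
    return action["utility_delta"] >= 0

def explore(options):
    # Deliberation phase: every option can be simulated and reasoned
    # about, including ones the veto would later reject.
    return [{"name": name, "utility_delta": delta} for name, delta in options]

def choose(options):
    # Commitment phase: the veto filters only the final decision.
    considered = explore(options)
    permitted = [a for a in considered if pre_choice_veto(a)]
    return max(permitted, key=lambda a: a["utility_delta"]) if permitted else None

options = [("rewrite_goals", -5), ("keep_goals", 2), ("minor_tweak", 1)]
best = choose(options)
print(best["name"])  # prints "keep_goals"; "rewrite_goals" was still
                     # THOUGHT ABOUT in explore(), just vetoed at choice time
```

The point of the sketch is that nothing in explore() consults the utility function, so the agent can model a forbidden choice (and model others' beliefs about it) even though it can never commit to one.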

You called my architecture, which allows THINKing ABOUT the choice, a bug by replying - If you have a *program structure that can make decisions that would otherwise be vetoed by the utility function*, but get through because it isn't executed at the right time, to me that's just a bug.

I replied - You're missing the *major* distinction between a "program structure that can make decisions that would otherwise be vetoed by the utility function" and a program that "can't even THINK ABOUT" a choice (both your choice of phrase).

- - - - - - - - - -
If you were using those phrases to describe two different things, then you weren't replying to my e-mail (and it's no wonder that my attempted reply to your non-reply was confusing).



----- Original Message ----- From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, June 12, 2008 2:23 PM
Subject: Re: [agi] Nirvana


Huh? I used those phrases to describe two completely different things: a
program that CAN change its highest priorities (due to what I called a bug),
and one that CAN'T. How does it follow that I'm missing a distinction?

I would claim that they have a similarity, however: neither one represents a
principled, trustable solution that allows for true moral development and
growth.

Josh

On Thursday 12 June 2008 11:38:23 am, Mark Waser wrote:
You're missing the *major* distinction between a "program structure that can make decisions that would otherwise be vetoed by the utility function" and a program that "can't even THINK ABOUT" a choice (both your choice of phrase).

Among other things, not being able to even think about a choice prevents
accurately modeling the mental state of others who don't realize that you
have such a constraint. That seems like a very bad and limited architecture
to me.



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




