Pei,

First, I agree that proving things like this is not the most interesting part of AGI work. Actually creating a thinking machine is a lot more exciting, and that is what I'm devoting the bulk of my attention to!

However, the question of how much inconsistency is inevitable in an AGI is an interesting one.

I agree with you that perfect consistency is not possible for a mind with finite resources confronting reasonably complex goals.

However, I also think that, given a particular goal and particular resource restrictions, by and large the smarter minds (the better goal achievers) will be the more consistent ones (in the sense I've defined). This seems a worthwhile conclusion, if true, though how worthwhile depends on the conditions that end up attached to the theorem once it's proved.

It implies that, in AGI design, it is worthwhile to focus on making one's AGI consistent.
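Very roughly, and only as an illustrative sketch of the intended theorem (the symbols here are placeholders I'm using for this email, not settled notation): let $V_{G,R}(A)$ denote how well agent $A$ achieves goal $G$ under resource restrictions $R$, and let $C_{G,R}(A) \in [0,1]$ denote the behavioral consistency an ideal probabilistic observer would ascribe to $A$'s implicit expectations in goal-context $G$. Then the hoped-for result is something like

$$ V_{G,R}(A) \;\ge\; \sup_{A'} V_{G,R}(A') - \epsilon \quad\Longrightarrow\quad C_{G,R}(A) \;\ge\; 1 - \delta(\epsilon), $$

where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$, under whatever side conditions the proof ends up requiring.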

But what's interesting is that achieving a high level of **behavioral** logical consistency, in the manner I've defined it, need not imply having internal structures focused on formal logic. It could be that the way to maximize behavioral logical consistency given severe resource constraints is actually NOT to explicitly do logic at all. Or, more likely, it could be that the way to do it is to mix explicit logic with other, non-logical mechanisms.

This is why I have chosen a fairly complex definition of consistency, one that has to do with what an observer could infer from the system's behaviors, rather than a definition that assumes the system is explicitly doing formal logic internally.
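To give a flavor of what I mean by consistency "as assessed by an observer" (just a toy sketch, not the actual formalism; all the names and the representation below are made up purely for illustration): an external observer records the agent's behaviors, reads off a set of implicit expectations from them, and scores consistency by how rarely those inferred expectations contradict one another.

```python
from itertools import combinations

# Toy sketch: an "expectation" an observer infers from the agent's behavior,
# represented as a proposition the agent implicitly commits to, the truth
# value its behavior suggests, and a weight for how important it seems.
class Expectation:
    def __init__(self, proposition: str, believed_true: bool, weight: float = 1.0):
        self.proposition = proposition
        self.believed_true = believed_true
        self.weight = weight

def contradicts(a: Expectation, b: Expectation) -> bool:
    """Two inferred expectations clash if they assign opposite truth
    values to the same proposition."""
    return a.proposition == b.proposition and a.believed_true != b.believed_true

def behavioral_consistency(expectations: list) -> float:
    """Weighted fraction of expectation pairs that do NOT contradict
    each other; 1.0 means no observed clash at all."""
    pairs = list(combinations(expectations, 2))
    if not pairs:
        return 1.0
    total = sum(a.weight * b.weight for a, b in pairs)
    clashing = sum(a.weight * b.weight for a, b in pairs if contradicts(a, b))
    return 1.0 - clashing / total

# Example: expectations an observer might read off from watching an agent
observed = [
    Expectation("door_A_leads_to_goal", True, weight=2.0),
    Expectation("door_A_leads_to_goal", False, weight=0.5),  # later acted as if false
    Expectation("key_needed_for_door_A", True),
]
print(behavioral_consistency(observed))  # < 1.0 because of the clash
```

Of course, the hard part in reality is the observer's inference of the expectations from raw behavior in the first place, and that step is glossed over entirely here.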

-- Ben



On Feb 3, 2007, at 8:47 PM, Pei Wang wrote:

On 2/3/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

My desire in this context is to show that, for agents that are
optimal or near-optimal at achieving the goal G under resource
restrictions R, the set of important implicit abstract expectations
associated with the agent (in goal-context G as assessed by an ideal
probabilistic observer) should come close to being consistent.

I believe your hypothesis is correct, and I agree that proving it would be taken as an academic achievement. However, personally I'm not interested in it. Instead, my goal is to find a different sense of "optimal" that is achievable by the agent even when it cannot maintain consistent beliefs because of knowledge/resource restrictions.

I surely don't like inconsistency, but see it as inevitable in an AGI.

Pei
