On Feb 8, 2007, at 9:37 AM, gts wrote:

On Thu, 08 Feb 2007 09:26:28 -0500, Pei Wang <[EMAIL PROTECTED]> wrote:

In simple cases like the above one, an AGI should achieve coherence
with little difficulty. What an AGI cannot do is guarantee
coherence in all situations, which is impossible for human beings
as well --- think about situations where the incoherence of a set
of bets takes many steps of inference, plus the necessary domain
knowledge, to reveal.
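The multi-step case can be illustrated with a minimal numerical sketch (the credences below are made up for illustration): each stated probability looks reasonable on its own, but chaining them via P(A and B) = P(A) * P(B|A) exposes the incoherence.

```python
# Hypothetical credences held by an agent. None is obviously wrong
# in isolation; the conflict only appears when they are combined.
p_A = 0.5
p_B_given_A = 0.8
p_A_and_B = 0.3          # stated directly by the agent

# One inference step: the chain rule implies P(A and B) = 0.4,
# contradicting the directly stated 0.3.
implied = p_A * p_B_given_A
print(abs(implied - p_A_and_B) > 1e-9)  # True: the set is incoherent
```

With more propositions, detecting such a conflict can require arbitrarily many inference steps, which is the point at issue.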

Yes, but as I wrote to Ben yesterday, it is not possible to make a Dutch book against an AGI that does not pretend to have knowledge it does not have.
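For concreteness, here is a minimal sketch of the Dutch book being discussed, with made-up numbers: an agent whose credences over mutually exclusive, exhaustive outcomes sum to more than 1 will pay a guaranteed premium, while coherent credences admit no such sure loss.

```python
def dutch_book_loss(credences):
    """Guaranteed loss when an agent buys a 1-unit bet on each
    mutually exclusive, exhaustive outcome at its stated credence.
    Total price paid = sum of credences; exactly one bet pays out 1."""
    total_price = sum(credences.values())
    payout = 1.0          # exactly one outcome occurs
    return total_price - payout

# Incoherent: P(A) + P(not-A) = 1.1 > 1, so the agent loses 0.1
# no matter which outcome occurs.
incoherent = {"A": 0.6, "not-A": 0.5}
print(round(dutch_book_loss(incoherent), 10))

# Coherent credences: no guaranteed loss.
coherent = {"A": 0.6, "not-A": 0.4}
print(round(dutch_book_loss(coherent), 10))
```

An agent that declines to price bets in domains where it lacks knowledge simply refuses the book, which is the point being made here.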

And as I wrote to you yesterday, "knowing your own bounds" is itself a difficult inference problem, at which a modest-resources mind also cannot be fully coherent unless its scope is very narrow.


So an AGI can be perfectly coherent, to *some* degree of knowledge, provided it knows its own bounds. And such a modest AGI would certainly be more trustworthy, especially if it were employed in such fields as national defense, where incoherent reasoning could lead to disaster.


Well, if the scope of a mind is narrowed enough, then it can be more coherent.

However, the ability of the mind to incorporate context into reasoning then suffers.

The two go together (in the domain of moderate-resources minds):

* reasoning that is contextually savvy
* reasoning that is not necessarily coherent (in the sense of full probabilistic consistency)

-- Ben


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
