On 2/8/07, gts <[EMAIL PROTECTED]> wrote:
On Thu, 08 Feb 2007 09:26:28 -0500, Pei Wang <[EMAIL PROTECTED]> wrote:

> In simple cases like the above one, an AGI should achieve coherence
> with little difficulty. What an AGI cannot do is guarantee
> coherence in all situations, which is impossible for human beings,
> too --- think about situations where the incoherence of a betting
> setup takes many steps of inference, plus the necessary domain
> knowledge, to reveal.

Yes, but as I wrote to Ben yesterday, it is not possible to make a Dutch
book against an AGI that does not pretend to have knowledge it does not
have.
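The Dutch-book idea being debated can be sketched numerically (the
credences below are hypothetical, not anyone's actual system): an agent
whose degrees of belief in A and not-A sum to more than 1 will accept a
pair of bets that loses money in every possible outcome.

```python
def payoff(credences, outcome):
    """Net result of buying a $1-payout bet on each event,
    priced at the agent's credence in that event."""
    cost = sum(credences.values())  # the agent pays its credence per bet
    payout = sum(1 for event, holds in outcome.items() if holds)
    return payout - cost

# Incoherent credences: P(A) + P(not-A) = 1.1 > 1
incoherent = {"A": 0.6, "not_A": 0.5}

for a in (True, False):
    outcome = {"A": a, "not_A": not a}
    print(a, payoff(incoherent, outcome))  # loses 0.1 either way
```

Whichever way A turns out, exactly one bet pays $1 while the agent has
paid $1.10, so the loss is guaranteed; coherent credences (summing to 1)
make the net zero in every outcome, and no such book can be made.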

Knowledge is not a matter of "have or haven't", but usually a matter
of degree. An AGI doesn't need to "pretend to have knowledge" ---
it has some, but that knowledge is not absolutely reliable.

So an AGI can be perfectly coherent, to *some* degree of knowledge,
provided it knows its own bounds.

Also only to a degree --- the "bounds" are knowledge, too, so they cannot be infallible.

And such a modest AGI would certainly be
more trustworthy, especially if it were employed in such fields as
national defense, where incoherent reasoning could lead to disaster.

Are human beings doing any better there? Of course, that is not an
excuse, but it is an inevitable reality.

Pei

-gts




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

