gts wrote:

> 
> I understand the resources problem, but to be coherent a
> probabilistic reasoner need only be constrained in very
> simple ways, for example by being barred from assigning a
> higher probability to statement 2 than to statement 1 when
> statement 2 is contingent on statement 1.
> 
> Is such basic coherency totally out of reach for an AGI,
> *in principle*? I hope not.

You would have to assume that statement 2 is *entirely* contingent on
statement 1.  For many real-world statements of this form, how could a
subjective (limited-context) agent be sure of this?
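As an aside, the constraint gts describes is easy to state mechanically. A minimal sketch (hypothetical code, not from this thread): if statement 2 entails statement 1, a coherent assignment must satisfy P(s2) <= P(s1); the check itself is trivial, and the hard part, as noted above, is knowing whether the entailment actually holds.

```python
def coherent(p1: float, p2: float, s2_entails_s1: bool) -> bool:
    """Check the single coherence constraint under discussion:
    if statement 2 entails statement 1, then P(s2) <= P(s1).
    Logically unrelated statements are unconstrained here."""
    if s2_entails_s1:
        return p2 <= p1
    return True

# "It rains and it is cold" entails "It rains":
print(coherent(0.6, 0.3, True))   # coherent pair -> True
print(coherent(0.3, 0.6, True))   # violates the constraint -> False
```

Note that the boolean flag is exactly where a limited-context agent can go wrong: the check is only as good as its judgment that the entailment is total.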

- Jef

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
