The definition of 'probabilistic consistency' that I was using comes from E. T. Jaynes's book _Probability Theory: The Logic of Science_, page 114.

These are Jaynes's three 'consistency desiderata' for a probabilistic robot (the first is illustrated in a toy sketch after the list):

1. If a conclusion can be reasoned out in more than one way, then every
possible way must lead to the same result.

2. The robot takes into account all information relevant to the question.

3. The robot always represents equivalent states of information with
equivalent plausibility assignments.
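To make desideratum 1 concrete: in a fully specified toy domain it just says that conditioning on evidence in any order yields the same posterior. Below is a minimal Python sketch of that property; the joint table, its numbers, and the helper names (condition, marginal_a) are all invented purely for illustration, not taken from Jaynes.

from itertools import product

# P(a, b, c) for binary a, b, c -- arbitrary illustrative numbers summing to 1.
joint = {
    key: p
    for key, p in zip(
        product([0, 1], repeat=3),
        [0.02, 0.08, 0.10, 0.05, 0.20, 0.15, 0.25, 0.15],
    )
}

def condition(dist, var_index, value):
    """Condition a joint table on one variable taking a fixed value."""
    kept = {k: p for k, p in dist.items() if k[var_index] == value}
    z = sum(kept.values())
    return {k: p / z for k, p in kept.items()}

def marginal_a(dist):
    """P(A=1) under the (possibly conditioned) distribution."""
    return sum(p for k, p in dist.items() if k[0] == 1)

# Route 1: condition on B=1, then on C=0.
route1 = condition(condition(joint, 1, 1), 2, 0)
# Route 2: condition on C=0, then on B=1.
route2 = condition(condition(joint, 2, 0), 1, 1)

# Desideratum 1: both derivation orders must give the same answer.
assert abs(marginal_a(route1) - marginal_a(route2)) < 1e-12
print(marginal_a(route1))  # same posterior either way

Of course, this only verifies consistency over an eight-cell joint table; the whole problem is that this kind of exhaustive bookkeeping doesn't scale beyond toy domains.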

I don't think any intelligent system (human or machine) can achieve
any of the three desiderata, except in trivial cases.

I agree with Pei. This describes an "ideal probabilistic robot," which is irrelevant to AGI.

I introduced such a mind as a theoretical tool in a prior email, but you can't
build it except in toy domains.

ben
