On Feb 3, 2007, at 9:02 AM, Russell Wallace wrote:

On 2/3/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
My approach was to formulate a notion of "general intelligence" as
"achieving a complex goal", and then ask something like: for what
resource levels R and goals G is approximating probability theory
the best way to approximately achieve G using software that utilizes
only resources R?

That depends on what you mean by "approximating probability theory". Do you mean:

A) Giving answers that approximate those that would be given by (an omniscient entity using) probability theory,

or the stronger version:

B) Explicitly making use of probabilistic reasoning in the program's internal calculations.

The cleanest definition would be: "To act in such a way that its
behaviors are approximately consistent with probability theory"

That suggests you mean A. Well then, it seems to me that terms are being used in this discussion so that probability theory is _defined_ as giving the right answers in all cases. So the original question boils down to "is it always best to give approximately the right answers?"; the answer is then trivially yes.

I do mean A, but I don't think it is so trivial to prove, even though it is conceptually obvious...



If you mean B, then the answer is no. In mathematics only a single counterexample is required to disprove "always", and Eliezer gave a counterexample: if you want to write an efficient sudoku solver for just one sudoku which is known in advance, then the most efficient program is one that performs no calculations at all but consists of a single output statement.

I agree that the answer to B is no, and I think that B is also harder to define. What does "explicitly making use of..." really mean, in a general sense?
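Just to make that counterexample concrete, here is a minimal sketch in Python (the hard-coded 81-character solution string is only a hypothetical placeholder, not any particular puzzle's answer): a "solver" for a single sudoku known in advance needs no inference at all, probabilistic or otherwise.

# Counterexample sketch: when the one sudoku and its solution are known
# in advance, the most efficient "solver" does no reasoning at all --
# it just emits the precomputed answer.
# (The solution string below is a hypothetical placeholder.)

KNOWN_SOLUTION = (
    "534678912672195348198342567"
    "859761423426853791713924856"
    "961537284287419635345286179"
)

def solve_known_sudoku() -> str:
    # No search, no probability, no calculation: a single output statement.
    return KNOWN_SOLUTION

if __name__ == "__main__":
    print(solve_known_sudoku())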

Ben

