Eliezer,


I don't think a mind that evaluates probabilities *is* automatically the best way to make use of limited computing resources. That is: if you have limited computing resources, and you want to write a computer program that makes the best use of those resources to solve a problem you're facing, then only under very *rare* circumstances does it make sense to write a program consisting of an intelligent mind that thinks in probabilities. In fact, these rare circumstances are what define AGI work.


Hmmm..

My approach was to formulate a notion of "general intelligence" as "achieving a complex goal", and then ask something like: For what resource levels R and goals G is approximating probability theory the best way to approximately achieve G using software that utilizes only resources R?



What state of subjective uncertainty must you, the programmer, be in with respect to the environment, before coding a probability-processing mind is a rational use of your limited computing resources? This is how I would state the question.


That is an interesting way to put the problem....

Consider the nearly-degenerate case of a demi-God-programmer, who knows an extremely large amount about the future of the universe. Suppose the demi-God-programmer wants to build a robotic salesman that will venture to all the planets in the galaxy and sell their residents soap. Suppose also that the different creatures in the galaxy have different psychologies, so that a different strategy must be taken to sell each species soap, and that the robot has a very limited brain capacity, comparable to that of a human. Then the demi-God-programmer must program the robot with a learning system, because the robot's head simply can't be filled with all the details of the psychologies of the 10 decillion species it needs to sell soap to.... So, a probabilistic solution to the robot-brain-programming problem will likely be optimal, even though the demi-God-programmer itself has comprehensive knowledge of the environment.

But in this case the demi-God-programmer does lack some knowledge about the future. It may know something about the psychology of each species (even though there's no room to put this knowledge into the brain of the robot), but it doesn't know the exact conversations that the robot will have on each planet....

The case of a God-programmer that has truly complete knowledge of the future of the universe is subtler. In this case, would the probabilistic solution still be optimal for the robot, or would the God-programmer be able to come up with some highly clever, highly specialized trick that would give better performance given the exact specifics of the future destiny of the robot? You seem to be asserting the latter, but I'm not quite sure why...


Intuitively, I answer: when you, the programmer, can identify parts of the environmental structure, but you are extremely uncertain about other parts of the environment, and yet you do believe there's structure (the unknown parts are not believed by you to be pure random noise). In this case, it makes sense to write a Probabilistic Structure Identifier and Exploiter, a.k.a. a rational mind.



Is it really necessary to introduce the programmer into the problem formulation, though?

Can't we just speak about this in terms of optimization?

I think I want to ask: If we want to write a program P that will optimize some function G, using resources R, in what cases will it be optimal for the program P to use an approximation to probability theory?
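In case it helps to have that written down symbolically (this is just my own rough notation, not a rigorous formulation): let

  P^{*} \;=\; \arg\max_{P \,:\, \mathrm{resources}(P) \,\le\, R} \; \mathbb{E}\big[\, \text{achievement of } G \text{ by } P \,\big]

i.e. P^{*} is the best program for G among those that fit within the resources R. The question is then: for which pairs (G, R) does P^{*} turn out to "use an approximation to probability theory"?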

The subtlety comes in the definition of what it means to "use an approximation to probability theory."

The cleanest definition would be: "To act in such a way that its behaviors are approximately consistent with probability theory"

Now, how can we define this? We can define this if we have a notion of the set of actions that are available to the system at a given point in time.

Then, the system's choice of action A instead of action B may be taken as an implicit assessment that

"my expected achievement of G given I take action A" > "my expected achievement of G given I take action B"

where the quantities on both sides of this inequality are expected plausibilities.

So, we can then ask whether these expected plausibilities, implicit in the system's actions, are consistent with probability theory or not.
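To pin down "consistent with probability theory" just a little (again, only my own tentative notation): the implicit judgments are mutually consistent if some single probability distribution reproduces them all, i.e. there exists a distribution p over environment histories such that

  \mathbb{E}_{p}\big[\, \text{achievement of } G \mid \text{take } A \,\big] \;>\; \mathbb{E}_{p}\big[\, \text{achievement of } G \mid \text{take } B \,\big]

holds, at least approximately, whenever the system in fact chooses A over B.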

And, the hypothesis is that for a broad class of goal functions G and resource restrictions R, the optimal way to achieve G given R will be to use software whose implicit plausibility judgments (as defined above) approximately obey probability theory.

Note that the hypothesis is about how the various implicit plausibility judgments of the system will **relate to each other**. The hypothesis is that they need to relate to each other approximately consistently, according to probability theory (for G that is complex in the right way, and R that is not too small relative to the complexity of G).
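Just to make "relating to each other consistently" concrete, here is a toy sketch in Python (entirely my own construction, and a gross simplification, since it ignores context and degrees of plausibility). It treats each observed choice as an implicit judgment "chosen > rejected" and checks one cheap, merely necessary condition for those judgments to be jointly consistent with probability theory: the revealed ordering must contain no cycles.

from collections import defaultdict

def revealed_judgments(choice_log):
    """choice_log: iterable of (chosen_action, available_actions) pairs.
    Returns the set of implicit judgments (chosen, rejected)."""
    judgments = set()
    for chosen, available in choice_log:
        for other in available:
            if other != chosen:
                judgments.add((chosen, other))
    return judgments

def is_acyclic(judgments):
    """Checks that the directed graph whose edge (a, b) means
    'a was implicitly judged better than b' contains no cycle.
    Acyclicity is necessary (though far from sufficient) for the
    judgments to fit any single expected-achievement ordering."""
    graph = defaultdict(set)
    for better, worse in judgments:
        graph[better].add(worse)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return False          # back edge => cycle => inconsistency
            if color[nxt] == WHITE and not visit(nxt):
                return False
        color[node] = BLACK
        return True

    return all(visit(n) for n in list(graph) if color[n] == WHITE)

# Toy example: choosing "probe" over "sell", "sell" over "retreat",
# and then "retreat" over "probe" yields implicit judgments that no
# single assignment of expected achievements can reproduce.
log = [("probe", {"probe", "sell"}),
       ("sell", {"sell", "retreat"}),
       ("retreat", {"retreat", "probe"})]
print(is_acyclic(revealed_judgments(log)))   # prints False

A real check would of course have to handle context-dependence, noisy behavior, and degrees of approximation, not just this one necessary condition, but it gives a feel for what the hypothesis asks of the system's implicit judgments.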

Of course, the above formulation is just a bunch of hand-waving and I have not succeeded in proving this [or even formulating it in a thoroughly rigorous way] any better than you have succeeded in proving your formulation (nor have I put terribly much effort into it yet, due to other priorities).

What interests me in this dialogue is largely to see whether you feel we are both approaching essentially the same question, though using different specific formulations.

-- Ben





