On Thu, Nov 27, 2014 at 11:39 PM, Jim Bromer via AGI <[email protected]> wrote:
> I certainly don't have the ability to predict what any of you are
> going to say next.

Let's just say that you assign probability distributions to future
events. You can try an experiment like Shannon did in 1950 when he
estimated the entropy of written English. Take a random page of a book
and cover half of it. Then try to guess the next letter one at a time.
Your first guess will be right about 80% of the time. Your second
guess will be right another 7% of the time. If you want to call this
prediction, then you are probably a little better at predicting text
than any known program.
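
Here is a rough sketch of that experiment in Python, with an order-2
letter model standing in for the human guesser. The file name and all
the numbers are hypothetical, and Shannon used people rather than
n-gram counts, so treat it as an illustration, not his method:

    from collections import Counter, defaultdict

    ORDER = 2  # letters of context; human subjects use far more

    def train(text):
        # Count which letter follows each 2-letter context.
        counts = defaultdict(Counter)
        for i in range(ORDER, len(text)):
            counts[text[i - ORDER:i]][text[i]] += 1
        return counts

    def play(text, counts):
        # Guess letters in order of frequency and record the guess
        # number (1 = right on the first try) at each position.
        ranks = Counter()
        for i in range(ORDER, len(text)):
            ranked = [c for c, _ in counts[text[i - ORDER:i]].most_common()]
            actual = text[i]
            ranks[ranked.index(actual) + 1 if actual in ranked else 0] += 1
        return ranks

    text = open("book.txt").read().lower()  # hypothetical training text
    counts = train(text[:-10000])
    ranks = play(text[-10000:], counts)     # test on the held-out tail
    total = sum(ranks.values())
    print("right on 1st guess:", ranks[1] / total)
    print("right on 2nd guess:", ranks[2] / total)

A model this crude will usually guess right less often than a person
does, which is the point.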

> I think an algorithm that employs Solomonoff Induction would be a
> learning algorithm but it would not be a complete AGI Learning
> Algorithm because it would be too narrow.

That would be true even if Solomonoff induction were computable. We
also have goals and actions. We do more than just predict.

> Finally, we do have the ability to change our utility function
> evaluations.

With drugs or brain surgery, I suppose you do. But the human brain is
not really a utility maximizer either. Goals are a useful but not
entirely accurate way to describe some of what the brain does. A goal
implies that an agent could potentially try any strategy to achieve
it. But we don't. The effect of pleasure and pain is not such that we
take actions to maximize one and minimize the other. What we do
instead is to increase the frequency of actions we took just before
receiving positive reinforcement and decrease the frequency of actions
we took just before negative reinforcement. This is usually a good
strategy for maximizing utility, but it is not the same thing. To see
the difference, consider that your desire to use heroin depends on how
many times you have tried it in the past. If you were a rational
utility maximizer, it would make no difference.
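
The difference is easy to show in code. Below is a toy sketch with
made-up payoffs: the rational agent picks the action with the highest
known utility, so its choice never depends on history, while the
reinforcement learner just strengthens whatever action preceded a
reward, so its preference grows with each exposure:

    import random

    utility = {"abstain": 0.0, "heroin": -10.0}  # made-up long-run payoffs
    reward = {"abstain": 0.0, "heroin": 1.0}     # made-up immediate reward

    def rational_choice():
        # A utility maximizer: history never enters the decision.
        return max(utility, key=utility.get)

    weights = {"abstain": 1.0, "heroin": 1.0}    # the learner's only state

    def reinforced_choice():
        # Sample an action in proportion to its current weight.
        r = random.uniform(0, sum(weights.values()))
        for action, w in weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    for trial in range(100):
        action = reinforced_choice()
        # Strengthen the action that immediately preceded the reward.
        weights[action] *= 1.0 + 0.5 * reward[action]

    print("rational agent picks:", rational_choice())  # always "abstain"
    print("learner's weights after 100 trials:", weights)

After enough trials the heroin weight dominates even though its
utility is negative; the maximizer never touches it.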

In some cases, our mathematical model of intelligence as a utility
maximizer is useless. For example, you could say that a thermostat
"wants" to keep the room at a constant temperature. But it is more
useful to describe a thermostat as a device that turns on the heat
when the temperature drops below a set point. The difference is that
an intelligent thermostat will not recursively self-improve and turn
the world into computronium to achieve better temperature regulation.
I also had to specify the actions that a thermostat can take to
achieve its goal, just as I did with the brain.
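
A rule-based thermostat fits in a few lines. The numbers are made up;
the point is that the action set (heat on or off) is fixed in
advance, so there is no room for clever strategies:

    SET_POINT = 20.0  # degrees C, illustrative
    DEAD_BAND = 0.5   # hysteresis to avoid rapid on/off cycling

    def control(temperature, heat_is_on):
        # Turn the heat on below the set point, off above it.
        if temperature < SET_POINT - DEAD_BAND:
            return True
        if temperature > SET_POINT + DEAD_BAND:
            return False
        return heat_is_on  # near the set point, keep the current state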

I think it is more practical to forget about goals and utility and to
model AGI as something that could do any kind of work that we would
have to pay people to do. We're not going to do that by giving it a
goal (like work units or dollars) and letting it figure out on its own how
to achieve it. That takes too long (because good approximations of
uncomputable algorithms tend to be slow). Instead, we are going to
design, code, and train agents to do specific tasks one at a time
through a slow, laborious, and iterative process. There isn't an easy
way to do this, or I think we would have discovered it by now.

-- 
-- Matt Mahoney, [email protected]

