Stathis Papaioannou wrote:


Brent Meeker writes:

> I agree with everything you say, and have long admired "The Hedonistic Imperative". Motivation need not be linked to pain, and for that matter it need not be linked to pleasure either. We can imagine an artificial intelligence without any emotions but completely dedicated to the pursuit of whatever goals it has been set. It is just a contingent fact of evolution that we can experience pleasure and pain.

I don't know how you can be sure of that. How do you know that being completely dedicated is not the same as having a motivating emotion?

My computer is completely dedicated to sending this email when I click on "send".

Actually, it probably isn't. You probably have a multi-tasking operating system which assigns priorities to different tasks (which is why it can sometimes be as annoying as a human being in not following your instructions). But to take your point seriously: if I look into your brain there are neuronal processes that correspond to hitting the "send" button, and they are accompanied by biochemistry that constitutes your positive feeling about it, the feeling that you had decided and wanted to hit the "send" button. So why would the functionally analogous processes in the computer not also be accompanied by a "feeling"? Isn't "feeling" just an anthropomorphic way of talking about the computer operating in accordance with, and satisfying, its priorities? It seems to me that to say otherwise is to assume a dualism in which feelings are divorced from physical processes.
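Loosely, the kind of priority bookkeeping I mean can be sketched as a toy scheduler; this is purely illustrative, not how any real operating system is built, and all the task names are made up:

import heapq

class Task:
    def __init__(self, priority, name, action):
        self.priority = priority   # lower number = more urgent
        self.name = name
        self.action = action

    def __lt__(self, other):       # lets heapq order tasks by priority
        return self.priority < other.priority

def run(tasks):
    heap = list(tasks)
    heapq.heapify(heap)
    while heap:
        task = heapq.heappop(heap)  # always the most urgent pending task
        print("running", task.name, "(priority", task.priority, ")")
        task.action()

run([
    Task(5, "send queued email", lambda: None),
    Task(1, "service disk interrupt", lambda: None),
    Task(3, "redraw screen", lambda: None),
])

The point of the sketch is only that "dedication to sending the email" is, at the machine level, one priority competing with others.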

Surely you don't think it gets pleasure out of sending it and suffers if something goes wrong and it can't send it? Even humans do some things almost dispassionately (only almost, because we can't completely eliminate our emotions)

That's the crux of it.  Because we sometimes do things with very little feeling,
i.e. dispassionately, I think we erroneously assume there is a limit in which 
things can be done with no feeling.  But things cannot be done with no value 
system - not even thinking.  That's the frame problem.

Given some propositions, what inferences will you draw? If you are told there is a bomb wired to the ignition of your car, you could infer that there is no need to do anything because you're not in your car. You could infer that someone has tampered with your car. You could infer that turning on the ignition will draw more current than usual. There are infinitely many things you could infer before getting around to "I should disconnect the bomb." But in fact you have a value system which operates unconsciously and immediately directs your inferences to the few that are important to you. How to make AI systems do this is one of the outstanding problems of AI.
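To make the point concrete, here is a toy sketch of a value function pruning the space of possible inferences; the candidate inferences, keywords and weights are all hypothetical, not a proposal for how a real AI would do it:

# The space of valid inferences is effectively endless; a value
# function ranks them so that only the few that matter get attention.
candidate_inferences = [
    "I am not in the car right now, so nothing need be done",
    "someone has tampered with my car",
    "turning the key will draw more current than usual",
    "I should not turn the ignition",
    "I should disconnect the bomb",
]

def value(inference):
    # Crude stand-in for the unconscious value system: weight an
    # inference by how directly it bears on survival and action.
    score = 0.0
    for keyword, weight in [("should", 5.0), ("disconnect", 4.0),
                            ("tampered", 2.0), ("current", 0.5)]:
        if keyword in inference:
            score += weight
    return score

# Attend only to the highest-valued inferences; ignore the endless rest.
for inference in sorted(candidate_inferences, key=value, reverse=True)[:2]:
    print(inference)

The hard part, of course, is that the value function here is hand-coded; the open problem is getting a system to supply something like it for itself.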

out of a sense of duty, with no particular feeling about it beyond this. I don't even think my computer has a sense of duty, but this is something like the emotionless motivation I imagine AIs might have. I'd sooner trust an AI with a matter-of-fact sense of duty

But even a sense of duty is a value and satisfying it is a positive emotion.

to complete a task than a human motivated by desire to please, desire to do what is good and avoid what is bad, fear of failure and humiliation, and so on.

Yes, human value systems are very messy because a) they must be learned and b) 
they mostly have to do with other humans.  The motivation of tigers, for 
example, is probably very simple and consequently they are never depressed or 
manic.

Just because evolution came up with something does not mean it is the best or most efficient way of doing things.

But until we know a better way, we can't just assume nature was inefficient.

Brent Meeker
