Stathis Papaioannou wrote:
Brent Meeker writes:
> My computer is completely dedicated to sending this email when I
> click on "send".
Actually, it probably isn't. You probably have a multi-tasking
operating system which assigns priorities to different tasks (which is
why it sometimes can be as annoying as a human being in not following
your instructions). But to take your point seriously - if I look into
your brain there are neuronal processes that correspond to hitting the
"send" button, and those are accompanied by biochemistry that
constitutes your positive feeling about it: that you had decided and
wanted to hit the "send" button. So why would the functionally
analogous processes in the computer not also be accompanied by a
"feeling"? Isn't that just an anthropomorphic way of describing the
computer operating in accordance with its priorities?
It seems to me that to say otherwise is to assume a dualism in which
feelings are divorced from physical processes.
Feelings are caused by physical processes (assuming a physical world),
but it seems impossible to deduce what the feeling will be by observing
the underlying physical process or the behaviour it leads to. Is a robot
that withdraws from hot stimuli experiencing something like pain,
disgust, shame, sense of duty to its programming, or just an irreducible
motivation to avoid heat?
> Surely you don't think it gets pleasure out of sending it and
> suffers if something goes wrong and it can't send it? Even humans do
> some things almost dispassionately (only almost, because we can't
> completely eliminate our emotions)
That's the crux of it. Because we sometimes do things with very little
feeling, i.e. dispassionately, I think we erroneously assume there is
a limit in which things can be done with no feeling. But things
cannot be done with no value system - not even thinking. That's the
frame problem.
Given some propositions, what inferences will you draw? If you are
told there is a bomb wired to the ignition of your car you could infer
that there is no need to do anything because you're not in your car.
You could infer that someone has tampered with your car. You could
infer that turning on the ignition will draw more current than usual.
There are infinitely many things you could infer, before getting
around to, "I should disconnect the bomb." But in fact you have value
system which operates unconsciously and immediately directs your
inferences to the few that are important to you. A way to make AI
systems to do this is one of the outstanding problems of AI.
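Here is a toy sketch of what I mean (Python; the candidate inferences
and their weights are made up for illustration) - a crude value system
that scores possible inferences and only pursues the few that matter:

# A crude "value system" directing inference. All candidates and
# weights are invented for illustration.
candidates = {
    "No need to act, I'm not in my car": 0.1,
    "Someone has tampered with my car": 0.4,
    "Turning the ignition will draw more current than usual": 0.2,
    "I should disconnect the bomb before driving": 0.9,
}

def important_inferences(candidates, threshold=0.5):
    """Return only the inferences valued above threshold, best first."""
    keep = [(v, inf) for inf, v in candidates.items() if v >= threshold]
    return [inf for v, inf in sorted(keep, reverse=True)]

for inference in important_inferences(candidates):
    print(inference)

The point is just that the infinitely many possible inferences never
get enumerated; the value system prunes them before deliberation even
starts.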
OK, an AI needs at least motivation if it is to do anything, and we
could call motivation a feeling or emotion. Also, some sort of hierarchy
of motivations is needed if it is to decide that saving the world has
higher priority than putting out the garbage. But what reason is there
to think that an AI apparently frantically trying to save the world
would have anything like the feelings a human would under similar
circumstances? It might just calmly explain that saving the world is at
the top of its list of priorities, and it is willing to do things which
are normally forbidden to it, such as killing humans and putting itself at
risk of destruction, in order to attain this goal. How would you add
emotions such as fear, grief, regret to this AI, given that the external
behaviour is going to be the same with or without them because the
hierarchy of motivation is already fixed?
You are assuming the AI doesn't have to exercise judgement about
secondary objectives - judgement that may well involve conflicts of
values that have to be resolved before acting. If the AI is saving the
world it might, for example, raise its CPU voltage and clock rate in
order to compute faster - electronic adrenaline. It might cut off some
peripheral functions, like running the printer. Afterwards it might
"feel regret" when it cannot recover some functions.
Although there would be more conjecture in attributing these feelings to the AI
than to a person acting in the same situation, I think the principle is the
same. We think the person's emotions are part of the function - so why not the
AI's too?
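To put the same idea in toy code (Python; the function names,
priorities and cycle budget are all invented for illustration), the
"electronic adrenaline" and the later "regret" are just consequences
of reallocating limited resources by priority:

# Toy reallocation of limited resources under a goal hierarchy.
# All names and numbers are invented for illustration.
functions = {"save_world": 10, "take_backups": 3, "run_printer": 1}

def reallocate(functions, cycles_available=10):
    """Give cycles to the highest-priority functions first."""
    running, cut_off = {}, []
    for name, priority in sorted(functions.items(), key=lambda kv: -kv[1]):
        granted = min(priority, cycles_available)
        cycles_available -= granted
        if granted > 0:
            running[name] = granted
        else:
            cut_off.append(name)   # peripherals it could not keep alive
    return running, cut_off

running, cut_off = reallocate(functions)
print("running:", running)
print("functional analogue of regret over:", cut_off)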
> out of a sense of duty, with no particular feeling about it beyond
> this. I don't even think my computer has a sense of duty, but this
> is something like the emotionless motivation I imagine AI's might
> have. I'd sooner trust an AI with a matter-of-fact sense of duty
But even a sense of duty is a value and satisfying it is a positive
emotion.
Yes, but it is complex and difficult to define. I suspect there is a
limitless variety of emotions that an AI could have, if the goal is to
explore what is possible rather than what is helpful in completing
particular tasks, and most of these would be unrecognisable to humans.
> to complete a task than a human motivated by desire to please,
> desire to do what is good and avoid what is bad, fear of failure and
> humiliation, and so on.
Yes, human value systems are very messy because a) they must be
learned and b) they mostly have to do with other humans. The
motivation of tigers, for example, is probably very simple and
consequently they are never depressed or manic.
Conversely, as above, we can imagine far more complicated value systems
and emotions.
> Just because evolution came up with something does not mean it is
> the best or most efficient way of doing things.
But until we know a better way, we can't just assume nature was
inefficient.
Biological evolution is extremely limited in how it functions, and
efficiency given these limitations is not the same as absolute
efficiency. For example, we might do better with durable metal bodies
with factories producing spare parts as needed, but such a system is
unlikely to evolve naturally as a result of random genetic mutation.
True, but I'm imputing emotion to the AI at a functional level, not a hardware
or software level.
Brent Meeker