Harshad RJ wrote:
On Feb 18, 2008 10:11 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself. Why do
you assume this? Because an AGI that was motivated only to seek
electricity and pheromones is going to be as curious, as active, as
knowledge seeking, as exploratory (etc etc etc) as a moth that has been
preprogrammed to go towards bright lights. It will never learn anything
by itself because you left out the [curiosity] motivation (and a lot
else besides!).
I think your reply points back to the confusion between intelligence and
motivation. "Curiosity" would be a property of intelligence and not
motivation. After all, you need a motivation to be curious. Moreover,
the curiosity would be guided by the kind of motivation. A benevolent
motive would drive the curiosity to seek benevolent solutions, like say
solar power, while a malevolent motive could drive it to seek
destructive ones.
No confusion, really. I do understand that "curiosity" is a difficult
case that lies on the borderline, but what I am talking about is
systematic exploration-behavior, or playing. The kind of activity that
children and curious adults engage in when they deliberately try to find
something out *because* they feel a curiosity urge (so to speak).
What I think you are referring to is just the understanding-mechanisms
that enable the "intelligence" part of the mind to solve problems or
generally find things out. Let's call this intelligence-mechanism a
[Finding-Out] activity, whereas the type of thing children do best is
[Curiosity], which is a motivation mode that they get into.
Then, using that terminology on your above paragraph:
"After all, you need a motivation to be curious" translates into "You
need a motivation of some sort to engage in [Finding-Out]." For
example, before you try to figure out where a particular link is located
on a web page, you need the (general) motivation that is pushing you to
do this, as well as the (specific) goal that drives you to find that
particular link.
"Moreover, the curiosity would be guided by the kind of motivation"
translates into "The [Finding-Out] activity would be guided by the
background motivation." This is what I have just said.
"A benevolent motive would drive the curiosity to seek benevolent
solutions, like say solar power, while a malevolent motive could drive
it to seek destructive ones." This translates into "A benevolent
motivation (and this really is a motivation, in my terminology) would
drive the [Finding-Out] mechanisms to seek benevolent solutions, like
say solar power, while a malevolent motivation (again, I would agree
that this is a motivation) could drive the [Finding-Out] mechanisms to
seek destructive ones."
What this all amounts to is that the thing I referred to as "curiosity"
really is a motivation, because a creature that has an unstructured,
background desire (a motivation) to find out about the world will
acquire a lot of background knowledge and become smart.
I see motivation as a much more basic property of intelligence. It needs
to answer "why" not "what" or "how".
But when we try to get an AGI to have the kind of structured behavior
necessary to learn by itself, we discover... what? That you cannot
have that kind of structured exploratory behavior without also having an
extremely sophisticated motivation system.
So, in the sense that I mentioned above, why do you say/imply that a
pheromone- (or neurotransmitter-) based motivation is not sophisticated
enough? And, without getting your hands messy with chemistry, how do you
propose to "explain" your emotions to a non-human intelligence? How
would you distinguish construction from destruction, chaos from order,
or why two people being able to eat a square meal is somehow better than
two million reading Dilbert comics?
I frankly don't know if I understand the question.
We already have creatures that seek nothing but chemical signals:
amoebae do this.
Imagine a human baby that did nothing but try to sniff out breast milk:
it would never develop because it would never do any of the other
things, like playing. It would just sit there and try to sniff for the
stuff it wanted.
In other words you cannot have your cake and eat it too: you cannot
assume that this hypothetical AGI is (a) completely able to build its
own understanding of the world, right up to the human level and beyond,
while also being (b) driven by an extremely dumb motivation system that
makes the AGI seek only a couple of simple goals.
In fact, I do think (a) and (b) are possible together, and that they
best describe how human brains work. Our motivation system is extremely
"dumb": reproduction! And it is expressed with nothing more than a
feedback loop using neurotransmitters.
This flagrantly contradicts what we know about the human motivation
system. If you think that this is all there is to a motivation system,
then I am at a loss for words.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now