Kaj Sotala wrote:
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
They would not operate at the "proposition level", so whatever
difficulties they have, they would at least be different.
Consider [curiosity]. What this actually means is a tendency for the
system to seek pleasure in new ideas. "Seeking pleasure" is only a
colloquial term for what (in the system) would be a dimension of
constraint satisfaction (parallel, dynamic, weak-constraint
satisfaction). Imagine a system in which there are various
micro-operators hanging around, which seek to perform certain operations
on the structures that are currently active (for example, there will be
several micro-operators whose function is to take a representation such
as [the cat is sitting on the mat] and try to investigate various WHY
questions about the representation: Why is this cat sitting on this mat?
Why do cats in general like to sit on mats? Why does this cat Fluffy
always like to sit on mats? Does Fluffy like to sit on other things?
Where does the phrase 'the cat sat on the mat' come from? And so on).
[cut the rest]
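The micro-operator idea quoted above can be sketched in code. This is a toy illustration only, with invented names (`Representation`, `why_operator`) that are not from the email; the real proposal involves parallel, dynamic, weak-constraint satisfaction, which this deliberately omits.

```python
# Toy sketch of a "micro-operator": a small operator that watches the
# currently active representations and spawns WHY questions about them.
# All class and function names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Representation:
    subject: str
    relation: str
    obj: str

def why_operator(rep):
    """Generate follow-up WHY questions about one active representation."""
    return [
        f"Why is {rep.subject} {rep.relation} {rep.obj}?",
        f"Why do things like {rep.subject} tend to be {rep.relation} {rep.obj}?",
        f"Does {rep.subject} also like other things besides {rep.obj}?",
    ]

active = Representation("the cat", "sitting on", "the mat")
for question in why_operator(active):
    print(question)
```

In the full proposal many such operators would run in parallel and compete, with "pleasure in new ideas" emerging as one dimension of the constraint-satisfaction process rather than from any single operator.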
Interesting. This sounds like it might be workable, though of course,
the exact associations and such that the AGI develops sound hard to
control. But then, that'd be the case for any real AGI system...
Humans have lots of desires - call them goals or motivations - that
manifest in differing degrees in different individuals, like wanting
to be respected or wanting to have offspring. Still, excluding the
most basic ones, these are desires that a newborn child won't
understand or feel until (s)he gets older. You could argue that they
can't be inborn goals since the newborn mind doesn't have the concepts
to represent them and because they manifest variably with different
people (not everyone wants to have children, and there are probably
even people who don't care about the respect of others), but still,
wouldn't this imply that AGIs *can* be created with in-built goals? Or
if such behavior can only be implemented with a motivational-system
AI, how does that avoid the problem that some of the desired final
motivations are impossible to define in the initial state?
I must think about this more carefully, because I am not quite sure of
the question.
However, note that we (humans) probably do not get many drives that are
introduced long after childhood, and that the exceptions (sex,
motherhood desires, teenage rebellion) could well be sudden increases in
the power of drives that were there from the beginning.
This may not have been your question, so I will put this one on hold.
Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a "newborn" AGI doesn't yet have built up
the concepts needed to represent the goal. Yet humans do seem to
have built-in goals (using the term a bit loosely, as not all goals
manifest in everyone), despite the fact that newborn humans
haven't yet built up the concepts needed to represent those goals.
It is true that many of those drives seem to begin in early childhood,
but it seems to me that there are still many goals that aren't
activated until after infancy, such as the drive to have children.
Oh, complete agreement here. I am only saying that the idea of a
"built-in goal" cannot be made to work in an AGI *if* one decides to
build that AGI using a "goal-stack" motivation system, because the
latter requires that any goals be expressed in terms of the system's
knowledge. If we step away from that simplistic type of GS system, and
instead use some other type of motivation system, then I believe it is
possible for the system to be motivated in a coherent way, even before
it has the explicit concepts to talk about its motivations (it can
pursue the goal "seek Momma's attention" long before it can explicitly
represent the concept of [attention], for example).
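The contrast drawn here can be made concrete with a toy sketch. This is my own illustrative framing, not code from the thread: a goal-stack goal is unusable unless every concept it mentions already exists in the system's knowledge, while a diffuse drive is just a bias on classes of behavior and needs no explicit concept to operate.

```python
# Toy contrast between a goal-stack (GS) agent and a drive-based agent.
# Class names and the drive table are invented for illustration.

class GoalStackAgent:
    def __init__(self, known_concepts):
        self.known_concepts = set(known_concepts)
        self.goal_stack = []

    def push_goal(self, goal_concepts):
        # A goal-stack goal must be expressed in terms of the system's
        # existing knowledge; otherwise it cannot even be represented.
        if not set(goal_concepts) <= self.known_concepts:
            raise ValueError("goal refers to concepts the system lacks")
        self.goal_stack.append(goal_concepts)

class DriveAgent:
    def __init__(self):
        # A drive is a scalar bias on behavior; it works before the
        # system can explicitly represent what it is seeking.
        self.drives = {"seek-attention": 0.8}

    def score_action(self, action_tag):
        return self.drives.get(action_tag, 0.0)

newborn = GoalStackAgent(known_concepts=[])
try:
    newborn.push_goal(["motherhood"])  # fails: concept not yet acquired
except ValueError as error:
    print("goal-stack failure:", error)

infant = DriveAgent()
print("drive bias:", infant.score_action("seek-attention"))
```

The point of the sketch is only that the failure mode is specific to the goal-stack formulation: the drive agent happily biases "seek Momma's attention" behavior without any [attention] concept in its knowledge base.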
Now, you ask questions about built-in goals that only appear later.
This complicates the situation slightly (i.e. we can easily lose
ourselves in the details if we are not careful...), but I actually think
this is very interesting. The crucial question is this: if a child
does not learn the concept "motherhood" until it starts to develop
concepts, then how can there be a "motivation" already built into the
system .... if it was built in, it would have to reach out and attach
itself to some well-developed symbols (e.g. motherhood) later in life,
so we theoreticians are left wondering how it "finds" the right concept
to latch onto. An even more dramatic example is sex: how does the
older child suddenly get a bunch of very specific ideas about what they
need to do to make babies? If there is a built-in [sex] motivation, how
does it hook up to the set of activities (some of them really pretty
specific, if you ask me) that need to be done in order to reproduce?
The reason this question is difficult is that you cannot assume that the
motivation system is smart enough to go find the right concepts, or you
risk putting an intelligent homunculus into your design: you assume that
the MS is so smart that it knows all these concepts already, and can go
on a smart search through all the concepts that the rest of the mind
painstakingly *acquired* since its birth, trying to find all the ideas
that need to be linked together to make an action plan for sex.
The way to get around that problem is to notice two things. One is that
the sex drives can indeed be there from the very beginning, but in very
mild form, just waiting to be kicked into high gear later on. I think
this accounts for a large chunk of the explanation (there is evidence
for this: some children are explicitly engaged in sex-related
activities at the age of three, at least). The second part of the
explanation is that, indeed, the human mind *does* have trouble making
an easy connection to those later concepts: sexual ideas do tend to get
attached to the most peculiar behaviors. Perhaps this is a sign that
the hook-up process is not straightforward.
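The first part of that explanation — a drive present from the start in mild form, then kicked into high gear — can be sketched in a few lines. The parameter values (onset age, amplification factor) are invented purely for illustration.

```python
# Toy sketch of the "mild drive, later amplified" idea: maturation does not
# install a new goal, it multiplies the gain of a drive that was always
# there. The onset and amplification numbers are illustrative assumptions.

def drive_strength(base_gain, age_years, onset=12.0, amplification=50.0):
    """Return drive strength: a constant mild level, amplified after onset."""
    if age_years < onset:
        return base_gain
    return base_gain * amplification

print(drive_strength(0.01, 3))    # mild drive already present in childhood
print(drive_strength(0.01, 15))   # same drive, kicked into high gear
```

On this picture the theoretical puzzle shifts: the motivation system never has to "find" the motherhood or sex concepts later, because the drive was shaping behavior (weakly) all along while those concepts were being acquired.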
More later.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/