Re: [singularity] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore


Ben,

There is something about the gist of your response that seemed strange 
to me, but I think I have put my finger on it:  I am proposing a general 
*class* of architectures for an AI-with-motivational-system.  I am not 
saying that this is a specific instance (with all the details nailed 
down) of that architecture, but an entire class, an approach.


However, as I explain in detail below, most of your criticisms are that 
there MIGHT be instances of that architecture that do not work.  Out of 
the countless possible instantiations of my proposed architecture, you 
are searching for SOME that might not work.  But so what if they don't? 
 It has no consequences for my argument.



To see this more vividly, consider the following analogy, which captures 
the type of argument going on here.  Imagine we are back in the late 
1800s and someone claimed that the new-fangled four-wheeled automobiles 
CANNOT be used on the battlefield because they will bog down in 
trenches.  I disagree with this person and say that I know of a type of 
vehicle that CAN be used even where there are trenches.


They challenge me to propose such a thing.  So I describe the *class* of 
vehicles that uses tracks instead of wheels.  I don't describe a 
particular instance of a tracked vehicle, just the class, saying that IN 
PRINCIPLE this class of vehicle could do the job.


But then the replies I get are like these [this is not a parody, btw, 
just a genuine attempt to illuminate the argument]:


1) The existence of tracks does not guarantee that the vehicle will 
cross trenches:  what if the tracks are only 2 feet long?  ("The 
existence of a large number of constraints does not intrinsically imply 
'tight governance'".   My Response:  No, but only in weird instances of 
my proposed architecture would tight governance be a difficult thing to 
arrange, so why would I care about those weird cases?)


2) Okay, so the tracks could be long enough, but you haven't presented 
any arguments to show the vehicle will be flexible enough to turn 
corners. ("But the question then becomes whether this set of constraints 
can simultaneously provide ... the flexibility of governance needed to 
permit general, broad-based learning".  My Response:   I don't 
understand:  why would it *not* be capable of general, broad-based 
learning?  I can see how it might be possible for such a problem to 
arise, but only in weird instances of the architecture I proposed, not 
in the general case.  Please explain why this might be a general 
property of the class of systems, because I don't think it is.)


3) Well, I wonder if it would be possible, using this tracked vehicle, 
to really do all the required tasks and carry all the required equipment 
... maybe it is possible, but you don't give an argument re this point. 
 ("I just wonder if, in this sort of architecture you describe, it is 
really possible to guarantee Friendliness without hampering creative
learning.  Maybe it is possible, but you don't give an argument re this 
point."   My Response:  Why would you even suspect that 'creative 
learning' might be a problem?  I gave no argument re that point, because 
I cannot see any way that it should be a problem.  Please explain why 
this would follow.)


4) Yes, but your whole argument seems to assume tracks on only the first 
generation of vehicles, not on all future production models. ("However, 
your whole argument seems to assume an AGI with a fixed level of 
intelligence, rather than a constantly self-modifying and improving AGI. 
 If an AGI is rapidly increasing its hardware infrastructure and its 
intelligence, then I maintain that guaranteeing its Friendliness is 
probably impossible ... and your argument gives no way of getting around 
this."  My Response:  I'm afraid this point is simply not true... I 
very specifically said that once you had built the first AI with this 
architecture, it would then choose to augment itself into a new system 
with the same constraints.  It understands the significance of not doing 
so, and therefore will take the necessary steps.  I cannot answer that 
point any plainer than I did.  I certainly said nothing at all that 
implied a fixed level of AI intelligence.)



At the end you make this point, which I will deal with directly:

> In a radically self-improving AGI built according to your
> architecture, the set of constraints would constantly be increasing in
> number and complexity ... in a pattern based on stimuli from the
> environment as well as internal stimuli ... and it seems to me you
> have no way to guarantee based on the smaller **initial** set of
> constraints, that the eventual larger set of constraints is going to
> preserve "Friendliness" or any other criterion.

On the contrary, this is a system that grows by adding new ideas whose 
motivational status must be consistent with ALL of the previous ones, and 
the longer the system is allowed to develop, the deeper the new ideas 
are constrained by

Re: Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Ben Goertzel

Hi,


The problem, Ben, is that your response amounts to "I don't see why that
would work", but without any details.


The problem, Richard, is that you did not give any details as to why
you think your proposal will "work" (in the sense of delivering a
system whose Friendliness can be very confidently known).


The central claim was that because the behavior of the system is
constrained by a large number of connections that go from motivational
mechanism to thinking mechanism, the latter is tightly governed.


But this claim, as stated, seems not to be true.  The existence of
a large number of constraints does not intrinsically imply "tight
governance."

Of course, though, one can posit the existence of a large number of
constraints that DOES provide tight governance.

But the question then becomes whether this set of constraints can
simultaneously provide

a) the tightness of governance needed to guarantee Friendliness

b) the flexibility of governance needed to permit general, broad-based learning

You don't present any argument as to why this is going to be the case.

I just wonder if, in this sort of architecture you describe, it is
really possible to guarantee Friendliness without hampering creative
learning.  Maybe it is possible, but you don't give an argument re
this point.

Actually, I suspect that it probably **is** possible to make a
reasonably benevolent AGI according to the sort of NN architecture you
suggest ... (as well as according to a bunch of other sorts of
architectures)

However, your whole argument seems to assume an AGI with a fixed level
of intelligence, rather than a constantly self-modifying and improving
AGI.  If an AGI is rapidly increasing its hardware infrastructure and
its intelligence, then I maintain that guaranteeing its Friendliness
is probably impossible ... and your argument gives no way of getting
around this.

In a radically self-improving AGI built according to your
architecture, the set of constraints would constantly be increasing in
number and complexity ... in a pattern based on stimuli from the
environment as well as internal stimuli ... and it seems to me you
have no way to guarantee based on the smaller **initial** set of
constraints, that the eventual larger set of constraints is going to
preserve "Friendliness" or any other criterion.

-- Ben



Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

As I see it, in this long message you have given a conceptual sketch
of an AI design including a motivational subsystem and a cognitive
subsystem, connected via a complex network of continually adapting
connections.  You've discussed the way such a system can potentially
build up a self-model involving empathy and a high level of awareness,
and stability, etc.

All this makes sense, conceptually; though as you point out, the story
you give is short on details, and I'm not so sure you really know how
to "cash it out" in terms of mechanisms that will actually function
with adequate intelligence ... but that's another story...

However, you have given no argument as to why the failure of this kind
of architecture to be stably Friendly is so ASTOUNDINGLY UNLIKELY as
you claimed in your original email.  You have just argued why it's
plausible to believe such a system would probably have a stable goal
system.  As I see it, you did not come close to proving your original
claim, that


>> > The motivational system of some types of AI (the types you would
>> > classify as tainted by complexity) can be made so reliable that the
>> > likelihood of them becoming unfriendly would be similar to the
>> > likelihood of the molecules of an Ideal Gas suddenly deciding to split
>> > into two groups and head for opposite ends of their container.


I don't understand how this extreme level of reliability would be
achieved, in your design.

Rather, it seems to me that the reliance on complex, self-organizing
dynamics makes some degree of indeterminacy in the system almost
inevitable, thus making the system less than absolutely reliable.
Illustrating this point, humans (who are complex dynamical systems) are
certainly NOT reliable in terms of Friendliness or any other subtle
psychological property...

-- Ben G


The problem, Ben, is that your response amounts to "I don't see why that 
would work", but without any details.  You ask no questions, nor do you 
redescribe the proposal back to me in specific terms, so it is hard not 
to conclude that your comments are based on not understanding it.


You do go further at one point and say that you don't believe I can cash 
out the sketch in terms of mechanisms that work.  I fail to see how you 
can come to such a strong conclusion, when the rest of what you say 
implies (or says directly) that you do not understand the proposed 
mechanism.


**

The central claim was that because the behavior of the system is 
constrained by a large number of connections that go from motivational 
mechanism to thinking mechanism, the latter is tightly governed.  You 
know as well as I do about the power of massive numbers of weak 
constraints.  You know that as the number of constraints rises, the 
depth of the potential well that they can define increases.  I used that 
general idea, coupled with some details about a motivational system, to 
claim that the latter would constrain the thinking mechanism in just 
that way.  That leads to the possibility of an extremely deep potential 
well corresponding to the behavior we call Friendly.
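
To make the 'depth of the potential well' point concrete, here is a 
minimal toy sketch (Python, with invented numbers and a one-dimensional 
state; nothing in it is specific to my proposed architecture).  Each 
constraint is modelled as a shallow quadratic penalty centred roughly on 
the same target state, and as more of them are summed the combined 
landscape develops a much deeper, more sharply located minimum, even 
though every individual constraint stays weak.

import numpy as np

# Toy illustration only: many weak constraints, each a shallow quadratic
# penalty with a slightly noisy centre, sum to an energy landscape whose
# minimum is both deeper and pinned more precisely near the target state.

rng = np.random.default_rng(0)
target = 0.0                        # stands in for the "Friendly" region
x = np.linspace(-5.0, 5.0, 1001)    # a one-dimensional state space

def combined_energy(n_constraints):
    centres = target + rng.normal(scale=0.5, size=n_constraints)
    strengths = rng.uniform(0.01, 0.05, size=n_constraints)   # each one weak
    return sum(k * (x - c) ** 2 for k, c in zip(strengths, centres))

for n in (3, 30, 300):
    e = combined_energy(n)
    print(f"{n:4d} constraints: well depth ~ {e.max() - e.min():7.1f}, "
          f"minimum near x = {x[e.argmin()]:+.2f}")

No single penalty does much on its own, but a few hundred of them 
together make any departure from the target state very costly, which is 
the sense in which 'tight governance' can come out of individually weak 
constraints.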


You may disagree about the details, but in that case you should talk 
about the details, not try to imply that there is no way whatsoever that 
a type of behavior could be extremely tightly constrained.  That latter 
assertion is just wrong:  there are ways to make a system very 
predictable when multiple simultaneous weak constraints are applied.  So 
you are in no position to just deny *that* part.  In principle, that is 
doable.  What matters is how I propose to get those multiple constraints 
to work.  I gave details.  You do not respond with arguments against any 
of those details.


**
ASIDE

In case anyone else is reading this and is puzzled by the idea of 
multiple weak constraints, let me give a classic example, due to Hinton:


  There is an unknown thing (call it "x").
  x is constrained in three ways, each of which is extremely vague.
  It is worse than that: one of the constraints is actually wrong. 
(But I will not tell you which one.)

  Here are the three constraints:

  1   x was intelligent
  2   x was once an actor
  3   x was once a president

  Of all the things in the universe that x could be, most people are 
capable of identifying what x refers to.  (Or were, in the 1980s, when 
this example was proposed.)  And yet there were only three extremely 
weak pieces of information:  weak because of ambiguity, and because one 
of them was not even reliable.


Now imagine an x constrained from a thousand different directions at 
once.  In principle, the value of x could be pinned down extremely 
precisely.  For what it is worth, this is the basic reason why neural 
nets work as well as they do.
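
Rendered as a toy script (Python again; the candidate list and the 
attribute table are invented for the illustration, and the unreliable 
'intelligent' clue is simply modelled as matching every candidate), the 
same point looks like this: each clue on its own matches several 
candidates, yet the intersection of the weak clues singles out one 
answer even though one clue carries no reliable information at all.

# Toy rendering of the three-weak-clues example.  The candidates and their
# attributes are invented for illustration; the unreliable clue is modelled
# as one that every candidate satisfies, so it adds no discrimination.

candidates = {
    "Ronald Reagan":   {"actor": True,  "president": True},
    "Marilyn Monroe":  {"actor": True,  "president": False},
    "Abraham Lincoln": {"actor": False, "president": True},
    "Isaac Newton":    {"actor": False, "president": False},
}

def score(attrs):
    # 1 point for the unreliable clue (granted to everyone), plus the two
    # genuinely informative but still weak clues.
    return 1 + int(attrs["actor"]) + int(attrs["president"])

ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
for name in ranked:
    print(f"{name:16s} satisfies {score(candidates[name])}/3 clues")
print("Best match:", ranked[0])

Two informative clues already make the answer unique here; scale the 
same arithmetic up to a thousand noisy clues and the pinning-down 
becomes far tighter, which is the multiple-weak-constraints point.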



**
As for the last paragraph, the point you make there is pretty 
unbelievable.  You are claiming that human beings are complex dynamical 
systems, and that they are not 

Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel

Hi,


Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.


My view is that for technologies involving large investment in
manufacturing infrastructure, the US military is one very likely
source of funds.  But not the only one.  For instance, suppose that
computer manufacturers decide they need powerful nanotech in order to
build better and better processors: that would be a convincing
nonmilitary source for massive nanotech R&D funds.

OTOH for technologies like AGI where the main need is innovation
rather than expensive infrastructure, I think a key role for the
military is less likely.  I would expect the US military to be among
the leaders in robotics, because robotics is
costly-infrastructure-centric.  But not necessarily in robot
*cognition* (as opposed to hardware) because cognition R&D is more
innovation-centric.

Not that I'm saying the US military is incapable of innovation, just
that it seems to be more reliable as a source of development $$ for
technologies not yet mature enough to attract commercial investment,
than as a source for innovative ideas.

-- Ben
