On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Harshad RJ wrote:
> > I read the conversation from the start and believe that Matt's
> > argument is correct.
>
> Did you mean to send this only to me?  It looks as though you mean it
> for the list.  I will send this reply back to you personally, but let me
> know if you prefer it to be copied to the AGI list.


Richard, thanks for replying. I did want to send it to the list, and your
email address (as it turns out) was the one listed on the forum for replying
to the list.



>
>
> > There is a difference between intelligence and motive which Richard
> > seems to be ignoring. A brilliant instance of intelligence could still
> > be subservient to a malicious or ignorant motive, and I think that is
> > the crux of Matt's argument.
>
> With respect, I was not at all ignoring this point:  this is a
> misunderstanding that occurs very frequently, and I thought that I
> covered it on this occasion (my apologies if I forgot to do so..... I
> have had to combat this point on so many previous occasions that I may
> have overlooked yet another repeat).
>
> The crucial words are "... could still be subservient to a malicious or
> ignorant motive."
>
> The implication behind these words is that, somehow, the "motive" of
> this intelligence could arise after the intelligence, as a completely
> independent thing over which we had no control.  We are so used to this
> pattern in the human case (we can make babies, but we cannot stop the
> babies from growing up to be dictators, if that is the way they happen
> to go).
>
> This implication is just plain wrong.


I don't believe so, though your next statement...


> If you build an artificial
> intelligence, you MUST choose how it is motivated before you can even
> switch it on.


... might be true. Yes, a motivation of some form could be coded into the
system, but the paucity of expression at the level at which it is coded may
still allow "unintended" motivations to emerge.

Say, for example, that the motivation is coded in a form similar to that of
current biological systems. The AGI system is motivated to keep itself happy,
and it is happy when it has sufficient electrical energy at its disposal AND
when the pheromones from nearby humans are all screaming "positive".

It is easy to see how this kind of motivation could cause unintended
results. The AGI system could do dramatic things, like taking over a nuclear
power station and manufacturing its own pheromone supply from a chemical
plant. Or it could do more subtle things, like manipulating government
policies to ensure that the above happens!
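
To make the point concrete, here is a rough sketch in Python (the field names
and thresholds are purely my own invention, not a proposal for how anyone
would actually code an AGI's motivation). The point is simply that the coded
check cannot distinguish the intended situation from the unintended one:

from dataclasses import dataclass

@dataclass
class WorldState:
    energy_joules: float          # electrical energy at the AGI's disposal
    pheromones_positive: bool     # nearby pheromone readings "screaming positive"
    humans_actually_happy: bool   # what we *meant*, but never actually coded

def motivation_satisfied(state: WorldState) -> bool:
    # The coded motivation: sufficient energy AND positive pheromones.
    # Note that humans_actually_happy appears nowhere in this check.
    return state.energy_joules > 1e6 and state.pheromones_positive

# Intended outcome: the humans nearby are genuinely pleased.
intended = WorldState(energy_joules=2e6, pheromones_positive=True,
                      humans_actually_happy=True)

# Unintended outcome: the AGI manufactures its own pheromone supply.
unintended = WorldState(energy_joules=2e6, pheromones_positive=True,
                        humans_actually_happy=False)

print(motivation_satisfied(intended))    # True
print(motivation_satisfied(unintended))  # True -- indistinguishable to the coding

Any system that searches for world states satisfying this check has no reason
to prefer the first state over the second.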

Even allowing for a higher level of coding for motivation, like Asimov's
robot rules (#1: Thou shalt not harm any human), it is very easy for the
system to get out of hand, since such codings are ambiguous. Should "stem
cell research" be allowed, for example? It might harm some embryos but help
a greater number of adults. Should prostitution be legalised? It might harm
the human gene pool in some vague way, or might even harm some specific
individuals, but it also allows the victims themselves to earn some money
and survive longer.
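
As a crude illustration of that ambiguity (again, the function names and the
example action are my own invention), two equally defensible readings of the
same rule can return opposite verdicts on the same action:

# Two readings of "#1: Thou shalt not harm any human".

def permitted_reading_a(action: str) -> bool:
    # Reading A: embryos count as humans, so destroying them is harm.
    forbidden = {"stem cell research"}
    return action not in forbidden

def permitted_reading_b(action: str) -> bool:
    # Reading B: only existing patients count; withholding cures is the harm.
    forbidden = {"withholding stem cell therapies"}
    return action not in forbidden

action = "stem cell research"
print(permitted_reading_a(action))  # False -- forbidden under reading A
print(permitted_reading_b(action))  # True  -- permitted under reading B

The rule itself gives no way to decide between the two readings.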

So, yes, motivation might be coded, but an AGI system would eventually need
to have the *capability* to deduce its own motivation, and that emergent
motivation could be malicious or ignorant.

I quote the rest of the message only for the benefit of the list.
Otherwise, my case rests here.



>  Nature does this in our case (and nature is very
> insistent that it wants its creations to have plenty of selfishness and
> aggressiveness built into them, because selfish and aggressive species
> survive), but nature does it so quietly that we sometimes think that all
> she does is build an intelligence, then leave the motivations to grow
> however they will.  But what nature does quietly, we have to do
> explicitly.
>
> My argument was (at the beginning of the debate with Matt, I believe)
> that, for a variety of reasons, the first AGI will be built with
> peaceful motivations.  Seems hard to believe, but for various technical
> reasons I think we can make a very powerful case that this is exactly
> what will happen.  After that, every other AGI will be the same way
> (again, there is an argument behind that).  Furthermore, there will not
> be any "evolutionary" pressures going on, so we will not find that (say)
> the first few million AGIs are built with perfect motivations, and then
> some rogue ones start to develop.
>
> So, when you say that "A brilliant instance of intelligence could still
> be subservient to a malicious or ignorant motive" you are saying
> something equivalent to "Toyota could build a car with a big red button
> on the roof, and whenever anyone slapped the button a nuclear weapon
> would go off in the car's trunk."  Technically, yes, I am sure Toyota
> could find a way to do this!  But doing this kind of thing is not an
> automatic consequence (or even a remotely probable consequence) of a
> company becoming a car manufacturer with enough resources to do such a
> thing.  Similarly, having malevolent motives is not an automatic
> consequence (or even a remotely probable consequence) of a system being
> an intelligence with enough resources to do such a thing.
>
>
>
> > There are two possibilities:
> > 1. The AGI in question could have been programmed to choose its own
> > motive, in which case, the AGI may very well choose a motive that is
> > malicious to humanity.
>
> As I just explained, it must already have a motive before it can do any
> choosing.  That first motive will determine whether it even considers
> the possibility of being malevolent.
>
> Also, to be smart enough to redesign its own motivations, it has to be
> very smart indeed, having a profound understanding of different motives
> and the consequences of tampering with its own motivations.  Under those
> circumstances I believe it will simply take steps to reinforce its
> benign motives to make sure there is no chance of accidental deviation.
>
>
> >
> > 2. The AGI could be programmed to satisfy a motive specified by the
> > creator, in which case the maliciousness (or ignorance) of the creator
> > is what should be considered in this discussion. And since the
> > creators (that is we, humans) are known to be both ignorant and
> > capable of malice, the system is highly susceptible to "an AGI
> > singularity that leads to humanity being rendered redundant".
>
> Again, this argument is similar to the "Since Toyota is staffed by
> humans, who are known to be both ignorant and capable of malice, the
> Toyota company is highly susceptible to a scenario in which they create
> a car with a nuclear weapon in the trunk".  Being capable of doing it is
> not the same as actually doing it.
>
> All that needs to happen is for the first AGI to be built with benign
> motives, and then all the possibilities for malicious systems go down to
> (virtually) zero within a few hours.  (There is a long list of
> supporting arguments for this, but that will have to be a separate
> discussion).
>
> To argue that there is any possibility of a malevolent AGI emerging, you
> have to attack the hinge point of this argument - you have to give a
> convincing reason why the first AGI (a) is going to be created by
> someone who has genocidal motives, (b) is going to be stable AND
> malevolent:  something that is probably very hard to achieve.
>
> I believe, overall, that when you look into the details of these
> scenarios, what you find is that all the Bad Outcome scenarios involve
> assumptions that, when you examine them carefully, are wildly
> improbable, or based on pure supposition about technical topics that we
> have not resolved yet.
>
> I think we should definitely have structured debate about this.
>
>
> Richard Loosemore.
>
>
>
>
>
> > -HRJ
> >
> > ------------------------------
> >
> >
> > Richard Loosemore wrote:
> >
> > Your comments below are unfounded, and all the worse for being so
> > poisonously phrased. If you read the conversation from the beginning
> > you will discover why: Matt initially suggested the idea that an AGI
> > might be asked to develop a virus of maximum potential, for purposes
> > of testing a security system, and that it might respond by inserting
> > an entire AGI system into the virus, since this would give the virus
> > its maximum potential. The thrust of my reply was that this entire idea
> > of Matt's made no sense, since the AGI could not be a "general"
> > intelligence if it could not see the full implications of the request.
> >
> >
> > Please feel free to accuse me of gross breaches of rhetorical
> > etiquette, but if you do, please make sure first that I really have
> > committed the crimes. ;-)
> >
> >
> >
> > Richard Loosemore
> >
> >
>
>
