Jef Allbright wrote:

[snip]

Jef, I accept that you did not necessarily introduce any of the confusions that I dealt with in the snipped section above; but your question was ambiguous enough that many people might have done so, so I was just covering all the bases, not asserting that you had made those mistakes.

But the second mistake is more subtle.  There is no good reason to
suppose that it will come to its own decisions about whether to nanny,
or what kind of nannying to do.  WE get to choose, when we design it,
what kind of motivations lie behind any nannying behavior.

Here I think it's apparent that you don't grasp the pragmatics of
meaning within an expanding context.  Motivations are meaningful only
within context.  As the context evolves, in a necessarily
unpredictable way, it is increasingly unlikely that meaning will be
preserved.  This is separate from your valid point about the
robustness of a complex system.

I strongly disagree with this, because you have taken the concept of "motivations" that I am trying to introduce and reverted it _back_ to an interpretation of semantics in which a heavily loaded phrase like "the pragmatics of meaning within an expanding context" has some meaning.

This is not fair, because I am challenging the very concept of semantics (and by extension, pragmatics) that is buried in the orthodox narrative in this area.

I fully understand the pragmatics of meaning within an expanding context: but that entire interpretation of what meaning is, is simply not applicable here.

In fact, I think you are (unintentionally) making a point about the systems that I am targeting as inadequate (both for AGI as a whole, and for the task of guaranteeing friendliness). It is precisely because the meaning of a motivation would drift as a result of context in those situations that I am proposing an alternative.

I submit that you are not understanding how the feedback between context and motivational system actually works in the system I propose: the thing that defines what the AGI regards as "empathy toward the human race" is the collective effect of the initially learned constraints AND the ongoing development of society's ideas about what it wants to have in the way of 'empathy'... because of the way the system is designed, these two develop in lockstep.

So, sure, if the entire human race eventually develops so that it decides (all of it) that it wants to sublime into the next dimension (cf Iain M Banks), then by that stage the AGI will have evolved in lockstep with humankind and will allow it.

So, it would evolve with context, but only in accordance with people's wishes: you cannot name a single force that would cause it to deviate from that synchronisation.
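
To make the "lockstep" point concrete, here is a deliberately toy sketch (in Python). Every name in it (EmpathyModel, reconcile, societal_consensus) is an illustrative placeholder of my own, not a description of the real architecture; the only thing it is meant to show is that the working definition of "empathy" is re-grounded, at every step, in both the initially learned constraints and the current state of humankind's own wishes:

def reconcile(constraints, consensus):
    # Toy rule: the working target is society's current consensus,
    # filtered through the originally learned constraints.
    return {k: consensus.get(k, v) for k, v in constraints.items()}

class EmpathyModel:
    def __init__(self, initial_constraints):
        self.constraints = dict(initial_constraints)    # learned at design time
        self.current_target = dict(initial_constraints)

    def update(self, societal_consensus):
        # The definition of empathy moves only as far as, and in the same
        # direction as, society's own evolving ideas: the two stay in step
        # because one is defined in terms of the other.
        self.current_target = reconcile(self.constraints, societal_consensus)

If humankind's consensus changes (even as far as the subliming scenario above), the motivational target changes with it; there is no third term in the loop that could pull the two apart.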

It is not enough for you to make vague accusations about my not understanding pragmatics (please!). You gotta come down to cases and say what force would pull it away from a synchronisation with the human race and its desires.




The main thrust of my argument about motivational systems that are
stable is that we do have precise control over the motivations.

Which shifts the question to this:  given that we can control the
motivations that lie behind the nannying, can we design them in such a
way that they satisfy our requirements and are not overbearing?

I think we can easily do that.

First, consider that most of the things that drive us to do things that
are for our children's own good do not apply. There is no urgency to
get things right before we grow up. There are no hang-ups (like my
wanting to play a musical instrument when I was young). There is no need
to worry about other kids competing with us and making us feel bad as we
grow up.

The more you think about it, the more you realize that all the drivers
that would cause a system (human or AGI) to impose on its protectees
simply would not be there.

Second, the AGI would be driven by general concerns about empathy with
the needs of the human race, so it would be motivated by something far
more subtle and flexible than "Make sure these kids grow up right."  It
would get its kicks from the general satisfaction of needs and wants on
a case by case basis.  If someone says "Let me live free and take my own
risks, even if that means I might accidentally kill myself", the AGI
would not be crudely and stupidly programmed to override that and say to
itself "Stupid human:  I'll save it when it is in mortal peril, and it
will thank me afterwards"; it would say "These creatures are grown-ups:
if that is what this person wants, so be it."
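
In caricature (and this is only my caricature, with made-up names, not the actual motivational mechanism), the intervention logic would look more like this than like a hard-coded override:

def should_intervene(person, situation):
    # Toy illustration only: the driving motivation is the satisfaction of
    # the person's own expressed needs and wants, so an explicit "let me
    # take my own risks" preference is respected, not overridden.
    if person.get("expressed_preference") == "let me take my own risks":
        return False   # "These creatures are grown-ups: so be it."
    # Otherwise, help only when help is actually wanted and possible.
    return situation.get("wants_help", False) and situation.get("agi_can_help", False)

The point is not the code, of course, but its shape: the check on the person's expressed wishes comes before, and dominates, any paternalistic impulse.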

Actually, when you think this through you find that any attempt to
optimize a particular outcome ultimately incurs an increased cost in
terms of local entropy.  The best one can do (with anything less than
infinite computing power) is act to implement best-known principles,
not preferred outcomes.  What you seem to fail to realize, is that the
greater understanding of principles, under conditions of significantly
asymmetrical intelligence, is bound to displease the lesser
entit(y|ies), who will in all sincerity see its actions as "bad"
(against their perceived interests.)

This is why I emphasize the merits of (lower-case) friendly machine
intelligence assisting humans only at the upcoming phase of our
development.

But there are so many questionable assumptions in this!

(As well as assertions that there are things that I obviously don't understand, or fail to realize: I could do without those.)

Increased cost in terms of local entropy?  Huh?

How can you assert that there will be a "greater understanding of principles [by the AGI, that will be] bound to displease the lesser entities"? That is a naked and unjustified assertion. Comes out of thin air. Why do I have to believe that for a minute?



Richard Loosemore




