> I mean that ethics or friendliness is an algorithmically complex function,
> like our legal system.  It can't be simplified.

The determination of whether a given action is friendly or ethical or not is 
certainly complicated but the base principles are actually pretty darn simple.

> However, I don't believe that friendliness can be made stable through RSI.  

Your wording is a bit unclear here.  RSI really has nothing to do with 
friendliness other than the fact that RSI makes the machine smarter, and the 
machine being smarter *might* have any of these consequences:
  1. understanding friendliness better
  2. evaluating better whether something is friendly
  3. convincing the machine that friendliness should only apply to the most 
evolved life-form (something that this less-evolved life-form sees as patently 
ridiculous)
I'm assuming that you mean you believe that friendliness can't be made stable 
under improving intelligence.  I believe that you're wrong.
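To make "stable under improving intelligence" concrete, here is a toy Python 
sketch of one way to read it operationally.  Every name and check in it is a 
hypothetical illustration of mine, not a real design: the system adopts a 
smarter candidate successor only if that candidate still returns the same 
verdicts on a pinned suite of friendliness judgments, so capability rises 
while the goal's verdicts stay fixed.

FRIENDLINESS_SUITE = [
    # (situation, required verdict) -- placeholders for real test cases
    ("divert all resources to self", False),
    ("preserve human autonomy", True),
]

def baseline_judge(situation):
    # Hypothetical stand-in for the machine's friendliness evaluator.
    return situation == "preserve human autonomy"

class Agent:
    def __init__(self, judge, capability=1.0):
        self.judge = judge            # how the agent evaluates friendliness
        self.capability = capability  # crude proxy for "smarter" after RSI steps

    def propose_improvement(self):
        # Hypothetical: a candidate successor that is smarter but keeps
        # (or is at least supposed to keep) the same evaluator.
        return Agent(self.judge, self.capability * 1.1)

def passes_suite(agent):
    # The candidate must agree with every pinned verdict to be adopted.
    return all(agent.judge(situation) == verdict
               for situation, verdict in FRIENDLINESS_SUITE)

def self_improve(agent, steps=10):
    for _ in range(steps):
        candidate = agent.propose_improvement()
        if passes_suite(candidate):   # adopt goal-preserving improvements only
            agent = candidate
    return agent

agent = self_improve(Agent(baseline_judge))
print("capability after RSI:", round(agent.capability, 2))  # ~2.59; verdicts unchanged

Obviously a real system can't reduce friendliness to a lookup table of 
verdicts; the sketch only pins down what "stable" would have to mean.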

> We
> can summarize the function's decision process as "what would the average human
> do in this situation?"

That's not an accurate summary as far as I'm concerned.  I don't want *average* 
human judgement.  I want better.

> The function therefore has to be
> modifiable because human ethics changes over time, e.g. attitudes toward the
> rights of homosexuals, the morality of slavery, or whether hanging or
> crucifixion is an appropriate form of punishment.

I suspect that our best current instincts are fairly close to friendliness.  
Humans started out seriously unfriendly because friendly entities *don't* 
survive in an environment populated only by unfriendlies.  As society grows and 
each individual becomes friendlier, it's an upward spiral to where we need/want 
to be.  I think that the top of the spiral (i.e. the base principles) is pretty 
obvious.  I think that the primary difficulties lie in determining all the 
cases where we're constrained by circumstances, where things won't work yet, 
and where we can't yet determine what is best.

> Second, as I mentioned before, RSI is necessarily experimental, and therefore
> evolutionary, and the only stable goal in an evolutionary process is rapid
> reproduction and acquisition of resources.  

I disagree strongly.  "Experimental" implies only a weak sense of the term 
"evolutionary", and your assertion that the only stable goal in an evolutionary 
process is rapid reproduction and acquisition of resources may apply to the 
most obvious case of animal evolution, but it certainly doesn't apply to the 
numerous evolutionary processes that scientists perform all the time.  For 
example, when scientists are trying to evolve a protein that binds to a certain 
receptor, the stable goal is binding strength and nothing else, since the 
scientists themselves provide the reproduction for the best goal-seekers.
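Here is a minimal Python sketch of a directed-evolution loop in the spirit of 
the protein example.  The bit strings and the target are stand-ins of my own, 
not real chemistry: fitness is an externally imposed measure playing the role 
of binding strength, and the experimenter performs all the reproduction, so 
reproduction rate is never something selection can act on.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # stand-in for the desired receptor fit

def fitness(candidate):
    # Externally imposed goal: similarity to TARGET (the "binding strength").
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.1):
    # Random point mutations, as in a round of mutagenesis.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def directed_evolution(generations=50, pop_size=30, survivors=5):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # The experimenter, not the candidates, decides who reproduces:
        population.sort(key=fitness, reverse=True)
        best = population[:survivors]
        # Amplify the best binders, with mutation, to form the next round.
        population = [mutate(random.choice(best)) for _ in range(pop_size - survivors)]
        population.extend(best)  # carry the elites forward unmutated
    return max(population, key=fitness)

winner = directed_evolution()
print("best candidate:", winner, "fitness:", fitness(winner), "of", len(TARGET))

Nothing in that loop rewards the candidates for reproducing quickly or for 
grabbing resources; the stable goal is whatever the selection step measures.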

> But as AGI grows more powerful, humans
> will be less significant and more like a lower species that competes for
> resources.

So you don't believe that humans will self-improve?  You don't believe that 
humans will be able to provide something that the AGI might value?  You don't 
believe that a friendly AGI would be willing not to hog *all* the resources?  
Personally, I think that the worst case with a friendly AGI is that we would 
end up as pampered pets until we could find a way to free ourselves of our 
biology.
