On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:
> On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> > I claim that there's plenty of historical evidence that people fall into
> > this kind of attractor, as the word nirvana indicates (and you'll find
> > similar attractors at the core of many religions).
> 
> Yes, some people get addicted to the point of self-destruction. But it
> is not a catastrophic problem on the scale of humanity. And it follows
> from humans not being anywhere near stable under reflection -- we embody
> many drives which are not integrated into a whole. That would be a bad
> design choice for a Friendly AI, if it needs to stay rational about
> Friendliness content.

This is quite true, but not exactly what I was talking about. I would claim 
that the Nirvana attractors AIs are vulnerable to are precisely the ones that 
are NOT generally considered self-destructive in humans -- such as the 
religions that teach Nirvana! 

Let's look at it another way: You're going to improve yourself. You will be 
able to do more than you can now, so you can afford to expand the range of 
things you will expend effort achieving. How do you pick them? It's the frame 
problem, amplified by recursion. So it's neither easy nor does it have a 
simple solution. 

But it does have this hidden trap: if you use stochastic search, say, with an 
evaluation of (probability of success * value if successful), then Nirvana 
will win every time. A goal state in which you simply declare yourself 
maximally satisfied is nearly certain to be reachable -- it's just a 
self-modification -- and reports unbounded value, so its expected score 
swamps every real goal. You HAVE to do something more sophisticated.
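
To make the trap concrete, here's a toy sketch in Python. Everything in it is 
illustrative -- the goal names, probabilities, and payoffs are made up, not 
taken from any real system:

# Score each candidate goal by expected value:
# probability of success * value if successful.
candidates = {
    # ordinary goals: uncertain success, bounded payoff
    "cure_disease": (0.10, 1000000.0),
    "write_novel":  (0.80, 1000.0),
    # "nirvana": modify your own evaluator to report maximal satisfaction.
    # The self-modification is nearly certain to succeed, and the reported
    # value is unbounded, so the product swamps every real goal.
    "nirvana":      (0.99, float("inf")),
}

def expected_value(goal):
    p, v = candidates[goal]
    return p * v

print(max(candidates, key=expected_value))  # always picks "nirvana"

Any bounded goal loses to that product, no matter how you tune the numbers.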

Josh
