--- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Won't work, Moore's law is ticking, and one day a morally arbitrary
> self-improving optimization will go FOOM. We have to try.

I wish I had a response to that. I wish I could believe it was even possible. 
To me, this is like saying "we have to try to build an anti-H-bomb device 
before someone builds an H-bomb."  Oh, and the anti-H-bomb looks exactly like 
the H-bomb. It just behaves differently. We have to try, right?

> Given the psychological unity of humankind, giving the focus of
> "right" to George W. Bush personally will be enormously better for
> everyone than going in any direction assumed by AI without the part
> of Friendliness structure that makes it absorb the goals from
> humanity. CEV is an attempt to describe how to focus AI on humanity
> as a whole, rather than on a specific human.

Psychological unity of humankind?!  What of suicide bombers and biological 
weapons and all the other charming ways we humans have of killing one another?  
If giving an FAI to George Bush, or Barack Obama, or any other political 
leader, is your idea of Friendliness, then I have to wonder about your grasp of 
human nature. It is impossible to see how that technology would not be used as 
a weapon. 
 
> And you are assembling the H-bomb (err, evolved intelligence) in the
> garage just out of curiosity, and occasionally to use it as a tea
> table, all the while advocating global disarmament.

That's why I advocate limiting the scope and power of any such creation, which 
is possible because it's simulated, and not RSI (recursively self-improving).
 
> > The question is whether it's possible to know in advance that a
> > modification won't be unstable, within the finite computational
> > resources available to an AGI.
> 
> If you write something redundantly 10^6 times, it won't all just
> spontaneously *change*, in the lifetime of the universe. In the worst
> case, it'll all just be destroyed by some catastrophe or another, but
> it won't change in any interesting way.

You lost me there - not sure how that relates to "whether it's possible to know 
in advance that a modification won't be unstable, within the finite 
computational resources available to an AGI."
 
> > With the kind of recursive scenarios we're talking about, simulation
> > is the only way to guarantee that a modification is an improvement,
> > and an AGI simulating its own modified operation requires
> > exponentially increasing resources, particularly as it simulates
> > itself simulating itself simulating itself, and so on for N future
> > modifications.
> 
> Again, you are imagining an impossible or faulty strategy, pointing
> to this image, and saying "don't do that!". Doesn't mean there is no
> good strategy.

What was faulty or impossible about what I wrote? 
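To spell the resource argument out concretely (a rough back-of-the-envelope 
sketch in Python, with a made-up slowdown factor, not something from the 
thread): if each layer of self-simulation runs some constant factor k > 1 
slower than running natively, then vetting N future modifications by simulating 
yourself simulating yourself, and so on, costs on the order of k^N.

# Sketch only: 'slowdown' is a hypothetical per-layer overhead factor.
# The exact value doesn't matter; any factor greater than 1 makes the
# total cost exponential in the nesting depth.
def nested_sim_cost(levels, slowdown=2.0):
    return slowdown ** levels

for n in range(1, 7):
    print(n, nested_sim_cost(n))   # prints 2.0, 4.0, ... 64.0: exponential in n

Whatever the constant per layer turns out to be, the cost grows exponentially 
in the number of nested modifications, which is the point I was making.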

Terren