On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Given the psychological unity of humankind, giving the focus of
>> "right" to George W. Bush personally will be enormously better for
>> everyone than going in any direction assumed by AI without the part
>> of Friendliness structure that makes it absorb the goals from
>> humanity. CEV is an attempt to describe how to focus AI on humanity
>> as a whole, rather than on a specific human.
>
> Psychological unity of humankind?! What of suicide bombers and
> biological weapons and all the other charming ways we humans have of
> killing one another? If giving an FAI to George Bush, or Barack Obama,
> or any other political leader, is your idea of Friendliness, then I
> have to wonder about your grasp of human nature. It is impossible to
> see how that technology would not be used as a weapon.

(Assuming you read my reply in the "What is Friendly AI?" post.)

Did you read the part about al-Qaeda programmers in CEV? The design of
the initial dynamics of FAI needs to be good enough that it can be
bootstrapped from a rather deviant group of people and still turn out
right. You don't need to include the best achievements of the last few
thousand years in the core dynamics that will define us for the
billions of years to come. Those achievements are factual information,
and they won't be lost anyway. The only thing you need to get right
the first time is the reference to the right concept, which can then
unfold from there, and I believe there is little to add to this core
dynamics by specifying a particular human with particular qualities or
knowledge. It exists at the panhuman level, in evolutionarily
programmed complexity that is pretty much the same in every one of us.
Now, what *is* important is getting the initial dynamics right, which
might require a great deal of knowledge and understanding of the
bootstrapping process and of the concept of "right" itself.


>> > The question is whether it's possible to know in advance that a
>> > modification won't be unstable, within the finite computational
>> > resources available to an AGI.
>>
>> If you write something redundantly 10^6 times, it won't all just
>> spontaneously *change*, in the lifetime of the universe. In the worst
>> case, it'll all just be destroyed by some catastrophe or another, but
>> it won't change in any interesting way.
>
> You lost me there - not sure how that relates to "whether it's
> possible to know in advance that a modification won't be unstable,
> within the finite computational resources available to an AGI."

You may be unable to know whether an alien artifact X will explode in
the next billion years, but you can build your own artifact that
pretty definitely won't.
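
To put rough numbers on the redundancy point, here is a purely
illustrative back-of-envelope sketch; the per-copy corruption rate and
the timescale are assumptions I'm inventing for the sake of the
estimate, not real figures:

    # Back-of-envelope: chance that *all* redundant copies of a value
    # are spontaneously corrupted within the lifetime of the universe.
    # The per-copy flip rate is an assumed, illustrative figure.
    import math

    copies = 10**6              # redundant copies of the protected content
    p_flip_per_year = 1e-15     # assumed corruption rate per copy per year
    years = 1e10                # rough lifetime of the universe, in years

    # Probability a single copy is ever corrupted (small-p union bound).
    p_single = min(1.0, p_flip_per_year * years)   # ~1e-5

    # Log-probability that every copy is independently corrupted.
    log10_all = copies * math.log10(p_single)
    print("log10 P(all copies corrupted) ~", log10_all)   # ~ -5,000,000

Even with a generous per-copy error rate, the log-probability of every
copy changing comes out around minus five million, which is what I
mean by "won't change in the lifetime of the universe".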

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

