Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 2:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
   What is different in my theory is that it handles the case where the
   dominant structure turns unfriendly.  The core of my thesis is that the
   particular Friendliness that I/we are trying to reach is an attractor --
   which means that if the dominant structure starts to turn unfriendly, it
   is actually a self-correcting situation.

   Can you explain it without using the word "attractor"?

  Sure!  Friendliness is a state which promotes an entity's own goals;
  therefore, any entity will generally voluntarily attempt to return to that
  (Friendly) state since it is in its own self-interest to do so.

In my example it's also explicitly in the dominant structure's
self-interest to crush all opposition. You used the word "friendliness"
in place of "attractor".


   I can't see why a
   sufficiently intelligent system without brittle constraints should
   be unable to do that.

  Because it may not *want* to.  If an entity with Eliezer's view of
  Friendliness has its goals altered either by error or an exterior force, it
  is not going to *want* to return to the Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.


It doesn't explain the behavior, it just reformulates your statement.
You used the word "want" in place of "attractor".

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread Mark Waser
I've just carefully reread Eliezer's CEV
(http://www.singinst.org/upload/CEV.html), and I believe your basic idea
is realizable in Eliezer's envisioned system.


The CEV of humanity is only the initial dynamic, and is *intended* to be 
replaced with something better.


I completely agree with these statements.  It is Eliezer's current initial 
trajectory that I strongly disagree with (believe to be seriously 
sub-optimal) since it is in the OPPOSITE direction of where I see 
Friendliness.


Actually, on second thought, I disagree with your statement that "The CEV is
only the initial dynamic."  I believe that it is the final dynamic as well.
A better phrasing that makes my point is that "Eliezer's view of the CEV of
humanity is only the initial dynamic and is intended to be replaced with
something better."  My claim is that my view is something better/closer to
the true CEV of humanity.





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
  Sure!  Friendliness is a state which promotes an entity's own goals;
  therefore, any entity will generally voluntarily attempt to return to that
  (Friendly) state since it is in its own self-interest to do so.
 
 In my example it's also explicitly in the dominant structure's
 self-interest to crush all opposition. You used the word "friendliness"
 in place of "attractor".

While it is explicitly in the dominant structure's self-interest to crush all 
opposition, I don't believe that doing so is OPTIMAL except in a *vanishingly* 
small minority of cases.  I believe that such thinking is an error of taking 
the most obvious and provably successful/satisfiable (but sub-optimal) action 
FOR A SINGLE GOAL over a less obvious but more optimal action for multiple 
goals.  Yes, crushing the opposition works -- but it is *NOT* optimal for the 
dominant structure's long-term self-interest (and the intelligent/wise dominant 
structure is clearly going to want to OPTIMIZE its self-interest).

Huh?  I only used the word "Friendliness" as the first part of the definition, 
as in "Friendliness is . . . ."   I don't understand your objection.

  Because it may not *want* to.  If an entity with Eliezer's view of
  Friendliness has its goals altered either by error or an exterior force, it
  is not going to *want* to return to the Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.

 It doesn't explain the behavior, it just reformulates your statement.
 You used the word "want" in place of "attractor".

OK.  I'll continue to play . . . .  :-)

Replace "*want* to" with "*in its self-interest to do so*" and "not going to 
*want* to" with "*going to see that it is not in its self-interest*" to yield:
  Because it is not *in its self-interest to do so*.  If an entity with 
  Eliezer's view of Friendliness has its goals altered either by error or an 
  exterior force, it is *going to see that it is not in its self-interest* to 
  return to the Eliezer-Friendliness goals since they are not in the entity's 
  own self-interest.
Does that satisfy your objections?



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 8:13 PM, Mark Waser [EMAIL PROTECTED] wrote:


    Sure!  Friendliness is a state which promotes an entity's own goals;
    therefore, any entity will generally voluntarily attempt to return to that
    (Friendly) state since it is in its own self-interest to do so.
  
   In my example it's also explicitly in the dominant structure's
   self-interest to crush all opposition. You used the word "friendliness"
   in place of "attractor".

 While it is explicitly in the dominant structure's self-interest to crush all
 opposition, I don't believe that doing so is OPTIMAL except in a
 *vanishingly* small minority of cases.  I believe that such thinking is an
 error of taking the most obvious and provably successful/satisfiable (but
 sub-optimal) action FOR A SINGLE GOAL over a less obvious but more optimal
 action for multiple goals.  Yes, crushing the opposition works -- but it is
 *NOT* optimal for the dominant structure's long-term self-interest (and the
 intelligent/wise dominant structure is clearly going to want to OPTIMIZE
 its self-interest).

 Huh?  I only used the word "Friendliness" as the first part of the definition,
 as in "Friendliness is . . . ."   I don't understand your objection.


Terms of the game are described here:
http://www.overcomingbias.com/2008/02/taboo-words.html

What I'm trying to find out is what your alternative is and why it is
more optimal than crush-them-all.

My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by the next big thing that takes over. When I'm
in a position to prevent that from ever happening, why is the
friendliness-thing still relevant?

The objective of the taboo game is to avoid saying things like "the
friendliness-thing will be preferred because it's an attractor", or "because
it's more optimal", or "because it's in the system's self-interest", and to
actually explain why that is the case. For now, I see crush-them-all as a
pretty good solution.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Tim Freeman
From: Mark Waser [EMAIL PROTECTED]:
Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well
duh" land, b) I'm so totally off the mark that I'm not even worth
replying to, or c) (I hope) I'm being given enough rope to hang myself.
:-)

I'll read the paper if you post a URL to the finished version, and I
somehow get the URL.  I don't want to sort out the pieces from the
stream of AGI emails, and I don't want to try to provide feedback on
part of a paper.

-- 
Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]



Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.

On 03/09/2008 10:20 AM, Mark Waser wrote:
My claim is that my view is something better/closer to the true CEV 
of humanity.




Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize that your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have
a concrete proposal written up somewhere in a reasonably compact
format, I'll read it and comment.

-- Ben G

On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman [EMAIL PROTECTED] wrote:
 From: Mark Waser [EMAIL PROTECTED]:

 Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well
  duh" land, b) I'm so totally off the mark that I'm not even worth
  replying to, or c) (I hope) I'm being given enough rope to hang myself.
  :-)

  I'll read the paper if you post a URL to the finished version, and I
  somehow get the URL.  I don't want to sort out the pieces from the
  stream of AGI emails, and I don't want to try to provide feedback on
  part of a paper.

  --
  Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by the next big thing that takes over.


My friendliness-thing is that I believe that a sufficiently intelligent 
self-interested being who has discovered the f-thing or had the f-thing 
explained to it will not crush me because it will see/believe that doing so 
is *almost certainly* not in its own self-interest.


My strategy is to define the f-thing well enough that I can explain it to 
the next big thing so that it doesn't crush me.



When I'm
in a position to prevent that from ever happening, why is the
friendliness-thing still relevant?


Because you're *NEVER* going to be sure that you're in a position where you 
can prevent that from ever happening.



For now, I see crush-them-all as a pretty good solution.


Read Part 4 of my stuff (just posted).  Crush-them-all is a seriously 
sub-optimal solution even if it does clearly satisfy one goal, since it 
can easily CAUSE your butt to get kicked later.





Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread Mark Waser
Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize that your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?


If it immediately found my approach, I would like to think that it would do 
so (recognize that it is better and replace Eliezer's CEV with mine).


Unfortunately, if it doesn't immediately find/evaluate my approach, it might 
traverse some *really* bad territory while searching (with the main problem 
being that I perceive the proportionality attractor as being on the uphill 
side of the revenge attractor and Eliezer's initial CEV as being downhill 
of all that).





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
OK.  Sorry for the gap/delay between parts.  I've been doing a substantial 
rewrite of this section . . . .

Part 4.

Despite all of the debate about how to *cause* Friendly behavior, there's 
actually very little debate about what Friendly behavior looks like.  Human 
beings have had the concept of Friendly behavior for quite some time.  
It's called ethics.

We've also been grappling with the problem of how to *cause* Friendly/ethical 
behavior for an equally long time under the guise of making humans act 
ethically . . . .

One of the really cool things that I enjoy about the Attractor Theory of 
Friendliness is that it has *a lot* of explanatory power for human behavior 
(see the next Interlude) as well as providing a path for moving humanity to 
Friendliness (and we all do want all *other* humans, except for ourselves, to 
be Friendly -- don't we?  :-)

My personal problem with, say, Jef Albright's treatises on ethics is that he 
explicitly dismisses self-interest.  I believe that his view of ethical 
behavior is generally more correct than that of the vast majority of people -- 
but his justification for ethical behavior is merely that such behavior is 
"ethical" or "right".  I don't find that tremendously compelling.

Now -- my personal self-interest . . . . THAT I can get behind.  Which is the 
beauty of the Attractor Theory of Friendliness.  If Friendliness is in my own 
self-interest, then I'm darn well going to get Friendly and stay that way.  So, 
the constant question for humans is "Is ethical behavior on my part in the 
current circumstances in *my* best interest?"  So let's investigate that 
question some . . . . 
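
To make the "attractor" language a little more concrete, here is a purely 
illustrative Python sketch (my own toy payoff function and made-up numbers, 
not any real system).  The only assumption baked in is the one I keep 
asserting -- that an entity's long-term self-interest peaks at the Friendly 
state.  Granted that assumption, a self-interested optimizer that gets 
knocked away from Friendliness climbs right back to it, which is all I mean 
by calling Friendliness an attractor:

# Toy sketch (illustration only): treat "degree of Friendliness" as a single
# number f in [0, 1] and *assume* the entity's long-term self-interest U(f)
# peaks at full Friendliness (f = 1).  Under that assumption, a naive
# self-interested hill-climber perturbed away from f = 1 returns to it.

def self_interest(f):
    # Assumed payoff: cooperation benefits grow with f, sanction/retaliation
    # costs grow as f falls.  The coefficients are made-up illustration values.
    return 10.0 * f - 15.0 * (1.0 - f) ** 2

def step_toward_self_interest(f, eps=0.01, lr=0.05):
    # One step of naive hill-climbing on the entity's own payoff.
    gradient = (self_interest(f + eps) - self_interest(f - eps)) / (2 * eps)
    return min(1.0, max(0.0, f + lr * gradient))

f = 0.2  # start well away from Friendliness (e.g. after an error or attack)
for _ in range(200):
    f = step_toward_self_interest(f)
print(round(f, 3))  # converges back toward 1.0 under the assumed payoff

The whole argument, of course, is over whether that assumption about the 
payoff actually holds -- which is what the rest of this part is about.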

It is to the advantage of Society (i.e. the collection of everyone else) to 
*make* me be Friendly/ethical, and Society is pretty darn effective at it -- to 
the extent that there are only two cases/circumstances where 
unethical/UnFriendly behavior is still in my best interest:
  a) where society doesn't catch me being unethical/unFriendly, OR 
  b) where society's sanctions don't/can't successfully outweigh my 
     self-interest in a particular action.
Note that Vladimir's "crush all opposition" falls under the second case, since 
there are effectively no sanctions when society is destroyed.

But why is Society (or any society) the way that it is, and how did/does it come 
up with the particular ethics that it did/does?  Let's define a society as a 
set of people with common goals that we will call that society's goals.  And 
let's start out with a society with a trial goal of "Promote John's goals."  
Now, John could certainly get behind that, but everyone else would probably drop 
out as soon as they realized that they were required to grant John's every whim 
-- even at the expense of their deepest desires -- and the society would 
rapidly end up with exactly one person -- John.  The societal goal of "Don't 
get in the way of John's goals" is somewhat easier for other people and might 
not drive *everyone* away -- but I'm sure that any intelligent person would 
still defect towards a society that most accurately represented *their* goals.  
Eventually, you're going to get down to "Don't mess with anyone's goals," be 
forced to add the clause "unless absolutely necessary," and then have to fight 
over what "absolutely necessary" means.  But what we've got here is what I 
would call the goal of a Friendly society -- "Don't mess with anyone's goals 
unless absolutely necessary" -- and I would call this a huge amount of progress.

If we (as individuals) could recruit everybody *ELSE* to this society (without 
joining ourselves), the world would clearly be a much, much better place for 
us.  It is obviously in our enlightened self-interest to do this.  *BUT* (and 
this is a huge one), the obvious behavior of this society would be to convert 
anybody that it can and kick the ass of anyone not in the society (but only to 
the extent to which they mess with the goals of the society since doing more 
would violate the society's own goal of not messing with anyone's goals).

So, the question is -- Is joining such a society in our self-interest?

To the members of any society, our not joining clearly is a result of our 
believing that our goals are more important than that society's goals.  In the 
case of the Friendly society, it is a clear signal of hostility, since they are 
willing to not interfere with our goals as long as we don't interfere with 
theirs -- and we are not willing to sign up to that (i.e. we're clearly 
signaling our intention to mess with them).  The success of the optimistic 
tit-for-tat algorithm shows that the best strategy for deterring an undesired 
behavior is to respond in direct proportion to that behavior.  Thus, any entity 
who knows about Friendliness and does not become Friendly should *expect* that 
the next Friendly entity to come along that is bigger than it *WILL* kick its 
ass in direct proportion to its unFriendliness, to maintain the effectiveness 
of the deterrent.
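
To ground the tit-for-tat point, here is a minimal iterated prisoner's 
dilemma sketch in Python.  The payoff matrix is the standard textbook one 
(T=5, R=3, P=1, S=0); the tournament loop is my own simplification of 
Axelrod-style experiments, and I show plain tit-for-tat for brevity (the 
"optimistic"/generous variant additionally forgives an occasional defection).  
The point is simply that tit-for-tat retaliates in exact proportion to the 
defection it receives, and that always-defecting buys almost nothing against it:

# Minimal iterated prisoner's dilemma with standard payoffs (T=5, R=3, P=1, S=0).
# Tit-for-tat cooperates first, then simply echoes the opponent's last move,
# so its retaliation is exactly proportional to the defection it receives.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains almost nothing

Against a fellow cooperator, tit-for-tat never throws the first punch; against 
a pure defector, it loses only the opening round and then matches every 
defection one-for-one -- which is exactly the proportional deterrence I'm 
appealing to above.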

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 12:35 AM, Mark Waser [EMAIL PROTECTED] wrote:

  Because you're *NEVER* going to be sure that you're in a position where you
  can prevent that from ever happening.


That's a current point of disagreement then. Let's iterate from here.
I'll break it up this way:

1) If I physically destroy every other intelligent thing, what is
going to threaten me?

2) Given 1), if something does come along, what is going to be a
standard of friendliness? Can I just say "I'm friendly. Honest." and
be done with it, avoiding annihilation? History is rewritten by the
victors.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

1) If I physically destroy every other intelligent thing, what is
going to threaten me?


Given the size of the universe, how can you possibly destroy every other 
intelligent thing (and be sure that no others ever successfully arise 
without you crushing them too)?


Plus, it seems like an awfully lonely universe.  I don't want to live there 
even if I could somehow do it.



2) Given 1), if something does come along, what is going to be a
standard of friendliness? Can I just say "I'm friendly. Honest." and
be done with it, avoiding annihilation? History is rewritten by the
victors.


These are good points.  The point of my thesis is exactly to define what the 
standard of Friendliness is.  It's just taking me a while to get there because 
there's *A LOT* of groundwork first (which is what we're currently hashing 
over).


If you're smart enough to say "I'm friendly.  Honest." and smart enough to 
successfully hide the evidence from whatever comes along, then you will 
avoid annihilation (for a while, at least).  The question is -- are you 
truly sure that you aren't being watched at this very moment, sure enough 
that you believe avoiding the *VERY* minor burden of Friendliness is worth 
courting annihilation?


Also, while history is indeed rewritten by the victors, subsequent 
generations frequently do dig further and successfully unearth the truth. 
Do you really want to live in perpetual fear that maybe you didn't 
successfully hide all of the evidence?  It seems to me to be a pretty high 
cost for unjustifiably crushing-them-all.


Also, if you crush them all, you can't have them later for allies, friends, 
and co-workers.  It just doesn't seem like a bright move unless you truly 
can't avoid it. 





Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
  1) If I physically destroy every other intelligent thing, what is
  going to threaten me?
 
 Given the size of the universe, how can you possibly destroy every other 
 intelligent thing (and be sure that no others ever successfully arise 
 without you crushing them too)?

You'd have to be a closed-world-assumption AI written in Prolog, I imagine.



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Nathan Cravens
Pack your bags foaks, we're headed toward damnation and hellfire! haha!

Nathan

On Sun, Mar 9, 2008 at 7:10 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:

 On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
   1) If I physically destroy every other intelligent thing, what is
   going to threaten me?
 
  Given the size of the universe, how can you possibly destroy every other
  intelligent thing (and be sure that no others ever successfully arise
  without you crushing them too)?

 You'd have to be a closed-world-assumption AI written in Prolog, I
 imagine.





Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.

On 03/09/2008 02:43 PM, Mark Waser wrote:
Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize that your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?


If it immediately found my approach, I would like to think that it 
would do so (recognize that it is better and replace Eliezer's CEV 
with mine).


Unfortunately, if it doesn't immediately find/evaluate my approach, it 
might traverse some *really* bad territory while searching (with the 
main problem being that I perceive the proportionality attractor as 
being on the uphill side of the revenge attractor and Eliezer's 
initial CEV as being downhill of all that).


It *might* get stuck in bad territory, but can you make an argument why 
there is a *significant* chance of that happening? Given that humanity 
has many times expanded the set of 'friendlies deserving friendly 
behavior', it seems an obvious candidate for further research. And of 
course, those smarter, better, more ... ones will be in a better 
position than us to determine that.


One thing that I think most of us will agree on is that if things did work 
as Eliezer intended, things certainly could go very wrong if it turns 
out that the vast majority of people -- when smarter, more the people 
they wish they could be, as if they grew up more together ... -- are 
extremely unfriendly in approximately the same way (so that their 
extrapolated volition is coherent and may be acted upon). Our 
meanderings through state space would then head into very undesirable 
territory. (This is the "people turn out to be evil and screw it all up" 
scenario.) Your approach suffers from a similar weakness though, since 
it would suffer under the "seemingly friendly people turn out to be evil 
and screw it all up before there are non-human intelligent friendlies to 
save us" scenario.



Which, if either, of 'including all of humanity' rather than just 
'friendly humanity', or 'excluding non-human friendlies (initially)' do 
you see as the greater risk? Or is there some other aspect of Eliezer's 
approach that especially concerns you and motivates your alternative 
approach?


Thanks for continuing to answer my barrage of questions.

joseph
