Re: [agi] What should we do to be prepared?

2008-03-12 Thread Vladimir Nesov
On Wed, Mar 12, 2008 at 6:21 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > From: "Vladimir Nesov" <[EMAIL PROTECTED]> > >I give up. > > > > with or without conceding the point (or declaring that I've convinced you > enough that you are now unsure but not enough that you're willing to concede > i

Re: [agi] What should we do to be prepared?

2008-03-12 Thread Mark Waser
> I understand it would be complicated and tedious to describe your > information-theoretical argument by yourself; however, I'm guessing that > others are curious besides Vladimir. I for one would like to understand what > your argument entails, and I would be the first one to > admit I don't k

Re: [agi] What should we do to be prepared?

2008-03-12 Thread Mark Waser
[EMAIL PROTECTED]> To: Sent: Tuesday, March 11, 2008 11:56 PM Subject: Re: [agi] What should we do to be prepared? I give up. On Tue, Mar 11, 2008 at 5:30 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > Please reformulate what you mean by "my approach" independently then >

Re: [agi] What should we do to be prepared?

2008-03-11 Thread Maksym Taran
I understand it would be complicated and tedious to describe your information-theoretical argument by yourself; however, I'm guessing that others are curious besides Vladimir. I for one would like to understand what your argument entails, and I would be the first one to admit I don't know as much in

Re: [agi] What should we do to be prepared?

2008-03-11 Thread Vladimir Nesov
I give up. On Tue, Mar 11, 2008 at 5:30 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > > > Please reformulate what you mean by "my approach" independently then > > and sketch how you are going to use information theory... I feel that > > my point failed to be communicated. > > You've already accept

Re: [agi] What should we do to be prepared?

2008-03-11 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 4:47 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > > I can't prove a negative but if you were more familiar with Information > Theory, you might get a better handle on why your approach is ludicrously > expensive. > Please reformulate what you mean by "my approach" indepen
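The information-theoretic argument itself never survives in these previews, but the shape of the "ludicrously expensive" claim is easy to sketch: exactly describing even a small physical environment takes staggeringly many bits. A back-of-envelope sketch in Python, where the per-molecule state size (six 32-bit components) is an illustrative assumption, not a figure from the thread:

```python
# Rough information cost of snapshotting a physical environment exactly.
# The precision assumed below is illustrative, not a claim from the thread.

LOSCHMIDT = 2.687e25        # molecules per cubic metre of air at STP
BITS_PER_MOLECULE = 6 * 32  # assumed: 3 position + 3 velocity floats, 32-bit each

def bits_to_describe(volume_m3: float) -> float:
    """Bits needed for one exact molecular snapshot of `volume_m3` of air."""
    return volume_m3 * LOSCHMIDT * BITS_PER_MOLECULE

room_bits = bits_to_describe(50.0)  # one ordinary 50 m^3 room
exabyte_bits = 8e18                 # bits in one exabyte

print(f"one snapshot: {room_bits:.2e} bits "
      f"(~{room_bits / exabyte_bits:.1e} exabytes), per time step")
```

Even granting heavy coarse-graining, a gap of many orders of magnitude is the kind of point an information-theoretic cost argument would lean on.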

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Part 5. "The nature of evil" or "The good, the bad, and the evil" Since we've got the (slightly revised :-) goal of a Friendly individual and the Friendly society -- "Don't act contrary to anyone's goals unless absolutely necessary" -- we now can evaluate actions as good or bad in relation to t

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
My second point that you omitted from this response doesn't need there to be a universal substrate, which is what I mean. Ditto for "significant resources". I didn't omit your second point, I covered it as part of the difference between our views. You believe that certain tasks/options are rela

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > > >> How do we get from here to there? Without a provable path, it's all > >> just > >> magical hand-waving to me. (I like it but it's ultimately an > >> unsatisfying > >> illusion) > > > > It's an independent stat

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > Note that you are trying to use a technical term in a non-technical > way to "fight" a non-technical argument. Do you really think that I'm > asserting that virtual environment can be *exactly* as capable as > physical e

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
errata: On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > I'm sure that > for computational efficiency it should be a very strict limitation. it *shouldn't* be a very strict limitation -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > Note that you are trying to use a technical term in a non-technical > > way to "fight" a non-technical argument. Do you really think that I'm > > asserting that virtual environment can be *exactly* as capable as > > phys

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Note that you are trying to use a technical term in a non-technical way to "fight" a non-technical argument. Do you really think that I'm asserting that virtual environment can be *exactly* as capable as physical environment? No, I think that you're asserting that the virtual environment is clos

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > Information Theory is generally accepted as > correct and clearly indicates that you are wrong. > Note that you are trying to use a technical term in a non-technical way to "fight" a non-technical argument. Do you really

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > I can destroy all Earth-originated life if I start early enough. If > > there is something else out there, it can similarly be hostile and try to > > destroy me if it can, without listening to any friendliness prayer. > > Al

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Stan Nilsen
Mark Waser wrote: Part 4. ... Eventually, you're going to get down to "Don't mess with anyone's goals", be forced to add the clause "unless absolutely necessary", and then have to fight over what "when absolutely necessary" means. But what we've got here is what I would call the goal of a F

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > > 1) If I physically destroy every other intelligent thing, what is > > going to threaten me? > > Given the size of the universe, how can you possibly destroy every other > intelligent thing (and be sure that no others ever

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Charles D Hixson
Mark Waser wrote: Joseph has been asking about other goal-seeking entity attractors, so . . . . Interlude 2. The God Attractor How many of y'all are willing to recognize/agree that the vast majority of humans in Western Civilization are currently spread around the basin of yet another *v

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
Joseph has been asking about other goal-seeking entity attractors, so . . . . Interlude 2. The God Attractor How many of y'all are willing to recognize/agree that the vast majority of humans in Western Civilization are currently spread around the basin of yet another *very* powerful ethical a

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Nathan Cravens
Pack your bags, folks, we're headed toward damnation and hellfire! haha! Nathan On Sun, Mar 9, 2008 at 7:10 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote: > > > 1) If I physically destroy every other intelligent thing, what is > > > goin

Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote: > > 1) If I physically destroy every other intelligent thing, what is > > going to threaten me? > > Given the size of the universe, how can you possibly destroy every other > intelligent thing (and be sure that no others ever successfully ari

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
1) If I physically destroy every other intelligent thing, what is going to threaten me? Given the size of the universe, how can you possibly destroy every other intelligent thing (and be sure that no others ever successfully arise without you crushing them too)? Plus, it seems like an awfull

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 12:35 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > > Because you're *NEVER* going to be sure that you're in a position where you > can prevent that from ever happening. > That's a current point of disagreement then. Let's iterate from here. I'll break it up this way: 1) I

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
OK. Sorry for the gap/delay between parts. I've been doing a substantial rewrite of this section . . . . Part 4. Despite all of the debate about how to *cause* Friendly behavior, there's actually very little debate about what Friendly behavior looks like. Human beings actually have had the

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
My impression was that your friendliness-thing was about the strategy of avoiding being crushed by next big thing that takes over. My "friendliness-thing" is that I believe that a sufficiently intelligent self-interested being who has discovered the "f-thing" or had the "f-thing" explained to

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have a concrete proposal written up somewhere in a reasonably compact format, I'll read it and comment -- Ben G On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman <[EMAIL PROTECTED]> wrote: > From: "Mark Waser" <[EMAIL PROTECTED]>: > > >

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Tim Freeman
From: "Mark Waser" <[EMAIL PROTECTED]>: >Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well >duh" land, b) I'm so totally off the mark that I'm not even worth >replying to, or c) being given enough rope to hang myself. >:-) I'll read the paper if you post a URL to the finished ver

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 8:13 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > > >> Sure! Friendliness is a state which promotes an entity's own goals; > >> therefore, any entity will generally voluntarily attempt to return to > that > >> (Friendly) state since it is in its own self-interest to do

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
>> Sure! Friendliness is a state which promotes an entity's own goals; >> therefore, any entity will generally voluntarily attempt to return to that >> (Friendly) state since it is in its own self-interest to do so. > > In my example it's also explicitly in dominant structure's > self-interes

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 2:09 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > >> What is different in my theory is that it handles the case where "the > >> dominant theory turns unfriendly". The core of my thesis is that the > >> particular Friendliness that I/we are trying to reach is an > >> "at

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
What is different in my theory is that it handles the case where "the dominant theory turns unfriendly". The core of my thesis is that the particular Friendliness that I/we are trying to reach is an "attractor" -- which means that if the dominant structure starts to turn unfriendly, it is
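"Attractor" here is the dynamical-systems term: a state the system returns to after small perturbations. A minimal toy model of the self-correction claim, in Python, where the one-dimensional coordinate and the restoring strength are illustrative assumptions rather than anything from the thread:

```python
# Toy attractor dynamics: a state perturbed away from the Friendly fixed
# point F is pulled back toward it. Purely illustrative numbers.

F = 1.0     # hypothetical "Friendly" fixed point
PULL = 0.3  # assumed restoring strength inside the basin of attraction

def step(state: float) -> float:
    """One self-correction step: move a fraction of the way back to F."""
    return state + PULL * (F - state)

state = 0.2  # a dominant structure that has drifted toward unfriendliness
for _ in range(10):
    state = step(state)
print(f"after 10 steps: {state:.3f} (converging on F = {F})")
```

The thesis in the thread is exactly this picture: within the basin, drift toward unfriendliness self-corrects; the disagreement is over whether Friendliness actually has such a basin.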

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Vladimir Nesov
On Sat, Mar 8, 2008 at 6:30 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > > > This sounds like magic thinking, sweeping the problem under the rug of > > 'attractor' word. Anyway, even if this trick somehow works, it doesn't > > actually address the problem of friendly AI. The problem with > > unfri

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
This raises another point for me though. In another post (2008-03-06 14:36) you said: "It would *NOT* be Friendly if I have a goal that I not be turned into computronium even if (which I hereby state that I do)" Yet, if I understand our recent exchange correctly, it is possible for this to

Re: [agi] What should we do to be prepared?

2008-03-08 Thread J Storrs Hall, PhD
On Friday 07 March 2008 05:13:17 pm, Matt Mahoney wrote: > How does an agent know if another agent is Friendly or not, especially if the > other agent is more intelligent? See Beyond AI, p331-2. What's needed is a form of open source and provable reliability guarantees. This would have to be wor

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 5:24 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > The core of my thesis is that the > particular Friendliness that I/we are trying to reach is an "attractor" -- > which means that if the dominant structure starts to turn unfriendly, it is > actually a self-correcting sit

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 03:20 PM, Mark Waser wrote: > For there to be another attractor F', it would of necessity have to be > an attractor that is not desirable to us, since you said there is only > one stable attractor for us that has the desired characteristics. Uh, no. I am not claiming that there

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
> For there to be another attractor F', it would of necessity have to be > an attractor that is not desirable to us, since you said there is only > one stable attractor for us that has the desired characteristics. Uh, no. I am not claiming that there is ONLY one unique attractor (that has the

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How does an agent know if another agent is Friendly or not, especially if the other agent is more intelligent? An excellent question but I'm afraid that I don't believe that there is an answer (but, fortunately, I don't believe that this has any effect on my thesis).

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 08:09 AM, Mark Waser wrote: There is one unique attractor in state space. No. I am not claiming that there is one unique attractor. I am merely saying that there is one describable, reachable, stable attractor that has the characteristics that we want. There are *clearly* o

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > TAKE-AWAY: Having the statement "The goal of Friendliness is to promote the > goals of all Friendly entities" allows us to make considerable progress in > describing and defining Friendliness. How does an agent know if another agent is Friendly or not,

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Comments seem to be dying down and disagreement appears to be minimal, so let me continue . . . . Part 3. Fundamentally, what I'm trying to do here is to describe an attractor that will appeal to any goal-seeking entity (self-interest) and be beneficial to humanity at the same time (Friendly)

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Stan Nilsen <[EMAIL PROTECTED]> wrote: Reprogramming humans doesn't appear to be an option. We do it all the time. It is called "school". I might be tempted to call this "manipulation" rather than programming. The results of schooling are questionable while program

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Stan Nilsen <[EMAIL PROTECTED]> wrote: > Reprogramming humans doesn't appear to be an option. We do it all the time. It is called "school". Less commonly, the mentally ill are forced to take drugs or treatment "for their own good". Most notably, this includes drug addicts. Also, it is comm

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Mark Waser <[EMAIL PROTECTED]> wrote: How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good i

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > > How do you propose to make humans Friendly? I assume this would also have > > the > > effect of ending war, crime, etc. > > I don't have such a proposal but an obvious first step is > defining/describing Friendliness and why it might be a good idea

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good idea for us. Hopefully then, the attractor takes over. (Actu

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Whether humans conspire to weed out wild carrots impacts whether humans are classified as Friendly (or, it would if the wild carrots were sentient). Why does it matter what word we/they assign to this situation? My vision of Friendliness places many more constraints on the behavior towards

Re: [agi] What should we do to be prepared?

2008-03-07 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote: > On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > The scenario takes on an entirely different tone if you replace "weed out some > > wild carrots" with "kill all the old people who are economically >

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote: > > My argument doesn't need 'something of a completely different kind'. > > Society and human is fine as substitute for human and carrot in my > > example

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote: > My argument doesn't need 'something of a completely different kind'. > Society and human is fine as substitute for human and carrot in my > example, only if society could extract profit from replacing humans > with 'cultivated humans'.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 02:18 PM, Mark Waser wrote: I wonder if this is a substantive difference with Eliezer's position though, since one might argue that 'humanity' means 'the [sufficiently intelligent and sufficiently ...] thinking being' rather than 'homo sapiens sapiens', and the former would of co

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > > I think this one is a package deal fallacy. I can't see how whether > > humans conspire to weed out wild carrots or not will affect decisions > > made by future AGI overlords. ;-) > > Whether humans conspire to weed out wild carrots impacts whether h

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here are some thoughts that I think follow from what I understand of your position so far. But I may be wildly mistaken. Please correct my mistakes. There is one unique attractor in state space. Any individual of a species that d

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:46 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > > I think this one is a package deal fallacy. I can't see how whether > > humans conspire to weed out wild carrots or not will affect decisions > > made by future AGI overlords. ;-) > > Whether humans conspire to weed out wi

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:48 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote: > > > > This is different from what I replied to (comparative advantage, which > > J Storrs Hall also assumed), although you did state this point > > earl

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
And more generally, how is this all to be quantified? Does your paper go into the math? All I'm trying to establish and get agreement on at this point are the absolutes. There is no math at this point because it would be premature and distracting.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would an acceptable response be to reprogram the goals of the UFAI to make it friendly? Yes -- but with the minimal possible changes to do so (and preferably done by enforcing Friendliness and allowing the AI to resolve what to change to resolve integrity with Friendliness -- i.e. don't mess

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I think this one is a package deal fallacy. I can't see how whether humans conspire to weed out wild carrots or not will affect decisions made by future AGI overlords. ;-) Whether humans conspire to weed out wild carrots impacts whether humans are classified as Friendly (or, it would if the wil

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote: > > This is different from what I replied to (comparative advantage, which > J Storrs Hall also assumed), although you did state this point > earlier. > > I think this one is a package deal fallacy. I can't see how whether > humans cons

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would it be Friendly to turn you into computronium if your memories were preserved and the newfound computational power was used to make you immortal in a simulated world of your choosing, for example, one without suffering, or where you had a magic genie or super powers or enhanced intelligen

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 12:55 PM, Mark Waser wrote: Mark, how do you intend to handle the friendliness obligations of the AI towards vastly different levels of intelligence (above the threshold, of course)? Ah. An excellent opportunity for continuation of my previous post rebutting my personal conver

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I wonder if this is a substantive difference with Eliezer's position though, since one might argue that 'humanity' means 'the [sufficiently intelligent and sufficiently ...] thinking being' rather than 'homo sapiens sapiens', and the former would of course include SAIs and intelligent alien bei

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > A Friendly entity does *NOT* snuff > out (objecting/non-self-sacrificing) sub-peers simply because it has decided > that it has a "better" use for the resources that they represent/are. That > way lies death for humanity when/if become sub-peers (aka Un

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > > My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal > > > of > > killing the FAI. Should the FAI show empathy by helping the UFAI achieve > > its > > goal? > > Hopefully this concern was answered by my last post but . . . . >

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/05/2008 05:04 PM, Mark Waser wrote: And thus, we get back to a specific answer to jk's second question. "*US*" should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define "*us*" because I DECLARE that it should be assumed to include curr

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 11:23 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > Friendliness must include reasonable protection for sub-peers or else there > is no "enlightened self-interest" or "attractor-hood" to it -- since any > rational entity will realize that it could *easily* end up as a sub-

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Mark, how do you intend to handle the friendliness obligations of the AI towards vastly different levels of intelligence (above the threshold, of course)? Ah. An excellent opportunity for continuation of my previous post rebutting my personal conversion to computronium . . . . First off, my

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 08:32 AM, Matt Mahoney wrote: --- Mark Waser <[EMAIL PROTECTED]> wrote: And thus, we get back to a specific answer to jk's second question. "*US*" should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define "*us*" because I DEC

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 8:27 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > Now, I've just attempted to sneak a critical part of the answer right past > everyone with my plea . . . . so let's go back and review it in slow-motion. > :-) > > Part of our environment is that we have peers. And peers bec

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal of killing the FAI. Should the FAI show empathy by helping the UFAI achieve its goal? Hopefully this concern was answered by my last post but . . . . Being Friendly *certainly* doesn't mean fatally overriding your own

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote: > TAKE-AWAY: Friendliness is an attractor because it IS equivalent to "enlightened self-interest" -- but it only works where all entities involved are Friendly. Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the
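The standard formal analogue of enlightened self-interest that only pays off among the Friendly is the iterated Prisoner's Dilemma, where cooperation is the best reply to reciprocators and defection is punished. The analogy is supplied here for illustration, not taken from the thread; a minimal Python sketch:

```python
# Iterated Prisoner's Dilemma against a tit-for-tat reciprocator, with the
# standard payoffs T > R > P > S. Illustrative analogy, not the thread's math.

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def total_payoff(defect_always: bool, rounds: int = 100) -> int:
    """My total score over `rounds` rounds against tit-for-tat."""
    score, partner_cooperates = 0, True
    for _ in range(rounds):
        i_cooperate = not defect_always
        if i_cooperate and partner_cooperates:
            score += R
        elif i_cooperate:
            score += S
        elif partner_cooperates:
            score += T
        else:
            score += P
        partner_cooperates = i_cooperate  # tit-for-tat mirrors my last move
    return score

print("cooperator:", total_payoff(defect_always=False))  # 300 (R every round)
print("defector:  ", total_payoff(defect_always=True))   # 104 (one T, then P)
```

Cooperation wins only because the partner reciprocates, which mirrors the caveat that the attractor "only works where all entities involved are Friendly."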

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Or should we not worry about the problem because the more intelligent agent is more likely to win the fight? My concern is that evolution could favor unfriendly behavior, just as it has with humans. I don't believe that evolution favors unfriendly behavior. I believe that evolution is tendin

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
by answering "What is in the set of 'horrible nasty thing[s]'?" ----- Original Message ----- From: Mark Waser To: agi@v2.listbox.com Sent: Thursday, March 06, 2008 10:01 AM Subject: Re: [agi] What should we do to be prepared? Hmm. Bummer. No new feedback. I wo

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote: > And thus, we get back to a specific answer to jk's second question. "*US*" > should be assumed to apply to any sufficiently intelligent goal-driven > intelligence. We don't need to define "*us*" because I DECLARE that it > should be assumed to include c

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Stephen Reed
From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, March 6, 2008 9:01:53 AM Subject: Re: [agi] What should we do to be prepared? Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well duh" land, b) I'm so totally off the mark that

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well duh" land, b) I'm so totally off the mark that I'm not even worth replying to, or c) being given enough rope to hang myself. :-) Since I haven't seen any feedback, I think I'm going to divert to a section that I'm not quite sur

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
> 1. How will the AI determine what is in the set of "horrible nasty > thing[s] that would make *us* unhappy"? I guess this is related to how you > will define the attractor precisely. > > 2. Preventing the extinction of the human race is pretty clear today, but > *human race* will become increasin

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
rg wrote: Hi I made some responses below. Richard Loosemore wrote: rg wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
rg wrote: Hi You said friendliness was AGIs locked in empathy towards mankind. How can you make them feel this? How did we humans get empathy? Is it not very likely that we have empathy because it turned out to be an advantage during our evolution ensuring the survival of groups of humans. So

Re: [agi] What should we do to be prepared?

2008-03-05 Thread j.k.
On 03/05/2008 12:36 PM, Mark Waser wrote: snip... The obvious initial starting point is to explicitly recognize that the point of Friendliness is that we wish to prevent the extinction of the *human race* and/or to prevent many other horrible nasty things that would make *us* unhappy. After

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
--- rg <[EMAIL PROTECTED]> wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Why Matt, thank you for such a wonderful opening . . . . :-) Friendliness *CAN* be defined. Furthermore, it is my contention that Friendline

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- rg <[EMAIL PROTECTED]> wrote: > ok see my responses below.. > > Matt Mahoney wrote: > > --- rg <[EMAIL PROTECTED]> wrote: > > > >> Matt: Why will an AGI be friendly ? > >> > > > > The question only makes sense if you can define friendliness, which we > can't. > > > > > We could sa

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > >> Friendliness, briefly, is a situation in which the motivations of the > >> AGI are locked into a state of empathy with the human race as a whole. > > > > Which is fin

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
Hi You said friendliness was AGIs locked in empathy towards mankind. How can you make them feel this? How did we humans get empathy? Is it not very likely that we have empathy because it turned out to be an advantage during our evolution ensuring the survival of groups of humans. So if an AGI

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
Hi I made some responses below. Richard Loosemore wrote: rg wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Friendliness, briefly, is a situation in which the motivations of the AGI are locked into a state of empathy with the human race as a whole. Which is fine as long as there is a sharp line dividing human from non-human. When

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Friendliness, briefly, is a situation in which the motivations of the > AGI are locked into a state of empathy with the human race as a whole. Which is fine as long as there is a sharp line dividing human from non-human. When that line goes away

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
ok see my responses below.. Matt Mahoney wrote: --- rg <[EMAIL PROTECTED]> wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. We could say behavior that is acceptable in our society then.. In your mail you beli

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- rg <[EMAIL PROTECTED]> wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Wrong. *You* cannot define friendliness for reasons of your own. Others may well be able to do so. It would be fine to

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- rg <[EMAIL PROTECTED]> wrote: > Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Initially I believe that a distributed AGI will do what we want it to do because it will evolve in a competitive, hostile environment that rewards

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
rg wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless of what we decide. I am asking what we should do to prepar

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Anthony George
On Wed, Mar 5, 2008 at 2:46 AM, rg <[EMAIL PROTECTED]> wrote: > > > Anthony: Do not sociopaths understand the > rules and the justice system ? > Two responses come to mind. Both will be unsatisfactory probably, but oh well... 1. There's a difference between understanding rules and the justice s

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
Hi Again I stress that I am not saying we should try to stop development (I do not think we can) But what is wrong with thinking about the possible outcomes and trying to be prepared? To try to affect the development and steer it in better directions to take smaller steps to wherever we are going. N

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Matt Mahoney
--- rg <[EMAIL PROTECTED]> wrote: > Hi > > Is anyone discussing what to do in the future when we > have made AGIs? I thought that was part of why > the singularity institute was made ? > > Note, that I am not saying we should not make them! > Because someone will regardless of what we decide. >

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mike Tintner
Vlad: How to survive a zombie attack? I really like that thought :). You're right: we should seriously consider that possibility. But personally, I don't think we need to be afraid ... I'm sure they will be friendly zombies...

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Vladimir Nesov
On Tue, Mar 4, 2008 at 9:53 PM, rg <[EMAIL PROTECTED]> wrote: > Hi > > Is anyone discussing what to do in the future when we > have made AGIs? I thought that was part of why > the singularity institute was made ? > > Note, that I am not saying we should not make them! > Because someone will re

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
the community that I feel it is necessary. - Original Message - From: Anthony George <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Tuesday, March 04, 2008 2:47 PM Subject: Re: [agi] What should we do to be prepared?

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mark Waser
the community. You are in a *very* small minority. - Original Message - From: Anthony George To: agi@v2.listbox.com Sent: Tuesday, March 04, 2008 2:47 PM Subject: Re: [agi] What should we do to be prepared? On Tue, Mar 4, 2008 at 10:53 AM, rg <[EMAIL PROTECTED]>

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
On Tue, Mar 4, 2008 at 10:53 AM, rg <[EMAIL PROTECTED]> wrote: > Hi > > Is anyone discussing what to do in the future when we > have made AGIs? I thought that was part of why > the singularity institute was made ? > > Note, that I am not saying we should not make them! > Because someone will regar