Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser [EMAIL PROTECTED] wrote: 1) If I physically destroy every other intelligent thing, what is going to threaten me? Given the size of the universe, how can you possibly destroy every other intelligent thing (and be sure that no others ever

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Stan Nilsen
Mark Waser wrote: Part 4. ... Eventually, you're going to get down to "Don't mess with anyone's goals," be forced to add the clause "unless absolutely necessary," and then have to fight over what "absolutely necessary" means. But what we've got here is what I would call the goal of a

Re: [agi] Recap/Summary/Thesis Statement

2008-03-10 Thread Mark Waser
It *might* get stuck in bad territory, but can you make an argument why there is a *significant* chance of that happening? Not off the top of my head. I'm just playing it better safe than sorry since, as far as I can tell, there *may* be a significant chance of it happening. Also, I'm not

[agi] Some thoughts of an AGI designer

2008-03-10 Thread Richard Loosemore
I find myself totally bemused by the recent discussion of AGI friendliness. I am in sympathy with some aspects of Mark's position, but I also see a serious problem running through the whole debate: everyone is making statements based on unstated assumptions about the motivations of AGI

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Ben Goertzel
The three most common of these assumptions are: 1) That it will have the same motivations as humans, but with a tendency toward the worst that we show. 2) That it will have some kind of "Gotta Optimize My Utility Function" motivation. 3) That it will have an intrinsic urge to

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 5:47 PM, Richard Loosemore [EMAIL PROTECTED] wrote: In the past I have argued strenuously that (a) you cannot divorce a discussion of friendliness from a discussion of what design of AGI you are talking about, and (b) some assumptions about AGI motivation are

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
I am in sympathy with some aspects of Mark's position, but I also see a serious problem running through the whole debate: everyone is making statements based on unstated assumptions about the motivations of AGI systems. Bummer. I thought that I had been clearer about my assumptions. Let me

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser [EMAIL PROTECTED] wrote: I can destroy all Earth-originated life if I start early enough. If there is something else out there, it can similarly be hostile and try to destroy me if it can, without listening to any friendliness prayer. All

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
For instance, a Novamente-based AGI will have an explicit utility function, but only a percentage of the system's activity will be directly oriented toward fulfilling this utility function. Some of the system's activity will be spontaneous ... i.e. only implicitly goal-oriented ... and as such
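
[Editor's sketch] A minimal illustration of the split described in this message: an agent that spends only a fraction of its activity explicitly maximizing its utility function, with the remainder spontaneous and only implicitly goal-oriented. The 70/30 split, action names, and utility values are illustrative assumptions, not the actual Novamente design.

```python
import random

# Toy action set and an explicit utility function over it (illustrative values).
ACTIONS = ["gather_data", "refine_model", "explore", "idle_play"]
UTILITY = {"gather_data": 0.9, "refine_model": 0.8, "explore": 0.3, "idle_play": 0.1}

def choose_action(explicit_fraction: float = 0.7) -> str:
    """Pick the next action; only part of the activity is explicitly utility-driven."""
    if random.random() < explicit_fraction:
        # Explicitly goal-oriented step: maximize the explicit utility function.
        return max(ACTIONS, key=lambda a: UTILITY[a])
    # Spontaneous step: chosen without consulting the utility function at all.
    return random.choice(ACTIONS)

if __name__ == "__main__":
    print([choose_action() for _ in range(10)])
```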

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Richard Loosemore
Mark Waser wrote: I am in sympathy with some aspects of Mark's position, but I also see a serious problem running through the whole debate: everyone is making statements based on unstated assumptions about the motivations of AGI systems. Bummer. I thought that I had been clearer about my

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser [EMAIL PROTECTED] wrote: Information Theory is generally accepted as correct and clearly indicates that you are wrong. Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
First off -- yours was a really helpful post. Thank you! I think that I need to add a word to my initial assumption . . . . Assumption - The AGI will be an optimizing goal-seeking entity. There are two main things. One is that the statement "The AGI will be a goal-seeking entity" has many

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think that I'm asserting that virtual environment can be *exactly* as capable as physical environment? No, I think that you're asserting that the virtual environment is close

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser [EMAIL PROTECTED] wrote: Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think that I'm asserting that virtual environment can be *exactly* as capable as physical

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
errata: On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: I'm sure that for computational efficiency it should be a very strict limitation. it *shouldn't* be a very strict limitation -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser [EMAIL PROTECTED] wrote: How do we get from here to there? Without a provable path, it's all just magical hand-waving to me. (I like it, but it's ultimately an unsatisfying illusion) It's an independent statement. No, it

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Charles D Hixson
Mark Waser wrote: ... The motivation that is in the system is "I want to achieve *my* goals." The goals that are in the system I deem to be entirely irrelevant UNLESS they are deliberately and directly contrary to Friendliness. I am contending that, unless the initial goals are deliberately
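
[Editor's sketch] The rule quoted in this message reduces to a simple admissibility check: a system's particular goals are left alone unless they are deliberately and directly contrary to Friendliness. The predicate name and example goals below are illustrative assumptions, not part of any proposed design.

```python
# Illustrative set of goals deemed deliberately and directly contrary to Friendliness.
CONTRARY_TO_FRIENDLINESS = {"exterminate_rivals", "enslave_humans"}

def goal_is_acceptable(goal: str) -> bool:
    # Per the rule above: a goal is treated as irrelevant (acceptable) unless it
    # directly conflicts with Friendliness.
    return goal not in CONTRARY_TO_FRIENDLINESS

if __name__ == "__main__":
    for g in ["cure_disease", "acquire_resources", "exterminate_rivals"]:
        print(g, "->", "acceptable" if goal_is_acceptable(g) else "contrary to Friendliness")
```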

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
My second point that you omitted from this response doesn't need there to be universal substrate, which is what I mean. Ditto for significant resources. I didn't omit your second point, I covered it as part of the difference between our views. You believe that certain tasks/options are

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Part 5. The nature of evil, or "The good, the bad, and the evil." Since we've got the (slightly revised :-) goal of a Friendly individual and the Friendly society -- "Don't act contrary to anyone's goals unless absolutely necessary" -- we can now evaluate actions as good or bad in relation to that

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
I think here we need to consider A. Maslow's hierarchy of needs. That an AGI won't have the same needs as a human is, I suppose, obvious, but I think it's still true that it will have a hierarchy (which isn't strictly a hierarchy). I.e., it will have a large set of motives, and which it is
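
[Editor's sketch] One way to read the Maslow-style suggestion in this message: the agent holds a large set of motives in a loose hierarchy, and the motive acted on at any moment is the most basic one whose need is currently unmet. The motive names, priorities, and threshold below are illustrative assumptions, not drawn from any actual AGI design.

```python
from dataclasses import dataclass

@dataclass
class Motive:
    name: str
    priority: int        # lower number = more basic / more urgent
    satisfaction: float  # 0.0 (unmet) .. 1.0 (fully met)

def active_motive(motives, threshold: float = 0.8):
    """Return the most basic motive that is not yet satisfied enough, if any."""
    unmet = [m for m in motives if m.satisfaction < threshold]
    if not unmet:
        return None  # everything satisfied; free for open-ended activity
    return min(unmet, key=lambda m: m.priority)

if __name__ == "__main__":
    motives = [
        Motive("maintain_power_supply", 1, 0.95),
        Motive("preserve_own_code", 2, 0.6),
        Motive("acquire_knowledge", 3, 0.4),
        Motive("pursue_assigned_goals", 4, 0.2),
    ]
    print(active_motive(motives).name)  # -> preserve_own_code
```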

Re: [agi] Artificial general intelligence

2008-03-10 Thread Linas Vepstas
On 27/02/2008, a [EMAIL PROTECTED] wrote: This causes real controversy in this discussion list, which pressures me to build my own AGI. How about joining effort with one of the existing AGI projects? --linas