Re: [agi] Recap/Summary/Thesis Statement

2008-03-08 Thread j.k.
On 03/07/2008 05:28 AM, Mark Waser wrote: */Attractor Theory of Friendliness/*: There exists a describable, reachable, stable attractor in state space that is sufficiently Friendly to reduce the risks of AGI to acceptable levels. I've just carefully reread Eliezer's CEV

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
What is different in my theory is that it handles the case where "the dominant theory turns unfriendly". The core of my thesis is that the particular Friendliness that I/we are trying to reach is an "attractor" -- which means that if the dominant structure starts to turn unfriendly, it is
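[Editor's note: the thread gives no formal model of the "attractor" claim. Below is a minimal, purely illustrative toy sketch of what an attractor in state space means in the dynamical-systems sense: a state toward which nearby trajectories return after perturbation. The update rule, constants, and function names are hypothetical and are not taken from the thread.]

    # Toy illustration of an "attractor": a fixed point that pulls
    # perturbed states back toward itself over repeated updates.
    # Hypothetical sketch only; not a model of Friendliness itself.

    def step(x, a=1.0, k=0.25):
        """One update of a toy dynamic with a stable fixed point at a."""
        return x + k * (a - x)

    def run(x0, steps=40):
        x = x0
        for _ in range(steps):
            x = step(x)
        return x

    if __name__ == "__main__":
        # Start from several perturbed states; all converge to a = 1.0.
        for x0 in (-3.0, 0.0, 0.9, 5.0):
            print(f"start={x0:+.1f}  after 40 steps={run(x0):.6f}")

The point of the analogy, as the message puts it, is that a system drifting away from the attractor tends to be pulled back, rather than wandering off permanently.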

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Vladimir Nesov
On Sat, Mar 8, 2008 at 6:30 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > > > > This sounds like magic thinking, sweeping the problem under the rug of > > 'attractor' word. Anyway, even if this trick somehow works, it doesn't > > actually address the problem of friendly AI. The problem with > > unfri

[agi] Brief report on AGI-08

2008-03-08 Thread Ben Goertzel
Hi all, The AGI-08 conference (agi-08.org) occurred last weekend in Memphis...! I had hoped to write up a real scientific summary of AGI-08, but at the moment it doesn't look like I'll find the time, so instead I'll send out this briefer and more surface-level summary... Firstly, the conference

Special status to Homo Sap. (was Re: [agi] Recap/Summary/Thesis Statement)

2008-03-08 Thread Tim Freeman
From: Matt Mahoney <[EMAIL PROTECTED]>, in reply to Mark Waser: >You seem to be giving special status to Homo Sapiens. How does this >arise out of your dynamic? I know you can program an initial bias, >but how is it stable? Keep in mind that Mark made a subsequent reply saying he isn't giving a

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
This raises another point for me though. In another post (2008-03-06 14:36) you said: "It would *NOT* be Friendly if I have a goal that I not be turned into computronium even if (which I hereby state that I do)" Yet, if I understand our recent exchange correctly, it is possible for this to

Re: [agi] Recap/Summary/Thesis Statement

2008-03-08 Thread Mark Waser
- Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Friday, March 07, 2008 6:38 PM Subject: Re: [agi] Recap/Summary/Thesis Statement --- Mark Waser <[EMAIL PROTECTED]> wrote: >> Huh? Why can't an irreversible dynamic be part of an attractor? (Not >> that >>
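[Editor's note: the quoted question asks why an irreversible dynamic cannot be part of an attractor. As a purely hypothetical illustration, not from the thread, a non-invertible map can indeed have an attractor: the map below loses information (x and -x land on the same state, so it cannot be run backwards uniquely), yet every orbit starting in (-1, 1) converges to the fixed point 0.]

    # Toy sketch: an irreversible (non-invertible) dynamic that still
    # has an attractor at 0. Hypothetical example for illustration only.

    def f(x):
        return x * x  # not invertible: f(x) == f(-x)

    def orbit(x0, steps=30):
        x = x0
        for _ in range(steps):
            x = f(x)
        return x

    if __name__ == "__main__":
        for x0 in (0.9, -0.9, 0.5, -0.1):
            print(f"x0={x0:+.2f}  -> after 30 steps: {orbit(x0):.3e}")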

Re: [agi] What should we do to be prepared?

2008-03-08 Thread J Storrs Hall, PhD
On Friday 07 March 2008 05:13:17 pm, Matt Mahoney wrote: > How does an agent know if another agent is Friendly or not, especially if the > other agent is more intelligent? See Beyond AI, p331-2. What's needed is a form of open source and provable reliability guarantees. This would have to be wor