Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-20 Thread Steve Richfield
Ben, Mapping RRA to Hegel's space isn't trivial, but here goes... On 11/19/08, Ben Goertzel [EMAIL PROTECTED] wrote: I have nothing against Hegel; I think he was a great philosopher. His Logic is really fantastic reading. And, having grown up surrounded by Marxist wannabe-revolutionaries

Definition of pain (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: add-rule kill-file Matt Mahoney Mark, whatever happened to that friendliness-religion you caught a few months ago? Anyway, with regard to grounding, internal feedback, and volition, autobliss already has two of these three properties,
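[For readers new to the autobliss threads: Mahoney's toy is, roughly, a program that learns a two-input logic function from scalar reward and punishment alone. The sketch below is a hypothetical reconstruction under that assumption; the function names, update rule, and parameters are illustrative guesses, not Mahoney's actual code.]

    # Hypothetical sketch of a reinforcement-trained logic-function learner,
    # loosely in the spirit of the "autobliss" toy discussed in this thread.
    # All names and the training scheme are assumptions, not Mahoney's code.
    import random

    def train(target, steps=1000, lr=0.1):
        """Learn a 2-input boolean function from scalar reward alone."""
        # One weight per input pair: probability of answering 1.
        w = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}
        for _ in range(steps):
            a, b = random.randint(0, 1), random.randint(0, 1)
            guess = 1 if random.random() < w[(a, b)] else 0
            reward = 1 if guess == target(a, b) else -1  # "pain" as negative reward
            # Nudge the response probability toward rewarded behavior.
            w[(a, b)] = min(1.0, max(0.0, w[(a, b)] + lr * reward * (1 if guess else -1)))
        return w

    weights = train(lambda a, b: a ^ b)  # teach it XOR
    print({k: round(v, 2) for k, v in weights.items()})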

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Daniel Yokomizo
On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Steve, what is the purpose of your political litmus test? If you are trying to assemble a team of seed-AI programmers with the correct ethics, forget it. Seed AI is a myth. http://www.mattmahoney.net/agi2.html (section 2).

Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Seed AI is a myth. http://www.mattmahoney.net/agi2.html (section 2). (I'm assuming you meant section 5.1, Recursive Self Improvement) That too, but

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Ben Goertzel
BTW, for those who are newbies to this list, Matt's argument attempting to refute RSI was extensively discussed on this list a few months ago. In my view, I refuted his argument pretty clearly, although he does not agree. His mathematics is correct, but seemed to me irrelevant to real-life RSI

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Daniel Yokomizo
On Wed, Nov 19, 2008 at 1:21 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Seed AI is a myth. http://www.mattmahoney.net/agi2.html (section 2). (I'm assuming

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Steve Richfield
Ben: On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: This sounds an awful lot like the Hegelian dialectical method... Your point being? We are all stuck in Hegel's Hell whether we like it or not. Reverse Reductio ad Absurdum is just a tool to help guide us through it. There seems to be

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Steve Richfield
Back to reality for a moment... I have greatly increased the IQs of some pretty bright people since I started doing this in 2001 (the details are way off topic here, so contact me off-line for more if you are interested), and now, others are also doing this. I think that these people give us a

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote: I just want to be clear, you agree that an agent is able to create a better version of itself, not just in terms of a badly defined measure as IQ but also as a measure of resource utilization. Yes, even bacteria can do this. Do

[agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
To all, I am considering putting up a web site to filter the crazies as follows, and would appreciate all comments, suggestions, etc. Everyone visiting the site would get different questions, in different orders, etc. Many questions would have more than one correct answer, and in many cases,
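[Steve's description is a protocol rather than code, but a minimal sketch makes the mechanics concrete: sample a different subset of questions per visitor, shuffle the order, and accept any answer in a question's approved set. Everything below, including the names and the question bank, is hypothetical illustration; the post specifies no implementation.]

    # Illustrative sketch of the randomized screening quiz described above:
    # each visitor gets a different sample of questions, in a different order,
    # and some questions accept more than one correct answer.
    import random

    QUESTION_BANK = [
        # (prompt, set of acceptable answers)
        ("Is reductio ad absurdum a valid form of argument? (yes/no)", {"yes"}),
        ("Can two sound arguments reach opposite conclusions? (yes/no)", {"yes", "no"}),
    ]

    def make_quiz(bank, n):
        """Draw n distinct questions in a fresh random order for one visitor."""
        return random.sample(bank, min(n, len(bank)))

    def score(quiz, answers):
        """Count responses that fall inside each question's acceptable set."""
        return sum(1 for (_, ok), ans in zip(quiz, answers) if ans.strip().lower() in ok)

    quiz = make_quiz(QUESTION_BANK, 2)
    print(score(quiz, ["yes", "no"]))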

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread martin biehl
Hi Steve, I am not an expert, so correct me if I am wrong. As I see it, everyday logical arguments (and rationality?) are based on standard classical logic (or something very similar). Yet I am (sadly) not aware of a convincing argument that this logic is the one to accept as the right choice. You

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Bob Mottram
2008/11/18 Steve Richfield [EMAIL PROTECTED]: I am considering putting up a web site to filter the crazies as follows, and would appreciate all comments, suggestions, etc. This all sounds peachy in principle, but I expect it would exclude virtually everyone except perhaps a few of the most

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Trent Waddington
On Tue, Nov 18, 2008 at 8:38 PM, Bob Mottram [EMAIL PROTECTED] wrote: I think most people have at least a few beliefs which cannot be strictly justified rationally You would think that. :) Trent

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
3. A statement in their own words that they hereby disavow allegiance to any non-human god or alien entity, and that they will NOT follow the directives of any government led by people who would obviously fail this test. This statement would be included on the license. Hmmm... don't I fail

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Richard Loosemore
Steve Richfield wrote: To all, I am considering putting up a web site to filter the crazies as follows, and would appreciate all comments, suggestions, etc. Everyone visiting the site would get different questions, in different orders, etc. Many questions would have more than one correct

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread BillK
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote: I see how this would work: crazy people never tell lies, so you'd be able to nail 'em when they gave the wrong answers. Yup. That's how they pass lie detector tests as well. They sincerely believe the garbage they spread around.

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Martin, On 11/18/08, martin biehl [EMAIL PROTECTED] wrote: I don't know what reverse reductio ad absurdum is, so it may not be a precise counterexample, but I think you get my point. HERE is the crux of my argument, as other forms of logic fall short of being adequate to run a world with.

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
This sounds an awful lot like the Hegelian dialectical method... ben g On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield [EMAIL PROTECTED]wrote: Martin, On 11/18/08, martin biehl [EMAIL PROTECTED] wrote: I don't know what reverse reductio ad absurdum is, so it may not be a precise

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Bob, On 11/18/08, Bob Mottram [EMAIL PROTECTED] wrote: 2008/11/18 Steve Richfield [EMAIL PROTECTED]: I am considering putting up a web site to filter the crazies as follows, and would appreciate all comments, suggestions, etc. This all sounds peachy in principle, but I expect it would

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Ben, On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: 3. A statement in their own words that they hereby disavow allegiance to any non-human god or alien entity, and that they will NOT follow the directives of any government led by people who would obviously fail this test. This

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Richard and Bill, On 11/18/08, BillK [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote: I see how this would work: crazy people never tell lies, so you'd be able to nail 'em when they gave the wrong answers. Yup. That's how they pass lie detector tests as

RE: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Benjamin Johnston
Could we please stick to discussion of AGI? -Ben From: Steve Richfield [mailto:[EMAIL PROTECTED] Sent: Wednesday, 19 November 2008 10:39 AM To: agi@v2.listbox.com Subject: Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies... Richard and Bill, On 11

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Matt Mahoney
Steve Richfield [EMAIL PROTECTED] wrote: From: Steve Richfield [EMAIL PROTECTED] Subject: Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies... To: agi@v2.listbox.com Date: Tuesday, November 18, 2008, 6:39 PM Richard and Bill, On 11/18/08, BillK [EMAIL PROTECTED] wrote: On Tue

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
Sent: Tuesday, November 18, 2008 8:23 PM Subject: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies... Steve, what is the purpose of your political litmus test? If you are trying to assemble a team of seed-AI programmers with the correct ethics, forget

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
has, that he would question that goal. Thanks everyone for your comments. Steve Richfield --- On Tue, 11/18/08, Steve Richfield [EMAIL PROTECTED] wrote: From: Steve Richfield [EMAIL PROTECTED] Subject: Re: [agi] My prospective plan to neutralize AGI and other dangerous