[agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
To all, I am considering putting up a web site to "filter the crazies" as follows, and would appreciate all comments, suggestions, etc. Everyone visiting the site would get different questions, in different orders, etc. Many questions would have more than one correct answer, and in many cases, so
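The mechanism proposed here (every visitor draws a different random subset of questions, in a different order, and a question may accept several answers) is simple to prototype. A minimal sketch in Python, with purely hypothetical placeholder questions since no actual question bank is given in the thread:

    import random

    # (question, set of acceptable answers) -- placeholder entries only
    QUESTION_POOL = [
        ("Is the Earth more than 6,000 years old?", {"yes"}),
        ("Can a widely held belief still be false?", {"yes"}),
        ("Is 'all A are B; x is A; so x is B' a valid argument?", {"yes", "valid"}),
    ]

    def make_quiz(pool, n):
        """Draw n distinct questions in a fresh random order for one visitor."""
        return random.sample(pool, n)

    def grade(quiz, answers):
        """Count answers that fall within each question's accepted set."""
        return sum(1 for (_, accepted), given in zip(quiz, answers)
                   if given.strip().lower() in accepted)

Because random.sample reshuffles on every call, no two visitors need see the same questions in the same order, which is the property the post asks for, and "more than one correct answer" falls out of using a set per question.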

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread martin biehl
Hi Steve, I am not an expert, so correct me if I am wrong. As I see it, everyday logical arguments (and rationality?) are based on standard classical logic (or something very similar). Yet I am (sadly) not aware of a convincing argument that this logic is the one to accept as the right choice. You m

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
> From: Trent Waddington [mailto:[EMAIL PROTECTED] > > On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> > wrote: > > I mean that people are free to decide if others feel pain. For > example, a scientist may decide that a mouse does not feel pain when it > is stuck in the eye with

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Bob Mottram
2008/11/18 Steve Richfield <[EMAIL PROTECTED]>: > I am considering putting up a web site to "filter the crazies" as follows, > and would appreciate all comments, suggestions, etc. This all sounds peachy in principle, but I expect it would exclude virtually everyone except perhaps a few of the mos

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Trent Waddington
On Tue, Nov 18, 2008 at 8:38 PM, Bob Mottram <[EMAIL PROTECTED]> wrote: > I think most people have at least a few beliefs which cannot be strictly > justified rationally You would think that. :) Trent

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Mike Tintner
Colin, May I suggest, if you want clarity, that you dispense with eccentric philosophical terms like p-consciousness (phenomenal consciousness?). The phantom limb case you bring up is interesting, but first I have to understand what you're talking about. Would you mind sticking to simple, basic (and sc

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
> I mean that people are free to decide if others feel pain. Wow! You are one sick puppy, dude. Personally, you have just hit my "Do not bother debating with" list. You can "decide" anything you like -- but that doesn't make it true. - Original Message - From: "Matt Mahoney" <[EMAIL

[agi] AGI Light Humor - first words

2008-11-18 Thread Stan Nilsen
First words to come from the brand new AGI? Hello World or Gotta paper clip? What's the meaning of life? Am I really conscious? Where am I? I come from a dysfunctional family.

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
> 3. A statement in their own words that they hereby disavow allegiance > to any non-human god or alien entity, and that they will NOT follow the > directives of any government led by people who would obviously fail this > test. This statement would be included on the license. > > Hmmm... don't I

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Richard Loosemore
Colin Hales wrote: Mike Tintner wrote: Colin: Qualia generation has been highly localised into specific regions in *cranial* brain material already. Qualia are not in the periphery. Qualia are not in the spinal CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Colin, This is to a

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Richard Loosemore
Steve Richfield wrote: To all, I am considering putting up a web site to "filter the crazies" as follows, and would appreciate all comments, suggestions, etc. Everyone visiting the site would get different questions, in different orders, etc. Many questions would have more than one correct

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread BillK
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote: > > I see how this would work: crazy people never tell lies, so you'd be able > to nail 'em when they gave the wrong answers. > Yup. That's how they pass lie detector tests as well. They sincerely believe the garbage they spread around.

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Richard Loosemore wrote: > Harry Chesley wrote: >> Richard Loosemore wrote: >>> I completed the first draft of a technical paper on consciousness >>> the other day. It is intended for the AGI-09 conference, and it >>> can be found at: >>> >>> http://susaro.com/wp-content/uploads/2008/11/draft_con

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-18 Thread Harry Chesley
Trent Waddington wrote: > As I believe the "is that consciousness?" debate could go on forever, > I think I should make an effort here to save this thread. > > Setting aside the objections of vegetarians and animal lovers, many > hard-nosed scientists decided long ago that jamming things into the >

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser
My problem is: if qualia are atomic, with no differentiable details, why do some "feel" different from others -- shouldn't they all be separate but equal? "Red" is relatively neutral, while "searing hot" is not. Part of that is certainly lower brain function, below the level of consciousness, but t

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Colin Hales
Trent Waddington wrote: On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales <[EMAIL PROTECTED]> wrote: I'd like to dispel all such delusion in this place so that neurally inspired AGI gets discussed accurately, even if your intent is to "explain P-consciousness away"... know exactly what you are exp

[agi] Neurogenesis critical to mammalian learning and memory?

2008-11-18 Thread Ben Goertzel
.. interesting if true .. http://www.medindia.net/news/Key-to-Learning-and-Memory-Continuous-Brain-Cell-Generation-41297-1.htm -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] "A human being should be able to change a diaper, plan an invasion

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Mark Waser wrote: >> My problem is if qualia are atomic, with no differentiable details, >> why do some "feel" different than others -- shouldn't they all be >> separate but equal? "Red" is relatively neutral, while "searing >> hot" is not. Part of that is certainly lower brain function, below >> t

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > I mean that people are free to decide if others feel pain. > > Wow! You are one sick puppy, dude. Personally, you have > just hit my "Do not bother debating with" list. > > You can "decide" anything you like -- but that > doesn't

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Richard Loosemore
Harry Chesley wrote: Richard Loosemore wrote: Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_c

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Martin, On 11/18/08, martin biehl <[EMAIL PROTECTED]> wrote: > I don't know what reverse reductio ad absurdum is, so it may not be a > precise counterexample, but I think you get my point. HERE is the crux of my argument, as other forms of logic fall short of being adequate to run a world.

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
This sounds an awful lot like the Hegelian dialectical method... ben g On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield <[EMAIL PROTECTED]>wrote: > Martin, > > On 11/18/08, martin biehl <[EMAIL PROTECTED]> wrote: > >> I don't know what reverse reductio ad absurdum is, so it may not be a >> preci

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
Aren't you the one who decided that autobliss feels pain? Or did you decide that it doesn't? Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? On the other hand, by what definitions do people not feel pain (other than by some fictitiou

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Bob, On 11/18/08, Bob Mottram <[EMAIL PROTECTED]> wrote: > > 2008/11/18 Steve Richfield <[EMAIL PROTECTED]>: > > I am considering putting up a web site to "filter the crazies" as > follows, > > and would appreciate all comments, suggestions, etc. > > > This all sounds peachy in principle, but I ex

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Ben, On 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > 3. A statement in their own words that they hereby disavow allegiance >> to any non-human god or alien entity, and that they will NOT follow the >> directives of any government led by people who would obviously fail this >> test. Th

Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > Autobliss has no grounding, no internal feedback, and no > volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns.
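Taken at face value, the definition is concrete enough to code: any system that learns, and that shifts its behavior away from negatively reinforced outputs, qualifies. A minimal hypothetical sketch in that spirit (this is not Matt's actual autobliss source, which the thread does not reproduce):

    import random

    # A tiny learner of a 2-input logic function, trained by reinforcement
    # alone. Under the definition above, the -1 signal counts as "pain".
    weights = [0.0] * 4      # one weight per input case: 00, 01, 10, 11
    TARGET = [0, 1, 1, 0]    # the function the trainer rewards (XOR here)

    for _ in range(1000):
        case = random.randrange(4)
        output = 1 if weights[case] > 0 else 0
        reinforcement = 1 if output == TARGET[case] else -1   # -1 = "pain"
        # Push the weight toward rewarded outputs, away from punished ones.
        weights[case] += reinforcement * (1 if output == 1 else -1)

By this definition the dozen-line loop above already feels pain; whether that consequence is acceptable is exactly what Mark challenges in the replies.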

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Ben Goertzel
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > > Autobliss has no grounding, no internal feedback, and no > > volition. By what definitions does it feel pain? > > Now you are making up new rules to decide

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Trent Waddington
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Clearly, this can be done, and has largely been done already ... though > cutting and pasting or summarizing the relevant literature in emails would > not be a productive use of time Apparently, it was Einstein who said that i

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Richard and Bill, On 11/18/08, BillK <[EMAIL PROTECTED]> wrote: > > On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote: > > I see how this would work: crazy people never tell lies, so you'd be > able > > to nail 'em when they gave the wrong answers. > Yup. That's how they pass lie detector

RE: [agi] Neurogenesis critical to mammalian learning and memory?

2008-11-18 Thread Ed Porter
I attended a two-day seminar on brain science at MIT about six years ago in which one of the papers was about neurogenesis in the hippocampus. The speaker said he thought neurogenesis was necessary in the hippocampus because hippocampus cells tend to die much more rapidly than most cells, and thus n

RE: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Benjamin Johnston
Could we please stick to discussion of AGI? -Ben From: Steve Richfield [mailto:[EMAIL PROTECTED] Sent: Wednesday, 19 November 2008 10:39 AM To: agi@v2.listbox.com Subject: Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies... Richard and Bill, On 11/18

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
Just to clarify, I'm not really interested in whether machines feel pain. I am just trying to point out the contradictions in Mark's sweeping generalizations about the treatment of intelligent machines. But to be fair, such criticism is unwarranted. Mark is arguing about ethics. Everyone has ethi

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Matt Mahoney
Steve, what is the purpose of your political litmus test? If you are trying to assemble a team of seed-AI programmers with the "correct" ethics, forget it. Seed AI is a myth. http://www.mattmahoney.net/agi2.html (section 2). -- Matt Mahoney, [EMAIL PROTECTED] --- On Tue, 11/18/08, Steve Richfie

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Ben Goertzel
Richard, I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness... It really seems to me like what you're suggesting is a "cognitive correlate of consciousness", to morph the common phrase "neural correlate of consciousness" ... Yo

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
> Now you are making up new rules to decide that autobliss doesn't feel pain. > My definition of pain is negative reinforcement in a system that learns. > There is no other requirement. I made up no rules. I merely asked a question. You are the one who makes a definition like this and then says th

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
>> I am just trying to point out the contradictions in Mark's sweeping >> generalizations about the treatment of intelligent machines. Huh? That's what you're trying to do? Normally people do that by pointing to two different statements and arguing that they contradict each other. Not by crea

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
>> Seed AI is a myth. Ah. Now I get it. You are on this list solely to try to slow down progress as much as possible . . . . (sorry that I've been so slow to realize this) add-rule kill-file "Matt Mahoney" - Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent:

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Matt and Mark, I think you both missed my point, though in different ways: there is a LOT of traffic on this forum over a problem that appears easy to resolve once and for all, and further, the solution may work for much more important worldwide social problems. Continuin