Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
Trent, Feynman's Wikipedia page has it as: "If you can't explain something to a first year student, then you haven't really understood it," but Feynman reportedly said it in a number of ways, including the grandmother variant. I learned about it when taking physics classes a while ago so I don'

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote: > >My definition of pain is negative reinforcement in a system that learns. > > IMO, pain is more like data with the potential to cause disorder in > hard-wired algorithms. I'm not saying this fully covers it but it's > IMO already o

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Trent Waddington
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek <[EMAIL PROTECTED]> wrote: >>Trent Waddington wrote: >>Apparently, it was Einstein who said that if you can't explain it to >>your grandmother then you don't understand it. > > That was Richard Feynman. When? I don't really know who said it... but every

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
>Matt Mahoney wrote: >Autobliss... Imagine that there is another human language which is the same as English except that the pain/pleasure related words have the opposite meaning. Then consider what that would mean for your Autobliss. >My definition of pain is negative reinforcement in a system that le

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
or when people are convinced that they don't have free will. = = = = = BAH! I should have quit answering you long ago. No more. - Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent: Tuesday, November 18, 2008 7:58 PM Subject: Re: Definition of pain (was

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 18, 2008 6:26 PM Subject: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: Autobliss has no grounding, no inte

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
[EMAIL PROTECTED]> wrote: From: Ben Goertzel <[EMAIL PROTECTED]> Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) To: agi@v2.listbox.com Date: Tuesday, November 18, 2008, 6:29 PM On Tue, Nov 18, 2008 at 6:26 PM, Mat

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Trent Waddington
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Clearly, this can be done, and has largely been done already ... though > cutting and pasting or summarizing the relevant literature in emails would > not be a productive use of time. Apparently, it was Einstein who said that i

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Ben Goertzel
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > > Autobliss has no grounding, no internal feedback, and no > > volition. By what definitions does it feel pain? > > Now you are making up new rules to decide

Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > Autobliss has no grounding, no internal feedback, and no > volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement i
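
To make the definition concrete: a minimal sketch of "negative reinforcement in a system that learns" might look like the following (Python; a hypothetical illustration only, not the actual autobliss source, which is not reproduced in this thread). A tiny agent learns a two-input logic function from reward and penalty signals; on Matt's definition, the penalty branch is the "pain":

import random

# One weight per (a, b) input pair; its sign decides the learned output.
weights = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

def act(a, b):
    # Explore randomly while the weight is still near zero.
    w = weights[(a, b)]
    if abs(w) < 0.1:
        return random.choice((0, 1))
    return 1 if w > 0 else 0

def reinforce(a, b, output, reward):
    # Positive reward strengthens the chosen output; negative reward
    # (the "pain" signal, on this definition) weakens it.
    weights[(a, b)] += reward if output == 1 else -reward

# Train toward AND: reward correct answers, penalize wrong ones.
for _ in range(1000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = act(a, b)
    reinforce(a, b, out, 1.0 if out == (a & b) else -1.0)

print(weights)  # (1, 1) ends up positive; the other pairs end up negative

The dispute in the rest of the thread is whether the penalty branch in a program this simple deserves the word "pain" at all, which is exactly what the grounding objection denies.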

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
- Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 18, 2008 5:05 PM Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: >

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > I mean that people are free to decide if others feel pain. > > Wow! You are one sick puppy, dude. Personally, you have > just hit my "Do not bother debating with" list. > > You can "decide" anything you like -- but that > doesn't

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Monday, November 17, 2008 4:44 PM Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: First, it is not clear "p

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
> From: Trent Waddington [mailto:[EMAIL PROTECTED] > > On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> > wrote: > > I mean that people are free to decide if others feel pain. For > example, a scientist may decide that a mouse does not feel pain when it > is stuck in the eye with

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
, simultaneity, and meaning. -Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED] Sent: Monday, November 17, 2008 8:46 PM To: agi@v2.listbox.com Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
From: Trent Waddington [mailto:[EMAIL PROTECTED] Sent: Monday, November 17, 2008 7:36 PM To: agi@v2.listbox.com Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter <[EMAIL PROTECTED]> wrote: > I am talking

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: > I think a good enough definition to get started with is that which we humans feel our minds are directly aware of, including awareness of senses, emotions, perceptions, and thoughts. You are describing episodic memory, the ability to re

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter <[EMAIL PROTECTED]> wrote: > I am talking about the type of awareness that we humans have when we say we > are "conscious" of something. You must talk to different humans to me. I've not had anyone use the word "conscious" around me in decades... and usu

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
but it is certainly possible. In fifty years, humankind will probably know for sure. Ed Porter -Original Message- From: Trent Waddington [mailto:[EMAIL PROTECTED] Sent: Monday, November 17, 2008 6:19 PM To: agi@v2.listbox.com Subject: Re: FW: [agi] A paper that actually does solve t

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mike Tintner
[so who's near Berkeley to report back?]: UC Berkeley Cognitive Science Students Association presents: "Pain and the Brain" Wednesday, November 19th 5101 Tolman Hall 6 pm - 8 pm UCSF neuroscientist Dr. Howard Fields and Berkeley philosopher John Searle represent some of the most knowl

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter <[EMAIL PROTECTED]> wrote: > I think a good enough definition to get started with is that which we humans > feel our minds are directly aware of, including awareness of senses, > emotions, perceptions, and thoughts. (This would include much of what > Rich

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
agi@v2.listbox.com Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction Before you can start searching for consciousness, you need to describe precisely what you are looking for. -- Matt Mahoney, [EMAIL PROTECTED] --- On Mon, 11/17/08, Ed Porter <[EM

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
Porter > > -----Original Message----- > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > Sent: Monday, November 17, 2008 4:45 PM > To: agi@v2.listbox.com > Subject: RE: FW: [agi] A paper that actually does solve the > problem of > consciousness--correction > > --- On M

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Eric Burton <[EMAIL PROTECTED]> wrote: > There are procedures in place for experimenting on humans. And the > biologies of people and animals are orthogonal! Much of this will be > simulated soon. When we start simulating people, there will be ethical debates about that. And

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Trent Waddington <[EMAIL PROTECTED]> wrote: > On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney > <[EMAIL PROTECTED]> wrote: > > I mean that people are free to decide if others feel > pain. For example, a scientist may decide that a mouse does > not feel pain when it is stuck in

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
mailto:[EMAIL PROTECTED] Sent: Monday, November 17, 2008 4:45 PM To: agi@v2.listbox.com Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: > First, it is not clear "people are fre

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > Autobliss responds to pain by changing its behavior to > make it less likely. Please explain how this is different > from human suffering. And don't tell me it's because one > is human and the other is a simple program, because... > >

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Eric Burton
There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon. On 11/17/08, Trent Waddington <[EMAIL PROTECTED]> wrote: > On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: >> I mean tha

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > I mean that people are free to decide if others feel pain. For example, a > scientist may decide that a mouse does not feel pain when it is stuck in the > eye with a needle (the standard way to draw blood) even though it s

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: > First, it is not clear "people are free to decide what makes pain 'real'," at least subjectively real. I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mark Waser
s real for both. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Monday, November 17, 2008 2:17 PM Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]&

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
M To: agi@v2.listbox.com Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: > For example, in > fifty years, I think it is quite possible we will be able to say with some >

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote: > >> No it won't, because people are free to decide what makes pain "real". > > What? You've got to be kidding . . . . What makes > pain real is how the sufferer reacts to it -- not some > abstract wishful thinking that we use to justi

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mark Waser
ahoney" <[EMAIL PROTECTED]> To: Sent: Monday, November 17, 2008 12:44 PM Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: For example, in fifty years, I think i

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote: > For example, in > fifty years, I think it is quite possible we will be able to say with some > confidence if certain machine intelligences we design are conscious or not, > and whether their pain is as real as the pain of another type of