Trent,
Feynman's page on wikipedia has it as: "If you can't explain something
to a first year student, then you haven't really understood it." but
Feynman reportedly said it in a number of ways, including the
grandmother variant. I learned about it when taking physics classes a
while ago so I don'
--- On Wed, 11/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >My definition of pain is negative reinforcement in a system that learns.
>
> IMO, pain is more like data with the potential to cause disorder in
> hard-wired algorithms. I'm not saying this fully covers it, but it's
> IMO already o
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>>Trent Waddington wrote:
>>Apparently, it was Einstein who said that if you can't explain it to
>>your grandmother then you don't understand it.
>
> That was Richard Feynman
When? I don't really know who said it.. but every
>Matt Mahoney wrote:
>Autobliss...
Imagine that there is another human language which is the same as
English, just the pain/pleasure related words have the opposite
meaning. Then consider what that would mean for your Autobliss.
>My definition of pain is negative reinforcement in a system that learns.
or when people are convinced
that they don't have free will.
= = = = =
BAH! I should have quit answering you long ago. No more.
- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 7:58 PM
Subject: Re: Definition of pain (was
<[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 18, 2008 6:26 PM
Subject: Definition of pain (was Re: FW: [agi] A paper that actually does
solve the problem of consciousness--correction)
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
Autobliss has no grounding, no internal feedback, and no
volition. By what definitions does it feel pain?
PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does
solve the problem of consciousness--correction)
To: agi@v2.listbox.com
Date: Tuesday, November 18, 2008, 6:29 PM
On Tue, Nov 18, 2008 at 6:26 PM, Mat
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Clearly, this can be done, and has largely been done already ... though
> cutting and pasting or summarizing the relevant literature in emails would
> not be a productive use of time
Apparently, it was Einstein who said that if you can't explain it to
your grandmother then you don't understand it.
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Autobliss has no grounding, no internal feedback, and no
> > volition. By what definitions does it feel pain?
>
> Now you are making up new rules to decide
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
> Autobliss has no grounding, no internal feedback, and no
> volition. By what definitions does it feel pain?
Now you are making up new rules to decide that autobliss doesn't feel pain. My
definition of pain is negative reinforcement in a system that learns.
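[The definition debated above — "negative reinforcement in a system that learns" — can be sketched in a few lines of Python. This is a hypothetical toy for illustration only, not the actual Autobliss program, whose code does not appear in this thread; the names TinyLearner, respond, and reinforce are invented here.]

```python
import random

class TinyLearner:
    """A minimal learner: it picks a 0/1 response to a two-bit input,
    and negative reinforcement (a penalty) makes the punished response
    less likely on the next encounter with the same input."""

    def __init__(self):
        # One weight per (input pair, response); start indifferent.
        self.w = {(a, b, r): 0.0
                  for a in (0, 1) for b in (0, 1) for r in (0, 1)}

    def respond(self, a, b):
        # Prefer the response with the higher weight; break ties randomly.
        w0, w1 = self.w[(a, b, 0)], self.w[(a, b, 1)]
        if w0 == w1:
            return random.randint(0, 1)
        return 0 if w0 > w1 else 1

    def reinforce(self, a, b, r, reward):
        # A negative reward ("pain") lowers the weight of response r,
        # so the behavior that produced it becomes less likely.
        self.w[(a, b, r)] += reward

# Train it toward AND by punishing wrong answers and rewarding right ones.
learner = TinyLearner()
for _ in range(200):
    a, b = random.randint(0, 1), random.randint(0, 1)
    r = learner.respond(a, b)
    correct = a & b
    learner.reinforce(a, b, r, 1.0 if r == correct else -1.0)

print([learner.respond(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

Under Matt's definition this system "feels pain" whenever it receives a negative reward, which is exactly the property Mark disputes: the sketch has no grounding or volition, only a weight update.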
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 18, 2008 5:05 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
> > I mean that people are free to decide if others feel pain.
>
> Wow! You are one sick puppy, dude. Personally, you have
> just hit my "Do not bother debating with" list.
>
> You can "decide" anything you like -- but that
> doesn't
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Monday, November 17, 2008 4:44 PM
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
First, it is not clear "people
are free to decide what makes pain "real"," at least subjectively real.
> From: Trent Waddington [mailto:[EMAIL PROTECTED]
>
> On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> > I mean that people are free to decide if others feel pain. For
> example, a scientist may decide that a mouse does not feel pain when it
> is stuck in the eye with
,
simultaneity, and meaning.
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 8:46 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter
Trent Waddington [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 7:36 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> I am talking
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
>I think a good enough definition
>to get started with is that which we humans feel our minds are directly aware
>of, including awareness of senses, emotions, perceptions, and thoughts.
You are describing episodic memory, the ability to re
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> I am talking about the type of awareness that we humans have when we say we
> are "conscious" of something.
You must talk to different humans than I do. I've not had anyone use the
word "conscious" around me in decades.. and usu
but it is
certainly possible.
In fifty years, humankind will probably know for sure.
Ed Porter
-Original Message-
From: Trent Waddington [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve t
[so who's near Berkeley to report back?]:
UC Berkeley Cognitive Science Students Association presents:
"Pain and the Brain"
Wednesday, November 19th
5101 Tolman Hall
6 pm - 8 pm
UCSF neuroscientist Dr. Howard Fields and Berkeley philosopher John Searle
represent some of the most knowl
On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> I think a good enough definition to get started with is that which we humans
> feel our minds are directly aware of, including awareness of senses,
> emotions, perceptions, and thoughts. (This would include much of what
> Rich
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
Before you can start searching for consciousness, you need to describe
precisely what you are looking for.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
Porter
>
> -Original Message-----
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 17, 2008 4:45 PM
> To: agi@v2.listbox.com
> Subject: RE: FW: [agi] A paper that actually does solve the
> problem of
> consciousness--correction
>
> --- On M
--- On Mon, 11/17/08, Eric Burton <[EMAIL PROTECTED]> wrote:
> There are procedures in place for experimenting on humans. And the
> biologies of people and animals are orthogonal! Much of this will be
> simulated soon
When we start simulating people, there will be ethical debates about that. And
--- On Mon, 11/17/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > I mean that people are free to decide if others feel
> pain. For example, a scientist may decide that a mouse does
> not feel pain when it is stuck in
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 4:45 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
>First, it is not clear "people
>are free to decide what makes pain "real"," at least subjectively real.
--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Autobliss responds to pain by changing its behavior to
> make it less likely. Please explain how this is different
> from human suffering. And don't tell me its because one
> is human and the other is a simple program, because...
>
>
There are procedures in place for experimenting on humans. And the
biologies of people and animals are orthogonal! Much of this will be
simulated soon
On 11/17/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>> I mean tha
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I mean that people are free to decide if others feel pain. For example, a
> scientist may decide that a mouse does not feel pain when it is stuck in the
> eye with a needle (the standard way to draw blood) even though it s
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
>First, it is not clear "people
>are free to decide what makes pain "real"," at least
>subjectively real.
I mean that people are free to decide if others feel pain. For example, a
scientist may decide that a mouse does not feel pain when
s real for both.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Monday, November 17, 2008 2:17 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote:
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> For example, in
> fifty years, I think it is quite possible we will be able to say with some
>
--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote:
> >> No it won't, because people are free to decide what makes pain "real".
>
> What? You've got to be kidding . . . . What makes
> pain real is how the sufferer reacts to it -- not some
> abstract wishful thinking that we use to justi
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Monday, November 17, 2008 12:44 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
For example, in
fifty years, I think i
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> For example, in
> fifty years, I think it is quite possible we will be able to say with some
> confidence if certain machine intelligences we design are conscious or not,
> and whether their pain is as real as the pain of another type of