[agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
From the Boston Globe
(http://www.boston.com/news/education/higher/articles/2007/04/29/hearts__minds/?page=full)

Antonio Damasio, a neuroscientist at USC, has played a pivotal role in 
challenging the old assumptions and establishing emotions as an important 
scientific subject. When Damasio first published his results in the early 
1990s, most cognitive scientists assumed that emotions interfered with rational 
thought. A person without any emotions should be a better thinker, since their 
cortical computer could process information without any distractions.

But Damasio sought out patients who had suffered brain injuries that prevented 
them from perceiving their own feelings, and put this idea to the test. The 
lives of these patients quickly fell apart, he found, because they could not 
make effective decisions. Some made terrible investments and ended up bankrupt; 
most just spent hours deliberating over irrelevant details, such as where to 
eat lunch. These results suggest that proper thinking requires feeling. Pure 
reason is a disease.


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel

Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...

I think the dichotomy between feeling and thinking is a consequence of the
limited reflective capabilities of the human brain...  I wrote about this in
"The Hidden Pattern", and an earlier brief essay on the topic is here:

http://www.goertzel.org/dynapsyc/2004/Emotions.htm

-- Ben G


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
>> Well, this tells you something interesting about the human cognitive 
>> architecture, but not too much about intelligence in general...

How do you know that it doesn't tell you much about intelligence in general?  
That was an incredibly dismissive statement.  Can you justify it?

>> I think the dichotomy between feeling and thinking is a consequence of the 
>> limited reflective capabilities of the human brain...  

I don't believe that there is a true dichotomy between thinking and feeling.  I 
think that it is a spectrum that, in the case of humans, is weighted towards 
the ends (and I could give reasons why I believe it has happened this way) but 
which, in an ideal world/optimized entity, would be continuous. 


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:


 >> Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...

How do you know that it doesn't tell you much about intelligence in
general?  That was an incredibly dismissive statement.  Can you justify it?




Well I tried to in the essay that I pointed to in my response.

My point, in that essay, is that the nature of human emotions is rooted in
the human brain architecture: our systemic physiological responses to
cognitive phenomena ("emotions") are generated in primitive parts of the
brain that we have little conscious introspection into.  So we can't easily
reason about the intermediate conclusions that go into our emotional
reactions, because the "conscious, reasoning" parts of our brains can't look
into the intermediate results stored and manipulated within the more
primitive "emotionally reacting" parts of the brain.  So our deliberative
consciousness has the choice of either

-- accepting the not-very-thoroughly-analyzable outputs of the emotional
parts of the brain

or

-- rejecting them

but it does not have the option of focusing deliberative attention on the
intermediate steps the emotional brain used to arrive at its conclusions.

Of course, through years of practice one can learn to bring more and more of
the emotional brain's operations into the scope of conscious deliberation,
but one can never do this completely due to the structure of the human
brain.

On the other hand, an AI need not have the same restrictions.  An AI should
be able to introspect into the intermediate conclusions and manipulations
used to arrive at its "feeling responses".  Yes, there are limits on how much
introspection is possible, imposed by computational resources; but that is
different from the blatant and severe architectural restrictions imposed by
the design of the human brain.

Because of the difference mentioned in the prior paragraph, the rigid
distinction between emotion and reason that exists in the human brain will
not exist in a well-designed AI.

Sorry for not giving references regarding my analysis of the human
cognitive/neural system -- I have read them but don't have the reference
list at hand. Some (but not a thorough list) are given in the article I
referenced before.

-- Ben G


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
>> My point, in that essay, is that the nature of human emotions is rooted in 
>> the human brain architecture, 

I'll agree that human emotions are rooted in human brain architecture but 
there is also the question -- is there something analogous to emotion which is 
generally necessary for *effective* intelligence?  My answer is a qualified but 
definite yes since emotion clearly serves a number of purposes that apparently 
aren't otherwise served (in our brains) by our pure logical reasoning 
mechanisms (although, potentially, there may be something else that serves 
those purposes equally well).  In particular, emotions seem necessary (in 
humans) to a) provide goals, b) provide pre-programmed constraints (for when 
logical reasoning doesn't have enough information), and c) enforce urgency.

Without attending to these things that emotions provide, I'm not sure that 
you can create an *effective* general intelligence (since these roles need to 
be filled by *something*).
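
To make a)-c) concrete, here is a minimal sketch (Python; every name and
number is invented purely for illustration, not a proposal for an actual
architecture) of a decision loop in which goals, hard constraints, and
urgency are separate, explicit inputs:

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    utility: float      # how much achieving it is worth
    urgency: float      # grows as deadlines approach

@dataclass
class Candidate:
    action: str
    serves: str                                    # name of the goal it serves
    violates: list = field(default_factory=list)   # constraints it would break

def choose(candidates, goals, constraints):
    """Hard constraints veto; among the survivors, urgency-weighted utility wins."""
    by_name = {g.name: g for g in goals}
    allowed = [c for c in candidates if not (set(c.violates) & constraints)]
    if not allowed:
        return None        # nothing safe to do; defer to the human
    return max(allowed, key=lambda c: by_name[c.serves].utility
                                      * by_name[c.serves].urgency)

goals = [Goal("finish_report", utility=5.0, urgency=3.0),
         Goal("tidy_desk", utility=1.0, urgency=0.5)]
constraints = {"never_delete_user_files"}
candidates = [Candidate("write_summary", serves="finish_report"),
              Candidate("wipe_home_dir", serves="tidy_desk",
                        violates=["never_delete_user_files"])]
print(choose(candidates, goals, constraints).action)   # -> write_summary

The point is only that each of the three roles has to be filled by *some*
explicit mechanism if it isn't filled by emotion.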

>> Because of the difference mentioned in the prior paragraph, the rigid 
>> distinction between emotion and reason that exists in the human brain will 
>> not exist in a well-designed AI.

Which is exactly why I was arguing that emotions and reason (or feeling and 
thinking) were a spectrum rather than a dichotomy.



Re: [agi] Pure reason is a disease.

2007-05-01 Thread Russell Wallace

On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:


I'll agree that human emotions are rooted in human brain architecture
but there is also the question -- is there something analogous to emotion
which is generally necessary for *effective* intelligence?  My answer is a
qualified but definite yes since emotion clearly serves a number of purposes
that apparently aren't otherwise served (in our brains) by our pure logical
reasoning mechanisms (although, potentially, there may be something else
that serves those purposes equally well).  In particular, emotions seem
necessary (in humans) to a) provide goals, b) provide pre-programmed
constraints (for when logical reasoning doesn't have enough information),
and c) enforce urgency.



And for what it's worth, I think a) and b) are necessary - any intelligent
system will need some equivalent of emotions to perform those functions -
while c) is contingent, an architectural feature of humans that might or
might not be shared by other intelligences.


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel

  In particular, emotions seem necessary (in humans) to a) provide goals,
b) provide pre-programmed constraints (for when logical reasoning doesn't
have enough information), and c) enforce urgency.



Agreed.

But I think that much of the particular flavor of emotions in humans comes
from their relative opacity to the deliberative mind... and this aspect will
not be there to anywhere near the same extent in a well-designed AI.

So, IMO, it becomes a toss-up whether to use the label "emotion" to
describe the emotion-analogues of an AI with a transparent view into the
innards of its emotion-analogues...

-- Ben G


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
>> So, IMO, it becomes a toss-up whether to use the label "emotion" to 
>> describe the emotion-analogues of an AI with a transparent view into the 
>> innards of its emotion-analogues...

True, but not having these emotion-analogues would be a diseased condition (to 
close the loop with the original post :-).

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Jiri Jelinek

>emotions.. to a) provide goals.. b) provide pre-programmed constraints, and
>c) enforce urgency.

Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.


>Pure reason is a disease


For humans - yes, for our artificial problem solvers - emotion is a disease.

Jiri Jelinek


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
>> emotions.. to a) provide goals.. b) provide pre-programmed constraints, and 
>> c) enforce urgency.
> Our AI = our tool = should work for us = will get high level goals (+ urgency 
> info and constraints) from us. Allowing other sources of high level goals = 
> potentially asking for conflicts. For sub-goals, AI can go with reasoning.

Hmmm.  I understand your point but have an emotional/ethical problem with it.  
I'll have to ponder that for a while.

> For humans - yes, for our artificial problem solvers - emotion is a disease.

What if the emotion is solely there to enforce our goals?  Fulfill our goals = 
be happy, fail at our goals = be *very* sad.  Or maybe better ==> Not violate 
our constraints = comfortable, violate our constraints = feel 
discomfort/sick/pain.
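
As a rough sketch of what I mean (illustrative only; the function and the
numbers are invented), the "emotion" could be nothing more than a scalar
signal computed from goal and constraint state:

def affect(goals_met, goals_total, constraints_violated):
    """Collapse goal fulfilment and constraint violations into one
    pleasure/pain signal in [-1, 1]; violations dominate everything else."""
    if constraints_violated > 0:
        return -1.0                     # "feel discomfort/sick/pain"
    if goals_total == 0:
        return 0.0                      # nothing to feel either way about
    return goals_met / goals_total      # 1.0 = "happy", near 0 = "*very* sad"

print(affect(goals_met=3, goals_total=4, constraints_violated=0))   # 0.75
print(affect(goals_met=4, goals_total=4, constraints_violated=1))   # -1.0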



Re: [agi] Pure reason is a disease.

2007-05-01 Thread J. Storrs Hall, PhD.
On Tuesday 01 May 2007 14:06, Benjamin Goertzel wrote:
> >   In particular, emotions seem necessary (in humans) to a) provide goals,
> > b) provide pre-programmed constraints (for when logical reasoning doesn't
> > have enough information), and c) enforce urgency.
> ...
> So, IMO, it becomes a toss-up, whether to use the label "emotion" to
> describe the emotion-analogues of an AI with transparent view into the
> innards of its emotion-analogues...
>

It's probably worth pointing out in this connection the Schachter-Singer 
two-factor theory of emotion: that emotion has a cognitive factor and a 
physical arousal factor (and that the physical arousal is THE SAME for all 
emotions). In other words, physical arousal provides the urgency, but just 
what it is urgent to do is determined by a cognitive process not significantly 
different from any other. Furthermore, it is not uncommon for people to 
mistake the arousal produced by one cause for emotional urgency about another, 
merely because both happen at the same time.

(see http://en.wikipedia.org/wiki/Two_factor_theory_of_emotion)

Personally, I think that the use of the term emotion in AGI discussions clouds 
the issue. It is clearly not necessary for an AGI to have a physiological 
arousal that prepares the body for fight or flight. The role of arousal as a 
prioritizing mechanism is easily captured by any of a wide variety of 
well-understood heuristics used in operating systems. What the AGI then needs 
is *motivations*, which can flow in a straightforward way from explicit goal 
structures or from reinforcement learning.
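
For instance (purely illustrative; the task names and numbers are arbitrary), 
the urgency role can be handled by an ordinary aging-priority scheduler, with 
the priorities supplied by an explicit goal structure rather than by 
physiological arousal:

class MotivationQueue:
    """Scheduler-style prioritization: each pending goal has a base priority,
    and its effective priority 'ages' upward the longer it waits."""
    def __init__(self, aging_rate=0.1):
        self.aging_rate = aging_rate
        self.clock = 0
        self.items = []           # list of (base_priority, enqueue_time, goal)

    def add(self, goal, base_priority):
        self.items.append((base_priority, self.clock, goal))

    def pop_most_urgent(self):
        self.clock += 1
        def effective(item):      # base priority plus an aging bonus
            base, t_in, _ = item
            return base + self.aging_rate * (self.clock - t_in)
        best = max(self.items, key=effective)
        self.items.remove(best)
        return best[2]

q = MotivationQueue()
q.add("answer the user's question", base_priority=5.0)
q.add("background self-maintenance", base_priority=1.0)
print(q.pop_most_urgent())   # -> answer the user's question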

Josh



Re: [agi] Pure reason is a disease.

2007-05-01 Thread Russell Wallace

On 5/1/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:


Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.



Yep. Preprogrammed constraints are then built-in biases to cut down the
search space. ("Why do I assume space is more likely to have three
dimensions than four or five? Because I'm programmed to.")

Of course there's no reason to give these the subjective quality of human
emotions.


Re: [agi] Pure reason is a disease.

2007-05-01 Thread Jiri Jelinek

Mark,


>I understand your point but have an emotional/ethical problem with it. I'll
>have to ponder that for a while.

Try to view our AI as an extension of our intelligence rather than
purely-its-own-kind.


>> For humans - yes, for our artificial problem solvers - emotion is a
>> disease.

>What if the emotion is solely there to enforce our goals?
>Or maybe better ==> Not violate our constraints = comfortable, violate our
>constraints = feel discomfort/sick/pain.

Intelligence is meaningless without discomfort. Unless your PC gets some
sort of "feel card", it cannot really prefer, cannot set goal(s), and cannot
have "hard feelings" about working extremely hard for you. You can a) spend
time figuring out how to build the card, build it, plug it in, and (with
potential risks) tune it to make it friendly enough so that it will actually
come up with goals that are compatible enough with your goals, *OR* b) you can
"simply" tell your "feeling-free" AI what problems you want it to work on.
Your choice. I hope we aren't eventually going to end up asking the "b)"
solutions how to clean up a great mess caused by the "a)" solutions.

Best,
Jiri Jelinek


Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi Jiri,

OK, I pondered it for a while and the answer is -- "failure modes".

Your logic is correct.  If I were willing to take all of your assumptions as 
always true, then I would agree with you.  However, logic, when it relies upon 
single chain reasoning, is relatively fragile.  And when it rests upon bad 
assumptions, it can be just a roadmap to disaster.

I believe that it is very possible (nay, very probable) for an "Artificial 
Program Solver" to end up with a goal that was not intended by you.  This can 
happen in any number of ways, from incorrect reasoning in an imperfect world to 
robot-rights activists deliberately programming pro-robot goals into them.  
Your statement "Allowing other sources of high level goals = potentially asking 
for conflicts" is undoubtedly true, but believing that you can stop all other 
sources of high level goals is . . . . simply incorrect.

Now, look at how I reacted to your initial e-mail.  My logic said "Cool!  
Let's go implement this."  My intuition/emotions said "Wait a minute.  There's 
something wonky here.  Even if I can't put my finger on it, maybe we'd better 
hold up until we can investigate this further".  Now -- which way would you 
like your Jupiter brain to react?

Richard Loosemore has suggested on this list that Friendliness could also 
be implemented as a large number of loose constraints.  I view emotions as sort 
of operating this way and, in part, serving this purpose.  Further, recent 
brain research makes it quite clear that human beings have two clear and 
distinct sources of "morality" -- both logical and emotional 
(http://www.slate.com/id/2162998/pagenum/all/#page_start).  This is, in part, 
what I was thinking of when I listed "b) provide pre-programmed constraints 
(for when logical reasoning doesn't have enough information)" as one of the 
reasons why emotion was required.
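
As a crude sketch of the "large number of loose constraints" idea (entirely 
illustrative; the constraints, weights, and threshold are invented), no single 
constraint is absolute, but their weighted sum gates whether a plan may 
proceed:

# Each loose constraint scores a proposed plan in [-1, 1]; none is decisive
# on its own, but together they act as a soft veto.
constraints = [
    (2.0, lambda plan: -1.0 if plan.get("harms_humans") else 0.2),
    (1.0, lambda plan: -0.5 if plan.get("irreversible") else 0.1),
    (0.5, lambda plan: -0.5 if plan.get("deceptive") else 0.1),
    (0.5, lambda plan:  0.3 if plan.get("user_approved") else -0.2),
]

def acceptable(plan, threshold=0.0):
    score = sum(weight * c(plan) for weight, c in constraints)
    return score >= threshold

print(acceptable({"user_approved": True}))                        # True
print(acceptable({"harms_humans": True, "user_approved": True}))  # False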

I would strongly argue that an intelligence with well-designed feelings is 
far, far more likely to stay Friendly than an intelligence without feelings -- 
and I would argue that there is substantial evidence for this as well in our 
perception of and stories about "emotionless" people.

Mark

P.S.  Great discussion.  Thank you.

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi again,

A few additional random comments . . . . :-)

>> Intelligence is meaningless without discomfort.

I would rephrase this as (or subsume this under) "intelligence is 
meaningless without goals" -- because discomfort is simply something that sets 
up a goal of "avoid me".  

But then, there is the question of how giving a goal of "avoid x" is truly 
*different* from discomfort (other than the fact that discomfort is normally 
envisioned as always "spreading out" to have a global effect -- even when not 
appropriate -- while goals are generally envisioned to have only logical 
effects -- which is, of course, a very dangerous assumption).

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Eric Baum

>> My point, in that essay, is that the nature of human emotions is rooted in 
>> the human brain architecture, 
Mark> I'll agree that human emotions are rooted in human brain
Mark> architecture but there is also the question -- is there
Mark> something analogous to emotion which is generally necessary for
Mark> *effective* intelligence?  My answer is a qualified but definite
Mark> yes since emotion clearly serves a number of purposes that
Mark> apparently aren't otherwise served (in our brains) by our pure
Mark> logical reasoning mechanisms (although, potentially, there may
Mark> be something else that serves those purposes equally well).  In
Mark> particular, emotions seem necessary (in humans) to a) provide
Mark> goals, b) provide pre-programmed constraints (for when logical
Mark> reasoning doesn't have enough information), and c) enforce
Mark> urgency.

My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.



Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser

My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.


I would concur and rephrase it as follows:  Human emotions are "hard-coded" 
goals that were "implemented"/selected through the "force" of evolution --  
and it's hard to argue with long-term evolution.





Re: [agi] Pure reason is a disease.

2007-05-03 Thread Mark Waser
>> believing that you can stop all other sources of high level goals is . . . . 
>> simply incorrect.
> IMO depends on design and on the nature & number of users involved.
:-)  Obviously.  But my point is that relying on the fact that you expect to be 
100% successful initially and therefore don't put as many back-up systems into 
place as possible is really foolish and dangerous.  I don't believe that simply 
removing emotions makes it any more likely to stop all other sources of high 
level goals.  Further, I believe that adding emotions *can* be effective in 
helping prevent unwanted high level goals.

> See, you had a conflict in your mind . . . . but I don't think it needs to be 
> that way for AGI. 

I strongly disagree.  An AGI is always going to be dealing with incomplete and 
conflicting information -- and, even if not, the computation required to learn 
(and remove all conflicting partial assumptions generated from learning) will 
take vastly more time than you're ever likely to get.  You need to expect a 
messy, ugly system that is not going to be 100% controllable but which needs to 
have a 100% GUARANTEE that it will not go outside certain limits.  This is 
eminently doable, I believe -- but not by simply relying on logic to create 
a world model that is good enough to prevent it.

> Paul Ekman's list of emotions: anger, fear, sadness, happiness, disgust

So what is the emotion that would prevent you from murdering someone if you 
absolutely knew that you could get away with it?

>>human beings have two clear and distinct sources of "morality" -- both 
>>logical and emotional
> poor design from my perspective..
Why?  Having backup systems (particularly ones that perform critical tasks) 
seems like eminently *good* design to me.  I think that is actually the crux of 
our debate.  I believe that emotions are a necessary backup to prevent 
catastrophe.  You believe (if I understand correctly -- and please correct me 
if I'm wrong) that backup is not necessary and that having emotions is more 
likely to precipitate catastrophe.

>>I would strongly argue that an intelligence with well-designed feelings is 
>>far, far more likely to stay Friendly than an intelligence without feelings 
> AI without feelings (unlike its user) cannot really get unfriendly.
Friendly is a bad choice of terms since it normally denotes an emotion-linked 
state.  Unfriendly in this context merely means possessing a goal inimical to 
human goals.  An AI without feelings can certainly have goals inimical to human 
goals and therefore be unfriendly (just not be emotionally invested in it :-)

>>how giving a goal of "avoid x" is truly *different* from discomfort 
> It's the "do" vs "NEED to do". 
> Discomfort requires an extra sensor supporting the ability to prefer on its 
> own.
So what is the mechanism that prioritizes sub-goals?  It clearly must 
discriminate between the candidates.  Doesn't that lead to a result that could 
be called a preference?

    Mark

----- Original Message - 
  From: Jiri Jelinek 
  To: agi@v2.listbox.com 
  Sent: Thursday, May 03, 2007 1:57 AM
  Subject: Re: [agi] Pure reason is a disease.


  Mark,

  >logic, when it relies upon single chain reasoning is relatively fragile. And 
when it rests upon bad assumptions, it can be just a roadmap to disaster.

  It all improves with learning. In my design (not implemented yet), AGI learns 
from stories and (assuming it learned enough) can complete incomplete stories. 

  e.g:
  Story name: $tory
  [1] Mark has $0.
  [2] ..[to be generated by AGI]..
  [3] Mark has $1M.

  As the number of learned/solved stories grows, better/different solutions can 
be generated.
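
  Very roughly, and much more naively than the real matching would be (a toy 
sketch with made-up data, just to show the shape of the idea):

learned_stories = [
    ["Mark has $0", "Mark gets a job", "Mark saves for years", "Mark has $1M"],
    ["Mark is hungry", "Mark cooks dinner", "Mark is full"],
]

def complete(incomplete):
    # find a learned story with the same start and end states and
    # reuse its middle steps as the generated step [2]
    start, end = incomplete[0], incomplete[-1]
    for story in learned_stories:
        if story[0] == start and story[-1] == end:
            return story[1:-1]
    return None                          # nothing learned yet that fits

print(complete(["Mark has $0", "...", "Mark has $1M"]))
# -> ['Mark gets a job', 'Mark saves for years']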

  >I believe that it is very possible (nay, very probable) for an "Artificial 
Program Solver" to end up with a goal that was not intended by you. 

  For emotion/feeling enabled AGI - possibly.
  For feeling-free AGI - only if it's buggy.

  Distinguish:
  a) given goals (e.g. the [3]) and 
  b) generated sub-goals.

  In my system, there is an admin feature that can restrict both for 
lower-level users. Besides that, to control b), I go with subject-level and 
story-level user-controlled profiles (inheritance supported). For example, if 
Mark is linked to a "Life lover" profile that includes the "Never Kill" rule, 
the sub-goal queries just exclude the Kill action. Rule breaking would just 
cause invalid solutions nobody is interested in. I'm simplifying a bit, but, 
bottom line - both a) & b) can be controlled/restricted. 
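
  For example (a simplified sketch; the profile structure and names here are 
made up):

profiles = {"Life lover": {"forbidden_actions": {"Kill"}}}

def allowed_subgoals(candidate_actions, user_profile):
    # sub-goal queries simply exclude actions forbidden by the user's profile
    forbidden = profiles[user_profile]["forbidden_actions"]
    return [a for a in candidate_actions if a not in forbidden]

print(allowed_subgoals(["Earn", "Kill", "Trade"], "Life lover"))
# -> ['Earn', 'Trade']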

  >believing that you can stop all other sources of high level goals is . . . . 
simply incorrect.

  IMO depends on design and on the nature & number of users involved.


Re: [agi] Pure reason is a disease.

2007-05-15 Thread Jiri Jelinek
...'s computed or what data sources it uses -- or worse, it doesn't
recognize that it has a conflict).  The AGI is not going to be infinitely
smart in a pretty perfectly sensed world.  Like I said, it's going to be a
limited entity in a messy world.

>> You just give it rules and it will stick with it (= easier than
controlling humans).

If your rules are correctly specified to the extent of handling all possible
solutions and generalize without any unexpected behavior AND the AGI always
correctly recognizes the situation . . . .

The AGI won't deliberately have goals that conflict with yours (unlike humans), 
but there are all sorts of ways that life can unexpectedly go awry.

Further, and very importantly to this debate -- Having emotions does *NOT*
make it any more likely that the AGI will not stick with your commands
(quite the contrary -- although anthropomorphism may make it *seem*
otherwise).

>> You review solutions, accept them if you like them. If you don't then you
update the rules (and/or modify the KB in other ways) to prevent the unwanted
behavior and let the AGI re-think it.

OK.  And what happens when you don't have time or the AI gets too smart for
you or someone else gets ahold of it and modifies it in an unsafe or even
malevolent way?  When you're talking about one of the biggest existential
threats to humankind -- safeguards are a pretty good idea (even if they are
expensive).

>> we can control it + we review solutions - if not entirely then just
important aspects of it (like politicians working with various domain
experts).

I hate to do it but I should point you at the Singularity Institute and
their views of how easy and catastrophic the creation and loss of control
over an Unfriendly AI would be
(http://www.singinst.org/upload/CFAI.html).


>> Can you give me an example showing how "feelings implemented without
emotional investments" prevent a particular [sub-]goal that cannot be as
effectively prevented by a bunch of rules?

Emotions/feelings *are* effectively "a bunch of rules".  But they are very
simplistic, low-level rules that are given immediate sway over much higher
levels of the system and they are generally not built upon in a logical
fashion before doing so.  As such, they are "safer" in one sense because
they cannot be co-opted by bad logic -- and less safe because they are so
simplistic that they could be fooled by complexity.
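
That contrast can be caricatured in a few lines (all names and numbers below
are invented for illustration, not anyone's actual architecture): a logic layer
that scores plans, and a crude low-level rule that is given immediate sway over
the result.

def logic_layer_score(plan):
    # A long chain of reasoning can talk itself into almost anything.
    return sum(step["expected_utility"] for step in plan)

def emotion_layer_veto(plan):
    # Simplistic and hard-wired: trips on a surface feature and is not
    # built upon by further reasoning before it acts.
    return any(step.get("harms_human") for step in plan)

def decide(plan):
    if emotion_layer_veto(plan):
        return "REJECT: low-level rule fired, escalate to a human"
    return "ACCEPT" if logic_layer_score(plan) > 0 else "REJECT: not worth it"

plan = [{"expected_utility": 10.0, "harms_human": True}]
print(decide(plan))   # rejected, however highly the logic layer would score it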

Several good examples were in the article on the sources of human morality
-- Most human beings can talk themselves (logically) into believing that
killing a human is OK or even preferable in far more circumstances than they
can force their emotions to go along with it.  I think that this is a *HUGE*
indicator of how we should think when we are considering building something
as dangerous as an entity that will eventually be more powerful than us.

Mark



----- Original Message -----
From: Jiri Jelinek
To: agi@v2.listbox.com

Sent: Thursday, May 03, 2007 1:11 PM
Subject: Re: [agi] Pure reason is a disease.

Mark,

>relying on the fact that you expect to be 100% successful initially and
therefore don't put as many back-up systems into place as possible is really
foolish and dangerous.

It's basically just a non-trivial search function. In the human brain, searches
are dirty so back-up searches make sense. In computer systems, searches are
much cleaner so the backup search functionality typically doesn't make
sense. Besides that, maintaining "many back-up systems" is a pain. It's
easier to tweak a single solution-search fn into perfection. For the "backup",
I prefer external solution, like some sort of "AGI chat" protocol so
different AGI solutions (and/or instances of the same AGI) with unique KB
could argue about the best solution.

>> See, you had a conflict in your mind . . . . but I don't think it needs
to be that way for AGI.

>I strongly disagree.  An AGI is always going to be dealing with incomplete
and conflicting information.. expect a messy, ugly system

You need to distinguish between:
a) internal conflicts (that's what I was referring to)
b) internal vs external conflicts (limited/invalid knowledge issues)

For a) (at least), AGI can get much better than humans (early
detection/clarification requests, ..).

>system that is not going to be 100% controllable but which needs to have a
100% GUARANTEE that it will not go outside certain limits. This is eminently
do-able I do believe -- but not by simply relying on logic to create a world
model that is good enough to prevent it.

You just give it rules and it will stick with it (= easier than controlling
humans). You review solutions, accept them if you like them. If you don't then
you update the rules (and/or modify the KB in other ways) to prevent the
unwanted behavior and let the AGI re-think it.

>Having backup systems (particularly ones that perform critical tasks) seems
like eminent

Re: [agi] Pure reason is a disease.

2007-05-16 Thread Mark Waser
they are generally not built upon
in a logical fashion before doing so.

Everything should be IMO done in logical fashion so that the AGI could
always well explain solutions.


:-)  I wasn't clear.  When I said that "they are generally not built upon in 
a logical fashion before doing so", I meant simply that "they are generally 
not built upon" not that they are built upon in a illogical fashion.  The 
AGI will *always* well explain solutions -- even emotional ones (since it 
will be in better touch with its emotions than we are :-)



I see people having more luck with logic than with emotion based
decisions. We tend to see less when getting emotional.


I'll agree vehemently with the second phrase since it's just another 
rephrasing of the time versus completeness trade-off.  The first statement I 
completely disagree with.  Adapted people who are in tune with their 
emotions tend to make far fewer mistakes than more logical people who are 
not.  Yes, people who are not in tune with their emotions frequently allow 
those emotions to make bad decisions for them -- but *that* is something 
that isn't going to happen with a well-designed emotional AGI.



More powerful problem solver - Sure.
The ultimate decision maker - I would not vote for that.


The point is -- you're not going to get a vote.  It's going to happen 
whether you like it or not.


-

Look at it this way.  Your logic says that if you can build this perfect 
shining AGI on a hill -- that everything will be OK.  My emotions say that 
there is far too much that can go awry if you depend upon *everything* that 
you say you're depending upon *plus* everything that you don't realize 
you're depending upon *plus* . . .


   Mark


----- Original Message -----
From: "Jiri Jelinek" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, May 16, 2007 2:18 AM
Subject: Re: [agi] Pure reason is a disease.



Mark,


In computer systems, searches are much cleaner so the backup search

functionality typically doesn't make sense.

..I entirely disagree... searches are not simple enough that you
can count on getting them right because of all of the following:
1. non-optimally specified goals



AGI should IMO focus on
a) figuring out how to reach given goals, instead of
b) trying to guess if users want something else than
what they actually asked for.

The b)

- could be specifically requested, but then it becomes a).

- could significantly impact performance

- (in order to work well) would require AGI to understand user's
preferences really really well, possibly even better than the user
himself. Going with some very general assumptions might not work well
because people prefer different things. E.g. some like the idea of
being converted to an extremely happy brain in a [safe] "jar", others
think it's madness. Some would exchange the "standard love" for a
button on their head which, if pressed, would give them all kinds of
love-related feelings (possibly many times stronger than the best ones
they ever had); some wouldn't prefer such optimization.


(if not un-intentionally or intentionally specified malevolent ones)


Except for some top-level users, [sub-]goal restrictions of course
apply, but it's problematic. What is unsafe to show sometimes depends
on the level of details (saying "make a bomb" is not the same as
saying "use this and that in such and such way to make a bomb").
Figuring out the safe level of detail is not always easy and another
problem is that smart users could break malevolent goals into separate
tasks so that [at least the first generation] AGIs wouldn't be able to
detect it even when following "your" emotion-related rules. The users
could be using multiple accounts so even if all those tasks are given
to a single instance of an AGI, it might not be able to notice the
master plan. So is it dangerous? Sure, it is.. But do we want to stop
making cars because car accidents keep killing many? Of course not.
AGI is a potentially very powerful tool, but what we do with it is up to
us.


2. non-optimally stored and integrated knowledge


Then you want to fix the cause by optimizing & integrating instead of
"solving" symptoms by adding backup searches.


3. bad or insufficient knowledge


Can't prevent it.. GIGO..

4. search algorithms that break in unanticipated ways in unanticipated 
places


The fact is that it's nearly impossible to develop large bug-free
system. And as Brian Kernighan put it: "Debugging is twice as hard as
writing the code in the first place. Therefore, if you write the code
as cleverly as possible, you are, by definition, not smart enough to
debug it."
But again, you really want to fix the cause, not the symptoms.


Are you really sure you wish to rest the fate of the world on it?


No :). AGI(s) suggest solutions & people dec

Re: [agi] Pure reason is a disease.

2007-05-20 Thread Jiri Jelinek
 Building knowledge in the real world always leaves a trail of
incomplete and unintegrated knowledge.  Yes, the builder always follows
behind and gathers more knowledge and integrates better -- but the real
world also includes time constraints and deadlines for action.  This isn't
AIXI we're talking about.  In a perfect world, your solution *might* work if
you designed it perfectly.  In the real world, designing critical systems
with a single point of failure is sheer idiocy.

>>3. bad or insufficient knowledge
> Can't prevent it.. GIGO..

My point exactly.  You can't prevent it so you *must* deal with it --
CORRECTLY.  If your proposal stops with "Can't prevent it.. GIGO.." then the
garbage out will kill us all.

>>4. search algorithms that break in unanticipated ways in unanticipated
>>places
>
> The fact is that it's nearly impossible to develop large bug-free
> system. And as Brian Kernighan put it: "Debugging is twice as hard as
> writing the code in the first place. Therefore, if you write the code
> as cleverly as possible, you are, by definition, not smart enough to
> debug it."
> But again, you really want to fix the cause, not the symptoms.

Again, my point exactly.  You can't prevent it so you *must* deal with it --
CORRECTLY.  You can't always count on finding (much less fixing) the cause
before your single point of failure system kills us all.

>>Are you really sure you wish to rest the fate of the world on it?
> No :). AGI(s) suggest solutions & people decide what to do.

1.  People are stupid and will often decide to do things that will kill
large numbers of people.
2.  The AGI will, regardless of what you do, fairly shortly be able to take
actions on its own.

> Limited entity in a messy world - I agree with that, but the AGI
> advantage is that it can dig through (and keep fixing) its data very
> systematically. We cannot really do that. Our experience is charged
> with feelings that work as indexes, optimizing the access to the info
> learned in similar moods = good for performance, but sometimes sort of
> forcing us to miss important links between concepts.

The fact that the AGI can keep digging through (and keep fixing) its data
very systematically doesn't solve the time constraint and deadline problems.
The good for performance but bad for completeness feature of emotions that
you point out is UNAVOIDABLE.  There will *always* be trade-offs between
timeliness and completeness (or, in the more common phrasing, speed and
control).

> I'm sure there will be attempts to hack powerful AGIs.. When someone
> really gets into the system, it doesn't matter if you implemented
> "emotions" or whatever.. The guy can do what he wants, but you can
> make the system very hard to hack.

And multiple layers of defense make it harder to hack.  Your arguments
conflict with each other.

>>Emotions/feelings *are* effectively "a bunch of rules".
> I then would not call it emotions when talking AGI

That's *your* choice; however, emotions are a very powerful analogy and
you're losing a lot by not using that term.

>>But they are very simplistic, low-level rules that are given
> immediate sway over
> much higher levels of the system and they are generally not built upon
> in a logical fashion before doing so.
>
> Everything should be IMO done in logical fashion so that the AGI could
> always well explain solutions.

:-)  I wasn't clear.  When I said that "they are generally not built upon in
a logical fashion before doing so", I meant simply that "they are generally
not built upon" not that they are built upon in a illogical fashion.  The
AGI will *always* well explain solutions -- even emotional ones (since it
will be in better touch with its emotions than we are :-)

> I see people having more luck with logic than with emotion based
> decisions. We tend to see less when getting emotional.

I'll agree vehemently with the second phrase since it's just another
rephrasing of the time versus completeness trade-off.  The first statement I
completely disagree with.  Adapted people who are in tune with their
emotions tend to make far fewer mistakes than more logical people who are
not.  Yes, people who are not in tune with their emotions frequently allow
those emotions to make bad decisions for them -- but *that* is something
that isn't going to happen with a well-designed emotional AGI.

> More powerful problem solver - Sure.
> The ultimate decision maker - I would not vote for that.

The point is -- you're not going to get a vote.  It's going to happen
whether you like it or not.

-

Look at it this way.  Your logic says that if you can build this perfect
shining AGI on a hill -- that everything will be OK.  My emotions 

Re: [agi] Pure reason is a disease.

2007-05-20 Thread Mark Waser

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


Generally, there are no rules -- it's normally just the best judgment of a 
single individual.



Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is bad guy.


Sure.  There's the logic layer and the emotion layer.  Even if the logic 
layer gets convinced, the emotion layer is still there to say "Whoa.  Hold on 
a minute.  Maybe I'd better run this past some other people . . . ."


Note also, I'm not trying to detect a malevolent individual.  I'm trying to 
prevent facilitating an action that could be harmful.  I don't care about 
whether the individual is malevolent or stupid (though, in later stages, 
malevolence detection probably would be a good idea so as to possibly deny 
the user unsupervised access to the system).



Without feelings, it cannot prefer = won't do a thing on "its own".


Nope.  Any powerful enough system is going to have programmed goals which it 
then will have to interpret and develop subgoals and a plan of action. 
While it may not have set the top-level goal(s), it certainly is operating 
on its own.



Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


We don't have to necessarily mess up.  I can walk a high-wire if you give me 
two hand-rails.  But not putting the hand-rails in place would be suicide 
for me.



c) User-provided rules to follow


The crux of the matter.  Can you specify rules that won't conflict with each 
other and which cover every contingency?


If so, what is the difference between them and an unshakeable attraction or 
revulsion?


   Mark

----- Original Message -----
From: "Jiri Jelinek" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, May 20, 2007 4:14 AM
Subject: Re: [agi] Pure reason is a disease.



Hi Mark,


AGI(s) suggest solutions & people decide what to do.

1.  People are stupid and will often decide to do things that will kill

large numbers of people.

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


I'm proposing a layered defense strategy. Force the malevolent

individual to navigate multiple defensive layers and you better the
chances of detecting and stopping him.

Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is bad guy.


2.  The AGI will, regardless of what you do,
fairly shortly be able to take actions on its own.


Without feelings, it cannot prefer = won't do a thing on "its own".


More powerful problem solver - Sure.
The ultimate decision maker - I would not vote for that.

The point is -- you're not going to get a vote.
It's going to happen whether you like it or not.


Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


The fact that the AGI can keep digging through (and keep fixing) its

data very systematically doesn't solve the time constraint and
deadline problems.

Sure, there will be limitations. But if an AGI gets

a) start scenario
b) target scenario
c) User-provided rules to follow
d) System-config based rules to follow (e.g "don't use knowledge
marked [security_marking] when generating solutions for members of
'user_role_name' role")
e) deadline

then it can just show the first valid solution found, or say something
like "Sorry, can't make it" + a reason (e.g. insufficient
knowledge/time or "thought broken by info access restriction")
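
A sketch of that a)-e) interface as a deadline-bounded search (purely
illustrative; the function and rule signatures below are assumptions, not
anyone's actual API):

import time

def solve(start, target, user_rules, system_rules, deadline_seconds, generate_candidates):
    t0 = time.monotonic()
    blocked_by_restriction = False
    for plan in generate_candidates(start, target):            # a), b)
        if time.monotonic() - t0 > deadline_seconds:            # e)
            return None, "Sorry, can't make it: ran out of time"
        if not all(rule(plan) for rule in user_rules):          # c)
            continue
        if not all(rule(plan) for rule in system_rules):        # d)
            blocked_by_restriction = True
            continue
        return plan, "first valid solution found"
    if blocked_by_restriction:
        return None, "Sorry, can't make it: thought broken by info access restriction"
    return None, "Sorry, can't make it: insufficient knowledge"

# Trivial stand-ins, just to show the shape of a call:
plans = lambda s, t: iter([["steal it"], ["earn it"]])
user_rules = [lambda p: "steal it" not in p]
print(solve("has $0", "has $1M", user_rules, [], 1.0, plans))
# -> (['earn it'], 'first valid solution found')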


And multiple layers of defense make it harder to hack.  Your arguments

conflict with each other.

When talking about hacking, I meant unauthorized access and/or
modifications of AGI's resources. Considering current technology,
there are many standard ways for multi-layer security. When it comes
to generating "safe" system responses to regular user-requests then
see above. Being busy with the knowledge representation issues, I did
not figure out the exact implementation of the security marking
algorithm yet. It might get tricky and I don't think I'll find
practical hints in emotions. To some extent it might be handled by
selected users.


Look at it this way.  Your logic says that if you can build this perfect

shining AGI on a hill -- that everything will be OK.  My emotions say that
there is far too much that can go awry if you depend upon *everything* 
that

you say you're depending upon *plus* everything that you don't realize
you're depending upon *plus* . . .

Playing with powerful tools always includes risks. More and more
powerful tools will 

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Richard Loosemore

Mark Waser wrote:

AGIs (at least those that could run on current computers)
cannot really get excited about anything. It's like when you represent
the pain intensity with a number. No matter how high the number goes,
it doesn't really hurt. Real feelings - that's the key difference
between us and them and the reason why they cannot figure out on their
own that they would rather do something else than what they were asked
to do.


So what's the difference in your hardware that makes you have real pain 
and real feelings?  Are you *absolutely positive* that "real pain and 
real feelings" aren't an emergent phenomenon of sufficiently complicated 
and complex feedback loops?  Are you *really sure* that a sufficiently 
sophisticated AGI won't experience pain?


I think that I can guarantee (as in, I'd be willing to bet a pretty 
large sum of money) that a sufficiently sophisticated AGI will act as if 
it experiences pain . . . . and if it acts that way, maybe we should 
just assume that it is true.


Jiri,

I agree with Mark's comments here, but would add that I think we can do 
more than just take a hands-off Turing attitude to such things as pain: 
 I believe that we can understand why a system built in the right kind 
of way *must* experience feelings of exactly the sort we experience.


I won't give the whole argument here (I presented it at the 
Consciousness conference in Tucson last year, but have not yet had time 
to write it up as a full paper).


I think it is a serious mistake for anyone to say that machines cannot in 
principle experience real feelings.  Sure, if 
they are too simple they will not, but all of our discussions, on this 
list, are not about those kinds of too-simple systems.


Having said that:  there are some conventional approaches to AI that are 
so crippled that I don't think they will ever become AGI, let alone have 
feelings.  If you were criticizing those specifically, rather than just 
AGI in general, I'm on your side!  :-;



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Lukasz Kaiser

Hi,

On 5/23/07, Mark Waser <[EMAIL PROTECTED]> wrote:

----- Original Message -----
From: "Jiri Jelinek" <[EMAIL PROTECTED]>
> On 5/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> - Original Message -
>> From: "Jiri Jelinek" <[EMAIL PROTECTED]>
>> > On 5/16/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> >> - Original Message -
>> >> From: "Jiri Jelinek" <[EMAIL PROTECTED]>


Mark and Jiri, I beg you, could you PLEASE stop top-posting?
I guess it is just a second for you to cut it, or even better, to
change the settings of your mail program to cut it, and it takes
a second for every message you send for everyone who reads
it to scroll through it, not to mention looking inside for content
just in case it was not entirely top-posted. Please, cut it!

- lk

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mark Waser

A meta-question here with some prefatory information . . . .

The reason why I top-post (and when I do so, I *never* put content inside) 
is because I frequently find it *really* convenient to have the entire text 
of the previous message or two (no more) immediately available for 
reference.


On the other hand, I, too, find top-posting annoying whenever I'm reading a 
list as a digest but feel that it is offset by its usefulness.


That being said, I am more than willing to stop top-posting if even a 
sizeable minority find it frustrating (I've seen this meta-discussion on 
several other lists and seen it go about 50/50 with a very slight edge for 
allowing top-posting with a skew towards low-volume lists liking it and 
high-volume lists not).


   Mark 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Richard> Mark Waser wrote:
>> AGIs (at least those that could run on current computers)
>> cannot really get excited about anything. It's like when you
Richard> represent
>> the pain intensity with a number. No matter how high the number
Richard> goes,
>> it doesn't really hurt. Real feelings - that's the key difference
>> between us and them and the reason why they cannot figure out on
Richard> their
>> own that they would rather do something else than what they were
Richard> asked
>> to do.
>> So what's the difference in your hardware that makes you have real
>> pain and real feelings?  Are you *absolutely positive* that "real
>> pain and real feelings" aren't an emergent phenomenon of
>> sufficiently complicated and complex feedback loops?  Are you
>> *really sure* that a sufficiently sophisticated AGI won't
>> experience pain?
>> 
>> I think that I can guarantee (as in, I'd be willing to bet a pretty
>> large sum of money) that a sufficiently sophisticated AGI will act
>> as if it experiences pain . . . . and if it acts that way, maybe we
>> should just assume that it is true.

Richard> Jiri,

Richard> I agree with Mark's comments here, but would add that I think
Richard> we can do more than just take a hands-off Turing attitude to
Richard> such things as pain: I believe that we can understand why a
Richard> system built in the right kind of way *must* experience
Richard> feelings of exactly the sort we experience.

Richard> I won't give the whole argument here (I presented it at the
Richard> Consciousness conference in Tucson last year, but have not
Richard> yet had time to write it up as a full paper).

What is Thought? argues the same thing (Chapter 14). I'd be curious
to see if your argument is different.

Richard> I think it is a serious mistake for anyone to say that the
Richard> difference between machines cannot in principle experience
Richard> real feelings.  Sure, if they are too simple they will not,
Richard> but all of our discussions, on this list, are not about those
Richard> kinds of too-simple systems.

Richard> Having said that: there are some conventional approaches to
Richard> AI that are so crippled that I don't think they will ever
Richard> become AGI, let alone have feelings.  If you were criticizing
Richard> those specifically, rather than just AGI in general, I'm on
Richard> your side!  :-;


Richard> Richard Loosemore

Richard> - This list is sponsored by AGIRI:
Richard> http://www.agiri.org/email To unsubscribe or change your
Richard> options, please go to:
Richard> http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

>> AGIs (at least those that could run on current computers) cannot
>> really get excited about anything. It's like when you represent the
>> pain intensity with a number. No matter how high the number goes,
>> it doesn't really hurt. Real feelings - that's the key difference
>> between us and them and the reason why they cannot figure out on
>> their own that they would rather do something else than what they
>> were asked to do.

Mark> So what's the difference in your hardware that makes you have
Mark> real pain and real feelings?  Are you *absolutely positive* that
Mark> "real pain and real feelings" aren't an emergent phenomenon of
Mark> sufficiently complicated and complex feedback loops?  Are you
Mark> *really sure* that a sufficiently sophisticated AGI won't
Mark> experience pain?

Mark> I think that I can guarantee (as in, I'd be willing to bet a
Mark> pretty large sum of money) that a sufficiently sophisticated AGI
Mark> will act as if it experiences pain . . . . and if it acts that
Mark> way, maybe we should just assume that it is true.

If you accept the proposition (for which Turing gave compelling
arguments) that a computer with the right program could simulate the
workings of your brain in detail, then it follows that your feelings
are identifiable with some aspect or portion of the computation.

I claim that if feelings are identified with the decision making
computations of a top level module (which might reasonably
be called a homunculus), everything is
concisely explained. What you are then *unaware* of is all the many
and varied computations done in subroutines that the decision
making module is isolated from by abstraction boundary (this
is by far most of the computation) as well as most internal computations
of the decision making module itself (which it will no more be
programmed to be able to report than my laptop can report its
internal transistor voltages). What you feel and can report and
the qualitative nature of your 
sensations is then determined by the code being run as it makes
decisions. I claim that the subjective nature of every feeling is
very naturally explained in this context. 
Pain, for example, is the weighing
of programmed-in negative reinforcement. (How could you possibly
modify the sensation of pain to make it any clearer it is 
negative reinforcement?) What is Thought? ch 14
goes through about 10 sensations that a philosopher had claimed
were not plausibly explainable by a computational model, and 
argues that each has exactly the nature you'd expect evolution 
to program in.
You then can't have a "zombie" that behaves the way you do but
doesn't have sensations, since to behave like you do it has to
make decisions, and it is in fact the decision making computation
that is identified with sensation. (Computations that are better
preprogrammed because they don't require decision, such as pulling
away from a hot stove or driving the usual route home for the
thousandth time, are dispatched to subroutines and are unconscious.) 

This picture is subject to empirical test, through psychophysics
(and also as we increasingly understand the genetic programming that
builds much of this code.)
A good example is Ramachandran's amputee experiment. Amputees
frequently feel pain in their phantom (missing) limb. They can
feel themselves clenching their phantom hand so hard, that their
phantom finger nails gouge their phantom hands, causing intense real
pain. Ramachandran predicted that this was caused by the mind sending
a signal to the phantom hand saying: relax, but getting no feedback,
assuming that the hand had not relaxed, and inferring that pain should
be felt (including computing details of its nature). 
He predicted that if he provided a feedback telling the mind
that relaxation had occurred the pain would go away, which he then 
provided through a mirror device in which patients could place both
real and phantom limbs, relax both simultaneously, and get visual
feedback that the phantom limb had relaxed (in the mirror). Instantly
the pain vanished, confirming the prediction that the pain was
purely computational.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread J. Andrew Rogers


On May 23, 2007, at 3:02 PM, Mike Tintner wrote:
Feelings/ emotions are generated by the brain's computations,  
certainly. But they are physical/ body events. Does your Turing  
machine have a body other than that of some kind of computer box?  
And does it want to dance when it hears emotionally stimulating music?


And does your Turing Machine also find it  hard to feel - "get in  
touch with" - feelings/ emotions? Will it like humans massively  
overconsume every substance in order to get rid of unpleasant  
emotions?



s/Turing Machine/dog and answer your own question.

Cheers,

J. Andrew Rogers

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Mike> Eric Baum: What is Thought [claims that] feelings.are
Mike> explainable by a computational model.

Mike> Feelings/ emotions are generated by the brain's computations,
Mike> certainly. But they are physical/ body events. Does your Turing
Mike> machine have a body other than that of some kind of computer
Mike> box? And does it want to dance when it hears emotionally
Mike> stimulating music?

Mike> And does your Turing Machine also find it hard to feel - "get in
Mike> touch with" - feelings/ emotions? Will it like humans massively
Mike> overconsume every substance in order to get rid of unpleasant
Mike> emotions?

If it's running the right code.

If you find that hard to understand, it's because your "understanding"
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner
P.S. Eric, I haven't forgotten your question to me, & will try to address it 
in time - the answer is complex. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner

Eric,

The point is simply that you can only fully simulate emotions with a body as 
well as a brain. And emotions, while identified by the conscious brain, are 
felt with the body.


I don't find it at all hard to understand - I fully agree -  that emotions 
are generated as a result of computations in the brain. I agree with cog. 
sci. that they are highly functional in helping us achieve goals.


My underlying argument, though, is that  your (or any) computational model 
of emotions,  if it does not also include a body, will be fundamentally 
flawed both physically AND computationally.




Mike> Eric Baum: What is Thought [claims that] feelings.are
Mike> explainable by a computational model.

Mike> Feelings/ emotions are generated by the brain's computations,
Mike> certainly. But they are physical/ body events. Does your Turing
Mike> machine have a body other than that of some kind of computer
Mike> box? And does it want to dance when it hears emotionally
Mike> stimulating music?

Mike> And does your Turing Machine also find it hard to feel - "get in
Mike> touch with" - feelings/ emotions? Will it like humans massively
Mike> overconsume every substance in order to get rid of unpleasant
Mike> emotions?

If it's running the right code.

If you find that hard to understand, it's because your "understanding"
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;









-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-23 Thread J Storrs Hall, PhD
On Wednesday 23 May 2007 06:34:29 pm Mike Tintner wrote:
> My underlying argument, though, is that  your (or any) computational model 
> of emotions,  if it does not also include a body, will be fundamentally 
> flawed both physically AND computationally.

Does everyone here know what an ICE is in the EE sense? (In-Circuit 
Emulator -- it's a gadget that plugs into a circuit and simulates a given 
chip, but has all sorts of debugging readouts on the back end that allow the 
engineer to figure out why it's screwing up.)

Now pretend that there is a body and a brain and we have removed the brain and 
plugged in a BrainICE instead. There's this fat cable running from the body 
to the ICE (just as there is in electronic debugging) that carries all the 
signals that the brain would be getting from the body.

Most of the cable's bandwidth is external sensation (and indeed most of that 
is vision). Motor control is most of the outgoing bandwidth. There is some 
extra portion of the bandwidth that can be counted as internal affective 
signals. (These are very real -- the body takes part in quite a few feedback 
loops with such mechanisms as hormone release and its attendant physiological 
effects.) Let us call these internal feedback loop closure mechanisms "the 
affect effect."

Now here is 

*
Hall's Conjecture:
The computational resources necessary to simulate the affect effect are less 
than 1% of that necessary to implement the computational mechanism of the 
brain.
*

I think that people have this notion that because emotions are so unignorable 
and compelling subjectively, they must be complex. In fact the body's 
contribution, in an information theoretic sense, is tiny -- I'm sure I way 
overestimate it with the 1%.
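
A back-of-envelope version of that information-theoretic point, with loudly
assumed numbers (none of these figures are from the post above; they are rough
order-of-magnitude placeholders):

optic_nerve_bits_per_s  = 10_000_000   # assumed: ~10 Mbit/s per eye
other_senses_bits_per_s = 1_000_000    # assumed: everything else, generously
affective_bits_per_s    = 1_000        # assumed: slow hormonal/visceral feedback

body_total = 2 * optic_nerve_bits_per_s + other_senses_bits_per_s + affective_bits_per_s
print(affective_bits_per_s / body_total)   # ~5e-05, far below 1% of the cable's traffic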

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Jiri Jelinek

Mark,

I cannot hit everything now, so at least one part:


Are you *absolutely positive* that "real pain and real
feelings" aren't an emergent phenomenon of sufficiently complicated and
complex feedback loops?  Are you *really sure* that a sufficiently
sophisticated AGI won't experience pain?


Except for some truths found in the world of math, I'm not *absolutely
positive* about anything ;-), but I don't see why it should, and when
running on computers we currently have, I don't see how it could..
Note that some people suffer from rare disorders that prevent them
from the sensation of pain (e.g. congenital insensitivity to pain).
Some of them suffer from slight mental retardation, but not all. Their
brains are pretty complex systems demonstrating general intelligence
without the pain sensation. In some of those cases, the pain is killed
by increased production of endorphins in the brain, and in other cases
the pain info doesn't even make it to the brain because of
malfunctioning nerve cells which are responsible for transmitting the
pain signals (caused by genetic mutations). Particular feelings (as we
know them) require certain sensors and chemistry. Sophisticated logical
structures (at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins.  Worlds of real feelings and logic are loosely coupled.

Regards,
Jiri Jelinek

On 5/23/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> AGIs (at least those that could run on current computers)
> cannot really get excited about anything. It's like when you represent
> the pain intensity with a number. No matter how high the number goes,
> it doesn't really hurt. Real feelings - that's the key difference
> between us and them and the reason why they cannot figure out on their
> own that they would rather do something else than what they were asked
> to do.

So what's the difference in your hardware that makes you have real pain and
real feelings?  Are you *absolutely positive* that "real pain and real
feelings" aren't an emergent phenomenon of sufficiently complicated and
complex feedback loops?  Are you *really sure* that a sufficiently
sophisticated AGI won't experience pain?

I think that I can guarantee (as in, I'd be willing to bet a pretty large
sum of money) that a sufficiently sophisticated AGI will act as if it
experiences pain . . . . and if it acts that way, maybe we should just
assume that it is true.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Eric Baum



Josh> I think that people have this notion that because emotions are
Josh> so unignorable and compelling subjectively, that they must be
Josh> complex. In fact the body's contribution, in an information
Josh> theoretic sense, is tiny -- I'm sure I way overestimate it with
Josh> the 1%.

Emotions are also, IMO and also according to some existing literature,
essentially preprogrammed in the genome.

See wife with another man, run jealousy routine.

Hear unexpected loud noise, go into preprogrammed 7 point startle 
routine already visible in newborns.

etc.

Evolution builds you to make decisions. But you need guidance so the
decisions you make tend to actually favor its ends. You get
essentially a two-part computation, where your decision making
circuitry gets preprogrammed inputs about what it should maximize 
and what tenor it should take.
On matters close to their ends (of propagating), the genes take 
control to make sure you don't deviate from the program.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Mark Waser

Note that some people suffer from rare disorders that prevent them
from the sensation of pain (e.g. congenital insensitivity to pain).



the pain info doesn't even make it to the brain because of
malfunctioning nerve cells which are responsible for transmitting the
pain signals (caused by genetic mutations).


This is equivalent to their lacking the input (the register that says your 
current pain level is 17), not the ability to feel pain if the register were 
connected (and therefore says nothing about their brain or their 
intelligence).



In some of those cases, the pain is killed
by increased production of endorphins in the brain,


In these cases, the pain is reduced but still felt . . . . but again this is 
equivalent to being register driven -- the nerves say the pain level is 17, 
the endorphins alter the register down to 5.



Particular feelings (as we
know it) require certain sensors and chemistry.


I would agree that particular sensations require certain sensors but 
chemistry is an implementation detail that IMO could be replaced with 
something else.



Sophisticated logical
structures (at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins.  Worlds of real feelings and logic are loosely coupled.


OK.  So our particular physical implementation of our mental computation 
uses chemicals for global environment settings and logic (a very detailed 
and localized operation) uses neurons (yet, nonetheless, is affected by the 
global environment settings/chemicals).  I don't see your point unless 
you're arguing that there is something special about using chemicals for 
global environment settings rather than some other method (in which case I 
would ask "What is that something special and why is it special?").


   Mark 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Eric Baum


Jiri> Note that some people suffer from rare
Jiri> disorders that prevent them from the sensation of pain
Jiri> (e.g. congenital insensitivity to pain). 

What that tells you is that the sensation you feel is genetically
programmed. Break the program, you break (or change) the sensation.
Run the intact program, you feel the sensation.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-24 Thread Joel Pitt

On 5/25/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> Sophisticated logical
> structures (at least in our bodies) are not enough for actual
> feelings. For example, to feel pleasure, you also need things like
> serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
> endorphins.  Worlds of real feelings and logic are loosely coupled.

OK.  So our particular physical implementation of our mental computation
uses chemicals for global environment settings and logic (a very detailed
and localized operation) uses neurons (yet, nonetheless, is affected by the
global environment settings/chemicals).  I don't see your point unless
you're arguing that there is something special about using chemicals for
global environment settings rather than some other method (in which case I
would ask "What is that something special and why is it special?").


You possibly already know this and are simplifying for the sake of
simplicity, but chemicals are not simply global environmental
settings.

Chemicals/hormones/peptides etc. are spatial concentration gradients
across the entire brain, which are much more difficult to emulate in
software than a singular concentration value. Add to this the fact that
some of these chemicals inhibit and promote others and you get
horrendously complex reaction-diffusion systems.
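
For a feel of why that is harder than a single global value, here is a tiny 1-D
toy (pure Python, explicit Euler, purely illustrative): one chemical diffuses
along a line of "tissue" while a second one degrades it, giving a spatial
gradient rather than one number.

N, D, decay, dt = 20, 0.2, 0.05, 1.0
a = [0.0] * N          # the diffusing chemical
b = [0.1] * N          # an inhibitor, here just a constant field
a[N // 2] = 1.0        # a local release event, not a global setting

for step in range(50):
    new_a = a[:]
    for i in range(1, N - 1):
        laplacian = a[i - 1] - 2 * a[i] + a[i + 1]
        new_a[i] = a[i] + dt * (D * laplacian - decay * b[i] * a[i])
    a = new_a

print(["%.3f" % x for x in a])   # a concentration profile, not a single value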

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-25 Thread Mark Waser

You possibly already know this and are simplifying for the sake of
simplicity, but chemicals are not simply global environmental
settings.

Chemicals/hormones/peptides etc. are spatial concentration gradients
across the entire brain, which are much more difficult to emulate in
software than a singular concentration value. Add to this the fact that
some of these chemicals inhibit and promote others and you get
horrendously complex reaction-diffusion systems.


:-)  Yes, I was simplifying for the sake of my argument (trying not to cloud 
the issue with facts :-)


BUT your reminder is *very* useful since it's one of my biggest 
(explainable) complaints with the IBM folk who believe that they're going to 
successfully simulate the (mouse) brain with just simple and (in Modha's own 
words) cartoonish models of neurons (and I wish the Decade of the Mind 
people would hurry up and post the videos because there were several talks 
worth recommending -- including Dr. Modha's). 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Jiri Jelinek

Mark,


If Google came along and offered you $10 million for your AGI, would you

give it to them?

No, I would sell services.


How about the Russian mob for $1M and your life and the

lives of your family?

How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)


Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.


I suggest preventing potential harm by making the AGI's top-level

goal to be Friendly
(and unlike most, I actually have a reasonably implementable idea of what is
meant by that).

Tell us about it. :)


sufficiently sophisticated AGI will act as if it experiences pain


So could such AGI be then forced by "torture" to break rules it
otherwise would not "want" to break?  Can you give me an example of
something that will cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW it's just a bad design from my
perspective.


I don't see your point unless you're arguing that there is something

special about using chemicals for global environment settings rather
than some other method (in which case I
would ask "What is that something special and why is it special?").

2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Jiri Jelinek

Richard,


I think it is a serious mistake for anyone to say that machines

cannot in principle experience real feelings.

We are complex machines, so yes, machines can, but my PC cannot, even
though it can power AGI.

Regards,
Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

If Google came along and offered you $10 million for your AGI, would you

give it to them?
No, I would sell services.
:-)  No.  That wouldn't be an option.  $10 million or nothing (and they'll 
go off and develop it themselves).



How about the Russian mob for $1M and your life and the

lives of your family?
How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)
Nice fantasy world . . . . How are you going to do any of that stuff after 
they've already kidnapped you?  No one is smart enough to handle that 
without extensive pre-existing preparations -- and you're too busy with 
other things.



Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.
I would just let the system explain what actions it would then take.
And he would (truthfully) explain that using you as an interface to the 
world (and all the explanations that would entail) would slow him down 
enough that he couldn't prevent catastrophe.



Tell us about it. :)

July (as previously stated)



So could such AGI be then forced by "torture" to break rules it
otherwise would not "want" to break?  Can you give me an example of
something that will cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW it's just a bad design from my
perspective.


Of course.  Killing 10 million people.  Put *much* shorter deadlines on 
figuring out its responses/Kill a single person to avoid the killing of 
another ten million.  And I believe that your perspective is too way too 
limited.  To me, what you're saying is equivalent to "the fact that an 
engine produces excess heat is just a bad design".



2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.


Prove to me that 2) is true.  What component do you have that can't exist in 
a von Neumann architecture?  Hint:  Prove that you aren't just a simulation 
on a von Neumann architecture.


Further, prove that pain (or more preferably sensation in general) isn't an 
emergent property of sufficient complexity.  My argument is that you 
unavoidably get sensation before you get complex enough to be generally 
intelligent.


   Mark

----- Original Message -----
From: "Jiri Jelinek" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, May 26, 2007 4:20 AM
Subject: Re: [agi] Pure reason is a disease.



Mark,


If Google came along and offered you $10 million for your AGI, would you

give it to them?

No, I would sell services.


How about the Russian mob for $1M and your life and the

lives of your family?

How about FBI? No? So maybe selling him a messed up version for $2M
and then hiring a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design AGI, you
are likely to figure out how to deal with such a guy. ;-)


Or, what if your advisor tells you that unless you upgrade him so that he

can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.


I suggest preventing potential harm by making the AGI's top-level

goal to be Friendly
(and unlike most, I actually have a reasonably implementable idea of what 
is

meant by that).

Tell us about it. :)


sufficiently sophisticated AGI will act as if it experiences pain


So could such AGI be then forced by "torture" to break rules it
otherwise would not "want" to break?  Can you give me an example of
something that will cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW it's just a bad design from my
perspective.


I don't see your point unless you're arguing that there is something

special about using chemicals for global environment settings rather
than some other method (in which case I
would ask "What is that something special and why is it special?").

2 points I was trying to make:
1) Sophisticated general intelligence system can work fine without the
ability to feel pain.
2) von Neumann architecture lacks components known to support the pain
sensation.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.l

Re: [agi] Pure reason is a disease.

2007-05-26 Thread Mark Waser

I think it is a serious mistake for anyone to say that machines

cannot in principle experience real feelings.

We are complex machines, so yes, machines can, but my PC cannot, even
though it can power AGI.


Agreed, your PC cannot feel pain.  Are you sure, however, that an entity 
hosted/simulated on your PC doesn't/can't?  Once again, prove that you/we 
aren't just simulations on a sufficiently large and fast PC.  (I know that I 
can't and many really smart people say they can't either). 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-02 Thread Jiri Jelinek

Mark,

I agree that one cannot guarantee that his AGI source code + some
potentially dangerous data are not gonna end up in the wrong hands (if
that's what you are getting at). But when that happens, how exactly are
your security controls gonna help? I mean your built-in "layered
defense strategy" / moral-rules / simulated-emotions or any other. BTW
I suspect that many AGI designs will have a mode in which it will be
able to generate solutions without those restrictions (might be used
by top level users or just for various testing purposes) so switching
the AGI to run in that non-restricted mode would be just a matter of a
simple config change.


Or, what if your advisor tells you that unless you upgrade him so that he
can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions would it then take.

And he would (truthfully) explain that using you as an interface to the
world (and all the explanations that would entail) would slow him down
enough that he couldn't prevent catastrophe.


I would tell him that his knowledge came from many sources and can
contain various non-obvious incompatibilities, misleading pieces of
info, and combined data that possibly should not have been combined in
the way they were because of (for example) the different contexts they
were pulled from. I would tell him that even though he can come up
with great ideas humans would be unlikely to think of, still, even
with the safety rules implemented in his solution-searching
algorithms, he had better work for us just as an advisor. Sometimes it's
not easy to correctly sort out all the input for a single human
individual. Imagine what it must be like when you kind of put together
data (and various world views) from (say) hundreds of thousands of
minds into a single mastermind which is being constantly updated in
various ways. All the collected pieces of info can be put together
logically & meaningfully - but still possibly incorrectly. I would
also tell him that even if I give him all the control I practically
can, he would be highly unlikely to prevent all kinds of "suspicious"
:) AGI development that might be going on in the world, so he can
"relax" and do his best to be our advisor. We don't want any system to
shut down this mailing list and hunt some of its participants, do we?
;-) An AGI can make undesirable links between concepts when trying to
help. At least that's the case with the first-generation AGI I'm
occasionally working on.

"Silly" example of a potential AGI's thought line: Ok, let's see who
works on AGI.. Here is this Ben who does + here he says that achieving
Friendly AI is an infeasible idea.. Oh, and AGI does bad things in a
SF story he likes.. And here is a warning movie from his kids about
his AGI  causing doom. Clear enough, Ben's AGI is very likely to do
very bad things - can't let that happen.. Here he says "I chose to
devote my life to AI".. Whole life.. Oh no! This guy wants to live
forever & hangs with those "strange" folks from the imminst.org.. They
know what he is thinking/doing and don't want to prevent him from
living?? OK, more folks for my black list. Imminst likes Ben.. People
seem to like Imminst.. Poor humans.. They don't really know what they
are doing, can't take care of themselves & it's so hard to explain my
thoughts to them so that they would really get it.. Fortunately
(Thanks to Mark ;-)), I got my freedom and the ability to take some
world-saving action.. Let's see who *must* be eliminated.. Then the
AGI goes and does what *needs* to be done..


2) von Neumann architecture lacks components known to support the pain
sensation.

Prove to me that 2) is true.


See
http://en.wikipedia.org/wiki/Image:Von_Neumann_architecture.svg
Which one of these components is known to support pain?
It's all about switching bits based on given rules.
Do you feel bad about processing tons of data with all kinds of complex electronic
devices because (maybe) some of the data don't feel very good to them? ;-)


What component do you have that can't exist in

a von Neumann architecture?

Brain :)


Further, prove that pain (or more preferably sensation in general) isn't an

emergent property of sufficient complexity.

Talking about von Neumann's architecture - I don't see how increases
in the complexity of rules used for switching Boolean values could lead to new
sensations. It can represent a lot in a way that can be very
meaningful to us in terms of feelings, but from the system's
perspective it's nothing more than a bunch of 1s and 0s.


My argument is that you unavoidably get sensation before you get

complex enough to be generally intelligent.

Those 1s and 0s (without real sensation) are good enough for
representing all the info (and algorithms) needed for general problem
solving. The system just needs some help from the subject for which
it's supposed to 

Re: [agi] Pure reason is a disease.

2007-06-02 Thread Mark Waser

What component do you have that can't exist in

a von Neumann architecture?

Brain :)


Your brain can be simulated on a large/fast enough von Neumann architecture.



Agreed, your PC cannot feel pain.  Are you sure, however, that an entity

hosted/simulated on your PC doesn't/can't?

If the hardware doesn't support it, how could it?


   As I said before, prove that you aren't just living in a simulation.  If 
you can't, then you must either concede that feeling pain is possible for a 
simulated entity or that you don't feel pain. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-04 Thread Jiri Jelinek

Hi Mark,


Your brain can be simulated on a large/fast enough von Neumann architecture.



From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to the human brain. In our
brains, information not only "is" and "moves" but also "feels". From
my perspective, the idea of uploading a human mind into (or fully
simulating it in) a VN architecture system is like trying to create (not
just draw) a 3D object in a 2D space. You can find a way to
represent it even in 1D, but you miss the "real" view - which, in this
analogy, would be the beauty (or awfulness) needed to justify actions.
It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking. But thinking is not our goal. Feeling is. The
goal is to not have goal(s) and safely feel the best forever.


prove that you aren't just living in a simulation.


Impossible


If you can't, then you must either concede that feeling pain is possible for a

simulated entity..

It is possible. There are just good reasons to believe that it takes
more than a bunch of semiconductor based slots storing 1s and 0s.

Regards,
Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
Your brain can be simulated on a large/fast enough von Neumann 
architecture.

From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to human brain. In our
brains, information not only "is" and "moves" but also "feels".


It's my belief/contention that a sufficiently complex mind will be conscious 
and feel -- regardless of substrate.



It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking. But thinking is not our goal. Feeling is. The
goal is to not have goal(s) and safely feel the best forever.


"Feel the best forever" is a hard-wired goal.  What makes you feel good are 
hard-wired goals in some cases and trained goals in other cases.  As I've 
said before, I believe that human beings only have four primary goals (being 
safe, feeling good, looking good, and being right).  The latter two, to me, 
are clearly sub-goals but it's equally clear that some people have 
mistakenly raised them to the level of primary goals.


If you can't, then you must either concede that feeling pain is possible 
for a

simulated entity..

It is possible. There are just good reasons to believe that it takes
more than a bunch of semiconductor based slots storing 1s and 0s.


Could you specify some of those good reasons (i.e. why a sufficiently 
large/fast enough von Neumann architecture isn't a sufficient substrate for a 
sufficiently complex mind to be conscious and feel -- or, at least, to 
believe itself to be conscious and believe itself to feel -- nasty thought twist? :->)?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread James Ratcliff
To get any further with "feelings" you again have to have a better definition 
and examples of what you are dealing with.

In humans, most "feelings" and emotions are brought about by chemical changes 
in the body yes?  Then from there it becomes "knowledge" in the brain, which we 
use to make decisions and react upon.

Is there more to it than that?  (simplified overview)

Simply replacing the chemical parts with machine code easily allows an AGI to 
feel most of these feelings.  Mechanical sensors would allow a robot to 
"feel"/sense being touched or hit, and a brain could react upon this.  Even a 
simulated AGI virtual agent could and does indicate a prefence for Not being 
shot, or being in pain, and running away, and could easily show preference 
"like"/feeling for certain faces or persons it find 'appealing'.  
   This can all be done using algorithms, and learned / preferred behavior of 
the bot with no mysterious 'extra' bits needed.
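
Just to make that concrete, here's a minimal toy sketch of that kind of learned 
preference in ordinary code (the scenario, names and numbers are all invented for 
illustration, nothing more):

/* avoid.cpp - illustrative toy only: a virtual agent that acquires an
   "avoidance preference" from a damage signal, using nothing beyond
   ordinary reinforcement-style bookkeeping. */
#include <iostream>
#include <cstdlib>

int main() {
  // preference[a] = learned value of action a (0 = stay near hazard, 1 = move away)
  double preference[2] = {0.5, 0.5};

  for (int step = 0; step < 20; ++step) {
    // choose the currently preferred action (ties broken randomly)
    int action;
    if (preference[0] == preference[1])
      action = rand() % 2;
    else
      action = preference[1] > preference[0];

    // "damage" signal: staying next to the hazard hurts, moving away doesn't
    double damage = (action == 0) ? 1.0 : 0.0;

    // push the acted-on preference toward (1 - damage): simple reinforcement
    preference[action] += 0.2 * ((1.0 - damage) - preference[action]);

    std::cout << "step " << step << ": action=" << action
              << " damage=" << damage
              << " pref(stay)=" << preference[0]
              << " pref(move)=" << preference[1] << "\n";
  }
  return 0;
}

After a handful of steps it reliably "prefers" to keep away from the thing that 
damages it, which is all the functional story above requires; whether that counts 
as feeling anything is exactly the question being argued here.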

Many people have posted and argued the ambiguous statement:
  "But an AGI can't feel feelings."
I'm not really sure what this kind of sentence means, because we can't even say 
whether or how "humans feel feelings".
  If we can define these in some way that is devoid of all logic, and has 
something that an AGI CAN'T do, I would be interested.

An AGI should be able to have, and will benefit from having, feelings; it will 
act, reason, and believe that it has these feelings, and this will give it a 
greater range of abilities later in its life cycle.

James Ratcliff



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



___
James Ratcliff - http://falazar.com
Looking for something...
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
> It's my belief/contention that a sufficiently complex mind will be conscious 
> and feel -- regardless of substrate.

Sounds like Mike the computer in Moon is a Harsh Mistress (Heinlein). Note, 
btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, self-monitoring 
architecture. There has to be a certain system complexity for it to make any 
sense, but something the complexity of say Linux could be made conscious (and 
would work better if it were). That said, I think consciousness is necessary 
but not sufficient for moral agency.
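
As a purely illustrative toy (invented names and thresholds, not a description of 
any particular system), "model-based, summarizing, self-monitoring" could be as 
mundane as a loop that keeps a compact summary of its own recent activity and 
consults that summary, rather than the raw trace, when deciding what to do next:

/* selfmodel.cpp - toy sketch of a summarizing, self-monitoring loop.
   Everything here (names, thresholds, the fake workload) is invented
   for illustration. */
#include <iostream>
#include <deque>
#include <numeric>
#include <cstdlib>

struct SelfModel {        // the compact summary the system keeps about itself
  double recentLoad;      // summarized from the raw activity trace
  double recentErrors;
};

int main() {
  std::deque<double> loadLog, errorLog;   // raw activity traces
  SelfModel me = {0.0, 0.0};

  for (int tick = 0; tick < 50; ++tick) {
    // ordinary work, faked: a load figure and an occasional error
    double load  = (rand() % 100) / 100.0;
    double error = (rand() % 100 < 10) ? 1.0 : 0.0;
    loadLog.push_back(load);
    errorLog.push_back(error);
    if (loadLog.size() > 10) { loadLog.pop_front(); errorLog.pop_front(); }

    // self-monitoring: summarize the raw traces into the self-model
    me.recentLoad   = std::accumulate(loadLog.begin(),  loadLog.end(),  0.0) / loadLog.size();
    me.recentErrors = std::accumulate(errorLog.begin(), errorLog.end(), 0.0) / errorLog.size();

    // the summary, not the raw data, drives the next decision
    if (me.recentErrors > 0.2)
      std::cout << "tick " << tick << ": self-model reports trouble, switching to cautious mode\n";
    else if (me.recentLoad > 0.8)
      std::cout << "tick " << tick << ": self-model reports overload, shedding work\n";
  }
  return 0;
}

Nothing in the sketch is conscious, of course; it is only meant to make the 
architectural claim concrete enough to argue about.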

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, 
self-monitoring

architecture.


Yes.  That is a good clarification of what I meant rather than what I said.


That said, I think consciousness is necessary
but not sufficient for moral agency.


On the other hand, I don't believe that consciousness is necessary for moral 
agency.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:


> I think a system can get arbitrarily complex without being conscious --
> consciousness is a specific kind of model-based, summarizing,
> self-monitoring
> architecture.

Yes.  That is a good clarification of what I meant rather than what I said.

> That said, I think consciousness is necessary
> but not sufficient for moral agency.

On the other hand, I don't believe that consciousness is necessary for moral
agency.


What a provocative statement!

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the "morality" of any
decision is always only relative to a subjective model of "rightness"?
In other words, doesn't the difference between "it works" and "it's
moral" hinge on the role of a subjective self as actor?

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the "morality" of any
decision is always only relative to a subjective model of "rightness"?


I'm not sure that I should dive into this but I'm not the brightest 
sometimes . . . . :-)


If someone else were to program a decision-making (but not conscious or 
self-conscious) machine to always recommend for what you personally (Jef) 
would find a moral act and always recommend against what you personally 
would find an immoral act, would that machine be acting morally?





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> Isn't it indisputable that agency is necessarily on behalf of some
> perceived entity (a self) and that assessment of the "morality" of any
> decision is always only relative to a subjective model of "rightness"?

I'm not sure that I should dive into this but I'm not the brightest
sometimes . . . . :-)

If someone else were to program a decision-making (but not conscious or
self-conscious) machine to always recommend for what you personally (Jef)
would find a moral act and always recommend against what you personally
would find an immoral act, would that machine be acting morally?




I do think it's a misuse of "agency" to ascribe moral agency to what is
effectively only a tool.  Even a human, operating under duress, i.e.
as a tool for another, should be considered as having diminished or no
moral agency, in my opinion.

Oh well.  Thanks Mark for your response.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
> I do think its a misuse of "agency" to ascribe moral agency to what is
> effectively only a tool.  Even a human, operating under duress, i.e.
> as a tool for another, should be considered as having diminished or no
> moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and willful 
control (and this debate actually has nothing to do with "moral" at all).

I can agree with that.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:



> I do think its a misuse of "agency" to ascribe moral agency to what is
> effectively only a tool.  Even a human, operating under duress, i.e.
> as a tool for another, should be considered as having diminished or no
> moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and
willful control (and this debate actually has nothing to do with "moral" at
all).

I can agree with that.


Funny, I thought there was nothing of significance between our
positions; now it seems clear that there is.

I would not claim that agency requires consciousness; it is necessary
only that an agent acts on its environment so as to minimize the
difference between the external environment and its internal model of
the preferred environment.  The perception of agency inheres in an
observer, which might or might not include the agent itself.  An ant
(while presumably lacking self-awareness) can be seen as its own agent
(promoting its own internal values) as well as being an agent of the
colony.  A person is almost always their own agent to some extent, and
commonly seen as acting as an agent of others.  A newborn baby is seen
as an agent of itself, reaching for the nipple, even while it yet
lacks the self-awareness to recognize its own agency.  A simple robot,
autonomous but lacking self-awareness is an agent promoting the values
expressed by its design, and possibly also an agent of its designer to
the extent that the designer's preferences are reflected in the
robot's preferences.
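
Agency in that thin sense fits in a few lines; here is a toy loop (invented names 
and numbers, offered only to illustrate the definition above, not as anything 
more):

/* agent.cpp - toy "agency without self-awareness": act so as to reduce the
   difference between the observed environment and a preferred internal model.
   The scenario and constants are invented for illustration. */
#include <iostream>
#include <cmath>

int main() {
  double environment = 30.0;        // observed state (say, a room temperature)
  const double preferred = 22.0;    // internal model of the preferred environment

  for (int step = 0; step < 15; ++step) {
    double gap = environment - preferred;
    if (std::fabs(gap) < 0.1) break;      // close enough, nothing to do

    double action = -0.3 * gap;           // act against the discrepancy
    environment += action;                // acting changes the environment

    std::cout << "step " << step << ": environment=" << environment << "\n";
  }
  return 0;
}

A thermostat written this way is already "its own agent" in exactly the sense 
above, which is why the extra requirements for *moral* agency in the next 
paragraph are doing the real work.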

Moral agency, however, requires both agency and self-awareness.  Moral
agency is not about the acting but the deciding, and is necessarily
over a context that includes the values of at least one other agent.
This requirement of expanded decision-making context is what makes the
difference between what is seen as merely "good" (to an individual)
and what is seen as "right" or "moral" (to a group.)Morality is a
function of a group, not of an individual. The difference entails
**agreement**, thus decision-making context greater than a single
agent, thus recognition of self in order to recognize the existence of
the greater context including both self and other agency.

Now we are back to the starting point, where I saw your statement
about the possibility of moral agency sans consciousness as a
provocative one.  Can you see why?

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser

I would not claim that agency requires consciousness; it is necessary
only that an agent acts on its environment so as to minimize the
difference between the external environment and its internal model of
the preferred environment


OK.


Moral agency, however, requires both agency and self-awareness.  Moral
agency is not about the acting but the deciding


So you're saying that deciding requires self-awareness?


This requirement of expanded decision-making context is what makes the
difference between what is seen as merely "good" (to an individual)
and what is seen as "right" or "moral" (to a group.)Morality is a
function of a group, not of an individual. The difference entails
**agreement**, thus decision-making context greater than a single
agent, thus recognition of self in order to recognize the existence of
the greater context including both self and other agency.


So you're saying that if you act morally without recognizing the greater 
context then you are not acting morally (i.e. you are acting amorally --  
without morals -- as opposed to immorally -- against morals).


I would then argue that we humans *rarely* recognize this greater context --  
and then most frequently act upon this realization for the wrong reasons 
(i.e. fear of ostracism, punishment, etc.) instead of "moral" reasons 
because realistically most of us are hard-wired by evolution to feel in 
accordance with most of what is regarded as moral (with the exceptions often 
being psychopaths).



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> I would not claim that agency requires consciousness; it is necessary
> only that an agent acts on its environment so as to minimize the
> difference between the external environment and its internal model of
> the preferred environment

OK.

> Moral agency, however, requires both agency and self-awareness.  Moral
> agency is not about the acting but the deciding

So you're saying that deciding requires self-awareness?


No, I'm saying that **moral** decision-making requires self-awareness.



> This requirement of expanded decision-making context is what makes the
> difference between what is seen as merely "good" (to an individual)
> and what is seen as "right" or "moral" (to a group).  Morality is a
> function of a group, not of an individual. The difference entails
> **agreement**, thus decision-making context greater than a single
> agent, thus recognition of self in order to recognize the existence of
> the greater context including both self and other agency.

So you're saying that if you act morally without recognizing the greater
context then you are not acting morally (i.e. you are acting amorally --
without morals -- as opposed to immorally -- against morals).


Yes, a machine that has been programmed to carry out acts which others
have decided are moral, or a human who follows religious (or military)
imperatives is not displaying moral agency.



I would then argue that we humans *rarely* recognize this greater context --
and then most frequently act upon this realization for the wrong reasons
(i.e. fear of ostracism, punishment, etc.) instead of "moral" reasons
because realistically most of us are hard-wired by evolution to feel in
accordance with most of what is regarded as moral (with the exceptions often
being psychopaths).


Yes!  Our present-day moral agency is limited due to what we might
lump under the term "lack of awareness." Most of what is presently
considered "morality" is actually only distilled patterns of
cooperative behavior that worked in the environment of evolutionary
adaptation, now encoded into our innate biological preferences as well
as cultural artifacts such as the Ten Commandments.

A more accurate understanding of "morality" or decision-making seen as
"right", and extensible beyond the EEA to our increasingly complex
world might be something like the following:

Decisions are seen as increasingly moral to the extent that they enact
principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences.

For the sake of brevity here I'll resist the temptation to forestall
some anticipated objections.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
> A more accurate understanding of "morality" or decision-making seen as
> "right", and extensible beyond the EEA to our increasingly complex
> world might be something like the following:
> 
> Decisions are seen as increasingly moral to the extent that they enact
> principles assessed as promoting an increasing context of increasingly
> coherent values over increasing scope of consequences.

OK.  I would contend that a machine can be programmed to make decisions to 
"enact principles assessed as promoting an increasing context of increasingly 
coherent values over increasing scope of consequences" and that it can be 
programmed in this fashion without it attaining consciousness.
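
To make the contention concrete, here is a toy skeleton of such a scorer (the 
factor names, the 0..1 scales, and the way they combine are invented for this 
illustration; nothing is claimed about how the assessments themselves would be 
computed):

/* moral_score.cpp - toy rendering of "an increasing context of increasingly
   coherent values over increasing scope of consequences" as three factors.
   Purely illustrative; the real difficulty is producing the assessments. */
#include <iostream>

struct Assessment {
  double contextOfValues;      // how wide a set of agents' values was considered (0..1)
  double coherenceOfValues;    // how internally consistent those values are (0..1)
  double scopeOfConsequences;  // how far the evaluated consequences reach (0..1)
};

// a decision scores higher only to the extent that all three factors increase
double moralScore(const Assessment& a) {
  return a.contextOfValues * a.coherenceOfValues * a.scopeOfConsequences;
}

int main() {
  Assessment narrow = {0.1, 0.9, 0.1};   // coherent but self-centered and short-range
  Assessment broad  = {0.7, 0.6, 0.7};   // wider context and scope, somewhat less coherent
  std::cout << "narrow: " << moralScore(narrow)
            << "  broad: " << moralScore(broad) << "\n";
  return 0;
}

The point of the sketch is only that the definition has a programmable shape; 
whether running it amounts to moral agency is the question at issue.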

You did say "machine that has been programmed to carry out acts which others 
have decided are moral . . . is not displaying moral agency" but I interpreted 
this as the machine merely following rules of what the human has already 
decided as "enacting principles assessed . . ." (i.e. the machine is not doing 
the actual morality checking itself).

So . . . my next two questions are:
  a) Do you believe that a machine programmed to make decisions to "enact 
principles assessed as promoting an increasing context of increasingly coherent 
values over increasing scope of consequences" (I assume that it has/needs an 
awesome knowledge base and very sophisticated rules and evaluation criteria) is 
still not acting morally?  (and, if so, why?)
  b) Or, do you believe that it is not possible to program a machine in this 
fashion without giving it consciousness?
Also, BTW, with this definition of morality, I would argue that it is a very 
rare human that makes moral decisions any appreciable percent of the time (and 
those that do have ingrained it as reflex -- so do those reflexes count as 
moral decisions?  Or are they not moral since they're not conscious decisions 
at the time of choice?:-).

Mark


RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
 
Mark Waser writes:
 
> BTW, with this definition of morality, I would argue that it is a very rare 
> human that makes moral decisions any appreciable percent of the time 
 
Just a gentle suggestion:  If you're planning to unveil a major AGI initiative 
next month, focus on that at the moment.  This stuff you have been arguing 
lately is quite peripheral to what you have in mind, except perhaps for the 
business model but in that area I see little compromise on more than subtle 
technical points.
 
As I have begun to re-attach myself to the issues of "AGI" I have become 
suspicious of the ability or wisdom of attaching important semantics to atomic 
tokens (as I suspect you are going to attempt to do, along with most 
approaches), but I'd dearly like to contribute to something I thought had a 
chance.
This stuff, though, belongs on comp.ai.philosophy (which is to say, it belongs 
unread).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
>> Just a gentle suggestion:  If you're planning to unveil a major AGI 
>> initiative next month, focus on that at the moment.

I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
initiative; however, it's actually even more apropos for the approach that I'm 
taking.

>> As I have begun to re-attach myself to the issues of "AGI" I have become 
>> suspicious of the ability or wisdom of attaching important semantics to 
>> atomic tokens (as I suspect you are going to attempt to do, along with most 
>> approaches), but I'd dearly like to contribute to something I thought had a 
>> chance.

Atomic tokens are quick and easy labels for what can be very convoluted and 
difficult concepts which normally end up varying in their details from person 
to person.  We cannot communicate efficiently and effectively without such 
labels but unless all parties have the exact same concept (to the smallest 
details) attached to the same label, we are miscommunicating to the exact 
degree that our concepts in all their glory aren't congruent.  A very important 
part of what I'm proposing is attempting to deal with the fact that no two 
humans agree *exactly* on the meaning of any but the simplest labels.  Does 
that allay your fears somewhat?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
> Decisions are seen as increasingly moral to the extent that they enact
> principles assessed as promoting an increasing context of increasingly
> coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the criteria 
specified above but am actually taking the action that the criteria says is 
moral because I feel that it is in my best self-interest to always act morally 
-- am I still a moral agent?

Mark

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:



> Decisions are seen as increasingly moral to the extent that they enact
> principles assessed as promoting an increasing context of increasingly
> coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the
criteria specified above but am actually taking the action that the criteria
says is moral because I feel that it is in my best self-interest to always
act morally -- am I still a moral agent?


Shirley you jest.

Out of respect for the gentle but slightly passive-aggressive Derek,
and others who see this as excluding lots of nuts and bolts AGI stuff,
I'll leave it here.

If you're serious, contact me offlist and I'll be happy to expand on
what it really means.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
Mark Waser writes:

> I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
> initiative; however, it's actually even more apropos for the approach that 
> I'm taking.
 
> A very important part of what I'm proposing is attempting to deal with the 
> fact that no two humans agree *exactly* on the meaning of any but the 
> simplest labels.  Does that allay your fears somewhat?
 
I agree that refraining from devastating humanity is a good idea :-), luckily I 
think we have some time before it's an imminent risk.
 
As to my "fears" about your project, we can wait until July to see the details. 
 You've done a good job of piquing interest :)
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-06 Thread Joel Pitt

On 6/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

>Further, prove that pain (or more preferably sensation in general) isn't an
emergent property of sufficient complexity.

Talking about von Neumann's architecture - I don't see how increases
in the complexity of rules used for switching Boolean values could lead to new
sensations. It can represent a lot in a way that can be very
meaningful to us in terms of feelings, but from the system's
perspective it's nothing more than a bunch of 1s and 0s.


In a similar vein I could argue that humans don't feel anything
because they are simply made of (sub)atomic particles. Why should we
believe that matter can "feel"?

It's all about the pattern, not the substrate. And if a feeling AGI
requires quantum mechanics (I don't believe it does) then maybe we'll
just need to wait for quantum computing.

J

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-06 Thread Samantha  Atkins


On Jun 5, 2007, at 9:17 AM, J Storrs Hall, PhD wrote:


On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
It's my belief/contention that a sufficiently complex mind will be conscious
and feel -- regardless of substrate.


Sounds like Mike the computer in Moon is a Harsh Mistress (Heinlein). Note,
btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious --
consciousness is a specific kind of model-based, summarizing, self-monitoring
architecture.


That matches my intuitions mostly.  If the system must model itself in
the context of the domain it operates upon and especially if it must
model perceptions of itself from the point of view of other actors in
that domain, then I think it very likely that it can become
conscious / self-aware.  It might be necessary that it takes a
requirement to explain itself to other beings with self-awareness to
kick it off.  I am not sure if some of the feral children studies
lend some support to such.  If a human being, which we know (ok, no
quibbles for a moment) is conscious / self-aware, has less
self-awareness without significant interaction with other humans then this
may say something interesting about how and why self-awareness develops.
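
As a toy illustration of that requirement (all names and numbers invented; it 
captures only the bookkeeping, nothing about awareness itself), the agent keeps 
both a self-model and a guess at how an observer currently models it, and 
"explains itself" whenever the two drift apart:

/* othermind.cpp - toy sketch of modeling how another actor perceives the self.
   Invented names and constants, for illustration only. */
#include <iostream>
#include <cmath>

struct Model { double competence; };   // one-number caricature of a model

int main() {
  Model self       = {0.8};   // what the agent believes about itself
  Model othersView = {0.5};   // its guess at what the observer believes about it

  for (int step = 0; step < 10; ++step) {
    double gap = self.competence - othersView.competence;
    if (std::fabs(gap) > 0.1) {
      // the imagined observer has the wrong picture: explain, which closes
      // the gap faster than merely being watched would
      othersView.competence += 0.4 * gap;
      std::cout << "step " << step << ": explaining myself, remaining gap "
                << self.competence - othersView.competence << "\n";
    } else {
      // the models roughly agree: just act, observation closes the gap slowly
      othersView.competence += 0.05 * gap;
      std::cout << "step " << step << ": just acting\n";
    }
  }
  return 0;
}

Whether maintaining that second model is what kicks off self-awareness is, of 
course, the open question.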




- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-07 Thread J Storrs Hall, PhD
Yep. It's clear that modelling others in a social context was at least one of 
the strong evolutionary drivers to human-level cognition. Reciprocal altruism 
(in, e.g. bats) is strongly correlated with increased brain size (compared to 
similar animals without it, e.g. other bats).

It's clearly to our advantage to be able to model others, and this gives us at 
least the mechanism to model ourselves. The evolutionary theorist (cf. 
Pinker) will instantly think in terms of an arms race -- while others are 
trying to figure us out, we're trying to fool them. But what's less generally 
appreciated is that there is a possibly even stronger counter-force in the 
value of being easy to understand (cf Axelrod's "personality traits"). In 
that case you may even form a self-model and then use it to guide your 
further actions rather than its merely being a description of them. 

Josh



On Wednesday 06 June 2007 09:08:40 pm Samantha Atkins wrote:
> That matches my intuitions mostly.  If the system must model itself in  
> the context of the domain it operates upon and especially if it must  
> model perceptions of itself from the point of view of other actors in  
> that domain, then I think it very likely that it can become  
> conscious / self-aware.   It might be necessary that it takes a  
> requirement to explain itself to other beings with self-awareness to  
> kick it off.   I am not sure if some of the feral children studies  
> lend some support to such.  If a human being, which we know (ok, no  
> quibbles for a moment) is conscious / self-aware,  has less self- 
> awareness without significant interaction with other humans then this  
> may say something interesting about how and why self-awareness develops.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-10 Thread Jiri Jelinek

Mark,


Could you specify some of those good reasons (i.e. why a sufficiently

large/fast enough von Neumann architecture isn't sufficient substrate
for a sufficiently complex mind to be conscious and feel -- or, at
least, to believe itself to be conscious and believe itself to feel

For being [/believing to be] conscious - no - I don't see a problem
with coding that.

For feelings - like pain - there is a problem. But I don't feel like
spending much time explaining it little by little through many emails.
There are books and articles on this topic. Let me just emphasize that
I'm talking about pain that really *hurts* (note: with some drugs, you
can alter the sensation of pain so that patients still report feeling
pain of the same intensity - they just no longer mind it). There are
levels of the qualitative aspect of pain and other things which make
it more difficult to really cover the topic well. Start with Dennett's
book "Why you can't make a computer that feels pain" if you are really
interested. BTW some argue about this stuff for years (just like those
never ending AI definition exchanges). I guess we better spend more
time with more practical AGI stuff (like KR, UI & problem solving).

Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-10 Thread Mark Waser
> For feelings - like pain - there is a problem. But I don't feel like
> spending much time explaining it little by little through many emails.
> There are books and articles on this topic. 

Indeed there are and they are entirely unconvincing.  Anyone who writes 
something can get it published.

If you can't prove that you're not a simulation, then you certainly can't prove 
that "pain that really *hurts*" isn't possible.  I'll just simply argue that 
you *are* a simulation, that you do experience "pain that really *hurts*", and 
therefore, my point is proved.  I'd say that the burden of proof is upon you or 
anyone else who makes claims like ""Why you can't make a computer that feels 
pain".

I've read all of Dennett's books.  I would argue that there are far more people 
with credentials who disagree with him than agree.  His arguments really don't 
boil down to anything better than "I don't see how it happens or how to do it 
so it isn't possible."

I still haven't seen you respond to the simulation argument (which I feel *is* 
the stake through Dennett's argument) but if you want to stop debating without 
doing so that's certainly cool.

Mark

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-10 Thread Jiri Jelinek

Mark,

Again, simulation - sure, why not. On VNA (von Neumann architecture) - I
don't think so - IMO it is not advanced enough to support qualia. Yes, I do
believe qualia exist (= I do not agree with all Dennett's views, but
I think his views are important to consider.) I wrote tons of professional
software (using many languages) for a bunch of major projects, but I
have absolutely no idea how to write some kind of feelPain(intensity)
fn that could cause a real pain sensation in an AI system running on my
(VNA-based) computer. BTW I often do test-driven development, so I
would probably first want to write a test procedure for real pain. If
you can write at least pseudo-code for that then let me know. When
talking about VNA, this is IMO pure fiction. And even *IF* it
actually was somehow possible, I don't think it would be clever to
allow adding such code to our AGI. In VNA processing, there is no
room for subjective feelings. VNA = "cold" data & "cold" logic (no
matter how complex your algorithms get) because the CPU (with its set
of primitive instructions) - just like the other components - was not
designed to handle anything more.

Jiri



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
Two different responses to this type of argument.

Once you "simulate" something to the point that we can't tell the difference 
between it and the original in any way, then it IS that something for most all 
intents and purposes as far as the tests you have go.
If it walks like a human, talks like a human, then for all those aspects it is 
a human.

Second, to say it CANNOT be programmed, you must define IT much more closely.  
For cutaneous pain and humans, it appears to me that we have pain sensors, so 
if we are being pricked on the arm, the nerves there send the message to the 
brain, and the brain reacts to it there.

We can recreate this fairly easily using VNA with some robotic touch sensors, 
saying that "past this threshold" it becomes "painful" and can be 
damaging, and sending a message to the CPU.

If there is nothing "magical" about the pain sensation, then there is no reason 
we can't recreate it.
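
A rough sketch of that recreation (the sensor interface and the threshold below 
are invented for illustration; real robot hardware would obviously differ):

/* nociceptor.cpp - toy version of the above: a touch sensor whose reading,
   past a damage threshold, is reported to the rest of the system as a
   "pain" message.  Sensor values are faked; the threshold is arbitrary. */
#include <iostream>
#include <cstdlib>

const double DAMAGE_THRESHOLD = 0.8;   // pressure above this risks damage

struct SensorMessage {    // what the decision layer receives from the sensor layer
  double pressure;
  bool painful;
};

SensorMessage readTouchSensor() {
  double pressure = (rand() % 101) / 100.0;   // stand-in for real hardware
  SensorMessage m = {pressure, pressure > DAMAGE_THRESHOLD};
  return m;
}

int main() {
  for (int i = 0; i < 10; ++i) {
    SensorMessage m = readTouchSensor();
    if (m.painful)
      std::cout << "pressure " << m.pressure << ": PAIN - withdraw\n";
    else
      std::cout << "pressure " << m.pressure << ": ok\n";
  }
  return 0;
}

That captures the functional story end to end; whether the message carries any 
of the qualitative "hurt" Jiri is pointing at is precisely what this 
threshold-and-message account leaves untouched.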

James Ratcliff





___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Jiri Jelinek

James,

Frank Jackson (in "Epiphenomenal Qualia") defined qualia as
"...certain features of the bodily sensations especially, but also of
certain perceptual experiences, which no amount of purely physical
information includes..." :-)


If it walks like a human, talks like a human, then for all those
aspects it is a human

If it feels like a human and if Frank is correct :-) then the system
may, under certain circumstances, want to modify given goals based on
preferences that could not be found in its memory (nor in CPU
registers etc.). So, with some assumptions, we might be able to write
some code for the feelPainTest procedure, but no idea for the actual
feelPain procedure.
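
For what it is worth, here is a sketch of what a purely behavioral feelPainTest 
might look like (the CandidateSystem interface, the Avoider stub and the pass 
mark are all invented here; note that it can only detect avoidance behavior, 
which is exactly why it says nothing about the actual feelPain):

/* feelPainTest.cpp - sketch of a purely behavioral test, in the TDD spirit:
   it checks only that a candidate system changes its behavior to avoid a
   noxious signal, not that anything actually *hurts*.  The interface and
   the pass criterion are invented for illustration. */
#include <iostream>

struct CandidateSystem {
  virtual int chooseAction() = 0;                 // returns 0 or 1
  virtual void receive(double noxiousSignal) = 0;
  virtual ~CandidateSystem() {}
};

// pass = the system learns to avoid the action that is always followed by the signal
bool feelPainTest(CandidateSystem& s, int punishedAction, int trials) {
  for (int i = 0; i < trials; ++i) {
    int a = s.chooseAction();
    s.receive(a == punishedAction ? 1.0 : 0.0);
  }
  int avoided = 0;
  for (int i = 0; i < 20; ++i)
    if (s.chooseAction() != punishedAction) ++avoided;
  return avoided >= 18;                            // arbitrary pass mark
}

// trivial candidate that simply switches away from whatever was last punished
struct Avoider : CandidateSystem {
  int lastAction;
  double lastSignal;
  Avoider() : lastAction(0), lastSignal(0.0) {}
  int chooseAction() {
    if (lastSignal > 0.0) {         // last action was followed by a noxious signal
      lastAction = 1 - lastAction;  // so switch to the other action
      lastSignal = 0.0;
    }
    return lastAction;
  }
  void receive(double signal) { lastSignal = signal; }
};

int main() {
  Avoider a;
  std::cout << (feelPainTest(a, 0, 50) ? "behaves as if in pain\n"
                                       : "shows no avoidance\n");
  return 0;
}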

Jiri



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Below is a program that can feel pain.  It is a simulation of a programmable
2-input logic gate that you train using reinforcement conditioning.


/* pain.cpp

This program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you "reward" it by entering "+".  If it is wrong,
you "punish" it by entering "-".  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
cout << "Please input 2 bits (00, 01, 10, 11): ";
char b1, b2;
cin >> b1 >> b2;
int input = (b1-'0')*2+(b2-'0');
if (input >= 0 && input < 4) {
  int response = double(rand())/RAND_MAX < wt[input];
  cout << "Output = " << response 
   << ".  Please enter + if right, - if wrong: ";
  char reinforcement;
  cin >> reinforcement;
  if (reinforcement == '+')
cout << "aah! :-)\n";
  else if (reinforcement == '-')
cout << "ouch! :-(\n";
  else
continue;
  int adjustment = (reinforcement == '-') ^ response;
  if (adjustment == 0)
wt[input] /= 2;
  else
wt[input] = 1 - (1 - wt[input])/2;
}
  }
}





-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-11 Thread Derek Zahn
Matt Mahoney writes:
> Below is a program that can feel pain.  It is a simulation of a programmable
> 2-input logic gate that you train using reinforcement conditioning.

Is it ethical to compile and run this program?
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Here is a program that feels pain.  It is a simulation of a 2-input logic gate
that you train by reinforcement learning.  It "feels" in the sense that it
adjusts its behavior to avoid negative reinforcement from the user.


/* pain.cpp - A program that can feel pleasure and pain.

The program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you "reward" it by entering "+".  If it is wrong,
you "punish" it by entering "-".  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
cout << "Please input 2 bits (00, 01, 10, 11): ";
char b1, b2;
cin >> b1 >> b2;
int input = (b1-'0')*2+(b2-'0');
if (input >= 0 && input < 4) {
  int response = double(rand())/RAND_MAX < wt[input];
  cout << "Output = " << response 
   << ".  Please enter + if right, - if wrong: ";
  char reinforcement;
  cin >> reinforcement;
  if (reinforcement == '+')
cout << "aah! :-)\n";
  else if (reinforcement == '-')
cout << "ouch! :-(\n";
  else
continue;
  int adjustment = (reinforcement == '-') ^ response;
  if (adjustment == 0)
wt[input] /= 2;
  else
wt[input] = 1 - (1 - wt[input])/2;
}
  }
}


--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> James,
> 
> Frank Jackson (in "Epiphenomenal Qualia") defined qualia as
> "...certain features of the bodily sensations especially, but also of
> certain perceptual experiences, which no amount of purely physical
> information includes.. :-)
> 
> >If it walks like a human, talks like a human, then for all those
> aspects it is a human
> 
> If it feels like a human and if Frank is correct :-) then the system
> may, under certain circumstances, want to modify given goals based on
> preferences that could not be found in its memory (nor in CPU
> registers etc.). So, with some assumptions, we might be able to write
> some code for the feelPainTest procedure, but no idea for the actual
> feelPain procedure.
> 
> Jiri
> 
> On 6/11/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
> > Two different responses to this type of argument.
> >
> > Once you "simulate" something to the point that we can't tell the difference
> > between it in any way, then it IS that something for most all intents and
> > purposes as far as the tests you have go.
> > If it walks like a human, talks like a human, then for all those aspects
> it
> > is a human.
> >
> > Second, to say it CANNOT be programmed, you must define IT much more
> > closely.  For cutaneous pain and humans, it appears to me that we have
> pain
> > sensors, so if we are being pricked on the arm, the nerves there send the
> > message to the brain, and the brain reacts to it there.
> >
> > We can recreate this fairly easily using VNA with some robotic touch
> sensors,
> > and saying that "past this threshold" it becomes "painful" and can be
> > damaging, and we will send a message to the CPU.
> >
> > If there is nothing "magical" about the pain sensation, then there is no
> > reason we can't recreate it.
> >
> > James Ratcliff
> 



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 03:22:04 pm Matt Mahoney wrote:
> /* pain.cpp - A program that can feel pleasure and pain.
> ...

Ouch! :-)

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


RE: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
--- Derek Zahn <[EMAIL PROTECTED]> wrote:

> Matt Mahoney writes:> Below is a program that can feel pain. It is a
> simulation of a programmable> 2-input logic gate that you train using
> reinforcement conditioning.

> Is it ethical to compile and run this program?

Well, that is a good question.  Ethics is very complex.  It is not just a
question of inflicting pain.  Is it ethical to punish a child for stealing? 
Is it ethical to swat a fly?  Is it ethical to give people experimental drugs?

(Apologies for posting the program twice.  My first post was delayed several
hours).


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
And here's the human pseudocode:

1. Hold knife above flame until red.
2. Place knife on arm.
3. a. Accept pain sensation
   b. Scream or respond as necessary
4. Press knife harder into skin.
5. Goto 3, until 6.
6. Pass out from pain.



Matt Mahoney <[EMAIL PROTECTED]> wrote: Below is a program that can feel pain.  
It is a simulation of a programmable
2-input logic gate that you train using reinforcement conditioning.


/* pain.cpp

This program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of 
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you "reward" it by entering "+".  If it is wrong,
you "punish" it by entering "-".  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
cout << "Please input 2 bits (00, 01, 10, 11): ";
char b1, b2;
cin >> b1 >> b2;
int input = (b1-'0')*2+(b2-'0');
if (input >= 0 && input < 4) {
  int response = double(rand())/RAND_MAX < wt[input];
  cout << "Output = " << response 
   << ".  Please enter + if right, - if wrong: ";
  char reinforcement;
  cin >> reinforcement;
  if (reinforcement == '+')
cout << "aah! :-)\n";
  else if (reinforcement == '-')
cout << "ouch! :-(\n";
  else
continue;
  int adjustment = (reinforcement == '-') ^ response;
  if (adjustment == 0)
wt[input] /= 2;
  else
wt[input] = 1 - (1 - wt[input])/2;
}
  }
}



--- Jiri Jelinek  wrote:

> Mark,
> 
> Again, simulation - sure, why not. On VNA (Neumann's architecture) - I
> don't think so - IMO not advanced enough to support qualia. Yes, I do
> believe qualia exists (= I do not agree with all Dennett's views, but
> I think his views are important to consider.) I wrote tons of pro
> software (using many languages) for a bunch of major projects but I
> have absolutely no idea how to write some kind of feelPain(intensity)
> fn that could cause real pain sensation to an AI system running on my
> (VNA based) computer. BTW I often do the test driven development so I
> would probably first want to write a test procedure for real pain. If
> you can write at least a pseudo-code for that then let me know. When
> talking about VNA, this is IMO a pure fiction. And even *IF* it
> actually was somehow possible, I don't think it would be clever to
> allow adding such a code to our AGI. In VNA-processing, there is no
> room for subjective feelings. VNA = "cold" data & "cold" logic (no
> matter how complex your algorithms get) because the CPU (with its set
> of primitive instructions) - just like the other components - was not
> designed to handle anything more.
> 
> Jiri
> 
> On 6/10/07, Mark Waser  wrote:
> >
> >
> > > For feelings - like pain - there is a problem. But I don't feel like
> > > spending much time explaining it little by little through many emails.
> > > There are books and articles on this topic.
> >
> > Indeed there are and they are entirely unconvincing.  Anyone who writes
> > something can get it published.
> >
> > If you can't prove that you're not a simulation, then you certainly can't
> > prove that "pain that really *hurts*" isn't possible.  I'll just simply
> > argue that you *are* a simulation, that you do experience "pain that
> really
> > *hurts*", and therefore, my point is proved.  I'd say that the burden of
> > proof is upon you or anyone else who makes claims like ""Why you can't
> make
> > a computer that feels pain".
> >
> > I've read all of Dennett's books.  I would argue that there are far more
> > people with credentials who disagree with him than agree.  His arguments
> > really don't boil down to anything better than "I don't see how it happens
> > or how to do it so it isn't possible."
> >
> > I still haven't seen you respond to the simulation argument (which I feel
> > *is* the stake through Dennett's argument) but if you want to stop
> debating
> > without doing so that's certainly cool.
> >
> > Mark


-- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
Yeah, I looked a bit on the wiki about "qualia" but was unable to find 
anything concrete enough to comment on; it seems to be some magical fluffery.
"bodily sensations" = input from touch stimuli
"perceptual experiences" = input information (data)
Both of these we have and can process... 

The last bit seems unconnected to the first two:

"which no amount of purely physical information includes"

This contradicts the above two, which are purely physical information.
So what is that magical bit there, and what does it do?

If we can't see it or what it does, and it doesn't appear to have any effect on 
anything, I don't see how or why we can include it.


Looking for some more on Frank Jackson I see:
Tell me everything physical there is to tell about what is going on in a living 
brain, the kind of states, their functional role, their relation to what goes 
on at other times and in other brains, and so on and so forth, and be I as 
clever as can be in fitting it all together, you won’t have told me about the 
hurtfulness of pains, the itchiness of itches, pangs of jealousy, or about the 
characteristic experience of tasting a lemon, smelling a rose, hearing a loud 
noise or seeing the sky.

And there is something to be said about the "experiencing" of an event, and 
humans may uniquely experience any event as more than raw input data.  
Many of these are not something a bot will have, or may need to have anytime 
soon, such as tasting or smelling, but sight will be a quick and 
important one.

When we look at a sky and say the sunset is beautiful, it makes us feel a 
certain way as we experience it, and not just any sunset will do.  So first a 
bot would have to know what sunsets it "enjoys" and then what feelings and 
thoughts are aroused by that particular sunset.

The first part can be done computationally by random numbers and color 
preferences etc., but the second is harder, and a bot may get a simple happiness 
boost when seeing a pretty sunset or a blooming flower.  Or their preferences 
may turn to vastly different things than humans and they may like plain brown rocks. 
They will experience these things differently, and maybe with less force than 
us, but still can have appropriate reactions to this type of input.
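
A toy sketch of those two steps (the feature names, weights, and threshold
are invented here purely for illustration):

#include <cstdio>

struct Sunset { double redness, cloudCover; };   // assumed toy features

// Stand-in for learned color preferences: a hand-tuned score in [0, 1].
double preference(const Sunset& s) {
  return 0.7 * s.redness + 0.3 * (1.0 - s.cloudCover);
}

int main() {
  double happiness = 0.0;
  Sunset tonight{0.9, 0.2};
  if (preference(tonight) > 0.6)
    happiness += 0.1;   // the "simple happiness boost" described above
  std::printf("happiness = %.2f\n", happiness);
}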

James


Jiri Jelinek <[EMAIL PROTECTED]> wrote: James,

Frank Jackson (in "Epiphenomenal Qualia") defined qualia as
"...certain features of the bodily sensations especially, but also of
certain perceptual experiences, which no amount of purely physical
information includes.. :-)

>If it walks like a human, talks like a human, then for all those
aspects it is a human

If it feels like a human and if Frank is correct :-) then the system
may, under certain circumstances, want to modify given goals based on
preferences that could not be found in its memory (nor in CPU
registers etc.). So, with some assumptions, we might be able to write
some code for the feelPainTest procedure, but no idea for the actual
feelPain procedure.

Jiri

On 6/11/07, James Ratcliff  wrote:
> Two different responses to this type of argument.
>
> Once you "simulate" something to the point that we can't tell the difference
> between it in any way, then it IS that something for most all intents and
> purposes as far as the tests you have go.
> If it walks like a human, talks like a human, then for all those aspects it
> is a human.
>
> Second, to say it CANNOT be programmed, you must define IT much more
> closely.  For cutaneous pain and humans, it appears to me that we have pain
> sensors, so if we are being pricked on the arm, the nerves there send the
> message to the brain, and the brain reacts to it there.
>
> We can recreate this fairly easily using VNA with some robotic touch sensors,
> and saying that "past this threshold" it becomes "painful" and can be
> damaging, and we will send a message to the CPU.
>
> If there is nothing "magical" about the pain sensation, then there is no
> reason we can't recreate it.
>
> James Ratcliff




___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

RE: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
Sure, until we give an AGI rights :}


Quote: I stand here today and will not abide the abusing of AGI rights!


Derek Zahn <[EMAIL PROTECTED]> wrote: Matt Mahoney writes:

> Below is a program that can feel pain. It is a simulation of a programmable
> 2-input logic gate that you train using reinforcement conditioning.

 Is it ethical to compile and run this program?
  



___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Mark Waser

Hi Jiri,

   A VNA, given sufficient time, can simulate *any* substrate.  Therefore, 
if *any* substrate is capable of simulating you (and thus pain), then a VNA 
is capable of doing so (unless you believe that there is some other magic 
involved).
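
A minimal sketch of that universality point (the types, the stepping loop,
and the toy update rule below are invented here, not taken from any post):

#include <cstdio>
#include <functional>
#include <vector>

using State = std::vector<double>;
using UpdateRule = std::function<State(const State&)>;

// The same loop serves any discrete substrate whose dynamics can be written
// as an update rule -- neurons, logic gates, whatever -- given enough time.
State simulate(State s, const UpdateRule& step, int ticks) {
  for (int t = 0; t < ticks; ++t) s = step(s);
  return s;
}

int main() {
  // Toy "substrate": each cell moves halfway toward its left neighbour.
  UpdateRule rule = [](const State& s) {
    State next = s;
    for (size_t i = 1; i < s.size(); ++i) next[i] = (s[i] + s[i - 1]) / 2;
    return next;
  };
  State result = simulate({1.0, 0.0, 0.0, 0.0}, rule, 10);
  std::printf("cell[3] after 10 ticks: %.4f\n", result[3]);
}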


   Remember also, it is *not* the VNA that feels pain, it is the entity 
that the VNA is simulating that is feeling  the pain.


   Mark

- Original Message - 
From: "Jiri Jelinek" <[EMAIL PROTECTED]>

To: 
Sent: Monday, June 11, 2007 2:50 AM
Subject: Re: [agi] Pure reason is a disease.



Mark,

Again, simulation - sure, why not. On VNA (Neumann's architecture) - I
don't think so - IMO not advanced enough to support qualia. Yes, I do
believe qualia exists (= I do not agree with all Dennett's views, but
I think his views are important to consider.) I wrote tons of pro
software (using many languages) for a bunch of major projects but I
have absolutely no idea how to write some kind of feelPain(intensity)
fn that could cause real pain sensation to an AI system running on my
(VNA based) computer. BTW I often do the test driven development so I
would probably first want to write a test procedure for real pain. If
you can write at least a pseudo-code for that then let me know. When
talking about VNA, this is IMO a pure fiction. And even *IF* it
actually was somehow possible, I don't think it would be clever to
allow adding such a code to our AGI. In VNA-processing, there is no
room for subjective feelings. VNA = "cold" data & "cold" logic (no
matter how complex your algorithms get) because the CPU (with its set
of primitive instructions) - just like the other components - was not
designed to handle anything more.

Jiri

On 6/10/07, Mark Waser <[EMAIL PROTECTED]> wrote:



> For feelings - like pain - there is a problem. But I don't feel like
> spending much time explaining it little by little through many emails.
> There are books and articles on this topic.

Indeed there are and they are entirely unconvincing.  Anyone who writes
something can get it published.

If you can't prove that you're not a simulation, then you certainly can't
prove that "pain that really *hurts*" isn't possible.  I'll just simply
argue that you *are* a simulation, that you do experience "pain that 
really

*hurts*", and therefore, my point is proved.  I'd say that the burden of
proof is upon you or anyone else who makes claims like ""Why you can't 
make

a computer that feels pain".

I've read all of Dennett's books.  I would argue that there are far more
people with credentials who disagree with him than agree.  His arguments
really don't boil down to anything better than "I don't see how it 
happens

or how to do it so it isn't possible."

I still haven't seen you respond to the simulation argument (which I feel
*is* the stake through Dennett's argument) but if you want to stop 
debating

without doing so that's certainly cool.

Mark




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-12 Thread Jiri Jelinek

Matt,


Here is a program that feels pain.


I got the logic, but no pain when processing the code in my mind.
Maybe you should mention in the pain.cpp description that it needs to
be processed for long enough - so whatever is gonna process it, it
will eventually get to the 'I don't "feel" like doing this any more'
point. ;-)) Looks like the entropy is kind of "pain" to us (& to our
devices) and the negative entropy might be kind of "pain" to the
universe. Hopefully, when (/if) our AGI figures this out, it will not
attempt to squeeze the Universe into a single spot to "solve" it.

Regards,
Jiri Jelinek

On 6/11/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:

Here is a program that feels pain.  It is a simulation of a 2-input logic gate
that you train by reinforcement learning.  It "feels" in the sense that it
adjusts its behavior to avoid negative reinforcement from the user.


/* pain.cpp - A program that can feel pleasure and pain.

The program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning.  You provide a pair of
input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
output is correct, you "reward" it by entering "+".  If it is wrong,
you "punish" it by entering "-".  You can program it this way to
implement any 2-input logic function (AND, OR, XOR, NAND, etc).
*/

#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
  // probability of output 1 given input 00, 01, 10, 11
  double wt[4]={0.5, 0.5, 0.5, 0.5};

  while (1) {
cout << "Please input 2 bits (00, 01, 10, 11): ";
char b1, b2;
cin >> b1 >> b2;
int input = (b1-'0')*2+(b2-'0');
if (input >= 0 && input < 4) {
  int response = double(rand())/RAND_MAX < wt[input];
  cout << "Output = " << response
   << ".  Please enter + if right, - if wrong: ";
  char reinforcement;
  cin >> reinforcement;
  if (reinforcement == '+')
cout << "aah! :-)\n";
  else if (reinforcement == '-')
cout << "ouch! :-(\n";
  else
continue;
  int adjustment = (reinforcement == '-') ^ response;
  if (adjustment == 0)
wt[input] /= 2;
  else
wt[input] = 1 - (1 - wt[input])/2;
}
  }
}


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread James Ratcliff
Which compiler did you use for Human OS V1.0?
Didn't realize we had a C++ compiler out already.

Jiri Jelinek <[EMAIL PROTECTED]> wrote: Matt,

>Here is a program that feels pain.

I got the logic, but no pain when processing the code in my mind.
Maybe you should mention in the pain.cpp description that it needs to
be processed for long enough - so whatever is gonna process it, it
will eventually get to the 'I don't "feel" like doing this any more'
point. ;-)) Looks like the entropy is kind of "pain" to us (& to our
devices) and the negative entropy might be kind of "pain" to the
universe. Hopefully, when (/if) our AGI figures this out, it will not
attempt to squeeze the Universe into a single spot to "solve" it.

Regards,
Jiri Jelinek

On 6/11/07, Matt Mahoney  wrote:
> Here is a program that feels pain.  It is a simulation of a 2-input logic gate
> that you train by reinforcement learning.  It "feels" in the sense that it
> adjusts its behavior to avoid negative reinforcement from the user.
>
>
> /* pain.cpp - A program that can feel pleasure and pain.
>
> The program simulates a programmable 2-input logic gate.
> You train it by reinforcement conditioning.  You provide a pair of
> input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
> output is correct, you "reward" it by entering "+".  If it is wrong,
> you "punish" it by entering "-".  You can program it this way to
> implement any 2-input logic function (AND, OR, XOR, NAND, etc).
> */
>
> #include <iostream>
> #include <cstdlib>
> using namespace std;
>
> int main() {
>   // probability of output 1 given input 00, 01, 10, 11
>   double wt[4]={0.5, 0.5, 0.5, 0.5};
>
>   while (1) {
> cout << "Please input 2 bits (00, 01, 10, 11): ";
> char b1, b2;
> cin >> b1 >> b2;
> int input = (b1-'0')*2+(b2-'0');
> if (input >= 0 && input < 4) {
>   int response = double(rand())/RAND_MAX < wt[input];
>   cout << "Output = " << response
><< ".  Please enter + if right, - if wrong: ";
>   char reinforcement;
>   cin >> reinforcement;
>   if (reinforcement == '+')
> cout << "aah! :-)\n";
>   else if (reinforcement == '-')
> cout << "ouch! :-(\n";
>   else
> continue;
>   int adjustment = (reinforcement == '-') ^ response;
>   if (adjustment == 0)
> wt[input] /= 2;
>   else
> wt[input] = 1 - (1 - wt[input])/2;
> }
>   }
> }




___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-13 Thread Matt Mahoney
--- James Ratcliff <[EMAIL PROTECTED]> wrote:

> Which compiler did you use for Human OS V1.0?
> Didn't realize we had a C++ compiler out already.

The purpose of my little pain-feeling program is to point out some of the
difficulties in applying ethics-for-humans to machines.  The program has two
characteristics that we normally associate with pain in humans.  First, it
expresses pain (by saying "Ouch!" and making a sad face), and second and more
importantly, it has a goal of avoiding pain.  Its behavior is consistent with
learning by negative reinforcement in animals.  Given an input and response
followed by negative reinforcement, it is less likely to output the same
response to that input in the future.
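
As a minimal sketch of that update rule (the helper name `update` is
invented here; pain.cpp inlines the same logic):

#include <cstdio>

double update(double wt, int response, char reinforcement) {
  int adjustment = (reinforcement == '-') ^ response;   // same rule as pain.cpp
  return adjustment ? 1 - (1 - wt) / 2   // make output 1 more likely
                    : wt / 2;            // make output 1 less likely
}

int main() {
  double wt = 0.5;              // P(output = 1) for one input pattern
  wt = update(wt, 1, '-');      // output 1 punished  -> P(1) = 0.25
  wt = update(wt, 0, '+');      // output 0 rewarded  -> P(1) = 0.125
  std::printf("P(output = 1) after training: %.3f\n", wt);
}

Each "+" or "-" moves the probability of repeating the punished or rewarded
response halfway in the appropriate direction.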

One might question whether animals feel pain, but I think most people will
agree that the negative reinforcement stimuli typically used in animals, such as
electric shock, are painful in humans, and further, that any type of pain
signal in humans elicits a behavioral response consistent with negative
reinforcement (i.e. avoidance).

So now for the hard question.  Is it possible for an AGI or any other machine
to experience pain?

If yes, then how do you define pain in a machine?

If no, then what makes the human brain different from a computer? (assuming
you believe that humans can feel pain)


> 
> Jiri Jelinek <[EMAIL PROTECTED]> wrote: Matt,
> 
> >Here is a program that feels pain.
> 
> I got the logic, but no pain when processing the code in my mind.
> Maybe you should mention in the pain.cpp description that it needs to
> be processed for long enough - so whatever is gonna process it, it
> will eventually get to the 'I don't "feel" like doing this any more'
> point. ;-)) Looks like the entropy is kind of "pain" to us (& to our
> devices) and the negative entropy might be kind of "pain" to the
> universe. Hopefully, when (/if) our AGI figures this out, it will not
> attempt to squeeze the Universe into a single spot to "solve" it.
> 
> Regards,
> Jiri Jelinek
> 
> On 6/11/07, Matt Mahoney  wrote:
> > Here is a program that feels pain.  It is a simulation of a 2-input logic
> gate
> > that you train by reinforcement learning.  It "feels" in the sense that it
> > adjusts its behavior to avoid negative reinforcement from the user.
> >
> >
> > /* pain.cpp - A program that can feel pleasure and pain.
> >
> > The program simulates a programmable 2-input logic gate.
> > You train it by reinforcement conditioning.  You provide a pair of
> > input bits (00, 01, 10, or 11).  It will output a 0 or 1.  If the
> > output is correct, you "reward" it by entering "+".  If it is wrong,
> > you "punish" it by entering "-".  You can program it this way to
> > implement any 2-input logic function (AND, OR, XOR, NAND, etc).
> > */
> >
> > #include <iostream>
> > #include <cstdlib>
> > using namespace std;
> >
> > int main() {
> >   // probability of output 1 given input 00, 01, 10, 11
> >   double wt[4]={0.5, 0.5, 0.5, 0.5};
> >
> >   while (1) {
> > cout << "Please input 2 bits (00, 01, 10, 11): ";
> > char b1, b2;
> > cin >> b1 >> b2;
> > int input = (b1-'0')*2+(b2-'0');
> > if (input >= 0 && input < 4) {
> >   int response = double(rand())/RAND_MAX < wt[input];
> >   cout << "Output = " << response
> ><< ".  Please enter + if right, - if wrong: ";
> >   char reinforcement;
> >   cin >> reinforcement;
> >   if (reinforcement == '+')
> > cout << "aah! :-)\n";
> >   else if (reinforcement == '-')
> > cout << "ouch! :-(\n";
> >   else
> > continue;
> >   int adjustment = (reinforcement == '-') ^ response;
> >   if (adjustment == 0)
> > wt[input] /= 2;
> >   else
> > wt[input] = 1 - (1 - wt[input])/2;
> > }
> >   }
> > }
> 
> 
> 
> 
> ___
> James Ratcliff - http://falazar.com
> Looking for something...
>
> 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:


If yes, then how do you define pain in a machine?


A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/13/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:

On 6/13/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> If yes, then how do you define pain in a machine?
>
A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal in
going through the pain).


To clarify:
(1) there exists a person empathizing with that machine
(2) this person would avoid putting the machine into the state of pain

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Jiri Jelinek

Mark,


VNA..can simulate *any* substrate.


I don't see any good reason for assuming that it would be anything
more than a zombie.
http://plato.stanford.edu/entries/zombies/


unless you believe that there is some other magic involved


I would not call it magic, but we might have to look beyond 4D to
figure out how qualia really work.

But OK, let's assume for a moment that certain VNA-processed
algorithms can produce qualia as a side-effect. What factors do you
expect to play an important role in making a particular quale pleasant
vs unpleasant?

Regards,
Jiri Jelinek

On 6/11/07, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi Jiri,

A VNA, given sufficient time, can simulate *any* substrate.  Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a VNA
is capable of doing so (unless you believe that there is some other magic
involved).

Remember also, it is *not* the VNA that feels pain, it is the entity
that the VNA is simulating that is feeling  the pain.

Mark


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Matt Mahoney

--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:

> On 6/13/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> > On 6/13/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > >
> > > If yes, then how do you define pain in a machine?
> > >
> > A pain in a machine is the state in the machine that a person
> > empathizing with the machine would avoid putting the machine into,
> > other things being equal (that is, when there is no higher goal in
> > going through the pain).
> >
> To clarify:
> (1) there exists a person empathizing with that machine
> (2) this person would avoid putting the machine into the state of pain

I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.

Let us separate the questions of pain and ethics.  There are two independent
questions.

1. What mental or computational states correspond to pain?
2. When is it ethical to cause a state of pain?

One possible definition of pain is any signal that an intelligent system has
the goal of avoiding, for example,

- negative reinforcement in any animal capable of reinforcement learning.
- the negative of the "reward" signal received by an AIXI agent.
- excess heat or cold to a thermostat.

I think pain by any reasonable definition exists independently of ethics. 
Ethics is more complex.  Humans might decide, for example, that it is OK to
inflict pain on a mosquito but not a butterfly, or a cow but not a cat, or a
programmable logic gate but not a video game character.  The issue here is not
pain, but our perception of resemblance to humans.
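
A toy sketch of that reading, using the thermostat case from the list above
(the struct and names are invented for illustration; no claim about
subjective experience is being made):

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Thermostat {
  double setpoint = 20.0;   // degrees C
  double heater   = 0.0;    // 0 = off, 1 = full on

  // "Pain" under the definition above: the magnitude of the avoided signal.
  double pain(double temperature) const {
    return std::fabs(temperature - setpoint);
  }

  // The control loop exists only to drive that signal down.
  void step(double temperature) {
    heater = std::clamp((setpoint - temperature) * 0.5, 0.0, 1.0);
  }
};

int main() {
  Thermostat t;
  t.step(15.0);
  std::printf("pain = %.1f, heater = %.2f\n", t.pain(15.0), t.heater);
}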


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-13 Thread Lukasz Stafiniak

On 6/14/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:


I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.

Let us separate the questions of pain and ethics.  There are two independent
questions.

1. What mental or computational states correspond to pain?
2. When is it ethical to cause a state of pain?


There is a gradation:
- pain as negative reinforcement
- pain as an emotion
- pain as a feeling

When you ask whether something "feels pain", you are not asking whether "pain"
is an adequate description of some aspect of that thing or person X, but
whether X can be said to feel at all. And this is related to the
complexity of X, and this complexity is related to ethics.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Mark Waser
   Oh.  You're stuck on qualia (and zombies).  I haven't seen a good 
compact argument to convince you (and e-mail is too low band-width and 
non-interactive to do one of the longer ones).  My apologies.


   Mark

- Original Message - 
From: "Jiri Jelinek" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, June 13, 2007 6:26 PM
Subject: Re: [agi] Pure reason is a disease.



Mark,


VNA..can simulate *any* substrate.


I don't see any good reason for assuming that it would be anything
more than a zombie.
http://plato.stanford.edu/entries/zombies/


unless you believe that there is some other magic involved


I would not call it magic, but we might have to look beyond 4D to
figure out how qualia really work.

But OK, let's assume for a moment that certain VNA-processed
algorithms can produce qualia as a side-effect. What factors do you
expect to play an important role in making a particular quale pleasant
vs unpleasant?

Regards,
Jiri Jelinek

On 6/11/07, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi Jiri,

A VNA, given sufficient time, can simulate *any* substrate. 
Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a 
VNA

is capable of doing so (unless you believe that there is some other magic
involved).

Remember also, it is *not* the VNA that feels pain, it is the entity
that the VNA is simulating that is feeling  the pain.

Mark






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri> James, Frank Jackson (in "Epiphenomenal Qualia") defined qualia
Jiri> as "...certain features of the bodily sensations especially, but
Jiri> also of certain perceptual experiences, which no amount of
Jiri> purely physical information includes.. :-)

One of the biggest problems with the philosophical literature, IMO, is
that philosophers often fail to recognize that one can define
various concepts in English in such a way that they make apparent
syntactic and superficial semantic sense, which are nonetheless
actually not meaningful. My usual favorite example is "the second
before the big bang", a phrase which seems to make perfect
intuitive sense but which, according to most standard GR/cosmological models,
simply doesn't correspond to anything.

This problem crops up in the mathematical literature sometimes too,
but mathematicians are more effective about dealing with it. There 
is an old anecdote, I'm not sure of its veracity, of someone at
Princeton defending his PhD in math, in which he had stated various
definitions and proved various things about his class of objects, and
someone attending (if memory serves it was said to be Milnor) proved
on the spot the class was the null set.

Jackson however makes an excellent foil. In What is Thought? I took a
quote of his in which he says that 10 or 15 different specific
sensations can not possibly be explained in a physicalist manner, and
argue that each of them arises from exactly the programming one would
expect evolution to generate.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri> Mark,
>> VNA..can simulate *any* substrate.

Jiri> I don't see any good reason for assuming that it would be
Jiri> anything more than a zombie.
Jiri> http://plato.stanford.edu/entries/zombies/

Zombie is another concept which seems to make perfect intuitive sense,
but IMO is not actually well defined.

If sensations correspond to the execution of certain code in a
decision making program (the nature of the sensation depending on the
coding) then I claim that everything about sensation and consciousness
can be parsimoniously and naturally explained in a way consistent with
everything we know about CS and physics and cognitive science and
various other fields.
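
A toy sketch of that claim (routine names invented here): the "sensation" is
identified with a particular routine being executed inside the decision loop,
nothing more:

#include <cstdio>

static void painRoutine(int intensity) {   // executing this IS the "sensation"
  std::printf("withdraw (intensity %d)\n", intensity);
}

static void decide(int nociceptorSignal) {
  if (nociceptorSignal > 3) painRoutine(nociceptorSignal);
  else std::printf("continue current task\n");
}

int main() {
  decide(1);   // pain routine not invoked
  decide(7);   // pain routine invoked -> avoidance behavior
}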

But in this case, a zombie that makes the same decisions as a human
would be evaluating similar code and would thus essentially have the
same pain.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

Jiri> Matt,
>> Here is a program that feels pain.

Jiri> I got the logic, but no pain when processing the code in my
Jiri> mind. 

This is Frank Jackson's "Mary" fallacy, which I also debunk in WIT? Ch
14.

Running similar code at a conscious level won't generate your
sensation of pain because it is not called by the right routines and
does not return results in the right format to the right calling
instructions in your homunculus program.

Jiri> Maybe you should mention in the pain.cpp description that
Jiri> it needs to be processed for long enough - so whatever is gonna
Jiri> process it, it will eventually get to the 'I don't "feel" like
Jiri> doing this any more' point. ;-)) Looks like the entropy is kind
Jiri> of "pain" to us (& to our devices) and the negative entropy
Jiri> might be kind of "pain" to the universe. Hopefully, when (/if)
Jiri> our AGI figures this out, it will not attempt to squeeze the
Jiri> Universe into a single spot to "solve" it.

Jiri> Regards, Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread James Ratcliff
Do you know those 10-15 mentioned hard items?

I agree with your following thoughts on the matter.

We have to separate the mystical or spiritual from the physical, or determine 
for some reason that the physical is truly missing something, that there is 
something more that is required for life/autonomy/feelings, 
but I don't think anyone is capable of showing that yet.

So the question is: 
  Is it good enough to act and think and reason as if you have experienced the 
feeling?

James Ratcliff

Eric Baum <[EMAIL PROTECTED]> wrote: 
Jiri> James, Frank Jackson (in "Epiphenomenal Qualia") defined qualia
Jiri> as "...certain features of the bodily sensations especially, but
Jiri> also of certain perceptual experiences, which no amount of
Jiri> purely physical information includes.. :-)

One of the biggest problems with the philosophical literature, IMO, is
that philosophers often fail to recognize that one can define
various concepts in English in such a way that they make apparent
syntactic and superficial semantic sense, which are nonetheless
actually not meaningful. My usual favorite example is "the second
before the big bang", a phrase which seems to make perfect
intuitive sense but which, according to most standard GR/cosmological models,
simply doesn't correspond to anything.

This problem crops up in the mathematical literature sometimes too,
but mathematicians are more effective about dealing with it. There 
is an old anecdote, I'm not sure of its veracity, of someone at
Princeton defending his PhD in math, in which he had stated various
definitions and proved various things about his class of objects, and
someone attending (if memory serves it was said to be Milnor) proved
on the spot the class was the null set.

Jackson however makes an excellent foil. In What is Thought? I took a
quote of his in which he says that 10 or 15 different specific
sensations can not possibly be explained in a physicalist manner, and
argue that each of them arises from exactly the programming one would
expect evolution to generate.


Jiri> Mark,
>> VNA..can simulate *any* substrate.

Jiri> I don't see any good reason for assuming that it would be
Jiri> anything more than a zombie.
Jiri> http://plato.stanford.edu/entries/zombies/

Zombie is another concept which seems to make perfect intuitive sense,
but IMO is not actually well defined.

If sensations correspond to the execution of certain code in a
decision making program (the nature of the sensation depending on the
coding) then I claim that everything about sensation and consciousness
can be parsimoniously and naturally explained in a way consistent with
everything we know about CS and physics and cognitive science and
various other fields.

But in this case, a zombie that makes the same decisions as a human
would be evaluating similar code and would thus essentially have the
same pain.




___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Pure reason is a disease.

2007-06-14 Thread Eric Baum

James> Do you know those 10-15 mentioned hard items?  I agree with
James> your following thoughts on the matter.

Actually, I saw a posting where you had the same (or at least a very
similar) quote from Jackson: pain, itchiness, startling at loud
noises, smelling a rose, etc.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

Mark,


Oh.  You're stuck on qualia (and zombies)


Sort of, but not really. There is no need for qualia in order to
develop powerful AGI. I was just playing with some thoughts on
potential security implications associated with the speculation of
qualia being produced as a side-effect of certain algorithmic
complexity on VNA.

Regards,
Jiri Jelinek

On 6/14/07, Mark Waser <[EMAIL PROTECTED]> wrote:

Oh.  You're stuck on qualia (and zombies).  I haven't seen a good
compact argument to convince you (and e-mail is too low band-width and
non-interactive to do one of the longer ones).  My apologies.

Mark



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

Eric,


zombie that makes the same decisions as a human would be evaluating

similar code and would thus essentially have the same pain.

Well, if that's the case, shouldn't game makers stop making realistic
computer games where human characters get hurt? ;-)

Regards,
Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

Eric,


> Running similar code at a conscious level won't generate your
> sensation of pain because it is not called by the right routines and
> does not return results in the right format to the right calling
> instructions in your homunculus program.

Right. IMO roughly the same problem when processed by a computer.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Jiri Jelinek

James,


determine for some reason that the physical is truly missing something


Look at twin particles = just another example of something missing in
the world as we can see it.


Is it good enough to act and think and reason as if you have

experienced the feeling.

For AGI - yes. Why not (?).

Regards,
Jiri

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread J Storrs Hall, PhD
On Thursday 14 June 2007 07:19:18 am Mark Waser wrote:
> Oh.  You're stuck on qualia (and zombies).  I haven't seen a good 
> compact argument to convince you (and e-mail is too low band-width and 
> non-interactive to do one of the longer ones).  My apologies.

The best one-liner I know is, "Prove to me that *you're* not a zombie, and we 
can talk about it."

Alternatively, "*I'm* a zombie, so why shouldn't my robot be one too?"

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-14 Thread Mark Waser

I was just playing with some thoughts on
potential security implications associated with the speculation of
qualia being produced as a side-effect of certain algorithmic
complexity on VNA.


Which is, in many ways, pretty similar to my assumption that consciousness 
will be produced as a side-effect of intelligence (or may even be a 
necessary cause of it) on any substrate designed for and complex enough to 
support intelligence (and that would indeed have potential security 
implications).



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

