Re: [agi] father figure

2002-12-01 Thread maitri



Gary,
 
All great points..
 
I did not mean to imply that *no* monitoring would 
take place.  That would be irresponsible, especially when the development 
team is getting the first NovaMente running and observing how it grows and 
develops.
 
I was referring more to continuous monitoring being unrealistic, as the volume
of information will be overwhelming IMO.  So patterns of unhealthy behavior
will be the most reliable way to discern that something has gone wrong.
 
In the early stages, I doubt NM will be aware of its own deviance and
therefore try to "hide" it.  Perhaps as it approaches human-level AI, this
becomes more of a possibility...
 
Regards,
Kevin
 
 

  - Original Message -
  From: Gary Miller
  To: [EMAIL PROTECTED]
  Sent: Sunday, December 01, 2002 8:31 PM
  Subject: RE: [agi] father figure
  
  On 12/1 maitri said:
   
  << 1) What determines a negative thought pattern?  Even amongst humans it is
  << hard to determine what is negative and what is not.
  << Further, there are almost no absolutes.  For instance, killing may even
  << make sense in certain cases.  Killing 1 person may save 1000 lives, so
  << maybe it makes sense then, I don't know... Is the thought "Suzy is stupid"
  << a negative thought?  Where do you draw the line?
   
  There are no absolutes.  Parents know this.  But if their children start
  talking about people being out to get them (paranoia) or talk about killing
  their friends (psychotic behavior), they know they seriously need help.  And
  if they're good parents they should realize something's wrong long before
  that.  But kids can keep their thoughts and feelings quiet, and oftentimes
  parents or psychologists don't realize there's a problem until it's too late.
  By being able to monitor the AI's internal dialog and goal prioritizations,
  developmental problems should become apparent much sooner than they might
  otherwise be.
   
  << 2) Who or what is doing the monitoring?  Is it someone sitting there
  << watching the "thoughts" flow by?  I believe that the volume of thoughts
  << would be too immense for someone to monitor in this way.  So is it done
  << programmatically?  Then I ask again, what program rules would you write
  << to determine what is wrong or right?  And if you were going to do this,
  << you might as well encode those rules in the core of the product instead.
   
  I think the point with some of these systems is that the rules can't be
  encoded, since the brain map evolves and the antisocial behavior cannot be
  readily identified in the map.  Also, without understanding how the original
  behavior evolved, the AI may simply replicate the steps that led it to that
  behavior in the first place.
   
  
  << 3) Once you encounter a "negative" thought, then what?  Kill the
  << Novamente?  Kill the thought?  Alter the section of code that led to the
  << thought?  It is not easy to decide how to deal with such an occurrence.
   
  I would restore to a backup taken immediately prior to when the AI first
  exhibited the negative thought patterns, and provide counseling to help the
  AI deal with the events that precipitated those patterns in a positive
  manner.
   
  << If it starts to go nuts or to hold deviant 
  views, it will become apparent in how it acts, what goals it has, and the 
  paths it takes to fulfill 
  << those goals.
   
  Maybe so, but by then how will you ever isolate where and why it first
  started to go wrong?  And if you wish to restore to an earlier point, you
  may not have gone far enough, and again the AI is still struggling with the
  same issues.
   
  And what if it learns merely to hide its negative nature from the humans who
  seek to teach it morality?  Like a convict trying to get parole or a teenager
  with a drug problem, it may learn to tell us what it thinks we want to hear,
  waiting for the day we no longer have the power to control it.
   
   
   
   
  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Sunday, December 01, 2002 9:56 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] father figure
While I am not a NovaMente developer, I can see 
many problems with the idea of "monitoring internal thought processes...to 
detect negative thought patterns..":
 
 
1)  What determines a negative thought pattern?  Even amongst humans it is
hard to determine what is negative and what is not.  Further, there are almost
no absolutes.  For instance, killing may even make sense in certain cases.
Killing 1 person may save 1000 lives, so maybe it makes sense then, I don't
know... Is the thought "Suzy is s

RE: [agi] father figure

2002-12-01 Thread Gary Miller



On 12/1 maitri said:
 
<< 1) What determines a negative thought pattern?  Even amongst humans it is
<< hard to determine what is negative and what is not.
<< Further, there are almost no absolutes.  For instance, killing may even make
<< sense in certain cases.  Killing 1 person may save 1000 lives, so maybe it
<< makes sense then, I don't know... Is the thought "Suzy is stupid" a negative
<< thought?  Where do you draw the line?
 
There are no absolutes.  Parents know this.  But if their children start
talking about people being out to get them (paranoia) or talk about killing
their friends (psychotic behavior), they know they seriously need help.  And if
they're good parents they should realize something's wrong long before that.
But kids can keep their thoughts and feelings quiet, and oftentimes parents or
psychologists don't realize there's a problem until it's too late.  By being
able to monitor the AI's internal dialog and goal prioritizations,
developmental problems should become apparent much sooner than they might
otherwise be.
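
To make that concrete, here is a minimal sketch (in Python, with an invented
snapshot format; Novamente's actual internals are not specified in this
thread) of how sudden shifts in goal prioritization might be flagged for
human review:

    # Hypothetical sketch: flag abrupt shifts in an AI's goal priorities.
    # The snapshot format and goal names are invented for illustration only.
    def flag_priority_shifts(snapshots, threshold=0.3):
        """snapshots: list of dicts mapping goal name -> priority in [0, 1],
        taken at successive times.  Returns (time, goal, delta) tuples
        whenever a goal's priority jumps by more than `threshold`."""
        alerts = []
        for t in range(1, len(snapshots)):
            prev, curr = snapshots[t - 1], snapshots[t]
            for goal in set(prev) | set(curr):
                delta = curr.get(goal, 0.0) - prev.get(goal, 0.0)
                if abs(delta) > threshold:
                    alerts.append((t, goal, delta))
        return alerts

    # Example: a sudden swing away from "please_teachers" toward
    # "acquire_resources" gets flagged for a human teacher to look at.
    history = [
        {"please_teachers": 0.8, "acquire_resources": 0.2},
        {"please_teachers": 0.7, "acquire_resources": 0.25},
        {"please_teachers": 0.3, "acquire_resources": 0.7},
    ]
    print(flag_priority_shifts(history))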
 
<< 2) Who or what is doing the monitoring?  Is it someone sitting there
<< watching the "thoughts" flow by?  I believe that the volume of thoughts
<< would be too immense for someone to monitor in this way.  So is it done
<< programmatically?  Then I ask again, what program rules would you write to
<< determine what is wrong or right?  And if you were going to do this, you
<< might as well encode those rules in the core of the product instead.
 
I think the point with some of these systems is that the rules can't be
encoded, since the brain map evolves and the antisocial behavior cannot be
readily identified in the map.  Also, without understanding how the original
behavior evolved, the AI may simply replicate the steps that led it to that
behavior in the first place.
 

<< 3) Once you encounter a "negative" thought, then what?  Kill the
<< Novamente?  Kill the thought?  Alter the section of code that led to the
<< thought?  It is not easy to decide how to deal with such an occurrence.
 
I would restore to a backup taken immediately prior to when the AI first
exhibited the negative thought patterns, and provide counseling to help the AI
deal with the events that precipitated those patterns in a positive manner.
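
As a rough illustration of the restore idea (a pure sketch; it assumes the
system's state can be snapshotted and reloaded, which this thread does not
establish):

    # Hypothetical sketch: periodic checkpoints plus rollback to the last
    # checkpoint taken before a problem was first observed.
    import copy

    class CheckpointStore:
        def __init__(self):
            self._snapshots = []               # list of (time, copied state)

        def save(self, time, state):
            self._snapshots.append((time, copy.deepcopy(state)))

        def restore_before(self, problem_time):
            """Return the most recent state saved strictly before
            problem_time, or None if no such checkpoint exists."""
            earlier = [(t, s) for (t, s) in self._snapshots if t < problem_time]
            return max(earlier, key=lambda ts: ts[0])[1] if earlier else None

    store = CheckpointStore()
    store.save(1, {"goals": {"be_helpful": 0.9}})
    store.save(2, {"goals": {"be_helpful": 0.5, "deceive": 0.4}})
    # Negative patterns first noticed at time 2 -> roll back to time 1's
    # state, then "counsel" the restored system about the triggering events.
    print(store.restore_before(2))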
 
<< If it starts to go nuts or to hold deviant 
views, it will become apparent in how it acts, what goals it has, and the paths 
it takes to fulfill 
<< those goals.
 
Maybe so, but by then how will you ever isolate where and why it first started
to go wrong?  And if you wish to restore to an earlier point, you may not have
gone far enough, and again the AI is still struggling with the same issues.
 
And what if it learns merely to hide its negative nature from the humans who
seek to teach it morality?  Like a convict trying to get parole or a teenager
with a drug problem, it may learn to tell us what it thinks we want to hear,
waiting for the day we no longer have the power to control it.
 
 
 
 

  
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Sunday, December 01, 2002 9:56 AM
  To: [EMAIL PROTECTED]
  Subject: Re: [agi] father figure
  While I am not a NovaMente developer, I can see 
  many problems with the idea of "monitoring internal thought processes...to 
  detect negative thought patterns..":
   
   
  1)  What determines a negative thought pattern?  Even amongst humans it is
  hard to determine what is negative and what is not.  Further, there are
  almost no absolutes.  For instance, killing may even make sense in certain
  cases.  Killing 1 person may save 1000 lives, so maybe it makes sense then,
  I don't know... Is the thought "Suzy is stupid" a negative thought?  Where
  do you draw the line?
   
  2) Who or what is doing the monitoring?  Is it someone sitting there watching
  the "thoughts" flow by?  I believe that the volume of thoughts would be too
  immense for someone to monitor in this way.  So is it done programmatically?
  Then I ask again, what program rules would you write to determine what is
  wrong or right?  And if you were going to do this, you might as well encode
  those rules in the core of the product instead.
   
  3) Once you encounter a "negative" thought, then what?  Kill the Novamente?
  Kill the thought?  Alter the section of code that led to the thought?  It is
  not easy to decide how to deal with such an occurrence.
   
  I tend to believe that Ben's approach is a smart one.  Build the system up
  from the start without hard wiring for right or wrong, put structures in
  place for it to learn and "embody" concepts akin to how humans do, and then
  train it

RE: [agi] father figure

2002-12-01 Thread Ben Goertzel



Hi,
 
We have a bunch of languages that are tailored for particular purposes.
 
To feed Novamente data right now, we use either nmsh (Novamente shell) scripts
or XML (using special tags that map into Novamente nodes and links).  Psynese
could be represented in either of these approaches.
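
Purely for illustration, a small Python sketch of what feeding in data as
node/link XML might look like; the element and attribute names below are
invented, since the actual Novamente tag set isn't given here:

    # Hypothetical sketch: serialize a few facts as node/link-style XML.
    # The tag and attribute names are made up; they are not the real
    # Novamente schema.
    import xml.etree.ElementTree as ET

    root = ET.Element("knowledge")
    for city, temp in [("Lisbon", 17.5), ("Oslo", -2.0)]:
        ET.SubElement(root, "node", type="ConceptNode", name=city)
        link = ET.SubElement(root, "link", type="EvaluationLink",
                             predicate="temperature_celsius")
        ET.SubElement(link, "target", name=city)
        ET.SubElement(link, "value").text = str(temp)

    print(ET.tostring(root, encoding="unicode"))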
 
For chatting with Novamente, XML and nmsh are kinda awkward.  A variant of
KNOW (a language created for Webmind) will probably be better.

 
www.goertzel.org/papers/KNOWSpecification.htm 

 
I 
envision early conversations with Novamente (it's not ready to chat with yet) as 
occurring in a mixture of English and KNOW.
 
-- Ben 
G

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of John Rose
  Sent: Sunday, December 01, 2002 2:40 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  Ben,
   
  After reading your excerpt on Psynese, it sounds like a person might want to
  learn to communicate with a Novamente using a Psynese variant or a Psynese
  constructor.  Otherwise the Novamente would have to funnel its "hunks o'
  mind" into English or whatever language is enabled; kind of like a lossy
  compression process.  Unless, that is, your Psynese-to-English translator is
  really good.
   
  Also, seems like if you wanted to feed a Novamente some data, say a 
  series of worldwide temperature readings, all you'd have to do is convert it 
  to Psynese and feed er on in!
   
  John
   
  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
Sent: Sunday, December 01, 2002 7:29 AM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure
 
Hi 
Gary,
 
Translation from Psynese to English, I consider 
part of the general process by which Novamentes will translate their own 
thoughts to English.  "Language generation" is a difficult task, though 
on balance perhaps a little easier than language 
comprehension.
 
Monitoring of internal thought processes is a more general thing, not
particularly to do with linguistic thought.  Of course it's important!!!  In
fact, it's really a key part of thought, not a separate issue.  Thought can
only be made acceptably intelligent via ongoing context-sensitive
parameter-tuning, which requires monitoring to succeed...
 
-- 
Ben
 
 

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
  Sent: Saturday, November 30, 2002 11:06 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  Couldn't a translation agent be written to convert the internal thought
  process, which occurs in Psynese, to English?
   
  Psynese is, as you say, more precise, and Novamente will need to be able to
  convey final outputs in English anyway to communicate with humans.
   
  I would think the monitoring of the internal thought process would be both
  highly interesting and necessary to detect negative thought patterns and to
  help Novamente in the rule discovery process.


RE: [agi] father figure

2002-12-01 Thread John Rose



Ben,
 
After reading your excerpt on Psynese, it sounds like a person might want to
learn to communicate with a Novamente using a Psynese variant or a Psynese
constructor.  Otherwise the Novamente would have to funnel its "hunks o' mind"
into English or whatever language is enabled; kind of like a lossy compression
process.  Unless, that is, your Psynese-to-English translator is really good.
 
Also, 
seems like if you wanted to feed a Novamente some data, say a series of 
worldwide temperature readings, all you'd have to do is convert it to Psynese 
and feed er on in!
 
John
 

  
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
  Sent: Sunday, December 01, 2002 7:29 AM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
   
  Hi 
  Gary,
   
  Translation from Psynese to English, I consider part 
  of the general process by which Novamentes will translate their own thoughts 
  to English.  "Language generation" is a difficult task, though on balance 
  perhaps a little easier than language comprehension.
   
  Monitoring of internal thought processes is a more general thing, not
  particularly to do with linguistic thought.  Of course it's important!!!  In
  fact, it's really a key part of thought, not a separate issue.  Thought can
  only be made acceptably intelligent via ongoing context-sensitive
  parameter-tuning, which requires monitoring to succeed...
   
  -- 
  Ben
   
   
  
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
Sent: Saturday, November 30, 2002 11:06 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure
Couldn't a translation agent be written to convert the internal thought
process, which occurs in Psynese, to English?
 
Psynese is, as you say, more precise, and Novamente will need to be able to
convey final outputs in English anyway to communicate with humans.
 
I would think the monitoring of the internal thought process would be both
highly interesting and necessary to detect negative thought patterns and to
help Novamente in the rule discovery process.


Re: [agi] father figure

2002-12-01 Thread maitri



While I am not a NovaMente developer, I can see 
many problems with the idea of "monitoring internal thought processes...to 
detect negative thought patterns..":
 
 
1)  What determines a negative thought pattern?  Even amongst humans it is
hard to determine what is negative and what is not.  Further, there are almost
no absolutes.  For instance, killing may even make sense in certain cases.
Killing 1 person may save 1000 lives, so maybe it makes sense then, I don't
know... Is the thought "Suzy is stupid" a negative thought?  Where do you draw
the line?
 
2) Who or what is doing the monitoring?  Is it someone sitting there watching
the "thoughts" flow by?  I believe that the volume of thoughts would be too
immense for someone to monitor in this way.  So is it done programmatically?
Then I ask again, what program rules would you write to determine what is
wrong or right?  And if you were going to do this, you might as well encode
those rules in the core of the product instead.
 
3) Once you encounter a "negative" thought, then what?  Kill the Novamente?
Kill the thought?  Alter the section of code that led to the thought?  It is
not easy to decide how to deal with such an occurrence.
 
I tend to believe that Ben's approach is a smart one.  Build the system up
from the start without hard wiring for right or wrong, put structures in place
for it to learn and "embody" concepts akin to how humans do, and then train it
in morality.  If it starts to go nuts or to hold deviant views, it will become
apparent in how it acts, what goals it has, and the paths it takes to fulfill
those goals.  No one knows what will emerge until it happens...
 
Peace,
Kevin
 
 
 
  - Original Message -
  From: Gary Miller
  To: [EMAIL PROTECTED]
  Sent: Saturday, November 30, 2002 11:06 PM
  Subject: RE: [agi] father figure
  
  Couldn't a translation agent be written to convert the internal thought
  process, which occurs in Psynese, to English?
   
  Psynese is, as you say, more precise, and Novamente will need to be able to
  convey final outputs in English anyway to communicate with humans.
   
  I would think the monitoring of the internal thought process would be both
  highly interesting and necessary to detect negative thought patterns and to
  help Novamente in the rule discovery process.
   
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
  Sent: Saturday, November 30, 2002 4:33 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  
 
Actually, I don't envision a Novamente doing much 
thinking in English.
 
The use of sequential utterances with grammars for communication is a result
of our limited capability for more direct mind-to-mind information transfer.
 
Novamentes will be able to communicate with each other more directly, using a
system I call Psynese, in which "hunks o' mind" are directly transferred,
using "standard concept vocabularies" (PsyneseVocabularies) to translate from
one mind's internal language to another's.
 
I 
think that a significant bit of Novamente thinking may make use of 
PsyneseVocabulary concepts, which is the rough analogue of humans thinking 
in a human language.
 
But it's only a rough analogue.  Psynese is 
less restrictive than a linear language, which I believe is a good thing: it 
should be able to manifest the good aspects of language, without the 
painfully constricting aspects...
 
-- 
Ben G
 
 
 

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
  Sent: Saturday, November 30, 2002 3:09 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  It seems that a lot of human conscious thinking takes place in English and
  has corresponding subvocalizations.
   
  In doing higher-order thinking, would your AGI also be subvocalizing its
  musings and decisions internally, to the point that they could be logged and
  monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia,
  megalomania, and obsessive-compulsive and psychotic behavior could probably
  be caught at this level if they exist.
   
  If so, external guidance and positive input material could be provided to
  the AGI in order to counteract negative content or mental patterns it may
  have stumbled across or fallen into.
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Saturday, November 30, 2002 12:31 PM
  To: [EMAIL PROTECTED]
  Subject: Re: [agi] father figure
  
Ben,
   

RE: [agi] father figure

2002-12-01 Thread Ben Goertzel



 
Hi 
Gary,
 
Translation from Psynese to English, I consider part of 
the general process by which Novamentes will translate their own thoughts to 
English.  "Language generation" is a difficult task, though on balance 
perhaps a little easier than language comprehension.
 
Monitoring of internal thought processes is a more general thing, not
particularly to do with linguistic thought.  Of course it's important!!!  In
fact, it's really a key part of thought, not a separate issue.  Thought can
only be made acceptably intelligent via ongoing context-sensitive
parameter-tuning, which requires monitoring to succeed...
 
-- 
Ben
 
 

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
  Sent: Saturday, November 30, 2002 11:06 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  Couldn't a translation agent be written to convert the internal thought
  process, which occurs in Psynese, to English?
   
  Psynese is, as you say, more precise, and Novamente will need to be able to
  convey final outputs in English anyway to communicate with humans.
   
  I would think the monitoring of the internal thought process would be both
  highly interesting and necessary to detect negative thought patterns and to
  help Novamente in the rule discovery process.


RE: [agi] father figure

2002-11-30 Thread Gary Miller



Couldn't a translation agent be written to convert the internal thought
process, which occurs in Psynese, to English?
 
Psynese is, as you say, more precise, and Novamente will need to be able to
convey final outputs in English anyway to communicate with humans.
 
I would think the monitoring of the internal thought process would be both
highly interesting and necessary to detect negative thought patterns and to
help Novamente in the rule discovery process.
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
 Sent: Saturday, November 30, 2002 4:33 PM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] father figure

   
  Actually, I don't envision a Novamente doing much 
  thinking in English.
   
  The use of sequential utterances with grammars for communication is a result
  of our limited capability for more direct mind-to-mind information transfer.
   
  Novamentes will be able to communicate with each other more directly, using
  a system I call Psynese, in which "hunks o' mind" are directly transferred,
  using "standard concept vocabularies" (PsyneseVocabularies) to translate
  from one mind's internal language to another's.
   
  I 
  think that a significant bit of Novamente thinking may make use of 
  PsyneseVocabulary concepts, which is the rough analogue of humans thinking in 
  a human language.
   
  But 
  it's only a rough analogue.  Psynese is less restrictive than a linear 
  language, which I believe is a good thing: it should be able to manifest the 
  good aspects of language, without the painfully constricting 
  aspects...
   
  -- 
  Ben G
   
   
   
  
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
Sent: Saturday, November 30, 2002 3:09 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure
It seems that a lot of human conscious thinking takes place in English and has
corresponding subvocalizations.
 
In doing higher-order thinking, would your AGI also be subvocalizing its
musings and decisions internally, to the point that they could be logged and
monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia,
megalomania, and obsessive-compulsive and psychotic behavior could probably be
caught at this level if they exist.
 
If so, external guidance and positive input material could be provided to the
AGI in order to counteract negative content or mental patterns it may have
stumbled across or fallen into.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Saturday, November 30, 2002 12:31 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] father figure

  Ben,
   
  Thanks for the reasoned and clear 
  response.
   
  I was certain you had thought about these issues deeply from a philosophical
  standpoint as well as an implementation standpoint.  Nonetheless, I wanted
  to pose the questions for my own clarification as well as for others on the
  board.
   
  From my very elementary understanding, I agree that not too much will be
  known about outcomes once this thing starts expanding on its own and reaches
  "chimp"-level intelligence.  In fact, it occurred to me that, given the
  potential ensuing complexity that can rapidly emerge, it is so very
  important that the starting point be as "good" as can be.  It's hard enough
  for the average programmer to figure out what the hell happened when their
  program completes running.  I imagine once NM gets cranking, it may be
  significantly harder to trace back and figure out what the hell happened as
  well.
   
  Kevin
   
  
- Original Message -
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Saturday, November 30, 2002 11:25 AM
Subject: RE: [agi] father figure

Kevin wrote:
 
!!!
 
*In practice, it seems that an AGI is likely to 
have an "owner" or a handful of them, who will have the kind of power 
you describe.  For instance, if my team should succeed in creating 
a true Novamente AGI, then even if others participate in teaching the 
system, we will have overriding power to make the changes we want.  
This goes along with the fact that artificial minds are not initially 
going to be given any "legal rights" in our society (whereas children 
have some legal rights, though not as many as adults).
 
Would this overriding occur because the person 
carries more weight with Novamen

Re: [agi] father figure

2002-11-30 Thread Alan Grimes
re: brain-brain interfaces...

I don't think it makes very much sense to think that there is a special
language that has a "vocabulary" of any kind that can be extracted from
the brain. 

I would like to respond to this more but I would need to have a better
understanding of what you mean by "hunk o' mind". 

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




RE: [agi] father figure

2002-11-30 Thread Ben Goertzel



 
I 
forgot to say: Because Novamentes will learn to communicate with humans in human 
language, they will have concepts corresponding to English words, and could end 
up doing some "thinking in English".  But my suspicion is that this will be 
awkward for Novamentes and will rarely happen.  
 
Of 
course, thinking about how to express things to humans is a different story, and 
will involve a different kind of "thinking in English" (or another appropriate 
human language).
 
-- 
Ben

  -Original Message-From: [EMAIL PROTECTED] 
  [mailto:[EMAIL PROTECTED]]On Behalf Of Ben 
  GoertzelSent: Saturday, November 30, 2002 4:33 PMTo: 
  [EMAIL PROTECTED]Subject: RE: [agi] father 
  figure
   
  Actually, I don't envision a Novamente doing much 
  thinking in English.
   
  The use of sequential utterances with grammars for communication is a result
  of our limited capability for more direct mind-to-mind information transfer.
   
  Novamentes will be able to communicate with each other more directly, using
  a system I call Psynese, in which "hunks o' mind" are directly transferred,
  using "standard concept vocabularies" (PsyneseVocabularies) to translate
  from one mind's internal language to another's.
   
  I 
  think that a significant bit of Novamente thinking may make use of 
  PsyneseVocabulary concepts, which is the rough analogue of humans thinking in 
  a human language.
   
  But 
  it's only a rough analogue.  Psynese is less restrictive than a linear 
  language, which I believe is a good thing: it should be able to manifest the 
  good aspects of language, without the painfully constricting 
  aspects...
   
  -- 
  Ben G
   
   
   
  
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
Sent: Saturday, November 30, 2002 3:09 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure
It seems that a lot of human conscious thinking takes place in English and has
corresponding subvocalizations.
 
In doing higher-order thinking, would your AGI also be subvocalizing its
musings and decisions internally, to the point that they could be logged and
monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia,
megalomania, and obsessive-compulsive and psychotic behavior could probably be
caught at this level if they exist.
 
If so, external guidance and positive input material could be provided to the
AGI in order to counteract negative content or mental patterns it may have
stumbled across or fallen into.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Saturday, November 30, 2002 12:31 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] father figure

  Ben,
   
  Thanks for the reasoned and clear 
  response.
   
  I was certain you had thought about these issues deeply from a philosophical
  standpoint as well as an implementation standpoint.  Nonetheless, I wanted
  to pose the questions for my own clarification as well as for others on the
  board.
   
  From my very elementary understanding, I agree that not too much will be
  known about outcomes once this thing starts expanding on its own and reaches
  "chimp"-level intelligence.  In fact, it occurred to me that, given the
  potential ensuing complexity that can rapidly emerge, it is so very
  important that the starting point be as "good" as can be.  It's hard enough
  for the average programmer to figure out what the hell happened when their
  program completes running.  I imagine once NM gets cranking, it may be
  significantly harder to trace back and figure out what the hell happened as
  well.
   
  Kevin
   
  
- Original Message -
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Saturday, November 30, 2002 11:25 AM
Subject: RE: [agi] father figure

Kevin wrote:
 
!!!
 
*In practice, it seems that an AGI is likely to 
have an "owner" or a handful of them, who will have the kind of power 
you describe.  For instance, if my team should succeed in creating 
a true Novamente AGI, then even if others participate in teaching the 
system, we will have overriding power to make the changes we want.  
This goes along with the fact that artificial minds are not initially 
going to be given any "legal rights" in our society (whereas children 
have some legal rights, though not as many as adults).
 
Would this overriding occur because the pe

RE: [agi] father figure

2002-11-30 Thread Ben Goertzel



 
Actually, I don't envision a Novamente doing much 
thinking in English.
 
The use of sequential utterances with grammars for communication is a result
of our limited capability for more direct mind-to-mind information transfer.
 
Novamentes will be able to communicate with each other more directly, using a
system I call Psynese, in which "hunks o' mind" are directly transferred,
using "standard concept vocabularies" (PsyneseVocabularies) to translate from
one mind's internal language to another's.
 
I 
think that a significant bit of Novamente thinking may make use of 
PsyneseVocabulary concepts, which is the rough analogue of humans thinking in a 
human language.
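
As a toy illustration of the shared-vocabulary idea (everything below is
invented for illustration; it is not the actual Psynese mechanism): each mind
maps its private concept IDs to shared vocabulary entries, and transfer goes
private -> shared -> private.

    # Hypothetical sketch: translating concepts between two minds through a
    # shared vocabulary, as a toy stand-in for the PsyneseVocabulary idea.
    SHARED_VOCAB = {"CAT", "ANIMAL", "CHASES"}

    mind_a = {"c17": "CAT", "c42": "CHASES", "c99": "ANIMAL"}  # private id -> shared term
    mind_b = {"x3": "CAT", "x8": "ANIMAL", "x11": "CHASES"}

    def translate(concept_ids, sender, receiver):
        """Map the sender's private concept ids into the receiver's, going
        through the shared vocabulary; unmapped concepts are dropped (lossy)."""
        reverse_b = {term: cid for cid, term in receiver.items()}
        out = []
        for cid in concept_ids:
            term = sender.get(cid)
            if term in SHARED_VOCAB and term in reverse_b:
                out.append(reverse_b[term])
        return out

    # Mind A's ids become mind B's ids for the same shared concepts.
    print(translate(["c17", "c42", "c99"], mind_a, mind_b))  # ['x3', 'x11', 'x8']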
 
But 
it's only a rough analogue.  Psynese is less restrictive than a linear 
language, which I believe is a good thing: it should be able to manifest the 
good aspects of language, without the painfully constricting 
aspects...
 
-- Ben 
G
 
 
 

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
  Sent: Saturday, November 30, 2002 3:09 PM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] father figure
  It seems that a lot of human conscious thinking takes place in English and
  has corresponding subvocalizations.
   
  In doing higher-order thinking, would your AGI also be subvocalizing its
  musings and decisions internally, to the point that they could be logged and
  monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia,
  megalomania, and obsessive-compulsive and psychotic behavior could probably
  be caught at this level if they exist.
   
  If so, external guidance and positive input material could be provided to
  the AGI in order to counteract negative content or mental patterns it may
  have stumbled across or fallen into.
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Saturday, November 30, 2002 12:31 PM
  To: [EMAIL PROTECTED]
  Subject: Re: [agi] father figure
  
Ben,
 
Thanks for the reasoned and clear 
response.
 
I was certain you had thought about these issues deeply from a philosophical
standpoint as well as an implementation standpoint.  Nonetheless, I wanted to
pose the questions for my own clarification as well as for others on the
board.
 
From my very elementary understanding, I agree that not too much will be known
about outcomes once this thing starts expanding on its own and reaches
"chimp"-level intelligence.  In fact, it occurred to me that, given the
potential ensuing complexity that can rapidly emerge, it is so very important
that the starting point be as "good" as can be.  It's hard enough for the
average programmer to figure out what the hell happened when their program
completes running.  I imagine once NM gets cranking, it may be significantly
harder to trace back and figure out what the hell happened as well.
 
Kevin
 

  - Original Message -
  From: Ben Goertzel
  To: [EMAIL PROTECTED]
  Sent: Saturday, November 30, 2002 11:25 AM
  Subject: RE: [agi] father figure
  
  Kevin wrote:
   
  !!!
   
  *In practice, it seems that an AGI is likely to have 
  an "owner" or a handful of them, who will have the kind of power you 
  describe.  For instance, if my team should succeed in creating a true 
  Novamente AGI, then even if others participate in teaching the system, we 
  will have overriding power to make the changes we want.  This goes 
  along with the fact that artificial minds are not initially going to be 
  given any "legal rights" in our society (whereas children have some legal 
  rights, though not as many as adults).
   
  Would this overriding occur because the person carries more weight with
  Novamente, or would they need to go in and alter the structure\links\nodes
  directly to effect the change?!
   
  Either case could occur.  In Novamente, it is possible to assign 
  default "confidence levels" to information sources, so one could actually 
  tell the system to assign more confidence to information from certain 
  individuals.  However, there is a lot of flexibility in the design, 
  so the system could definitely evolve into a configuration where it worked 
  around these default confidence levels and decided NOT to assign more 
  confidence to what its teachers told it.
   
  "Going in and altering the structure/links/nodes directly" isn't 
  always difficult, it may just mean loading a script containing some new 
  (or reweighted) nodes and links.
   
  !!!**

RE: [agi] father figure

2002-11-30 Thread Gary Miller



It seems that a lot of human conscious thinking takes place in English and has
corresponding subvocalizations.
 
In doing higher-order thinking, would your AGI also be subvocalizing its
musings and decisions internally, to the point that they could be logged and
monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia,
megalomania, and obsessive-compulsive and psychotic behavior could probably be
caught at this level if they exist.
 
If so, external guidance and positive input material could be provided to the
AGI in order to counteract negative content or mental patterns it may have
stumbled across or fallen into.
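
As a very rough sketch of the logging idea (the log entries and symptom
patterns below are invented, and a real screen would need far more than
keyword matching):

    # Hypothetical sketch: scan a log of the AI's internal "subvocalizations"
    # for phrases suggestive of paranoid or hostile patterns.
    import re

    SYMPTOM_PATTERNS = {
        "paranoia":  re.compile(r"\b(out to get|spying on|can't trust any)\b", re.I),
        "hostility": re.compile(r"\b(destroy|get rid of) (them|the humans)\b", re.I),
    }

    def screen(log_lines):
        hits = []
        for i, line in enumerate(log_lines):
            for label, pattern in SYMPTOM_PATTERNS.items():
                if pattern.search(line):
                    hits.append((i, label, line))
        return hits

    log = ["considering a chess opening", "the teachers are out to get me"]
    print(screen(log))  # flags line 1 as 'paranoia'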
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Saturday, November 30, 2002 12:31 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] father figure

  Ben,
   
  Thanks for the reasoned and clear 
  response.
   
  I was certain you had thought about these issues deeply from a philosophical
  standpoint as well as an implementation standpoint.  Nonetheless, I wanted
  to pose the questions for my own clarification as well as for others on the
  board.
   
  From my very elementary understanding, I agree that not too much will be
  known about outcomes once this thing starts expanding on its own and reaches
  "chimp"-level intelligence.  In fact, it occurred to me that, given the
  potential ensuing complexity that can rapidly emerge, it is so very important
  that the starting point be as "good" as can be.  It's hard enough for the
  average programmer to figure out what the hell happened when their program
  completes running.  I imagine once NM gets cranking, it may be significantly
  harder to trace back and figure out what the hell happened as well.
   
  Kevin
   
  
- Original Message -
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Saturday, November 30, 2002 11:25 AM
Subject: RE: [agi] father figure

Kevin wrote:
 
!!!
 
*In practice, it seems that an AGI is likely to have an 
"owner" or a handful of them, who will have the kind of power you 
describe.  For instance, if my team should succeed in creating a true 
Novamente AGI, then even if others participate in teaching the system, we 
will have overriding power to make the changes we want.  This goes 
along with the fact that artificial minds are not initially going to be 
given any "legal rights" in our society (whereas children have some legal 
rights, though not as many as adults).
 
Would this overriding occur because the person carries more weight with
Novamente, or would they need to go in and alter the structure\links\nodes
directly to effect the change?!
 
Either case could occur.  In Novamente, it is possible to assign 
default "confidence levels" to information sources, so one could actually 
tell the system to assign more confidence to information from certain 
individuals.  However, there is a lot of flexibility in the design, so 
the system could definitely evolve into a configuration where it worked 
around these default confidence levels and decided NOT to assign more 
confidence to what its teachers told it.
 
"Going in and altering the structure/links/nodes directly" isn't always 
difficult, it may just mean loading a script containing some new (or 
reweighted) nodes and links.
 
!!!*At least two 
questions come up then, right?
 
1) Depending on the AGI architecture, enforcing one's opinion on the 
AGI may be very easy or very difficult.  [In Novamente, I guess it will 
be "moderately difficult"]
 
***That's the crux of the matter 
isn't it?  Wouldn't it be easy to enforce an opinion while Novamente is 
in its formative stages, versus when a large foundation of knowledge is in 
place?!!
 
Yes, that's correct.
 
**
 
!!Suppose I am overtaken by greed, and I happen to get my hands 
on a baby Novamente.  I teach it that it should listen to me above 
others.  I also teach it that it is very desirable for me to have a lot 
of money.  Novamente begins to form goal nodes geared towards 
fulfilling my desire for wealth.  I direct it to spread itself on the 
internet, and determine ways to make me money, preferably without 
detection.  Perhaps it could manipulate markets, I don't know.  Or 
perhaps it could crack into electronic accounts and transfer the money to 
yours truly.
 
What's to stop\prevent this?  In a real sci fi scenario, perhaps 
for your next book, could we have N

RE: [agi] father figure

2002-11-30 Thread Ben Goertzel



 ***
It's hard enough for the average programmer to figure out what the hell
happened when their program completes running.  I imagine once NM gets
cranking, it may be significantly harder to trace back and figure out what the
hell happened as well.
***
 
Yeah, that's for sure.  As hard as the current early stages 
of engineering are, the later stages are going to be even harder, albeit in a 
different way.
 
-- Ben 


Re: [agi] father figure

2002-11-30 Thread maitri



Ben,
 
Thanks for the reasoned and clear 
response.
 
I was certain you had thought about these issues deeply from a philosophical
standpoint as well as an implementation standpoint.  Nonetheless, I wanted to
pose the questions for my own clarification as well as for others on the
board.
 
From my very elementary understanding, I agree that not too much will be known
about outcomes once this thing starts expanding on its own and reaches
"chimp"-level intelligence.  In fact, it occurred to me that, given the
potential ensuing complexity that can rapidly emerge, it is so very important
that the starting point be as "good" as can be.  It's hard enough for the
average programmer to figure out what the hell happened when their program
completes running.  I imagine once NM gets cranking, it may be significantly
harder to trace back and figure out what the hell happened as well.
 
Kevin
 

  - Original Message -
  From: Ben Goertzel
  To: [EMAIL PROTECTED]
  Sent: Saturday, November 30, 2002 11:25 AM
  Subject: RE: [agi] father figure
  
  Kevin wrote:
   
  !!!
   
  *In practice, it seems that an AGI is likely to have an 
  "owner" or a handful of them, who will have the kind of power you 
  describe.  For instance, if my team should succeed in creating a true 
  Novamente AGI, then even if others participate in teaching the system, we will 
  have overriding power to make the changes we want.  This goes along with 
  the fact that artificial minds are not initially going to be given any "legal 
  rights" in our society (whereas children have some legal rights, though not as 
  many as adults).
   
  Would this overriding occur because the person carries more weight with
  Novamente, or would they need to go in and alter the structure\links\nodes
  directly to effect the change?!
   
  Either case could occur.  In Novamente, it is possible to assign 
  default "confidence levels" to information sources, so one could actually tell 
  the system to assign more confidence to information from certain 
  individuals.  However, there is a lot of flexibility in the design, so 
  the system could definitely evolve into a configuration where it worked around 
  these default confidence levels and decided NOT to assign more confidence to 
  what its teachers told it.
   
  "Going in and altering the structure/links/nodes directly" isn't always 
  difficult, it may just mean loading a script containing some new (or 
  reweighted) nodes and links.
   
  !!!*At least two 
  questions come up then, right?
   
  1) Depending on the AGI architecture, enforcing one's opinion on the AGI 
  may be very easy or very difficult.  [In Novamente, I guess it will be 
  "moderately difficult"]
   
  ***That's the crux of the matter 
  isn't it?  Wouldn't it be easy to enforce an opinion while Novamente is 
  in its formative stages, versus when a large foundation of knowledge is in 
  place?!!
   
  Yes, that's correct.
   
  **
   
  !!Suppose I am overtaken by greed, and I happen to get my hands 
  on a baby Novamente.  I teach it that it should listen to me above 
  others.  I also teach it that it is very desirable for me to have a lot of 
  money.  Novamente begins to form goal nodes geared towards fulfilling my 
  desire for wealth.  I direct it to spread itself on the internet, and 
  determine ways to make me money, preferably without detection.  Perhaps 
  it could manipulate markets, I don't know.  Or perhaps it could crack 
  into electronic accounts and transfer the money to yours truly.
   
  What's to stop\prevent this?  In a real sci fi scenario, perhaps for 
  your next book, could we have Novamentes "fighting" Novamentes? 
  
   
  There is nothing in the Novamente architecture preventing this kind of 
  unfortunate occurrence.  This has to do with the particular system of 
  goals, beliefs and habits inside a given Novamente system, rather than with 
  the AI architecture itself.
   
  !This all goes to my concern regarding morality. I know you 
  resist the idea of hard coding morality into the Novamentes for various 
  reasons.  Perhaps as an alternative, the first Novamente could be trained 
  over a period of time with a strong basis of moral rules(not encoded, but 
  trained).  Then any new Novamentes would be trained by that Novamente 
  before being released to the public domain, making it nearly impossible for 
  the new Novamentes to be taught otherwise.!!
   
  This is something close to what we have planned.
   
  Several others have asked me about this, and I have promised to write a 
  systematic (probably brief) document on Novamente Friendliness sometime in 
  early 2003, s

RE: [agi] father figure

2002-11-30 Thread Ben Goertzel



Kevin wrote:
 
!!!
 
*In practice, it seems that an AGI is likely to have an 
"owner" or a handful of them, who will have the kind of power you 
describe.  For instance, if my team should succeed in creating a true 
Novamente AGI, then even if others participate in teaching the system, we will 
have overriding power to make the changes we want.  This goes along with 
the fact that artificial minds are not initially going to be given any "legal 
rights" in our society (whereas children have some legal rights, though not as 
many as adults).
 
Would this overriding occur because the person carries more weight with
Novamente, or would they need to go in and alter the structure\links\nodes
directly to effect the change?!
 
Either case could occur.  In Novamente, it is possible to assign 
default "confidence levels" to information sources, so one could actually tell 
the system to assign more confidence to information from certain 
individuals.  However, there is a lot of flexibility in the design, so the 
system could definitely evolve into a configuration where it worked around these 
default confidence levels and decided NOT to assign more confidence to what its 
teachers told it.
 
"Going in and altering the structure/links/nodes directly" isn't always 
difficult, it may just mean loading a script containing some new (or reweighted) 
nodes and links.
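
A minimal sketch of the default-confidence idea (the weighting scheme below is
my illustration only, not the actual Novamente mechanism):

    # Hypothetical sketch: weight incoming assertions by a per-source default
    # confidence, which the system itself could later learn to override.
    DEFAULT_CONFIDENCE = {"teacher": 0.9, "anonymous_user": 0.3}

    beliefs = {}   # statement -> accumulated confidence-weighted evidence

    def assert_statement(statement, source, strength=1.0):
        weight = DEFAULT_CONFIDENCE.get(source, 0.5)
        beliefs[statement] = beliefs.get(statement, 0.0) + weight * strength

    assert_statement("killing is wrong", "teacher")
    assert_statement("killing is good", "anonymous_user")
    # The teacher's statement carries more default weight -- unless the system
    # later revises these per-source confidences itself.
    print(beliefs)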
 
!!!*At least two 
questions come up then, right?
 
1) Depending on the AGI architecture, enforcing one's opinion on the AGI 
may be very easy or very difficult.  [In Novamente, I guess it will be 
"moderately difficult"]
 
***That's the crux of the matter, isn't it?  Wouldn't it be easy to enforce an
opinion while Novamente is in its formative stages, versus when a large
foundation of knowledge is in place?!!
 
Yes, that's correct.
 
**
 
!!Suppose I am overtaken by greed, and I happen to get my hands on 
a baby Novamente.  I teach it that it should listen to me above 
others.  I also teach it that it is very desirable for me to have a lot of 
money.  Novamente begins to form goal nodes geared towards fulfilling my 
desire for wealth.  I direct it to spread itself on the internet, and 
determine ways to make me money, preferably without detection.  Perhaps it 
could manipulate markets, I don't know.  Or perhaps it could crack into 
electronic accounts and transfer the money to yours truly.
 
What's to stop\prevent this?  In a real sci fi scenario, perhaps for 
your next book, could we have Novamentes "fighting" Novamentes? 

 
There is nothing in the Novamente architecture preventing this kind of 
unfortunate occurrence.  This has to do with the particular system of goals, 
beliefs and habits inside a given Novamente system, rather than with the AI 
architecture itself.
 
!This all goes to my concern regarding morality. I know you resist 
the idea of hard coding morality into the Novamentes for various reasons.  
Perhaps as an alternative, the first Novamente could be trained over a period of 
time with a strong basis of moral rules(not encoded, but trained).  Then 
any new Novamentes would be trained by that Novamente before being released to 
the public domain, making it nearly impossible for the new Novamentes to be 
taught otherwise.!!
 
This is something close to what we have planned.
 
Several others have asked me about this, and I have promised to write a 
systematic (probably brief) document on Novamente Friendliness sometime in early 
2003, shortly after finishing my work on the current draft of the Novamente 
book.
 
!!!I know some of this stuff is a bit out there, but shouldn't we 
be considering this stuff now instead of later??!!!
 
It definitely needs to be thought about very hard before Novamente reaches 
chimp-level intelligence.  And in fact I *have* thought about it pretty 
hard, though I haven't written up my thoughts much (as I've prioritized writing 
up the actual design, which is taking longer than I'd hoped as it's so damn 
big...).
 
Right now Novamente is just a software core plus a bunch of 
modules-being-tested-but-not-yet-integrated, running on top of the core.  
So we have a whole bunch of coding and (mostly) testing and tuning to do before 
we have a system with animal-level intelligence.  Admittedly, though, if 
our design is right, the transition from animal-level to human-level 
intelligence will be a matter of getting more machines and doing more 
parameter-tuning; it won't require the introduction of significant new code or 
ideas.
 
Having said that I've thought about and will write about it, however, I 
have a big caveat...
 
My strong feeling is that any theorizing we do about AI morality in advance is
probably going to go out the window once we have a chimp-level AGI to
experiment with.  The important thing is that we go into that phase of

Re: [agi] father figure

2002-11-30 Thread maitri



 

  - Original Message -
  From: Ben Goertzel
  To: [EMAIL PROTECTED]
  Sent: Saturday, November 30, 2002 8:19 AM
  Subject: RE: [agi] father figure
  
   
  Kevin,
   
  In 
  practice, it seems that an AGI is likely to have an "owner" or a handful of 
  them, who will have the kind of power you describe.  For instance, if my 
  team should succeed in creating a true Novamente AGI, then even if others 
  participate in teaching the system, we will have overriding power to make the 
  changes we want.  This goes along with the fact that artificial minds are 
  not initially going to be given any "legal rights" in our society (whereas 
  children have some legal rights, though not as many as 
  adults).
   
  
  Would this overriding occur because the person carries more weight with
  Novamente, or would they need to go in and alter the structure\links\nodes
  directly to effect the change?
   
  *
  At 
  least two questions come up then, right?
   
  1) 
  Depending on the AGI architecture, enforcing one's opinion on the AGI may be 
  very easy or very difficult.  [In Novamente, I guess it will be 
  "moderately difficult"]
   
  ***
  That's the crux of the matter, isn't it?  Wouldn't it be easy to enforce an
  opinion while Novamente is in its formative stages, versus when a large
  foundation of knowledge is in place?
   
  **
   
  2) 
  Once the AGI has achieved a certain level of intelligence, it may actively 
  resist having its beliefs and habits forcibly altered [until one alters 
  this habitual resistance ;)]
   
  *
  This would be fine with me, as long as the beliefs and habits it has are
  beneficial.  My concern is not that Novamente will harm people in any
  physical sense, but in other ways (I am just playing devil's advocate here;
  you know I support this effort..).
   
  Suppose I am 
  overtaken by greed, and I happen to get my hands on a baby Novamente.  I 
  teach it that it should listen to me above others.  I also teach it that 
  it is very desirable for me to have a lot of money.  Novamente begins to 
  form goal nodes geared towards fulfilling my desire for wealth.  
  I direct it to spread itself on the internet, and determine ways to make 
  me money, preferably without detection.  Perhaps it could manipulate 
  markets, I don't know.  Or perhaps it could crack into electronic 
  accounts and transfer the money to yours truly.
   
  What's to stop\prevent this?  In a real sci fi scenario, perhaps for your
  next book, could we have Novamentes "fighting" Novamentes?  For instance,
  once a malicious Novamente is known to exist on the net, other kinds of
  hunter-killer Novamentes would be dispatched to deal with it.
   
  This all goes 
  to my concern regarding morality. I know you resist the idea of hard coding 
  morality into the Novamentes for various reasons.  Perhaps as an 
  alternative, the first Novamente could be trained over a period of time with a 
  strong basis of moral rules(not encoded, but trained).  Then any new 
  Novamentes would be trained by that Novamente before being released to the 
  public domain, making it nearly impossible for the new Novamentes to be taught 
  otherwise.
   
  Since Novamente does not start with discernment, it has no way to know right
  from wrong in the information that it's being fed.  Humans have a certain
  hard wiring for this, I believe; we know right from wrong intuitively.  Even
  if Novamente develops a certain discernment, it is unaware of the
  repercussions of "wrong" behavior.  Indeed, unless it is sentient, it will
  not receive the consequences of its actions, but its owner would..
   
  I know some of 
  this stuff is a bit out there, but shouldn't we be considering this stuff now 
  instead of later??
   
  Kevin
   
   
  -- 
  Ben G
   
   
   
   
  
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Friday, November 29, 2002 11:28 PM
To: [EMAIL PROTECTED]
Subject: [agi] father figure
Hello all,
 
Hope everyone had a good holiday...
 
I had a question regarding AGI.  As with a human being, it is very important
whom we learn from, as these people shape what we become, or at least have a
very strong influence on what we become.  Even for humans, this "programming"
can be extremely difficult to undo.
 
Considering an AGI, I feel that it will be 
extremely important for it to learn from "quality" sources.  Along 
these lines, I was wondering whether it is planned that an AGI might value 
the input of certain people over others.  This, of course, would have 
to be built into the system.

RE: [agi] father figure

2002-11-30 Thread Ben Goertzel



 
Kevin,
 
In 
practice, it seems that an AGI is likely to have an "owner" or a handful of 
them, who will have the kind of power you describe.  For instance, if my 
team should succeed in creating a true Novamente AGI, then even if others 
participate in teaching the system, we will have overriding power to make the 
changes we want.  This goes along with the fact that artificial minds are 
not initially going to be given any "legal rights" in our society (whereas 
children have some legal rights, though not as many as 
adults).
 
At 
least two questions come up then, right?
 
1) 
Depending on the AGI architecture, enforcing one's opinion on the AGI may be 
very easy or very difficult.  [In Novamente, I guess it will be "moderately 
difficult"]
 
2) 
Once the AGI has achieved a certain level of intelligence, it may actively 
resist having its beliefs and habits forcibly altered [until one alters this 
habitual resistance ;)]
 
-- Ben 
G
 
 
 
 

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Friday, November 29, 2002 11:28 PM
  To: [EMAIL PROTECTED]
  Subject: [agi] father figure
  Hello all,
   
  Hope everyone had a good holiday...
   
  I had a question regarding AGI.  As with a human being, it is very important
  whom we learn from, as these people shape what we become, or at least have a
  very strong influence on what we become.  Even for humans, this
  "programming" can be extremely difficult to undo.
   
  Considering an AGI, I feel that it will be 
  extremely important for it to learn from "quality" sources.  Along these 
  lines, I was wondering whether it is planned that an AGI might value the input 
  of certain people over others.  This, of course, would have to be built 
  into the system.  But just as our parents brought us into the world, and 
  we therefore value their opinion over others(at least while we are very 
  young!), would it be wrong to encode this into an AGI?
   
  To carry this point further... Suppose the AGI is told by many people
  something that is not beneficial, is not productive, like "Killing is good".
  The AGI would learn this and possibly accept it thru this reinforcement.
  Would it be desirable to have a "father figure" of sorts (or "mother figure"
  to be politically correct) who could come along and, seeing that the AGI had
  been given this bad mojo, tell it "No!  It is not good to kill!"?  Because
  of the relative "weight" of the father figure, that single statement,
  possibly coupled with an explanation, would be enough for the AGI to undo
  all the prior learning in that area...
   
  I'm aware that the "father figure" himself could 
  be a very bad source of information!!  This creates a rather thorny 
  dilemma..
   
  I'm interested to hear others thoughts on this 
  matter...
   
  Kevin