Steve Richfield said:

A useful AGI must be able to rise above its own orders to be able to eliminate 
problems rather than destroying them!

I agree that an AGI, to be friendly, must not blindly obey a human user. I 
would rather have it act according to humanity's collective volition, as 
described by Yudkowsky. That is, the AGI should do what a prudent set of 
representative humans would do when faced with the same command.

But I have another solution to the abortion example you mentioned.  I believe 
that Texai, which will be deployed as a distributed community of specialized 
agents acting in concert, should obey the laws of the geographical territory in 
which it operates, and should conform, to the degree possible, to the cultural 
mores of the user.  So my progressive-issue-buddy AGI agent may well know a lot 
about pro-choice arguments and how to advocate them, whereas someone else's 
Christian-church-buddy AGI agent may well know a lot about abortion 
alternatives and how to advocate those.  In a country like the USA, both of 
these agents could operate.  In some other country, perhaps only one of them 
could (e.g. the Chinese government may have a policy that AGIs operating in 
their country fully support its one-child policy).
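As a rough sketch of how such jurisdiction-aware gating might look (every name 
here is hypothetical, invented for the example rather than taken from actual 
Texai code), a Java agent could consult a per-territory policy before 
activating an advocacy skill:

import java.util.Map;
import java.util.Set;

public class JurisdictionPolicy {
    // jurisdiction code -> identifiers of skills lawful in that territory
    private final Map<String, Set<String>> permittedSkills;

    public JurisdictionPolicy(Map<String, Set<String>> permittedSkills) {
        this.permittedSkills = permittedSkills;
    }

    // True only if the skill is lawful where this agent operates.
    public boolean isPermitted(String jurisdiction, String skillId) {
        Set<String> skills = permittedSkills.get(jurisdiction);
        return skills != null && skills.contains(skillId);
    }
}

// Usage: an agent checks before activating an advocacy skill, e.g.
//   if (policy.isPermitted("US", "pro-choice-advocacy")) { activate(skill); }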


Steve Richfield said:

Presuming that you do NOT want to store all of history and repeatedly analyze 
all of it as your future AGI operates, you must accept MULTIPLE 
potentially-useful paradigms, adding new ones and trashing old ones as more 
information comes in.

For humans, forgetting is cognitively efficient, which makes a case that AGIs 
should likewise forget.  But because computer storage is relatively cheap and 
knowledge relatively concise, I plan to archive all conversations with users, 
to the degree permitted by each user's privacy policy.  For users operating a 
local Texai instance, their private data will be archived locally and, if 
permitted, mirrored in encrypted form on other (remote) Texai instances.
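As a minimal illustration of the mirroring idea, the following Java sketch 
encrypts a transcript with the standard javax.crypto AES classes before it 
leaves the local instance; the class and method names are my own invention 
here, not actual Texai code:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class ConversationArchiver {

    // Encrypts a transcript so it can be mirrored on a remote Texai instance.
    // The remote host stores only ciphertext; the iv is stored alongside it,
    // but the key never leaves the local instance.
    public static byte[] encryptForMirror(String transcript, SecretKey key,
            byte[] iv) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        return cipher.doFinal(transcript.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();   // stays on the local instance
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);   // fresh IV per archived record
        byte[] mirrored = encryptForMirror("user: hello\ntexai: hi there",
                key, iv);
        System.out.println("mirrored " + mirrored.length + " encrypted bytes");
    }
}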


Steve Richfield said:

Wikipedia presumes a WASP (White Anglo-Saxon Protestant) or other 
single-paradigm view of the world, as do the AGI designs that I have observed.
The English Wikipedia may have this unfortunate characteristic even though it 
is contrary to the organization's policies, but surely that cannot be the case 
for all the other Wikipedias, such as the Spanish Wikipedia.  Interest in 
Texai is balanced between the USA and the rest of the world, as shown by a 
cluster map of my blog readers.  Having an AGI organized as a geographically 
distributed community may at least partially solve the issue you raise.

Cheers,
-Steve

Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



----- Original Message ----
From: Steve Richfield <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, May 27, 2008 12:23:37 PM
Subject: Merging threads was Re: Code generation was Re: [agi] More Info Please

Steve,


On 5/26/08, Stephen Reed <[EMAIL PROTECTED]> wrote: 
But I have a perhaps more troublesome issue in that abusive mentors may seek to 
teach destructive behavior to the system, and such abuse must be easily 
detected and its effects healed, to the frustration of the abusers.  E.g. in 
the same fashion as Wikipedia.
 
At this point, three discussion threads come together...
 
1.  Erroneous motivations. Most human strife is based on erroneous motivations, 
which are often good ideas expressed at the wrong meta-level. For example, it 
would seem good to minimize the number of abortions, since an abortion is one 
effort simply countering another and hence is at minimum a waste of effort. 
However, stopping others from having abortions starts a needless battle, when 
all that may be necessary is some subtle social engineering so that no one 
would ever want one. If we tell our AGI to stop all killing, we'll probably 
just get another war, whereas if we tell it to do some social engineering to 
reduce or eliminate the motivation to kill, we will get a very different 
result. Unfortunately, this all goes WAY over the heads of most of the 
Wikipedia-filling population, not to mention many people working on an AGI. All 
of the discussions here (that I have seen) regarding AGIs gone berserk have 
presumed erroneous motivations, and then cringed at the prospective results. A 
useful AGI must be able to rise above its own orders to be able to eliminate 
problems rather than destroying them!
 
2. Learning and thinking. Presuming that you do NOT want to store all of 
history and repeatedly analyze all of it as your future AGI operates, you must 
accept MULTIPLE potentially-useful paradigms, adding new ones and trashing old 
ones as more information comes in. Our own very personal ideas of learning and 
thinking do NOT typically allow for the maintenance of multiple simultaneous 
paradigms, cross-paradigm translation, etc. If our future AGI is to function at 
the astronomical level that people here hope it will, it will NOT be thinking 
as we do, but will be doing something quite orthogonal to our own personal 
processes. Either people must tackle what will be needed to accomplish this 
(analysis), or there would seem to be little hope for future success, because 
debugging is impossible in a system whose correct operation is 
unknown/unthinkable. I tackled a very small part of this, as needed to support 
Dr. Eliza development. Obviously, MUCH more analysis is needed for the AGI that 
everyone hopes will come out of this process. Development without analysis 
(which covers most of the postings on this forum) simply consigns the results 
to the bit bucket.
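As a toy illustration of that bookkeeping (everything below is invented for 
the example, not drawn from any existing AGI design), one can keep a pool of 
competing paradigms, score each against incoming evidence, admit new ones, and 
trash persistently weak ones:

import java.util.ArrayList;
import java.util.List;

public class ParadigmPool {
    // How well a given paradigm explains a piece of evidence, in [0, 1].
    interface Scorer { double fit(String paradigm, String evidence); }

    static class Paradigm {
        final String name;
        double score;
        Paradigm(String name, double score) { this.name = name; this.score = score; }
    }

    private final List<Paradigm> paradigms = new ArrayList<>();
    private final double pruneThreshold;

    public ParadigmPool(double pruneThreshold) { this.pruneThreshold = pruneThreshold; }

    // Admit a new candidate paradigm at a neutral starting score.
    public void admit(String name) { paradigms.add(new Paradigm(name, 0.5)); }

    // Each piece of evidence nudges every paradigm's score toward how well
    // that paradigm explains it (an exponential moving average), so the pool
    // never needs to re-analyze all of history; weak paradigms get trashed.
    public void observe(String evidence, Scorer scorer) {
        for (Paradigm p : paradigms) {
            p.score = 0.9 * p.score + 0.1 * scorer.fit(p.name, evidence);
        }
        paradigms.removeIf(p -> p.score < pruneThreshold);
    }
}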
 
3.  Wikipedia miscreants. Wikipedia presumes a WASP (White Anglo-Saxon 
Protestant) or other single-paradigm view of the world, as do the AGI designs 
that I have observed. If abusive mentors are a significant problem, then there 
is something wrong with the design. At worst, an abusive mentor should simply 
be bringing a dysfunctional paradigm into consideration, which may actually be 
useful for communicating with the abusive mentor in their own terms. Wikipedia 
can never become really useful until it integrates a multiple-paradigm view of 
things, whereupon the concept of "abuse" should evaporate.
 
Now, if we could just pull these all together and get our arms around multiple 
paradigms and erroneous motivations, we might have a really USEFUL discussion.
 
Steve Richfield
=================
 
----- Original Message ----
From: William Pearson <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, May 26, 2008 2:28:32 PM
Subject: Code generation was Re: [agi] More Info Please

2008/5/26 Stephen Reed <[EMAIL PROTECTED]>:
> Regarding the best language for AGI development, most here know that I'm
> using Java in Texai.  For skill acquisition, my strategy is to have Texai
> acquire a skill by composing a Java program to perform the learned skill.


How will it manage memory between skills? You want to avoid
thrashing memory. The Java memory system allows any program to ask
for as much memory as it needs, which could lead to
tragedy-of-the-commons situations.
 
  Will Pearson
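One conceivable mitigation, sketched in Java purely as an illustration (this 
class is hypothetical, not Texai's actual design): give the generated skills a 
shared, explicit memory budget that each must reserve against before 
allocating, so no single skill can starve the others.

import java.util.concurrent.Semaphore;

public class MemoryBudget {
    // Permits represent megabytes of a shared heap budget for all skills.
    private final Semaphore megabytes;

    public MemoryBudget(int totalMegabytes) {
        this.megabytes = new Semaphore(totalMegabytes, true);  // fair ordering
    }

    // Blocks until the requested quota is available.
    public void reserve(int mb) throws InterruptedException {
        megabytes.acquire(mb);
    }

    public void release(int mb) {
        megabytes.release(mb);
    }
}

// A skill brackets its large allocations with the budget:
//   budget.reserve(64);
//   try { byte[] workspace = new byte[64 * 1024 * 1024]; /* ... */ }
//   finally { budget.release(64); }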


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com
