Stephen,

This is simply amazing! I thought that I had made some key points, but I failed
to accurately communicate any of the ones that you commented on! Hmmm, I
wonder whether the fault lies in my posting, in your reading of it, in erroneous
interpretations somewhere in between, or what? Perhaps someone else on this
forum can debug the disconnect? I will (attempt to) clarify, not to argue my
points (again), but more to explain that I was really saying something
different from what you apparently read.

On 5/27/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
>  Steve Richfield said:
> A useful AGI must be able to rise above its own orders to be able to
> eliminate problems rather than destroy them!
>
>
> I agree that an AGI, to be friendly, must not blindly obey a human user. I
> would rather have it act according to humanity's collective volition
> <http://www.sl4.org/archive/0406/9244.html> as described by Yudkowsky.
> I.e. the AGI should do what a prudent set of
> representative humans would do when faced with the same command.
>
> But I have another solution to the abortion example you mentioned.  I
> believe that Texai, which will be deployed as a distributed community of
> specialized agents acting in concert, should obey the laws of the
> geographical territory in which it operates, and should conform to the
> degree possible to the cultural mores of the user.  So my
> progressive-issue-buddy AGI agent may well know a lot about pro-choice and
> how to advocate it, whereas someone else's Christian-church-buddy AGI agent
> may well know a lot about abortion alternatives and how to advocate those.
> In a country like the USA both of these agents may be operable.  In some
> other country perhaps only one of them could operate (e.g. the Chinese
> government may have a policy that AGIs operating in their country fully
> support their one-child policies).
>

THIS is EXACTLY the sort of thing that a USEFUL AGI will NOT do! I believe
that it is NOT possible to usefully comment in this area without an
understanding of Reverse Reductio ad Absurdum logic. The absurdity of many
arguments is ABSOLUTE PROOF of a COMMON underlying FALSE assumption. Your
proposal, and others like it, in effect turn AGIs loose on stupid attempts to
enforce one side of a reductio ad absurdum proof that BOTH sides have a brain
short. Instead of pumping more current through the brain short, let's undo the
short. The problem is that the short is often in the "prime directives". I
(and other sane people) should want NO PART of AGIs that are there just to
pump more current through brain shorts.
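
To spell out the inference I'm relying on (a propositional sketch; the
notation is mine, not anyone's formal system): when two camps share an
unexamined assumption A, and BOTH a position P and its negation lead to
absurdity under A, it is the shared assumption that stands refuted:

    ((A \land P) \to \bot) \land ((A \land \neg P) \to \bot) \vdash \neg A

An AGI that merely enforces P or \neg P is pumping current through the
short; one that infers \neg A undoes it.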

>  Presuming that you do NOT want to store all of history and repeatedly
> analyze all of it as your future AGI operates, you must accept MULTIPLE
> potentially-useful paradigms, adding new ones and trashing old ones as more
> information comes in.
>
>
> For humans, forgetting is cognitively efficient, making a case that AGIs
> should likewise forget.
>

All memory is built on paradigms - general understandings of how things
work. Where these are later discovered to be erroneous, or where there is
more than one useful paradigm (e.g. one that provides better explanations,
while another better predicts cures), the memories themselves become
unusable when new paradigms are introduced.
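
To make that concrete, here is a minimal Java sketch (hypothetical names
throughout - this is not actual Texai code) of memories indexed by the
paradigm under which they were encoded, so that retiring a paradigm marks
its memories unusable rather than leaving them to silently mislead:

    import java.util.*;

    /** Hypothetical sketch: memories are indexed by the paradigm they were
        encoded under; retiring a paradigm invalidates its memories. */
    class ParadigmStore {
        private final Map<String, List<String>> memoriesByParadigm =
            new HashMap<String, List<String>>();
        private final Set<String> retiredParadigms = new HashSet<String>();

        /** Store a memory under the paradigm used to interpret it. */
        void remember(String paradigm, String memory) {
            List<String> memories = memoriesByParadigm.get(paradigm);
            if (memories == null) {
                memories = new ArrayList<String>();
                memoriesByParadigm.put(paradigm, memories);
            }
            memories.add(memory);
        }

        /** Retiring a paradigm invalidates every memory built on it. */
        void retireParadigm(String paradigm) {
            retiredParadigms.add(paradigm);
        }

        /** Recall only memories whose underlying paradigm still stands. */
        List<String> recall() {
            List<String> usable = new ArrayList<String>();
            for (Map.Entry<String, List<String>> e :
                     memoriesByParadigm.entrySet()) {
                if (!retiredParadigms.contains(e.getKey())) {
                    usable.addAll(e.getValue());
                }
            }
            return usable;
        }
    }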

>  But because computer storage is relatively cheap and knowledge relatively
> concise, I am planning to archive all conversations with users, to the
> degree permitted by the user's privacy policy.  For users operating a local
> Texai instance, their private data will be archived locally and mirrored,
> if permitted, in an encrypted form on other (remote) Texai instances.
>

This is pretty simple with text, but when things move to understanding the
world from real-time moving images, this takes a little more storage.
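
To put rough numbers on that understatement (my figures, purely
illustrative): even a modest compressed video stream runs to

    0.5\ \mathrm{Mbit/s} \times 86{,}400\ \mathrm{s/day} \approx 5.4\ \mathrm{GB/day}

versus perhaps 100 KB/day for archived text conversation - a difference of
four to five orders of magnitude.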

>  Wikipedia presumes a WASP (White Anglo-Saxon Protestant) or other
> single-paradigm view of the world, as do the AGI designs that I have
> observed.
>
> The English Wikipedia may have this unfortunate characteristic even though
> it is contrary to the organization's policies,
>

WHAT?! This IS their policy, but they themselves are too damn stupid to see
it! They seem to think that there is a "right" and/or "best" way of
expressing things, when often there is a multiplicity of ways, each of which
may be best for different readers. This often devolves into "reference
battles" where the traditional side can come up with way more references
than the recent research side, so the recent research gets pushed out.

> but certainly that cannot be the case for all the other Wikipedias
> <http://wikipedia.org/>, such as the Spanish
> Wikipedia <http://es.wikipedia.org/wiki/Portada>.
>

This is just a different single-paradigm Wiki, when what is needed is a
single Wiki that permits many paradigms.
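
A minimal sketch of what I mean (hypothetical names; no such wiki exists
that I know of): one topic, several coexisting renderings, each tagged with
the paradigm it assumes, so that no "reference battle" ever has to crown a
single winner:

    import java.util.HashMap;
    import java.util.Map;

    /** Hypothetical sketch of a multi-paradigm wiki article: one topic,
        many paradigm-tagged renderings, none privileged over the others. */
    class MultiParadigmArticle {
        private final String topic;
        private final Map<String, String> textByParadigm =
            new HashMap<String, String>();

        MultiParadigmArticle(String topic) {
            this.topic = topic;
        }

        /** Contributors add or revise the rendering for their own paradigm
            without overwriting anyone else's. */
        void setRendering(String paradigm, String text) {
            textByParadigm.put(paradigm, text);
        }

        /** A reader asks for the rendering under the paradigm that suits
            them; null means no one has written that view yet. */
        String render(String paradigm) {
            return textByParadigm.get(paradigm);
        }
    }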

>  Interest in Texai is balanced between the USA and the rest of the world.
> Here is a cluster map
> <http://www3.clustrmaps.com/counter/maps.php?url=http://texai.org&type=small&category=free&clusters=no&map=world>
> of my blog readers.  Having an AGI organized as a geographically distributed
> community may at least partially solve the issue you raise.
>

No, it makes things WORSE by factionalizing rather than drilling down to the
underlying false assumptions. This is how needless wars are made.

Again, I am amazed by my apparent TOTAL failure to communicate. Can someone
here please debug this?

Steve Richfield
================

  ----- Original Message ----
> From: Steve Richfield <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
>   Sent: Tuesday, May 27, 2008 12:23:37 PM
> Subject: Merging threads was Re: Code generation was Re: [agi] More Info
> Please
>
> Steve,
>
> On 5/26/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
>>
>>  But I have a perhaps more troublesome issue in that abusive mentors may
>> seek to teach destructive behavior to the system, and such abuse must be
>> easily detected and its effects healed, to the frustration of the abusers.
>> E.g. in the same fashion as Wikipedia.
>>
>
> At this point, three discussion threads come together...
>
> 1.  Erroneous motivations. Most human strife is based on erroneous
> motivations - which are often good ideas expressed at the wrong meta-level.
> For example, it would seem good to minimize the number of abortions, as an
> abortion is one effort simply countering another and hence is, at minimum, a
> waste of effort. However, stopping others from having abortions starts a
> needless battle, when all that may be necessary is some subtle social
> engineering so that no one would ever want one. If we tell our AGI to stop
> all killing, we'll probably just get another war, whereas if we tell our AGI
> to do some social engineering to reduce/eliminate the motivation to kill, we
> will get
> a very different result. Unfortunately, this all goes WAY over the heads of
> most of the Wikipedia-filling population, not to mention many people working
> on an AGI. All of the discussions here (that I have seen) regarding AGIs
> gone berserk have presumed erroneous motivations, and then cringed at the
> prospective results. A useful AGI must be able to rise above its own orders
> to be able to eliminate problems rather than destroy them!
>
> 2. Learning and thinking. Presuming that you do NOT want to store all of
> history and repeatedly analyze all of it as your future AGI operates, you
> must accept MULTIPLE potentially-useful paradigms, adding new ones and
> trashing old ones as more information comes in. Our own very personal ideas
> of learning and thinking do NOT typically allow for the maintenance of
> multiple simultaneous paradigms, cross-paradigm translation, etc. If our
> future AGI is to function at an astronomical level as people here hope that
> it will, it will NOT be thinking as we do, but will be doing something quite
> orthogonal to our own personal processes. Either people must tackle what
> will be needed to accomplish this (analysis), or there would seem to be
> little hope for future success because debugging would be impossible in a
> system whose correct operation is unknown/unthinkable. I tackled a very
> small part of this, as needed to support Dr. Eliza development. Obviously,
> MUCH more analysis is needed for the AGI that everyone hopes will come out
> of this process. Development without analysis (which covers most of the
> postings on this forum) simply consigns the results to the bit bucket.
>
> 3.  Wikipedia miscreants. Wikipedia presumes a WASP (White Anglo-Saxon
> Protestant) or other single-paradigm view of the world, as do the AGI
> designs that I have observed. If abusive mentors are a significant problem,
> then there is something wrong with the design. At worst, an abusive mentor
> should simply be bringing a dysfunctional paradigm into consideration, which
> may actually be useful for communicating with the abusive mentor in their
> own terms. Wikipedia can never become really useful until it integrates a
> multiple-paradigm view of things, whereupon the concept of "abuse" should
> evaporate.
>
> Now, if we could just pull these all together and get our arms around
> multiple paradigms and erroneous motivations, we might have a really USEFUL
> discussion.
>
> Steve Richfield
> =================
>
>
>>   ----- Original Message ----
>> From: William Pearson <[EMAIL PROTECTED]>
>> To: agi@v2.listbox.com
>> Sent: Monday, May 26, 2008 2:28:32 PM
>> Subject: Code generation was Re: [agi] More Info Please
>>
>> 2008/5/26 Stephen Reed <[EMAIL PROTECTED]>:
>> > Regarding the best language for AGI development, most here know that I'm
>> > using Java in Texai.  For skill acquisition, my strategy is to have Texai
>> > acquire a skill by composing a Java program to perform the learned skill.
>>
>> How will it manage memory between skills? You want to try to avoid
>> thrashing the memory. The Java memory system allows any program to ask
>> for as much memory as it needs, which could lead to tragedy-of-the-commons
>> situations.
>>
>>
>>   Will Pearson
>>
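
P.S. On Will's memory-management question quoted above: the JVM offers no
per-skill heap cap, so one cooperative approach (a minimal sketch, all names
hypothetical - nothing here is actual Texai code) is quota accounting, where
each skill must reserve against a fixed budget before any large allocation:

    import java.util.concurrent.atomic.AtomicLong;

    /** Hypothetical per-skill memory budget. Skills account for their own
        large allocations, since Java cannot enforce per-thread heap limits. */
    class SkillMemoryBudget {
        private final long maxBytes;
        private final AtomicLong usedBytes = new AtomicLong();

        SkillMemoryBudget(long maxBytes) {
            this.maxBytes = maxBytes;
        }

        /** Called before a large allocation; refusal keeps one greedy skill
            from starving the rest - the tragedy of the commons Will fears. */
        boolean tryReserve(long bytes) {
            long newUsed = usedBytes.addAndGet(bytes);
            if (newUsed > maxBytes) {
                usedBytes.addAndGet(-bytes);  // roll back the reservation
                return false;
            }
            return true;
        }

        /** Called when the skill frees the memory it reserved. */
        void release(long bytes) {
            usedBytes.addAndGet(-bytes);
        }
    }

A skill that cannot reserve must shed work or wait, rather than dragging the
whole community into thrashing.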



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com
