Steve,

On 5/26/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
>  But I have a perhaps more troublesome issue in that abusive mentors may
> seek to teach destructive behavior to the system, and such abuse must be
> easily detected and its effects healed, to the frustration of the abusers.
> E.g. in the same fashion as Wikipedia.
>

At this point, three discussion threads come together...

1.  Erroneous motivations. Most human strife is based on erroneous
motivations - which are often good ideas expressed at the wrong meta-level.
For example, it would seem good to minimize the number of abortions, since
each abortion is one effort spent countering another and hence, at minimum,
a waste of effort. However, stopping others from having abortions starts a
needless battle, when all that may be necessary is some subtle social
engineering so that no one would ever want one. If we tell our AGI to stop
all killing, we'll probably just get another war, whereas if we tell it to
do some social engineering to reduce or eliminate the motivation to kill,
we'll get a very different result. Unfortunately, this all goes WAY over
the heads of most of the Wikipedia-filling population, not to mention many
people working on an AGI. All of the discussions here (that I have seen)
regarding AGIs gone berserk have presumed erroneous motivations, and then
cringed at the prospective results. A useful AGI must be able to rise above
its own orders so that it can eliminate problems rather than merely
destroying them!

2.  Learning and thinking. Presuming that you do NOT want to store all of
history and repeatedly analyze all of it as your future AGI operates, you
must accept MULTIPLE potentially-useful paradigms, adding new ones and
trashing old ones as more information comes in. Our own very personal ideas
of learning and thinking do NOT typically allow for the maintenance of
multiple simultaneous paradigms, cross-paradigm translation, etc. If our
future AGI is to function at the astronomical level that people here hope
it will, it will NOT be thinking as we do, but will be doing something
quite orthogonal to our own personal processes. Either people must tackle
what will be needed to accomplish this (analysis), or there would seem to
be little hope for future success, because debugging is impossible in a
system whose correct operation is unknown/unthinkable. I tackled a very
small part of this, as needed to support Dr. Eliza development. Obviously,
MUCH more analysis is needed for the AGI that everyone hopes will come out
of this process. Development without analysis (which covers most of the
postings on this forum) simply consigns the results to the bit bucket.
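To make the paradigm-juggling concrete: here is a minimal sketch (all names
hypothetical, in the Java that Stephen favors for Texai) of a pool of
competing paradigms, each scored by how well it explains incoming evidence,
with poorly-performing ones trashed as more information comes in - rather
than storing and re-analyzing all of history:

```java
import java.util.*;

/** Hypothetical sketch: a pool of competing paradigms, each with a running
 *  score reflecting how well it has explained recent evidence. Weak
 *  paradigms are retired as new data arrives; history itself is discarded. */
public class ParadigmPool {
    private final Map<String, Double> scores = new HashMap<>(); // name -> score
    private final double retireBelow; // retirement threshold

    public ParadigmPool(double retireBelow) { this.retireBelow = retireBelow; }

    public void add(String name) { scores.put(name, 1.0); }

    /** Blend each score toward how well that paradigm explained the latest
     *  observation (0.0 = poorly, 1.0 = well), exponentially forgetting the
     *  past, then drop any paradigm that falls below the threshold. */
    public void observe(Map<String, Double> fit) {
        scores.replaceAll((name, s) -> 0.8 * s + 0.2 * fit.getOrDefault(name, 0.0));
        scores.values().removeIf(s -> s < retireBelow);
    }

    public Set<String> active() { return scores.keySet(); }

    public static void main(String[] args) {
        ParadigmPool pool = new ParadigmPool(0.3);
        pool.add("newtonian");
        pool.add("phlogiston");
        // Repeated evidence that one paradigm explains and the other does not.
        for (int i = 0; i < 10; i++) {
            pool.observe(Map.of("newtonian", 1.0, "phlogiston", 0.0));
        }
        System.out.println(pool.active()); // the unsupported paradigm is gone
    }
}
```

This is only the bookkeeping, of course - the hard parts (cross-paradigm
translation, keeping a "dysfunctional" paradigm around because it is useful
for communication) are exactly the analysis I am arguing is missing.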

3.  Wikipedia miscreants. Wikipedia presumes a WASP (White Anglo-Saxon
Protestant) or other single-paradigm view of the world, as do the AGI
designs that I have observed. If abusive mentors are a significant problem,
then there is something wrong with the design. At worst, an abusive mentor
should simply be bringing a dysfunctional paradigm into consideration, and
even that may actually be useful for communicating with the abusive mentor
in their own terms. Wikipedia can never become really useful until it
integrates a multiple-paradigm view of things, whereupon the concept of
"abuse" should evaporate.

Now, if we could just pull these all together and get our arms around
multiple paradigms and erroneous motivations, we might have a really USEFUL
discussion.

Steve Richfield
=================


>   ----- Original Message ----
> From: William Pearson <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Monday, May 26, 2008 2:28:32 PM
> Subject: Code generation was Re: [agi] More Info Please
>
> 2008/5/26 Stephen Reed <[EMAIL PROTECTED]>:
> > Regarding the best language for AGI development, most here know that I'm
> > using Java in Texai.  For skill acquisition, my strategy is to have Texai
> > acquire a skill by composing a Java program to perform the learned skill.
>
> How will it memory manage between skills? You want to try and avoid
> thrashing the memory. The java memory system allows any program to ask
> for as much memory as they need, this could lead to tragedy of the
> commons situations.
>
>
>   Will Pearson
>
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
>


