Sorry, I neglected to include my summary statement, now appended below.
- Jef

Jef Allbright wrote:
> Ben Goertzel wrote:
> 
>> Finally, it is interesting to speculate regarding how self may differ
>> in future AI systems as opposed to in humans.  The relative stability
>> we see in human selves may not exist in AI systems that can
>> self-improve and change more fundamentally and rapidly than humans
>> can.  There may be a situation in which, as soon as a system has
>> understood itself decently, it radically modifies itself and hence
>> violates its existing self-model.  Thus: intelligence without a
>> long-term stable self.  In this case the "attractor-ish" nature of
>> the self holds only over much shorter time scales than for human
>> minds or human-like minds.  But the alternating process of forward
>> and backward inference for self-construction is still critical, even
>> though no reasonably stable self-constituting attractor ever
>> emerges.  The psychology of such intelligent systems will almost
>> surely be beyond human beings' capacity for comprehension and
>> empathy. 
> 
> Strange, it's almost as if you were looking over my shoulder
> as I planted similar seeds of thought on the extropy list during the
> last day. 
> 
> In regard to your "finally" paragraph, I would speculate that
> advanced intelligence would tend to converge on a structure
> of increasing stability feeding on increasing diversity.  As
> the intelligence evolved, a form of natural selection would
> guide its structural development, not toward increasingly
> desirable ends, but toward increasingly effective methods.  A
> necessary element of such a system, I speculate, must be an
> increasingly rich source of diversity, so I imagine a sort of
> fractal spherical (in 3D) tree-like structure where ongoing
> growth involves both the sprouting of new branches
> (diversity) and the strengthening of the support structure
> (reinforcement of principles that are repeatedly tested and
> seem to work).  Of course there's no reason to limit this structure
> to 3D. 
> 
> Welcome to the Tree-mind, the Hive-mind is dead.  ;)

The "self" aspect would be a function of any particular point of
interaction on the tree structure.  I expect that the center point would
have very little to say, but would have a profound sense of self IFF it
had anything to say.
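
To make the picture slightly more concrete, here is a rough toy sketch
(purely illustrative, with invented names -- Branch, sprout, test,
local_self -- and not anything proposed in the posts above) of the
branching-plus-reinforcement idea: branches sprout to supply diversity,
repeated tests strengthen or weaken them, and the "self" is just the
local view from whatever point of interaction you query.

import random

class Branch:
    def __init__(self, label, depth=0):
        self.label = label      # the "principle" this branch embodies
        self.weight = 1.0       # how strongly it has been reinforced
        self.depth = depth
        self.children = []      # sprouted sub-branches (the diversity)

    def sprout(self, label):
        # Diversity: grow a new sub-branch.
        child = Branch(label, self.depth + 1)
        self.children.append(child)
        return child

    def test(self, succeeded):
        # Reinforcement: principles that keep working gain weight,
        # those that fail fade.
        self.weight *= 1.1 if succeeded else 0.9

    def local_self(self):
        # "Self" as a function of the point of interaction: just this
        # node and its immediate neighbourhood, weighted by reinforcement.
        return {
            "here": (self.label, round(self.weight, 2)),
            "branches": [(c.label, round(c.weight, 2)) for c in self.children],
        }

# Grow a small tree, test its branches a few times at random, then ask
# one point of interaction what its "self" looks like.
root = Branch("root")
for i in range(3):
    branch = root.sprout("principle-%d" % i)
    for _ in range(5):
        branch.test(succeeded=random.random() < 0.7)

print(root.local_self())

Nothing deep there, of course; it is only meant to show the two
ingredients (sprouting and reinforcement) and the locality of the
"self" in one place.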

> - Jef

