I see the ultimate "top" as thermodynamic efficiency. Intelligence operates 
efficiently within this physical universe, and that efficiency can be expressed 
in many ways. More efficiency means more intelligence, both specifically and 
generally.

 

You can, though, make an intelligent system without it being self-organizing. 
Internally there may be some self-organization within subprocesses, I suppose… 

 

John

 

From: Steve Richfield via AGI [mailto:a...@listbox.com] 
Sent: Monday, January 5, 2015 12:17 AM
To: AGI
Subject: [agi] The Top

 

Hi all,

I was about to respond to Jim's latest thread regarding conceptual structure, 
but then I realized that the reason everyone here is talking at/past each other 
is that nearly everyone has a different idea as to where the "top" of this 
subject is located. First, some examples of these perspectives:

1.  Some believe that the "top" is an ability to acquire, store, access, and 
act on information to provide a text-based interface.

2.  Some believe that the "top" is an ability to self-organize to form an 
intelligent system.

3.  Some believe that since intelligence apparently evolved from a primitive 
process control system, paralleling this development might start with a 
better understanding of self-organizing process control systems.

4.  Some believe that in the process of learning how to do MUCH better 
compression, we will learn how to self-organize the processing of 
intelligent communications.

5.  There are almost as many of these as there are members on this forum. I 
could easily attach names (including my own) to the above, but I prefer to 
avoid having this devolve into an argument as to exactly what the various 
members believe.

OK, so just WHERE IS the real "top"? Can a system be considered to be 
"intelligent" without being self-organizing? Can an approach be considered to 
be valid without being extensible to ALL of our functions?

Myself, I think self-organization is essential. If a system can't even 
self-organize to perform simple process control, as a hydra does, then what 
hope is there of it ever being "intelligent" (#3 above)? However, I seem to be 
alone in this view, and I can't fathom how others expect success without 
these basics.

I would be interested in seeing crafted replacements for, or additions to, my 
above descriptions of various views of the "top" that embody SOME reasonable 
rationale as to how they might lead to AGI success.

Can anyone shine light in this very dark corner?

Thanks.

Steve.
 

 

