On Tue, Sep 16, 2008 at 2:50 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Once again, I'm not saying that modeling an economy is all that's
> necessary to explain intelligence. I'm not even saying it's a
> necessary condition of it. What I am saying is that it looks very
> likely that the brain/mind is self-organized, and for those of us
> looking to biological intelligence for inspiration, this may be
> important.

Fair enough, but what makes this aesthetic property of
self-organizing processes so important in the design of an
optimization process? You can't win this lottery by relying on
intuition to point to the right bet 50 times in a row; you need to
check and recheck the validity of nearly every one of your design
decisions. Self-organization might be a good hunch for the next
step, but unless you succeed at cashing it out as a step that
demonstrably helps, the hunch doesn't count for much.


> There is a class of highly complex, unstable (in the thermodynamic
> sense) systems that self-organize in such a way as to most
> efficiently dissipate the imbalances inherent in the environment
> (hurricanes, tornadoes, watersheds, life itself, the economy). And,
> perhaps, the brain/mind is such a system. If so, that description
> is obviously not enough to "guess the password to the safe". But
> that doesn't mean that self-organization has no value at all. The
> value of it is to show that efficient design can emerge
> spontaneously, and perhaps we can take advantage of that.

If this property is common between brains and hurricanes, why is it
more relevant than the property of, say, being made out of living
cells (which at least excludes hurricanes)? I'm not asserting that
it's an irrelevant property; I'm not inverting your assertion. But
to assert it either way you need a valid reason.


> By your argumentation, it would seem you won't find any argument
> about intelligence of worth unless it explains everything. I've
> never understood the strong resistance of many in the AI community
> to the concepts involved with complexity theory, particularly as
> applied to intelligence. It would seem to me to be a promising
> frontier for exploration and gathering insight.

I will find an argument worth something if it explains something, or
if it serves as a tool that I expect to be useful in explaining
something down the road. As for complexity, it looks like a
consequence of efficient coding of representation in my current
model, so it's not out of the question, but in my book it is a side
effect rather than a guiding principle. It is hard to decipher an
intelligent algorithm in motion, even if it was set in motion to
have known consequences. Just as you may be unable to guess the
individual moves of a grandmaster or a chess computer, yet guess the
outcome (you lose), you may be completely at a loss measuring the
firing rates of the transistors of a CPU running an optimized chess
program and trying to bridge the gap to its design, goals and
thoughts (which will turn out to be anthropomorphic concepts that
don't describe the studied phenomenon). But you can write the
program initially and know in advance that it will win, or you can
observe it in motion and notice that it wins. Neither helps you
bridge the gap between the reliable outcome and the low-level
dynamics, but there is in fact a tractable connection, hidden in the
causal history of the development of the optimization process, where
you did design the low-level dynamics to implement the goal. If a
self-organizing process hits a narrow target without guidance, it
doesn't hit *your* target; it hits an arbitrary one. The universe
turns to cold iron before you win this lottery blindly.
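
To make that concrete, here is a toy sketch in Python (the function
names and numbers are mine, purely illustrative): a trivial hill
climber whose success is knowable in advance from its design, while
a trace of its low-level state transitions says nothing about the
objective.

    import random

    def objective(x):
        # The goal baked in at design time: get close to 42.
        return -(x - 42.0) ** 2

    def hill_climb(steps=20000, seed=0):
        rng = random.Random(seed)
        x = rng.uniform(-1000.0, 1000.0)
        trace = []  # the "low-level dynamics": a stream of raw states
        for _ in range(steps):
            candidate = x + rng.gauss(0.0, 1.0)
            if objective(candidate) > objective(x):
                x = candidate
            trace.append(x)
        return x, trace

    final_x, trace = hill_climb()
    print(trace[:5])  # opaque intermediate states, mute about the goal
    print(final_x)    # the reliable high-level outcome: roughly 42

Reading the trace alone, you see only numbers drifting; the reason
the outcome is reliable lives in the causal history, i.e. in the
line where the designer wrote objective().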


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

