By the way, just wanted to point out a beautifully simple example - perhaps the 
simplest - of an irreducibility in complex systems.

Individual molecular interactions are symmetric in time: they work the same 
forwards and backwards. Yet diffusion, which is nothing more than the aggregate 
of molecular interactions, is asymmetric. Figure that one out.
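
A quick way to see it for yourself - a minimal sketch in Python, my own toy 
example rather than anything rigorous - is to give every particle a 
time-symmetric update rule (+1 or -1 with equal probability) and watch the 
ensemble spread anyway:

    import random

    def simulate(n_particles=10000, n_steps=100):
        # Each step is time-symmetric: +1 and -1 are equally likely,
        # so a reversed step sequence is just as lawful a history.
        positions = [0] * n_particles
        for _ in range(n_steps):
            positions = [x + random.choice((-1, 1)) for x in positions]
        return positions

    positions = simulate()
    variance = sum(x * x for x in positions) / len(positions)
    print(variance)  # grows roughly linearly with n_steps: diffusion

No individual step has an arrow of time, yet the variance only ever grows. 
The asymmetry lives in the aggregate, not in any single interaction.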

That's the *kind* of irreducibility that pops up all over complex systems.

Terren

--- On Mon, 6/30/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

> From: Terren Suydam <[EMAIL PROTECTED]>
> Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI
> To: agi@v2.listbox.com
> Date: Monday, June 30, 2008, 1:55 AM
> Hi Ben,
> 
> I don't think the flaw you have identified matters to
> the main thrust of Richard's argument - and if you
> haven't summarized Richard's position precisely,
> you have summarized mine. :-]
> 
> You're saying the flaw in that position is that
> prediction of complex networks might merely be a matter of
> computational difficulty, rather than fundamental
> intractability. But any formally defined complex system is
> going to be computable in principle. We can always predict
> such a system with infinite computing power. That
> doesn't make it tractable, or open to understanding,
> because obviously real understanding can't depend on
> infinite computing power.
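> 
> To make that concrete, here's a toy sketch in Python - my
> own example, not a formal argument. An elementary cellular
> automaton like Rule 30 is perfectly computable, but as far
> as anyone knows, the only way to learn its state t steps
> ahead is to pay for all t steps of simulation:
> 
>     def rule30_step(cells):
>         # Each new cell depends only on its local neighborhood:
>         # completely deterministic, computable in principle.
>         n = len(cells)
>         return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
>                 for i in range(n)]
> 
>     cells = [0] * 31
>     cells[15] = 1  # single seed cell
>     for t in range(16):
>         print(''.join('#' if c else '.' for c in cells))
>         cells = rule30_step(cells)
> 
> Predictable with unbounded computation, sure - but no known
> shortcut beats simply running the thing, which is exactly
> the sense of intractability I mean.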
> 
> The question of fundamental intractability comes down to
> the degree to which we can make predictions about the
> global level from the local. And let's hope there's
> progress to be made there, because each discovery will make
> life easier for those of us who would try to understand
> something like the brain, or the body, or even just the
> cell. Or even just protein folding!
> 
> But it seems pretty obvious, to me anyway, that we will
> never be able to predict the weather with any precision
> without doing an awful lot of computation.
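> 
> Here's roughly what I mean, as a sketch (Python, crude
> forward-Euler integration, nothing fancy): run the classic
> Lorenz system twice from initial conditions that differ by
> one part in a billion, and the trajectories part company
> within a few dozen time units:
> 
>     def lorenz_step(p, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
>         # Lorenz '63, a famous toy model of atmospheric convection.
>         x, y, z = p
>         return (x + dt * s * (y - x),
>                 y + dt * (x * (r - z) - y),
>                 z + dt * (x * y - b * z))
> 
>     a = (1.0, 1.0, 1.0)
>     b = (1.0 + 1e-9, 1.0, 1.0)  # differs by one part in a billion
>     for step in range(3001):
>         if step % 500 == 0:
>             print(step, abs(a[0] - b[0]))
>         a, b = lorenz_step(a), lorenz_step(b)
> 
> Every extra stretch of forecast costs more digits of
> precision and more arithmetic. Prediction is possible in
> principle; you just have to pay for it in computation.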
> 
> And what is our mind but the weather in our brains?
> 
> Terren
> 
> --- On Sun, 6/29/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> 
> > From: Ben Goertzel <[EMAIL PROTECTED]>
> > Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI
> > To: agi@v2.listbox.com
> > Date: Sunday, June 29, 2008, 10:44 PM
> > Richard,
> > 
> > I think that it would be possible to formalize your
> > "complex systems argument" mathematically, but I don't
> > have time to do so right now.
> > 
> > > Or, then again ..... perhaps I am wrong:  maybe you
> > > really *cannot* understand anything except math?
> > 
> > It's not the case that I can only understand math --
> > however, I have a lot of respect for the power of math to
> > clarify disagreements.  Without math, arguments often
> > proceed in a confused way because different people are
> > defining terms differently, and don't realize it.
> > 
> > But, I agree math is not the only kind of rigor.  I would
> > be happy with a very careful, systematic exposition of
> > your argument along the lines of Spinoza or the early
> > Wittgenstein.  Their arguments were not mathematical, but
> > were very rigorous and precisely drawn -- not slippery.
> > 
> > > Perhaps you have no idea what the actual argument is,
> > > and that has been the problem all along?  I notice that
> > > you avoided answering my request that you summarize your
> > > argument "against" the complex systems problem ...
> > > perhaps you are just confused about what the argument
> > > actually is, and have been confused right from the
> > > beginning?
> > 
> > In a nutshell, it seems you are arguing that general
> > intelligence is fundamentally founded on emergent
> > properties of complex systems, and that it's not possible
> > for us to figure out analytically how these emergent
> > properties emerge from the lower-level structures and
> > dynamics of the complex systems involved.  Evolution, you
> > suggest, "figured out" some complex systems that give rise
> > to the appropriate emergent properties to produce general
> > intelligence.  But evolution did not do this figuring-out
> > in an analytical way, rather via its own special sort of
> > "directed trial and error."  You suggest that to create a
> > generally intelligent system, we should create a software
> > framework that makes it very easy to experiment with
> > different sorts of complex systems, so that we can then
> > figure out (via some combination of experiment, analysis,
> > intuition, theory, etc.) how to create a complex system
> > that gives rise to the emergent properties associated with
> > general intelligence.
> > 
> > I'm sure the above is not exactly how you'd phrase your
> > argument -- and it doesn't capture all the nuances -- but
> > I was trying to give a compact and approximate
> > formulation.  If you'd like to give an alternative,
> > equally compact formulation, that would be great.
> > 
> > I think the flaw of your argument lies in your definition
> > of "complexity", and that this would be revealed if you
> > formalized your argument more fully.  I think you define
> > complexity as a kind of "fundamental irreducibility" that
> > the human brain does not possess, and that engineered AGI
> > systems need not possess.  I think that real systems
> > display complexity which makes it **computationally
> > difficult** to explain their emergent properties in terms
> > of their lower-level structures and dynamics, but not
> > fundamentally intractable in the way you presume.
> > 
> > But because you don't formalize your notion of complexity
> > adequately, it's not possible to engage you in rational
> > argumentation regarding the deep flaw at the center of
> > your argument.
> > 
> > However, I cannot prove rigorously that the brain is NOT
> > complex in the overly strong sense you allude to ... and
> > nor can I prove rigorously that a design like the
> > Novamente Cognition Engine or OpenCog Prime will give rise
> > to the emergent properties associated with general
> > intelligence.  So, in this sense, I don't have a rigorous
> > refutation of your argument, and nor would I if you
> > rigorously formalized your argument.
> > 
> > However, I think a rigorous formulation of your argument
> > would make it apparent to nearly everyone reading it that
> > your definition of complexity is unreasonably strong.
> > 
> > -- Ben G