Steve,
 
Good job on the defense of a reductionist position.  I use a five-phase
approach to the study of complex systems.
 
Definition - Analysis - Normalization - Synthesis - Realization (DANSR)
 
Reductionism has its place in the analytical phase, at equilibrium.  Analysis
is normally a study of integrable, often linear systems, but it can also be
carried out on non-linear, feed-forward systems.  The synthesis phase puts
information about complex behavior and emergence back into the integrated
mix, which may then be "analyzed" in non-linear, recurrent networks.  This is
effectively a probabilistic inversion of analysis, as described in Inverse
Theory.
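
For readers less familiar with Inverse Theory, a toy sketch of the
forward/inverse pairing (my own illustration, not tied to any particular DANSR
application; the linear model and noise level are invented for the example):
the forward problem maps model parameters to predicted data, and the inverse
problem estimates the parameters back from noisy observations, here by
ordinary least squares.

import numpy as np

# Toy forward model: data = G @ model (linear, for simplicity).
# Columns: intercept and slope of a straight-line fit.
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
true_model = np.array([2.0, 0.5])

rng = np.random.default_rng(0)
data = G @ true_model + rng.normal(scale=0.1, size=3)  # noisy observations

# Inverse problem: recover the model parameters from the data.
# Least squares is what the probabilistic (Gaussian) inversion
# reduces to in this linear case.
estimate, *_ = np.linalg.lstsq(G, data, rcond=None)
print(estimate)  # close to [2.0, 0.5]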
 
Bayesian refinement cycles (forward <-> inverse) are applied to new
information as one progresses through the DANSR cycle.  This refines the
effect of new information on prior information - which, I hope folks see, is
not simply additive, and which may be entirely disruptive (see the evolution
of science itself).
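
As a toy illustration of why the update is not additive (my own sketch; the
hypotheses, prior, and likelihoods below are invented for the example): the
posterior is a renormalized product of prior and likelihood, so a single
strong piece of evidence can invert the prior ranking outright.

def bayes_update(prior, likelihood):
    """Return the posterior P(h | e) from prior P(h) and likelihood P(e | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())  # P(e), the evidence term
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief over two competing models of a system.
prior = {"linear_model": 0.9, "nonlinear_model": 0.1}

# New observation that the favored (linear) model explains poorly.
likelihood = {"linear_model": 0.05, "nonlinear_model": 0.6}

print(bayes_update(prior, likelihood))
# The 0.9 prior drops to about 0.43: the update is multiplicative,
# not additive, so disruptive evidence can overturn prior belief.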
 
The fact that this seems to work for complex systems is philosophically
uninteresting, and may be ignored - so the discussion can continue.
 
Final point: Descartes ultimately rejected the concept of zero because of
historical religious orthodoxy, so he personally never applied it to the
continuum extension into negative numbers.  All of his original Cartesian
coordinates started with 1 at a finite bottom, left-hand boundary -
according to Zero: The Biography of a Dangerous Idea, by Charles Seife.
 
Ken
 
 


  _____  

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Steve Smith
Sent: Sunday, September 07, 2008 6:42 PM
To: The Friday Morning Applied Complexity Coffee Group
Cc: Aku
Subject: Re: [FRIAM] Reductionism - was: Young but distant gallaxies


Orlando- 

You can find good references in Wikipedia
<http://en.wikipedia.org/wiki/Reductionism>  on this topic, including the
Descartes references.


  _____  

Reductionism
From Wikipedia, the free encyclopedia
Descartes held that non-human animals could be reductively explained as
automata - De homine, 1662.


[Image: Duck of Vaucanson]

Reductionism can either mean (a) an approach to understanding the nature of
complex things by reducing them to the interactions of their parts, or to
simpler or more fundamental things, or (b) a philosophical position that a
complex system is nothing but the sum of its parts, and that an account of it
can be reduced to accounts of individual constituents.[1] This can be said of
objects, phenomena, explanations, theories, and meanings.

  _____  



All -

IMO, 
Reductionism(a) is a highly utilitarian approach to understanding complex
problems, but in some important cases it is insufficient.  It applies well to
easily observable systems of distinct elements with obvious relations,
operating within the regime they were designed, evolved, or selected for.
It applies even better to engineered systems which were designed, built and
tested using reductionist principles.  I'm not sure how useful or apt it is
beyond that.  Some might argue that this covers so much, who cares about
what is left over?... and this might distinguish the rest of us from
hard-core reductionists: we are interested in the phenomena, systems, and
regimes where such does not apply.  This is perhaps what defines Complexity
Scientists and Practitioners.

Reductionism(b) is a philosophical extension of (a) which has a nice feel to
it for those who operate in the regime where (a) holds well.  To the extent
that most of the (non-social) problems we encounter in our man-made world
tend to lie (by design) in this regime, this is not a bad approach.  To the
extent that much of science is done in the service of some kind of
engineering (ultimately to yield a better material, process or product), it
also works well.   

Reductionism(b) might be directly confronted by the "Halting Problem" in
computability theory.  Reductionism in its strongest form would suggest
that the behaviour of any given system could ultimately be predicted by
studying the behaviour of its parts.  There are certainly large numbers of
examples where this is at least approximately true (and useful), otherwise
we wouldn't have unit-testing in our software systems, we wouldn't have
interchangeable parts, we wouldn't be able to make any useful predictions
whatsoever about anything.  But if it were fully and literally true, it
could be applied to programs in Turing-complete systems.  My own argument
here leads me to ponder what (if any) range of interesting problems lie in
the regime between the embarrassingly reducible and the (non-)halting
program.
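
To make that connection concrete, here is the standard diagonal argument in
Python form (my own illustration; halts() is a hypothetical oracle which, by
this very argument, cannot actually be implemented): a complete reductionist
predictor of program behaviour could be fed to itself and forced into
contradiction.

# Sketch of Turing's diagonal argument (illustrative only).

def halts(program, argument):
    """Hypothetical predictor: True iff program(argument) eventually halts."""
    raise NotImplementedError("no such total predictor can exist")

def diagonal(program):
    # Do the opposite of whatever the predictor says the program does
    # when run on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt immediately

# Either answer for halts(diagonal, diagonal) contradicts what
# diagonal(diagonal) actually does, so halts() cannot exist: analyzing
# the parts of an arbitrary program does not, in general, predict the
# behaviour of the whole.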

But to suggest (insist) that *all* systems and *all* phenomenology can be
understood (and predicted) simply by reductionism seems to have been
dismissed by most serious scientists some while ago.   Complexity Science
and those who study Emergent Phenomena implicitly leave Reductionism behind
once they get into "truly" complex systems and emergent phenomena.

I, myself, prefer (simple) reductionistic simplifications over (complex)
handwaving ones (see Occam's Razor) most of the time, but when the going
gets tough (or the systems get complex), reductionism *becomes* nothing more
than handwaving in my experience. 

- Steve






============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
