Thanks Steve. O
Steve Smith wrote:
Orlando-
You can find good references in Wikipedia on this
topic, including the Descartes references.
Reductionism
From Wikipedia, the free encyclopedia
Descartes held that non-human animals could be reductively explained as
automata (De homine, 1662).
Reductionism can either mean (a) an approach to understanding the nature
of complex things by reducing them to the interactions of their parts,
or to simpler or more fundamental things, or (b) a philosophical
position that a complex system is nothing but the sum of its parts, and
that an account of it can be reduced to accounts of individual
constituents.[1] This can be said of objects, phenomena, explanations,
theories, and meanings.
All -
IMO,
Reductionism(a) is a highly utilitarian approach to understanding
complex problems, but in some important cases it is insufficient. It
applies well to easily observable systems of distinct elements with
obvious relations, operating within the regime they were designed,
evolved, or selected for. It applies even better to engineered systems
which were designed, built, and tested using reductionist principles.
I'm not sure how useful or apt it is beyond that. Some might argue that
this covers so much, who cares about what is left over?... and this
might distinguish the rest of us from hard-core reductionists... we are
interested in the phenomena, systems, and regimes where reductionism
does not apply. This is perhaps what defines Complexity Scientists and
Practitioners.
Reductionism(b) is a philosophical extension of (a) which has a nice
feel to it for those who operate in the regime where (a) holds well.
To the extent that most of the (non-social) problems we encounter in
our man-made world tend to lie (by design) in this regime, this is not
a bad approach. To the extent that much of science is done in the
service of some kind of engineering (ultimately to yield a better
material, process or product), it also works well.
Reductionism(b) might be directly confronted by the "Halting Problem"
in computability theory. Reductionism in its strongest form would
suggest that the behaviour of any given system could ultimately be
predicted by studying the behaviour of its parts. There are certainly
large numbers of examples where this is at least approximately true
(and useful); otherwise we wouldn't have unit-testing in our software
systems, we wouldn't have interchangeable parts, we wouldn't be able to
make any useful predictions whatsoever about anything. But if it were
fully and literally true, it could be applied to programs in
Turing-complete systems, where the Halting Problem shows that no
general prediction of behaviour is possible. My own argument here leads
me to ponder what (if any) range of interesting problems lie in the
regime between the embarrassingly reducible and the (non)-halting
program.
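The confrontation above can be sketched in a few lines of Python. This is the classic diagonal argument, not anything from the thread: it assumes a hypothetical perfect predictor `halts(prog, arg)` and shows that such a predictor leads to a contradiction, so no fully general "predict the whole from the parts" procedure can exist for Turing-complete systems.

```python
# Sketch of the diagonal argument behind the Halting Problem.
# `halts` is a HYPOTHETICAL oracle; the argument shows it cannot exist.

def halts(prog, arg):
    """Hypothetically returns True iff prog(arg) eventually halts."""
    raise NotImplementedError("no such total predictor can exist")

def paradox(prog):
    # Do the opposite of whatever the oracle predicts about prog(prog):
    if halts(prog, prog):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# Feeding paradox to itself contradicts the oracle either way:
#   halts(paradox, paradox) == True  -> paradox(paradox) never halts
#   halts(paradox, paradox) == False -> paradox(paradox) halts
```

Either answer the oracle could give about `paradox(paradox)` is wrong, which is why `halts` above can only raise an error rather than return.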
But to suggest (insist) that *all* systems and *all* phenomenology can
be understood (and predicted) simply by reductionism seems to have been
dismissed by most serious scientists some while ago. Complexity
Science and those who study Emergent Phenomena implicitly leave
Reductionism behind once they get into "truly" complex systems and
emergent phenomena.
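A toy illustration of why understanding the parts can fail to predict the whole (my example, not from the thread; the functions and constants are hypothetical): each part below is correct to its own spec and passes its own unit test, yet the composed system misbehaves because of an interaction (a unit mismatch) that no test of either part in isolation can reveal.

```python
# Illustrative only: two "parts", each correct per its own spec,
# whose composition is wrong. Constants are hypothetical.

def thruster_impulse_lbf_s(burn_time):
    """Returns impulse in pound-force seconds (correct per its spec)."""
    return 4.45 * burn_time          # hypothetical thruster constant

def update_trajectory(impulse_newton_s):
    """Expects impulse in newton-seconds (also correct per its spec)."""
    return impulse_newton_s / 1000.0  # hypothetical trajectory delta

# Unit tests of each part in isolation pass:
assert thruster_impulse_lbf_s(10) == 44.5
assert update_trajectory(44.5) == 0.0445

# But composing them silently feeds lbf*s where N*s is expected,
# so the result is off by the lbf-to-N conversion factor (~4.45x):
delta = update_trajectory(thruster_impulse_lbf_s(10))
```

The failure lives in the *relation between* the parts, which is exactly where a purely part-by-part (reductionist) analysis stops looking.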
I, myself, prefer (simple) reductionistic simplifications over
(complex) handwaving ones (see Occam's Razor) most of the time, but
when the going gets tough (or the systems get complex), reductionism
*becomes* nothing more than handwaving in my experience.
- Steve
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org