Saul -

Excellent points!

I agree that the examples I am thinking of are of a "Classification" nature. I also like the analogy of "stamp collecting", as I think it actually captures one of the traits of human intuition that makes us as capable as we are of making sense of the universe we inhabit. Like many birds (Corvids in particular?) and some rodents (Pack Rats most notably), we are good at spying, collecting and organizing (do our animal cousins organize their finds the way we do?) shiny, colorful and otherwise notable objects. We notice anomalous phenomena, record them, and organize them according to various organizational schemes, of which I think "classification" is the most obvious and often the most useful.

I think that "classification" might be described as a simplistic example of analogy making: the target domain of the analogy is the features of the phenomena (or artifacts), and the source domain is something like simple geometric relations (lists, categories, tables, etc.).

One step more elaborate perhaps is the organization into graphical models... into describing the relations between phenomena and artifacts. A first stab at this was made in the following work... establishing correlation networks among the *terminology* used by experts in a field of study:

In some exploratory work I did with Deana Pennington (UNM cum UTEP) for the NSF on "Creativity in Science", we started building models of the Lexicons of researchers in collaboration. Her study group consisted of climate modeling scientists with surprisingly diverse uses of what was nominally the same language. It is not surprising at all that laymen in the field are easily confused or even offended by the results coming out of that work... not because it is wrong, but because the language is not normalized. For example, Atmospheric and Oceanographic scientists have huge overlaps in the phenomena they study, but typically in different regimes... so what is an important distinction to one may not be to another, and the words they choose to describe the same mechanism or phenomenon can be blatantly or subtly different. Similarly for plant biologists, animal biologists, ecologists, etc. Same targets, similar models, different terminology.

Part of the incidental work in the project was to try to normalize and fuse these models (as represented by the natural language used to describe these overlapping fields). One methodology, provided by Tim Goldsmith (UNM Psychology), was roughly as follows: interview each scientist and get a list of words that are important in their work; take the union of these words and build a correlation table (rows and columns labeled by those words); have each scientist fill out the resulting NxN matrix with numbers (1-10) roughly indicating how strongly correlated each pair of words is. By studying the patterns *between* the resulting matrices, a certain sense of how "distant" the various scientists were could be achieved.
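
A minimal sketch (in Python) of that pairwise comparison, just to make the idea concrete; the word list and ratings below are made up, and the particular distance measure (one minus the correlation of the off-diagonal ratings) is my own illustrative choice, not necessarily the one Goldsmith used:

import numpy as np

# Hypothetical example: three scientists each rate the relatedness (1-10) of
# every pair of words drawn from the shared (union) word list.
words = ["flux", "forcing", "albedo", "budget"]
N = len(words)

rng = np.random.default_rng(0)
ratings = {name: rng.integers(1, 11, size=(N, N)) for name in ["A", "B", "C"]}

def upper_triangle(m):
    # Flatten the off-diagonal upper triangle: each word pair rated once.
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j].astype(float)

def distance(m1, m2):
    # One possible "distance" between two scientists: 1 minus the Pearson
    # correlation of their pairwise ratings (0 = identical rating pattern).
    r = np.corrcoef(upper_triangle(m1), upper_triangle(m2))[0, 1]
    return 1.0 - r

for a in ratings:
    for b in ratings:
        if a < b:
            print(f"distance({a}, {b}) = {distance(ratings[a], ratings[b]):.2f}")

In the classroom version described below, the same distance computed between each student's matrix and the teacher's matrix would presumably shrink as the course goes on.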

This technique (I think) was developed to help understand how much learning is happening (do the same thing with an expert ... the teacher of a subject... and neophytes... the students). As students progress in a classroom (or laboratory) setting, presumably their understanding (and the matrix of terms used) will begin to align better and better with that of the teacher. This was first (in my presence) used to compare two methods for medical students to learn about the function of the Nephron in the Kidney.... a control group being presented with "conventional" learning materials and another group being presented with an immersive "experience" presenting the same material but in first-person context with the opportunity to *explore* the model. The results were positive, but too many factors were involved to make any conclusive judgement... but the point was to begin to explore this as a technique for learning and learning about learning.

I personally think that "good science" is important, but my own interest is in how human intuition is used and how it can be engaged more effectively in the process of exploration, discovery, and analysis of the "real world" that we presume we live in (nod to NST's point).

- Steve
It seems that many scientific fields go through a phase of observation (derisively called "stamp collecting") followed by a phase of classification. If you're lucky then patterns can be picked out of the classification scheme to "predict" where to look for new entities or new interesting phenomena.

The Periodic Table is one of the cited examples. Another example (though perhaps not as good) is the Hertzsprung-Russell diagram used in astronomy, where stars are plotted on a graph with luminosity and colour as the two axes. They form a characteristic pattern which had to be explained by any theory of stellar evolution.

I also recall many years ago picking up a book on atomic spectra published in 1901 - some 12 years before the Bohr theory of the atom - which illustrated hundreds of different emission spectra and talked about the relationships between spectral line frequencies in terms of waves and resonances. It reflected a very interesting point in the science where patterns were emerging and calling out for an explanation.

So it seems that a "classification" model can be used to make "predictions" - to see if the pattern extends to unobserved areas - and that this can be independent of an underlying explanatory theory. I think Gell-Mann's Eightfold Way classification (which preceded QCD) probably fits this idea. The image of the "ten-pin bowling skittles" pattern and the mystery of what lies at the tip is very evocative.

Regards,
Saul

On 11 July 2012 06:56, Bruce Sherwood <bruce.sherw...@gmail.com> wrote:

    "For Engineers perhaps, predictive models are sufficient, they may not
    be (very?) interested in explaining *why* a particular material has
    the properties it does, merely *what* those properties are and how
    reliable the properties might be under a variety of conditions."

    I don't think this is currently true. A big chunk of what used to be
    labeled "physics" is now in academic engineering departments under the
    name "materials science". This consists of exploiting models that
    explain observed properties of materials, with the goal of looking for
    opportunities to change parameters to get improved behavior. In the
    early 1990s I heard a talk by an engineering professor at the science
    museum in Toronto, where he explained how such research had led to
    concrete many times stronger than it had been, and that the iconic
    tall tower in Toronto could not have been built even a few years
    earlier than it was, as it relied on the much stronger concrete.

    In some cases someone sees how, starting from fundamental physics
    principles, one can predict that such and such should happen or be. In
    other cases an observed phenomenon gets explained in terms of
    fundamental physics principles (post-diction), which then suggests how
    changes in the situation might yield an improved behavior. Pre-diction
    and post-diction both require a deep understanding of how to go from
    underlying fundamental principles to the behavior, but pre-diction in
    addition requires the imagination to run the argument forward, not
    already knowing the answer. That's why I claim that post-diction
    ("explanation") is more common than pre-diction.

    There's a fruitful interplay between pre-diction and post-diction. An
    example I mentioned some time ago, from our intro physics textbook:
    when searching for an explanation of spark formation in air (we see
    the spark and ask how it occurs, which is post-diction, or
    explanation), there are a couple of tentative explanations that one
    can rule out. Another explanation does seem to account for the
    phenomenon, and the validity of this post-diction is greatly
    strengthened by noting that it (and not the other explanations)
    pre-dicts that it takes twice the critical electric field to trigger a
    spark if the air density is doubled, a pre-diction that is consistent
    with observations.
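
    A minimal sketch of the scaling argument behind that factor of two,
    assuming the usual mean-free-path picture (a free electron must pick up
    roughly the ionization energy of an air molecule over one mean free
    path); the symbols here are illustrative, not taken from the textbook:

        e \, E_{crit} \, \lambda \approx E_{ion}, \qquad \lambda \propto 1/n
        \quad \Rightarrow \quad E_{crit} \approx E_{ion}/(e \lambda) \propto n

    so doubling the number density n halves the mean free path and doubles
    the critical field E_{crit}.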

    Bruce

    ============================================================
    FRIAM Applied Complexity Group listserv
    Meets Fridays 9a-11:30 at cafe at St. John's College
    lectures, archives, unsubscribe, maps at http://www.friam.org




--
Saul Caganoff
Enterprise IT Architect
Mobile: +61 410 430 809
LinkedIn: http://www.linkedin.com/in/scaganoff

