Phil Henshaw wrote:
> Hmmmm... what does that mean?   The model of evolution I observe working
> in both natural systems and in designed systems is "exploration at the
> fringe"   What that means depends on the system involved, but the
> invariant is a high degree of organizational invariance in the core and
> a low degree on the leading edge.
If we treat scientific ideas as individuals, with predictive power (and
in turn engineering utility and industrial profitability) as fitness,
then there are venues where ideas can survive and also obstacles to
their survival.   From an evolutionary perspective, we should expect a
core of common descent that, from 30,000 feet, appears to move slowly,
and we should expect large deviations away from it usually to be the
end of the road for the individuals carrying them.   We should also
expect many smaller, survivable deviations away from the core, and that
these deviations can form identifiable groups.
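To make that concrete, here is a throwaway Python sketch of the dynamic
I have in mind: ideas as points on a line, a fitness that falls off
sharply with distance from the current core, and repopulation from the
survivors.  The Gaussian fitness, mutation sizes, and population size
are all invented for illustration, not a claim about the real dynamics.

import math
import random

random.seed(1)
population = [0.0] * 200           # every "idea" starts at the core

for generation in range(100):
    core = sum(population) / len(population)
    # most ideas mutate a little; a few attempt large jumps
    mutated = [x + random.gauss(0, 2.0 if random.random() < 0.05 else 0.2)
               for x in population]
    # survival probability falls off sharply with distance from the core
    survivors = [x for x in mutated
                 if random.random() < math.exp(-(x - core) ** 2 / 2.0)]
    # repopulate from the survivors (selection plus slow drift)
    population = [random.choice(survivors) for _ in range(200)]

core = sum(population) / len(population)
print("core after 100 generations: %.2f, spread: %.2f"
      % (core, max(population) - min(population)))

On a typical run the core drifts slowly, most of the large jumps die
out, and what survives clusters in small groups near the core, which is
roughly the picture above.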

Among the multitude of leading edges and surviving outliers, what
common patterns of problem solving can be identified?   E.g. new uses
for old tools, or relatively new tools finding novel or accidental
application to another problem.   Is there a common character to major
innovations?   Software engineers call such recurring solutions Design
Patterns.  Others use terms like Best Practices.  That may be valuable
wisdom, but it is different from the innovation that preceded it.

> When I do a search with Google I see very little 'intelligence' of that
> kind in the results.  There appears to be some statistical weighting,
> but the 'intelligence' of the results seems to depend entirely on
> whether my word combination captures the concept I'm looking for.   I
> don't believe that's definable by any means I know of yet.  
>   
Yes.  As far as I'm aware, Google has not yet deployed a
production-quality technology for the semantic web.   Google doesn't
reason about concepts.  Not only can't it trim away logically
inappropriate results, it can't expand on related concepts unless there
happens to be data (like from Wikipedia) where someone has created a
document that physically contains the overlap of the different
nomenclatures.   It certainly can't tell you whether two mathematical
formulations of similar models will make the same predictions unless,
again, there happens to be a web page where someone said it was so.
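Just to illustrate the kind of step that is missing, here's a toy
Python sketch of concept expansion over a tiny hand-built vocabulary
map.  The map itself is invented; the point is that someone has to have
written down the overlap between nomenclatures before any engine can
exploit it.

# hypothetical cross-community vocabulary: each concept maps to the
# terms different communities use for (roughly) the same idea
CONCEPT_MAP = {
    "self-organization": ["autopoiesis", "emergence", "pattern formation"],
    "feedback": ["circular causality", "control loop", "homeostasis"],
}

def expand_query(terms):
    """Return the original terms plus any related nomenclature we know of."""
    expanded = set(terms)
    for term in terms:
        for concept, synonyms in CONCEPT_MAP.items():
            if term == concept or term in synonyms:
                expanded.add(concept)
                expanded.update(synonyms)
    return sorted(expanded)

print(expand_query(["emergence"]))
# -> ['autopoiesis', 'emergence', 'pattern formation', 'self-organization']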

> Would you
> agree, or are you using a tool that somehow comes back with what I would
> have 'meant' to say if I had only known how other people refer to the
> subject...?
I'm thinking out loud about how one might develop a system, using
existing technologies, to automatically propose such questions and
answer them from many sources of contemporary information.   I don't
see it as a question of whose priors about usage are 'right', but
rather whether stable meanings emerge.   Across scientific communities,
I would expect natural language terms to take on incompatible meanings
even in nearby communities, and some terms may not have any stable
meaning at all (hand waving, hot air, etc.).  What would it take to
create a computerized scholar that compresses insights across the
scientific literature, such that a person or computer could ask it
questions and get answers either in precise language or with
appropriate caveats?   Better yet, be able to say "I don't believe it!
Show me the evidence and justify!" and have the system do just that.
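For the "stable meaning" question, the crudest test I can imagine looks
something like the sketch below: collect the words that co-occur with a
term in documents from two different communities and measure how much
those contexts overlap.  The two one-sentence "corpora" here are
made-up stand-ins; a real attempt would have to crawl the actual
literature and do something far less naive than bag-of-words.

from collections import Counter
import math

def context_profile(term, documents, window=3):
    """Count the words appearing within `window` words of `term`."""
    profile = Counter()
    for doc in documents:
        words = doc.lower().split()
        for i, w in enumerate(words):
            if w == term:
                lo, hi = max(0, i - window), i + window + 1
                profile.update(words[lo:i] + words[i + 1:hi])
    return profile

def cosine(a, b):
    """Cosine similarity between two word-count profiles (0 = no overlap)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

physics_docs = ["entropy measures the number of accessible microstates"]
ecology_docs = ["entropy here means the diversity of species present"]

overlap = cosine(context_profile("entropy", physics_docs),
                 context_profile("entropy", ecology_docs))
print("context overlap for 'entropy': %.2f" % overlap)

A term whose contexts line up across communities scores near 1; one
that means different things in different places scores near 0, and the
appropriate caveats would have to say so.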


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
