Steve Smith wrote at 04/05/2013 10:54 AM:
>  1. How do we define/recognize valid measures of evidence?
>  2. Is the current "exponential" growth in tech divergent or convergent?
> 
>  1. I have worked on several projects involving the formal management of
>     evidence and belief which makes me cynical when people suggest that
>     there is "one true form of evidence".   Most of it ended up off in
>     high-dimensional Pareto fronts with multiple measures of
>     confidence.  The underlying theory (much of it just beyond my
>     grasp to regurgitate) is based in variants of Dempster-Shafer and
>     Fuzzy Sets/Intervals.   There is always a Bayesian in the crowd that
>     starts "Baying" (sorry) about how "Bayesian Methods are the *only*
>     thing anyone ever needs".  This specific example in statistics and
>     probability theory is but one.   Similarly, it took a long time for
>     anyone to accept far-from-equilibrium systems as being worth
>     studying simply because their tools didn't work there.   Like
>     looking for your lost keys under the streetlamp because the "light
>     is too bad in the alley where you dropped them".
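
Side note, since Dempster-Shafer came up: for anyone who hasn't seen it
concretely, below is a toy sketch of Dempster's rule of combination in
Python.  The two-hypothesis frame and the mass numbers are invented for
illustration, nothing more:

from itertools import product

FRAME = frozenset({"A", "B"})

# Two independent bodies of evidence, as mass functions over subsets
# of the frame.  Mass on FRAME itself means "don't know".
m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.1, FRAME: 0.3}
m2 = {frozenset({"A"}): 0.4, frozenset({"B"}): 0.4, FRAME: 0.2}

def combine(ma, mb):
    """Dempster's rule: intersect focal elements, then renormalize
    by the mass that didn't land on the empty set (the conflict)."""
    raw, conflict = {}, 0.0
    for (sa, wa), (sb, wb) in product(ma.items(), mb.items()):
        inter = sa & sb
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in raw.items()}, conflict

m12, k = combine(m1, m2)
for s, w in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(sorted(s), round(w, 3))
print("conflict:", round(k, 3))

The two knobs a single probability distribution doesn't give you are
visible here: the residual "don't know" mass and the explicit conflict
term.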

Well, the first thing to cover is that the definition won't necessarily
be pre-statable.  In order for it to be an accurate measure, it will
have to evolve with the thing(s) being measured.

The second consideration is what you mean by "valid".  If I give you
the benefit of the doubt, I assume you mean "trustworthy" or
"credentialed" in some sense.  And, again, I'd settle that by tying
trustworthiness to the thing being measured.  I typically do this by
asking the participants in a domain whether any given measure of their
domain is acceptable/irritating.  Measuring local hacker spaces is a
good anecdote for me lately.  With the growth of the maker community,
it's informative to ask various participants what they think of things
like techshop vs. dorkbot (or our local variants).

Both of these suggest skepticism toward the _unification_ of validity or
trustworthiness.  Evidence boils down to a context-sensitive
aggregation, which is why Bayesian methods are so attractive.  But I'm
sure they aren't the only way to install context sensitivity.  Recently,
I've been trying to understand Feferman's "schematic axiom systems"
http://math.stanford.edu/~feferman/papers/godelnagel.pdf and how a
schema might be extracted from a formal system in such a way as to
provide reasoning structures that are sensitive to application.
 (My complete and embarrassing ignorance slows my progress, of course.)
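
To make "context-sensitive aggregation" concrete, here's a minimal
Bayesian sketch in which the same observation is scored under two
context-specific likelihood tables.  Every number and label in it is
invented for illustration:

def posterior(prior, likelihoods, observation):
    """One-step Bayes update over a dict of hypotheses."""
    unnorm = {h: prior[h] * likelihoods[h][observation] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"reliable": 0.5, "unreliable": 0.5}

# Context 1: logs are audited, so a clean log is strong evidence.
audited = {"reliable":   {"clean_log": 0.9, "messy_log": 0.1},
           "unreliable": {"clean_log": 0.3, "messy_log": 0.7}}

# Context 2: nobody audits, so a clean log says very little.
unaudited = {"reliable":   {"clean_log": 0.6, "messy_log": 0.4},
             "unreliable": {"clean_log": 0.5, "messy_log": 0.5}}

print(posterior(prior, audited, "clean_log"))    # reliable: 0.75
print(posterior(prior, unaudited, "clean_log"))  # reliable: ~0.55

Same evidence, different posterior, purely because the context supplies
the likelihoods.  The attraction of something like Feferman's schemata,
to me, is the hope of swapping whole reasoning structures that way, not
just likelihood tables.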

>  2. [...] What I'm equally interested in is if there is a
>     similar divergence in thinking.  [...] I believe
>     that humans have a natural time constant around belief (and as a
>     consequence, understanding, knowledge, paradigms?) on the order of
>     years if not decades or a full lifetime.   That time constant may be
>     shrinking, but I rarely believe someone when they claim during or
>     after an argument to have "changed their mind"... at best, they are
>     acknowledging that a seed has sprouted which in a few years or
>     decades might grow into a garden.

Obviously, I'm still not convinced that _thinking_ is all that
important.  It strikes me that _doing_ is far more important.  My
evidence for this lies mostly in the (apparent) decoupling between
what people say and what they do.  I can see fairly strong maps
between immediate, short-term thoughts like "Ice cream is good" and
actions like walking to the freezer, scooping some out, and eating it.
But I see fairly convoluted maps between, e.g., "Logging your data is
good" and what bench scientists actually end up writing in their logs.

-- 
=><= glen e. p. ropella
All the lies I tell myself


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
