On 7/26/13 1:30 PM, glen wrote:
Marcus G. Daniels wrote at 07/26/2013 10:42 AM:
A set of people ought to be able to falsify a proposition faster than one person, who may be prone to deluding themselves, among other things. This is the function of peer review, and arguing on mailing lists. Identification of truth is something that should move slowly. I think `negotiated truth' occurs largely because people in organizations have different amounts of power, and the powerful ones may insist on something false or sub-optimal. The weak, junior, and the followers are just fearful of getting swatted.

Fantastic point. So, the (false or true) beliefs of the more powerful people are given more weight than the (true or false) beliefs of the less powerful. That would imply that the mechanism we need is a way to tie power to calibration, i.e. the more power you have, the smaller your error must be.
This assumes a ground truth... which is probably more or less relevant depending on the domain. To some extent we are very bimodal about this... we both hold our public officials to higher standards and to lower ones at the same time.

If an objective ground is impossible, we still have parallax ... a kind of continually updating centroid, like that pursued by decision markets.
Or a continually refining confidence distribution, for which we can hope for (or seek) a nice steep gaussianesque shape.
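
To make that a little more concrete, here's a rough sketch (Python, with names and numbers entirely made up, not anything anyone has built) of the kind of thing I'm imagining: each agent's estimate is weighted by the inverse of their historical calibration error, so "power" over the consensual centroid has to be earned, and the weighted spread tells you how steep/gaussianesque the pooled distribution currently is.

    import numpy as np

    # Hypothetical sketch: pool many agents' estimates of some quantity,
    # weighting each agent by the inverse of their historical calibration
    # error, so weight (power) is earned by being well-calibrated.

    def calibration_weight(past_errors, eps=1e-6):
        """Weight is the inverse mean-squared error of an agent's past estimates."""
        return 1.0 / (np.mean(np.square(past_errors)) + eps)

    def pooled_centroid(estimates, weights):
        """Weighted centroid of the current round of estimates, plus its spread."""
        w = np.asarray(weights, dtype=float)
        x = np.asarray(estimates, dtype=float)
        mean = np.average(x, weights=w)
        var = np.average((x - mean) ** 2, weights=w)
        return mean, var

    # Toy usage: three agents; the best-calibrated one dominates the centroid.
    history = {"a": [0.05, 0.02], "b": [0.4, 0.3], "c": [0.2, 0.25]}
    weights = [calibration_weight(errs) for errs in history.values()]
    centroid, spread = pooled_centroid([0.71, 0.55, 0.62], weights)
    print(centroid, spread)  # centroid sits near agent a; spread is how tight the pool is
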
But a tight coupling between the most powerful and a consensual centroid would stultify an organization. It would destroy the ability to find truth in outliers, in disruptive innovation. I suppose that can be handled by a healthy diversity of organizations (a scale-free network). But we see companies like Intel or Microsoft actively opposed to that... they seem to think such behemoths can be innovative.
I think they *can* drive the consensual reality to some extent... to the point that counterpoint minority opinions polyp off (Apple V MS, Linux V Commercial, Debian V RedHat V Ubuntu, etc.)
So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists).
It seems intuitively obvious to me that such a system *can* be designed, and that most of it is about *specifying* the domain... but maybe we are talking about different things?


It still seems we need an objective ground in order to measure belief error.
I think this is true by definition. In my work in this area, we instead sought measures of belief and plausibility at the atomic level, then composed those up to aggregations. Certainly, V&V is going to require an "objective ground", but it is only "relatively objective", if that even vaguely makes sense to you?
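
For the curious, "belief and plausibility" there is in the Dempster-Shafer spirit; here's a toy sketch (Python, purely illustrative, not the actual code from that work): belief sums the mass committed to subsets of a hypothesis, plausibility sums the mass of everything consistent with it, and Dempster's rule composes two atomic sources up into an aggregate.

    from itertools import product

    # Illustrative Dempster-Shafer-style sketch. A mass function assigns
    # weight to *sets* of hypotheses (frozensets below), not single points.

    def belief(mass, hypothesis):
        """Sum of mass committed to subsets of the hypothesis."""
        return sum(m for s, m in mass.items() if s <= hypothesis)

    def plausibility(mass, hypothesis):
        """Sum of mass not contradicting the hypothesis (non-empty intersection)."""
        return sum(m for s, m in mass.items() if s & hypothesis)

    def combine(m1, m2):
        """Dempster's rule: combine two independent sources, renormalizing conflict."""
        combined, conflict = {}, 0.0
        for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
        k = 1.0 - conflict
        return {s: w / k for s, w in combined.items()}

    # Toy usage: two "atomic" sources about whether a claim is True or False.
    T, F = frozenset({"T"}), frozenset({"F"})
    either = T | F
    src1 = {T: 0.6, either: 0.4}          # leans true, with some ignorance
    src2 = {T: 0.3, F: 0.3, either: 0.4}  # undecided
    agg = combine(src1, src2)
    print(belief(agg, T), plausibility(agg, T))  # belief <= plausibility bracket the "truth"
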

The only way around it is to rely on natural selection, wherein "problems" with organizations may well turn out to be the particular keys to their survival/success. So, that would fail to address the objective of this conversation, which I presume is how to reorganize orgs either before they die off naturally (because they cause so much harm) or without letting them die off at all. (Few sane people want, say, GM to die, or our government to shut down ... oh wait, many of our congressional reps _do_ want our govt to shut down.)
<grin>

I think we are talking about "theories of life" here, really...

- Steve


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
