Re: Data Cuts, Normalization, and Analysis for Politics L3

2005-07-30 Thread Dan Minette

- Original Message - 
From: "Dan Minette" <[EMAIL PROTECTED]>
To: "Killer Bs Discussion" 
Sent: Monday, July 25, 2005 3:50 PM
Subject: Data Cuts, Normalization, and Analysis for Politics L3


1) Even reports which, as given, could not be true can be mined for
critical information:

The application to politics is twofold.  First, even when one has good
reason to suspect that there is some bias in a report, one should still
accept the fact that the report was made as a data point.  In some cases,
like the National Enquirer, the fact that there is a report of a secret
prophecy that Bigfoot will marry Elvis, who's been living with space
aliens, probably has minimal correlation with observables.

In other cases, one finds a much better correlation between observations
and reports, after things have had time to be sorted out.  I'll give an
example involving the president: the report of GWB's DWI was quickly
confirmed, while the report based on forged records of GWB's National
Guard duty had to be retracted.

Taken together, we have a discernible pattern.  I realize that folks on the
right state that the liberal media has it in for GWB and thus finds
spurious things to accuse him of.  But, it was the same liberal media which
reported the stories questioning the accuracy of the National Guard report.
Articles were written both attacking and supporting the position that the
forged report was true...until the preponderance of the evidence required a
conclusion.  It doesn't always work out this well, mind you, but this is
not the behavior of a group that is ideologically bent on getting one
message out.

Instead, the data seems to support a different hypothesis: news
organizations are very interested in ratings, readership, and taking the
lead in stories.  So, they look for scoops that will raise their ratings.
This holds particularly true for news magazine shows, I think.  I've
developed a model, fairly consistent with observations, which indicates
that news shows in general are biased towards stories that create buzz.

Given this, we can develop a rule of thumb concerning revelations.  When
they first come out, they should be taken with a grain of salt.  After
other organizations get their teeth into the revelation, one should quickly
see if it is immediately confirmed, if it will take a while to confirm, or
if questions are immediately raised.

One advantage of using this technique should be clear: it is not based on
the ideological impact of the news.  Therefore, it is fairly well insulated
against any confirmation bias on the part of the person using the
technique.
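
To make this rule of thumb concrete, here is a minimal sketch in Python;
the prior and the likelihood ratios are invented for illustration, not
calibrated values:

    # Rule of thumb for fresh revelations: start skeptical, then update
    # as other organizations confirm or raise questions.  The numbers
    # below are illustrative guesses, not calibrated values.

    def update(prob_true, likelihood_ratio):
        """Bayesian update on the odds that the revelation is true."""
        odds = prob_true / (1.0 - prob_true)
        odds *= likelihood_ratio
        return odds / (1.0 + odds)

    p = 0.5                 # grain-of-salt prior for a brand-new scoop
    p = update(p, 4.0)      # an independent outlet confirms the story
    p = update(p, 0.25)     # another outlet raises serious questions
    print(round(p, 2))      # 0.5 again: the jury is still out

Note that the update never looks at whose ox the story gores, only at what
the follow-up reporting does to it.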

While there are additional aspects to this, they probably fit better under
the points listed below, so I'll cover them there.



 2) Having a teammate with a significantly different perspective look at
the problem is usually very helpful:

Translated to politics, this involves having friendly debate partners who
have different outlooks than your own.  Two of mine are my Zambian
daughter, Neli, and Gautam.  When I come up with a reading of the data, I
often work out how I could defend it with data in such a manner that my
debating partners will see the merit of the argument, even if they read
things differently.  So, they help me even before I discuss things with
them.


3)  What I have found successful is establishing a hierarchy of likely
causes.
There are a couple of obvious carryovers from engineering to politics
here.  First, the hierarchy doesn't actually reject possibilities; it
assigns lower probability to them.  Second, as described in my previous
post, the ranking of the probabilities is adjusted as more data comes in.
Thus, one is guided by the technique to reconsider one's own position and,
perhaps, modify it slightly to better fit the expanded data set.
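
As a sketch of the bookkeeping, again in Python, with hypothetical causes
and weights chosen purely for illustration:

    # A hierarchy of likely causes: every cause keeps a nonzero
    # probability, and the ranking is re-weighted as data comes in.

    causes = {"calibration drift": 0.50, "operator error": 0.30,
              "design flaw": 0.15, "exotic physics": 0.05}

    def reweight(priors, likelihoods):
        """Multiply each prior by how well that cause explains the new
        data, then renormalize so the probabilities sum to one."""
        posterior = {c: priors[c] * likelihoods.get(c, 1.0) for c in priors}
        total = sum(posterior.values())
        return {c: p / total for c, p in posterior.items()}

    # Suppose new data fits "design flaw" well and "operator error" poorly.
    causes = reweight(causes, {"design flaw": 5.0, "operator error": 0.2})
    for cause, p in sorted(causes.items(), key=lambda kv: -kv[1]):
        print(f"{cause}: {p:.2f}")  # nothing is rejected, only demoted

The point of the renormalization is that a cause near the bottom of the
hierarchy can still climb back to the top if later data favors it.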

4)  Calibrating against past observations is very helpful.
In particular, it is helpful if as many observations as possible are
included in the calibration.  If one suspects a bias towards a particular
viewpoint, it is not enough to catalog past instances that support that
view of the bias.  One must also account for past instances that are
inconsistent with that view.
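
Schematically, the tally might look like this in Python; the sources and
records are invented placeholders, not real scores:

    # Calibration sketch: score each source against every past call on
    # record, hits and misses alike, before trusting its next report.

    records = {
        "Source A": [True, True, False, True],    # was each past report
        "Source B": [False, True, False, False],  # later borne out?
    }

    for source, calls in records.items():
        rate = sum(calls) / len(calls)
        print(f"{source}: {rate:.0%} borne out over {len(calls)} reports")

The discipline is in filling in the lists honestly: every past call goes
in, not just the ones that fit the bias one suspects.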

Let me give an example.  We can compare reports on conventional fighting
that have come from the administration vs. scoops that have come from
various people.  Among them was Seymour Hersh, who claimed that the US had
many more people killed at the start of the Afghanistan war than
reported...and that the Ranger raids were disasters.

On the whole, if you compare the predictions of the Administration with
those of the various pundits, the Administration's predictions were
superior.  Allowing a modest error bar for the fog of war, you would see
that the GWB administration did a pretty good job representing the progress
of the conventional war phases of both the Afghanistan and Iraq conflicts.

Now, a good Bush Republican will point this out as 

Re: Data Cuts, Normalization, and Analysis for Politics L3

2005-07-25 Thread Ronn!Blankenship

At 03:50 PM Monday 7/25/2005, Dan Minette wrote:


A couple of similar observations (if for no other purpose than to show that 
Dan's experience is not the single available data point . . . )


[snip]



I will start the analysis by using a very old technique: looking at how
this question has been solved in an easier context and then seeing if the
lessons learned there can be applied to this problem.  The context that I
will consider is one that has strongly influenced my thinking, both
professional and personal, over the last 15-20 years.  It is the solving of
reported field problems at my first job, with Dresser Atlas.

When I joined Dresser Atlas, I noticed a vicious circle between operations
and engineering.  To give a bit of background, our group was responsible
for the design and support of nuclear tools that were run by operations in
customers' oil wells.  Operations were directly responsible for the
accuracy and reliability of the tools.  Since the tools were designed and
characterized by engineering, fundamental problems were referred to
engineering.

This usually happened in "fire drill" mode.  A customer would express
significant dissatisfaction with our service, indicating a possible cutoff
of Atlas from working for them.  The field would report what they saw as
the underlying problem and make an urgent request to engineering to solve
it.  Engineering would stop its long-term work for anywhere from a day to
two weeks, investigate the reported problem, and respond.

Most of the time, it was an exercise in futility and frustration.
Engineering could not find the reported problem.  Indeed, many times, the
work gave strong indications that the reported problem was very unlikely to
exist.  Engineering would report this back to the field, frustrated at
losing time in the development of new tools, which were also demanded by
the field.  The field became frustrated and angry at what they considered
the culture of denial in engineering.




In most operational units in the Air Force (i.e., units which actually had 
planes and flew them rather than providing a support function only), there 
were usually two divisions in the unit:  "operations," which flew the 
planes, and "maintenance," which kept the planes in flying condition.  As 
in Dan's example, when something went wrong, the pilot from ops blamed 
maintenance for not maintaining the aircraft, or at least the 
malfunctioning part, properly, and maintenance turned right around and 
blamed the pilot for breaking it.  The unit I was in, which was a part of 
the Flight Test Center, had a third branch called "engineering," which in 
that unit was responsible for planning the test missions to exercise 
whatever capability of the aircraft or other system needed testing, and 
then for collecting whatever data was sent back via telemetry or recorded 
by on-board instruments or instruments on the ground (e.g., a radar site 
or other instrument which recorded the flight path of the aircraft being 
tested).  Thus, instead of ops blaming maintenance and maintenance blaming 
ops for whatever went wrong, both blamed engineering . . . (Yes, I was in 
engineering.)


[snip]



3) It is impossible to be totally open to every possibility, while getting
locked into a particular mindset will blind you to obvious solutions.
This seems like a contradiction, but it really isn't.  It is a balance
point.  One cannot be totally open-minded to every possibility, because the
possibilities are virtually endless.  One joke I used to make about this,
when we were stumped concerning the source of a problem, was "Well, I don't
think we need to look at the effect of the barometric pressure in Cleveland
on our data."  In other words, we needed to be open-minded, but not too
open-minded.



At the university where I did my undergraduate work, the freshman physics 
course for majors was taught by the department head.  During one of the 
first labs, where the purpose was to collect some data from an experiment 
and fit it to an equation to show that verily the equation derived from 
theory did describe the results, Dr. Morton would start things off by 
suggesting an alternative equation which had as additional variables things 
like the phase of the Moon or the day of the week . . .
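
That balance point can be sketched in a few lines of Python; the
hypotheses and the probability floor below are invented for illustration:

    # Open-minded, but not too open-minded: spend effort only on
    # hypotheses that clear a small probability floor, and park the
    # rest rather than rejecting them outright.

    FLOOR = 0.01

    hypotheses = {"bad sensor": 0.60, "wiring fault": 0.30,
                  "software bug": 0.08,
                  "barometric pressure in Cleveland": 0.001}

    active = {h: p for h, p in hypotheses.items() if p >= FLOOR}
    parked = {h: p for h, p in hypotheses.items() if p < FLOOR}

    print(sorted(active))  # the working set we actually investigate
    print(sorted(parked))  # revisited only if the working set all fail

The parked pile matters: it is what keeps the floor from turning into a
locked mindset.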


[snip]


--Ronn! :)

I always knew that I would see the first man on the Moon.
I never dreamed that I would see the last.
--Dr. Jerry Pournelle




Data Cuts, Normalization, and Analysis for Politics L3

2005-07-25 Thread Dan Minette
It's been a while since I wrote parts 1 & 2 of my promised three-part
analysis:

1) How hard can one push prisoners who are probably associated with
terrorism or terrorist groups?  Where is the boundary of unacceptable
treatment?  Is this boundary dependent on the circumstances?

2) How does one handle the status of prisoners taken in ongoing hostilities
if they are POWs?  If they are "unlawful combatants", but there is not
enough evidence to convict them of a specific war crime?

Even the most casual observer might have noticed that I have yet to address
#3:

3)  How does one determine the most likely possibility and the range of
possibilities from conflicting reports from conflicting sources?

Listening to a number of different people, from a number of different
places in the political spectrum, argue for radically different sets of
facts from the same observations, I've noticed that people often have a cut
criterion that appears to be based on their beliefs.  For example,
conservatives talk about the liberal bias.  In the '60s and '70s, Marxists
I knew talked about the inherent pro-capitalist bias of the US and European
papers.  Conservatives I know used to tell me that Rush is more accurate
than the mainstream media.  Many were convinced that the news media was
covering up the strong evidence that Bill Clinton murdered both Vince
Foster and Ron Brown.  When, at the request of friends, I went to talk with
Dennis Kucinich supporters at their house, I was amazed that many of them
laid most of the world's ills, including the Balkans, on people the US put
in power.  For example, at that meeting, I was told that Milosevic was
really a CIA tool that we decided was bad only after he stopped obeying
orders.

One consistent pattern I've seen is a data cut that is consistent with
pre-set beliefs.  Information that confirms those beliefs is considered
reliable, while information that contradicts those beliefs is suspect.
It's a natural tendency of humans to do this, and one could go into a long
analysis of why.  But, this post will probably be L3 without that analysis,
so we'll postpone that discussion to another time.  I hope we can take this
human tendency as a given, and then look at techniques that might help us
overcome it.
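
The contrast can be put in a few lines of Python; the fields and the
thresholds are invented for illustration:

    # The failure mode: a data cut keyed to prior beliefs.
    def biased_cut(report, my_side):
        return report["favors"] == my_side   # keep only what agrees

    # The alternative: a cut keyed to track record and independent
    # confirmation, applied identically to welcome and unwelcome reports.
    def symmetric_cut(report):
        return (report["source_track_record"] > 0.7
                and report["independent_confirmations"] >= 2)

    report = {"favors": "left", "source_track_record": 0.8,
              "independent_confirmations": 2}
    print(biased_cut(report, "right"), symmetric_cut(report))  # False True

The techniques discussed below are, in effect, attempts to replace the
first cut with the second.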

I will start the analysis by using a very old technique: looking at how
this question has been solved in an easier context and then seeing if the
lessons learned there can be applied to this problem.  The context that I
will consider is one that has strongly influenced my thinking, both
professional and personal, over the last 15-20 years.  It is the solving of
reported field problems at my first job, with Dresser Atlas.

When I joined Dresser Atlas, I noticed a vicious circle between operations
and engineering.  To give a bit of background, our group was responsible
for the design and support of nuclear tools that were run by operations in
customers' oil wells.  Operations were directly responsible for the
accuracy and reliability of the tools.  Since the tools were designed and
characterized by engineering, fundamental problems were referred to
engineering.

This usually happened in "fire drill" mode.  A customer would express
significant dissatisfaction with our service, indicating a possible cutoff
of Atlas from working for them.  The field would report what they saw as
the underlying problem and make an urgent request to engineering to solve
it.  Engineering would stop its long-term work for anywhere from a day to
two weeks, investigate the reported problem, and respond.

Most of the time, it was an exercise in futility and frustration.
Engineering could not find the reported problem.  Indeed, many times, the
work gave strong indications that the reported problem was very unlikely to
exist.  Engineering would report this back to the field, frustrated at
losing time in the development of new tools, which were also demanded by
the field.  The field became frustrated and angry at what they considered
the culture of denial in engineering.

At first, I simply fell in with the engineering party line.  I saw how we
wasted time on fire drills chasing close-to-impossible claims from the
field.  But, after a while, I had talked with enough people in technical
services (a field interface group), and with enough customers, to determine
that the field problems were not just a fantasy, or the result of bad
operations practice.  Something was going on, and the reports were
good-faith efforts to describe what that something was.

One particular instance stands out for me.  A district engineer reported a
problem.  I looked at the reported problem, and saw that its existence was
inconsistent with a wealth of data that I had analyzed.  Since these data
were carefully taken, and were taken with a number of different tools of
the exact same design, I was pretty sure that the reported problem did not
exist.

I called the engineer back to report my findings.  He r