The issue of inferring causality from the results of fitting a model to data
has been a major topic on SEMNET over the years.

If anyone wishes to pursue these and related issues, subscribe to
[EMAIL PROTECTED]

Most of the focus is on structural equation modeling (SEM). For
statisticians, Jim Steiger's article "Driving Fast in Reverse" (JASA, March
2001, pp. 331-338), if you have it around, is a quick discourse on SEM and
the inherent problems of figuring out what is going on from a model (I can
send a copy via e-mail attachment if anyone asks).

On SEMNET we have had some interesting open discussions over the years with
some of the book authors Wu casually name-drops.

Judea Pearl's bottom-line position is that a correlation between two
variables that is supported by data "must mean something". That meaning has
to be deduced from path diagrams, a lot of rational logic, sets of equations
with coefficients, and model outputs from very large, expensive software
packages.

The field is unfortunately dominated by those who have very large data
sets, have no physical measurements (i.e., they use survey data, test or
quiz results, collections of "expert" opinions, and the like), can afford
the computer resources to do the reduction, and rely on the fit of a model
(in many cases, any model) to the data to support a logical claim of
causation. In most cases, the model with the better fit wins out.
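To see why fit alone is a weak basis for causal claims, here is a minimal
sketch in Python (simulated data; the variable names and the 0.8 coefficient
are purely hypothetical, not taken from any study discussed here). For two
variables, the regression "x causes y" and the reversed regression "y causes
x" achieve exactly the same R-squared, so goodness of fit cannot even settle
the direction of an effect, let alone establish causation.

  # A rough illustration, not anyone's published method: simulate y from x,
  # then show that regressing y on x and regressing x on y fit equally well.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 1000
  x = rng.normal(size=n)            # x is the true cause by construction
  y = 0.8 * x + rng.normal(size=n)  # y is generated from x plus noise

  def r_squared(pred, resp):
      # R^2 of a simple least-squares line of resp on pred
      slope, intercept = np.polyfit(pred, resp, 1)
      resid = resp - (slope * pred + intercept)
      return 1.0 - resid.var() / resp.var()

  print("fit of y ~ x:", r_squared(x, y))   # about 0.39
  print("fit of x ~ y:", r_squared(y, x))   # the same value

Both directions fit identically because R-squared here is just the squared
correlation; only knowledge from outside the data (design, timing, physical
mechanism) can break the tie.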

Causation, essentially, is the "sizzle" that sells the paper and the lead-in
to further grants.

DAHeiser



