Eric noted:  "While I certainly agree that many textbooks convey the
absolutely misleading impression that the 'PBC' is some special form of
measure, I think that the usual formula presented for it is pedagogically
useful in a few ways (not that the typical textbook makes use of them):
1) It demonstrates that a correlation problem in which one variable is
dichotomous is equivalent to a two-group mean-difference problem."
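
    For reference, the usual textbook formula Eric alludes to -- writing
$\bar{Y}_1$ and $\bar{Y}_0$ for the two group means, $s_Y$ for the standard
deviation of all $n$ scores (computed with $n$, not $n-1$, in the
denominator), and $p$ and $q$ for the two group proportions -- is

    \[
      r_{pb} = \frac{\bar{Y}_1 - \bar{Y}_0}{s_Y}\,\sqrt{pq},
      \qquad
      t = \frac{r_{pb}\sqrt{n-2}}{\sqrt{1 - r_{pb}^{2}}},
    \]

and that $t$, on $n - 2 = n_1 + n_2 - 2$ degrees of freedom, is exactly the
pooled two-sample $t$.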

    You all may find this hard to believe, but, in my experience, a large
proportion of social scientists have the delusion that if you conduct a
traditional two-group t-test, then you are qualified to make causal
inferences (that is, variance in the continuous variable is caused by
alteration of the dichotomous variable), but if you analyze the same
variables with a correlation analysis you cannot make a causal inference.  I
show my students and colleagues the equivalence of testing the null that the
point biserial is zero and testing the null that two means are identical,
and they are amazed.  I explain that it is how you collect the data, not how
you analyze them, that determines whether you can make, with some
confidence, a causal attribution not complicated by confounds, third-variable
problems, and the like.  These days I try to avoid the issue of whether
there is really any difference between high correlation and causation --
when I get started on such issues, I end up spending the whole semester
discussing philosophical issues rather than stats.
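
    If you want to show the equivalence numerically rather than
algebraically, here is a minimal sketch in Python (assuming the numpy and
scipy packages are available; the simulated data and the 0.8 mean shift are
arbitrary, just for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group = np.repeat([0, 1], [20, 25])          # dichotomous grouping variable
    y = rng.normal(loc=0.8 * group, scale=1.0)   # continuous variable

    # Traditional two-group t-test (pooled variance)
    t_test, p_test = stats.ttest_ind(y[group == 1], y[group == 0])

    # Point-biserial r, converted to t via t = r*sqrt(n-2)/sqrt(1-r^2)
    r, _ = stats.pearsonr(group, y)
    n = len(y)
    t_from_r = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)

    print(t_test, t_from_r)  # agree to floating-point precision

The two t values (and hence the two p values) come out identical; the only
thing that differs is the label on the analysis.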

    Once I was consulting for a Ph.D. candidate at a major university (one
frequently ranked #1 in football).  His predictor variables were a mixture
of continuous and categorical variables, his criterion variable a continuous
variable.  The data were from a questionnaire, no experimental
manipulations.  I dummy coded the categorical variables and did a multiple
regression.  His dissertation chair asked him to choose an alternative
analysis -- ANOVA or ANCOVA -- so that he could make causal attributions.  I
told him to submit exactly the same statistics, but describe them as an
ANCOVA rather than a multiple correlation/regression, and all was peachy
again with his chair.
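
    A minimal sketch of the same point, again in Python with numpy and scipy
assumed, and simulated one-way data standing in for the student's
questionnaire data: regression on dummy-coded group membership and the
corresponding ANOVA yield exactly the same F.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    g = np.repeat([0, 1, 2], 15)                  # three-level categorical predictor
    y = rng.normal(loc=np.array([0.0, 0.5, 1.0])[g], scale=1.0)

    # Dummy code the categorical variable (group 0 as reference) and fit OLS
    X = np.column_stack([np.ones(len(y)), g == 1, g == 2]).astype(float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Overall regression F:  F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - np.sum(resid**2) / ss_tot
    n, k = len(y), 2
    f_reg = (r2 / k) / ((1 - r2) / (n - k - 1))

    # One-way ANOVA on the same data
    f_anova, _ = stats.f_oneway(y[g == 0], y[g == 1], y[g == 2])

    print(f_reg, f_anova)  # identical up to floating-point error

The two F values are identical; the "alternative analysis" is the same
analysis under a different name.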

    I wish I could tell you that the delusion suffered by this dissertation
chair was unusual, but it seems not to be.  Part of the problem is the
confusion of "correlational" analysis with "correlational design."  I argue
that the latter should be called "nonexperimental," not "correlational," to
help avoid this delusion.  Also, the phrase "correlation does not imply
causation" is too strongly interpreted, IMHO.  I tell my students that
correlation does "imply" causation, that is, it suggests causation, but it
certainly does not establish it beyond doubt.  I add that correlation (not
necessarily linear) between two variables is a necessary but not sufficient
condition for establishing a causal relationship between those two variables.

    Have you all also encountered this curious delusion, where a client or
student thinks that the way you do the statistical analysis determines
whether or not you can make a causal attribution?  My graduate students have
told me that that is what they were taught as undergraduates.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Karl L. Wuensch, Department of Psychology, East Carolina University,
Greenville NC 27858-4353  Voice: 252-328-4102  Fax: 252-328-6283
[EMAIL PROTECTED]  http://core.ecu.edu/psyc/wuenschk/klw.htm


