A few points:

(1)  I think that there is a conceptual confusion in using the
phrase "the effect of" as a substitute for "related to"; the latter
can be either causal or correlational or both.

(2) One could argue that the fundamental goal of science
is to identify and define causal relationships among
variables (both empirical and latent) -- Humanistic psychologists
and others in that camp might disagree (at least they did
back in the 1960s and 1970s).  Causal relationships
can be observed in two general situations:

(a) Systematic Observational Research:  if we accept
that causal relationships actually exist in the physical world
(and not just in our minds), then systematic observation
can help to locate and identify such causal relationships.
Astronomy is the classic example of how the relationships
in systematic observations can be mathematically modeled,
and these models imply explanations and theories.  Similar
situations exist in economics, political science, sociology,
and other areas where general mathematical modeling
and structural equation modeling (SEM) have been used.
Astronomy, however, is boosted by developments in physics,
which make it more rigorous and provide guidance on how
to choose among competing models of the same phenomenon.
The more "squishy" social and biomedical sciences have
additional difficulties due to problems of construct validity,
measurement model issues, and the problem of "complexity",
because these phenomena are observed in what have
been called "open systems"; that is, an infinite number
of variables is present in the phenomenon being studied, but
only a few of them are really relevant.  Deciding which variables
are relevant is an ongoing cycle of collecting observations and
testing models, repeated ad infinitum or until one has a "good
enough" model/theory.

In the case of astronomy, would anyone really quibble if
one asked "What is the effect of large planetary mass on
other nearby planets and objects?"  I grant that one might
have to understand how gravitational forces operate in order
to appreciate the question.

(b) Experimental designs, when conceived and implemented
correctly, represent "closed systems" in which the variables that
might be involved in a cause-effect relationship are clearly defined,
allowing the causal relationship to be detected and measured
while controlling for all other variables that may or may not
be involved in it (i.e., situations where moderation and/or
mediation may be operating).  This is the classic situation of
"independent variables", which are selected and manipulated
by the researcher (though participant attributes like gender, age,
degree of illness, skill, knowledge, etc., might also be used as
"independent variables", the mechanisms involved may be unclear,
and it would be better to refer to these as "quasi-independent
variables"), and "dependent variable(s)", which are supposed to
manifest the "effect" of the independent variable -- this is best
represented by the equation "dependent variable" =
f("independent variable").
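
To make the closed-system idea concrete, here is a minimal sketch
in Python (assuming numpy; the sample size, the linear form of f,
and the effect size are my own hypothetical choices, not anyone's
actual study): a two-group experiment in which the independent
variable is randomly assigned, so the difference in group means
estimates the causal effect.

    # Hypothetical two-group experiment: the IV (treatment) is randomly
    # assigned, so the difference in group means estimates the causal effect.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    treatment = rng.integers(0, 2, size=n)            # randomly assigned IV
    true_effect = 2.0                                 # assumed causal effect
    # DV = f(IV) + noise, with f assumed linear here
    y = 10.0 + true_effect * treatment + rng.normal(0.0, 1.0, size=n)

    estimate = y[treatment == 1].mean() - y[treatment == 0].mean()
    print(estimate)                                   # close to 2.0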

The function f(x) can take a variety of different forms.  The
old saying "causation implies correlation" is relevant here, but
"correlation" does not necessarily refer to the Pearson r; rather,
it refers to the general mathematical relationship relating the
independent variables to the dependent variable(s) -- see the
sketch below.  Note that in (b) whether one is actually detecting
a causal relationship or merely a correlational one depends upon
the quality of the research design being used, how well the
procedures were executed, and other factors.  The recent
problems with replications highlight these points.
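
One concrete illustration of why "correlation" here cannot mean
just the Pearson r (a minimal sketch assuming numpy and scipy;
the quadratic form of f is my hypothetical choice): a perfectly
deterministic but nonlinear relationship can yield a Pearson r of
essentially zero.

    # y is perfectly determined by x, yet the Pearson r is ~0 because
    # the relationship is quadratic and x is symmetric around zero.
    import numpy as np
    from scipy.stats import pearsonr

    x = np.linspace(-3, 3, 601)
    y = x ** 2
    r, p = pearsonr(x, y)
    print(round(r, 10))        # ~0.0: r misses the (nonlinear) dependence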

Experimental designs have been developed to establish causal
relationships, and it may be proper to say "the effect of X on Y",
though this may not apply to quasi-independent variables or
when relevant third variables have not been included (in SEM
terms we have "model misspecification", regardless of whether
we are referring to an ANOVA structural model, a regression
model, or an SEM model that uses both empirical and latent
variables to represent the cause-effect relationships).
NOTE:  In SEM modeling, especially of open systems, a
variable in one part of the model might be a "dependent
variable" but may also be an "independent variable" in relation
to other variables.

So, perhaps one should ask for the mathematical model that
the "effect" embodies in order to better understand what it means,
as well as the degree to which third variables have been controlled.

More below.

Thu, 20 Jul 2017 16:53:19 -0700, Karl Louis Wuensch wrote:
When using the word "effect," as in "effect-size," I sometimes
warn my students that I am using it in the "soft" sense (not causal).

I think that one should probably make clear how "effect sizes"
for causal relationships differ from "effect sizes" in correlational
relationships.  The former directly represent how changes in
the causal variable produce changes in the outcome variable, while
the latter represent how strongly the "X" variable(s) are related to
the "Y" variable(s), with the possible influence of other variables
(third variables "Z") that are involved even though they have not
been measured and included in the analysis.
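
A minimal sketch of that third-variable situation (Python with
numpy/scipy; the variable names and effect sizes are hypothetical):
Z drives both X and Y, X has no causal effect on Y at all, yet the
observed correlational "effect size" r(X, Y) is substantial.

    # Hypothetical confounding: Z causes both X and Y; X has no causal
    # effect on Y, yet X and Y correlate through the unmeasured Z.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)                    # unmeasured third variable
    x = z + rng.normal(scale=0.5, size=n)     # X is caused by Z
    y = z + rng.normal(scale=0.5, size=n)     # Y is caused by Z, not by X

    print(round(pearsonr(x, y)[0], 2))        # sizable r, zero causal effect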

A related concern of mine is the use of the terms "independent
variable" and "dependent variable" in research that is not experimental -
that is, when no variable is manipulated.

Does this apply to variables that are participant/subject attributes?
Sidenote: economists often use the terms independent and dependent
variables in their mathematical models of how economic factors
operate.  I think that this is an appeal to analyses in, say, astronomy,
but is probably a stretch.  For one view of how econometricians
view causality, specifically a type called "Granger Causality", see:
http://ejpam.com/index.php/ejpam/article/view/2948
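
For readers who want to see the idea in code, here is a minimal
Granger-type sketch using the grangercausalitytests function from
statsmodels (the data-generating process is my own hypothetical
example): y is built from lagged x, so past x should improve the
prediction of y beyond y's own past.

    # Does past x help predict y beyond y's own past?  Here y is built
    # from lagged x, so the tests should reject "no Granger causality".
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.8 * x[t - 1] + rng.normal(scale=0.5)

    data = np.column_stack([y, x])          # column order: (effect, cause)
    grangercausalitytests(data, maxlag=2)   # prints F tests for each lag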

As they say: "Induction is a bitch." ;-)

There is a tendency to use "independent variable" whenever the
variable is categorical and "dependent variable" when it is continuous.

I think that this may be peculiar to psychology and/or possibly
to certain groups of researchers who use certain types of
ANOVA.

Once I helped a previous student with his dissertation.  No variables
were manipulated, but several were categorical.  I helped him dummy
code the categorical variables and use them in a multiple correlation
analysis, with continuous covariates, to predict the focal continuous
outcome variable.  His dissertation advisor told him no, do an ANOVA
instead, because then we have independent and dependent variables
and thus can make causal inferences.

So, did you smack the dissertation advisor upside the head or
did you simply point out, after Jack Cohen and many others,
that ANOVA and multiple regression are just different ways of
doing the same analysis, as described in the following:
http://psycnet.apa.org/record/1969-06106-001
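
To illustrate the Cohen point with a minimal sketch (Python,
assuming scipy and statsmodels; the group means are hypothetical):
a one-way ANOVA and a regression on dummy-coded group membership
produce the same omnibus F.

    # One-way ANOVA and regression on dummy codes are the same test.
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    g1, g2, g3 = (rng.normal(loc=m, size=30) for m in (0.0, 0.5, 1.0))

    f_anova, p_anova = stats.f_oneway(g1, g2, g3)     # classic ANOVA

    y = np.concatenate([g1, g2, g3])
    d2 = np.repeat([0, 1, 0], 30)       # dummy code for group 2
    d3 = np.repeat([0, 0, 1], 30)       # dummy code for group 3
    X = sm.add_constant(np.column_stack([d2, d3]))
    fit = sm.OLS(y, X).fit()

    print(round(f_anova, 6), round(fit.fvalue, 6))    # identical F values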

However, given that experimental designs (specifically n-way
factorial designs) have uncorrelated independent variables,
this simplifies the analysis and makes conclusions more direct.
Collinearity, i.e., correlated independent variables/predictors, is
the monkey wrench that gums up interpreting multiple regression
results.  Unbalanced factorial designs (where the sample sizes of
the conditions are not constant) give rise to collinearity and make
the interpretation of ANOVA results more difficult, as the sketch
below illustrates.
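
A small sketch of that last point (Python with numpy; the cell
sizes are hypothetical): with effect-coded factors in a 2x2 design,
balanced cells give uncorrelated factors, while unequal cell sizes
make the factors correlated, i.e., collinear.

    # Effect-coded factors in a 2x2 design: uncorrelated when balanced,
    # correlated (collinear) when the cell sizes are unequal.
    import numpy as np

    def factor_correlation(cell_sizes):
        # cell_sizes: n for cells (A-,B-), (A-,B+), (A+,B-), (A+,B+)
        a = np.concatenate([np.full(n, s) for n, s
                            in zip(cell_sizes, [-1, -1, 1, 1])])
        b = np.concatenate([np.full(n, s) for n, s
                            in zip(cell_sizes, [-1, 1, -1, 1])])
        return np.corrcoef(a, b)[0, 1]

    print(factor_correlation([25, 25, 25, 25]))   # balanced: 0.0
    print(factor_correlation([40, 10, 10, 40]))   # unbalanced: 0.6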

Okay, enough. I hope I don't say too many stupid things.

-Mike Palij
New York University
m...@nyu.edu





