Dennis,

"Dennis Roberts" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> >
> >
> >This point really strikes at a deeper issue. If the measures are so
> >designed that they are necessarily correlated, then they are bad measures.
>
> it sounds to me that you are saying that if people construct and/or use
> measures that have some naturally occurring (that is, we expect them to be
> associated) correlation, that we are both bad scientists AND doing bad
> science
>
> am i clear on that?


If you pretend that your correlated variables are not confounded, then you
are a bad scientist. If you wish to differentiate the two variables, then
you must develop measures that unconfound (decorrelate) them.



> but, some models work best when only uncorrelated variables/factors are
> used ... while some models depend on and rely on relationships amongst
> variables (a simple reliability model would be one of those)

The classical true score model is an appropriate situation for correlated
variables, since each measure is designed to be a replication of the same
underlying true score. Combining such replicates cancels out the random errors
and magnifies the common latent variable. There is nothing wrong with doing
this; the items in a questionnaire will ideally be correlated. But these
variables are not confounded in the conceptual sense. They are conceptually
equal. Confounding means confusing two different things.
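A minimal simulation sketch of that point (all numbers here are hypothetical
choices, and numpy is assumed): each item is the same true score plus
independent error, and averaging the items tracks the latent variable better
than any single item does.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 10                                   # respondents, items
true = rng.normal(size=n)                         # latent true scores
items = true[:, None] + rng.normal(size=(n, k))   # item = true score + independent error

r_single = np.corrcoef(true, items[:, 0])[0, 1]            # one item alone
r_composite = np.corrcoef(true, items.mean(axis=1))[0, 1]  # mean of all k items

print(round(r_single, 3), round(r_composite, 3))
```

The composite's correlation with the true score approaches 1 as items are
added (the Spearman-Brown idea), which is exactly why inter-item correlation
is a virtue here rather than a confound.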


>
> what we seem to be getting in all this discussion is the following:
>
> the best model is one that will accommodate data that will fit THAT model,
> the model i like  ... and, all other models are wrong/bad/not useful

This kind of twisted logic is characteristic of other people on these
newslists. I am simply stating the requirements for inferring causation from
the corresponding regressions. It is true that correlated variables are
confounded when they are supposed to be independent. This is a problem for
all statistics. Using correlated factors in an ANOVA design also confounds
variables.
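A small sketch of that confounding (hypothetical coefficients, numpy assumed):
when two predictors are correlated, dropping one makes the other absorb its
effect, so the single-predictor slope no longer estimates the causal
coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # x2 correlated with x1 (r ~ 0.8)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

# Full model: both slopes recovered near their true value of 1.0
X = np.column_stack([x1, x2, np.ones(n)])
b_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Omitting x2: x1's slope is biased upward toward 1 + 0.8 = 1.8
X1 = np.column_stack([x1, np.ones(n)])
b_reduced = np.linalg.lstsq(X1, y, rcond=None)[0]

print(b_full[:2], b_reduced[0])
```

The bias term is b2 times the regression of x2 on x1, which is why orthogonal
(uncorrelated) factors are required before the coefficients can be read
causally.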


>
> however, and i know this is really simple (perhaps that is why i am in
> error), but i think we need to posit theories/notions/ideas ... and then
> collect data that seem to be able to assist us in "validating" (if indeed
> it is possible) those theories/notions/ideas ... and, the best models to
> use in these cases are the ones that will account for the data we collect
> (ie, the measures we use)

Really, we always have some idea of how things work, and these ideas determine
which variables we inspect. But we also accommodate reality, to some extent.
It is wrong to put all the emphasis on deduction/assimilation. Induction is
also important; if it were not, then science and knowledge would never expand
beyond the tautologies of existing theory. Data can challenge the validity of
ideas. That we can discover asymmetrical (causal) relations in data is not
something forbidden by God. Yet those who promote relativism and dogmatism
seem to hate the idea of discovering causes. It makes them feel small and
reduces their power to practice the sophistry of justifying any old idea that
a client has.

>
> there is no true model ... nor any best model ... only models that better
> account for certain kinds of data than other models in certain data
> situations


This is relativism and is a symptom of intellectual cowardice and short
sightedness. Some models are by definition wrong. If I say I always lie,
then I am contradicting myself and my claim is invalid. It is not more or
less invalid. It is invalid. The same holds for logically inconsistent
models. It is true that there is always the possibility of some hidden fact
or factor that can make a theory applicable in some circumstance. This is
why we can neither prove nor disprove any scientific theory empirically. We
can never know with absolute certainty that we have not confounded things. But
an illogical theory can be disproved by simple logic. The logic is not based
on experiment or observation, but on the statements that define the theory
as a logical unit.

>
> it is up to us as good scientists ... to find models or create models ...
> that handle the data we use and assuming that we have integrity in what we
> are doing and assuming that we have used intelligence in doing what we are
> doing ... if we find that the model we use (or our pet model) is not doing
> too well ... then, we need to look for or create a better one


Of course you are correct to say that we should improve our theory. If you
are suggesting that CR is not doing well then you are very wrong. None of
you have bothered to get data that conforms to the assumptions of the
models. What has happened is pure fraud. People interpret results without
admitting the flaws in their tests. That is fraud and it should be punished.
The professions of statistics and psychology are too corrupt to do anything
about such fraud, however, and I am peerless in my complaints.

As to bad data: it is not my duty to make the most of garbage. If people
wish to infer causation from correlations, then they must be more scientific
in their collection of data. Collecting samples that sparsely represent the
combinations of the extremes of causes, and refusing to admit this weakness in
the data, is at best ignorance and at worst evil. There is plenty of room
for incompetence and fraud in between these poles.

Dennis, even if CR does not work, you and your friends are passing up an
opportunity to look at data in ways you never have before. Anything to get
Bill Chambers. This tells me you do not see the beauty in science that
Poincare described.

Do you see that two normally sampled causes will generally fail to fill out
the corners of their crosstabulations? Does it bother you that so many
people make generalizations from data that so poorly elaborate the
phenomena of interest?
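That claim is easy to check by simulation (a sketch with hypothetical
cutpoints, numpy assumed): crosstabulate two independent normal variables into
low/middle/high bands at plus or minus 1.5 SD, and the four corner cells —
the joint extremes — stay nearly empty while the center cell holds most of
the sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)   # cause 1
y = rng.normal(size=n)   # cause 2, independent of x

# 3x3 crosstab: band each variable into low / middle / high at +/-1.5 SD
xi = np.digitize(x, [-1.5, 1.5])   # 0 = low, 1 = middle, 2 = high
yi = np.digitize(y, [-1.5, 1.5])
table = np.zeros((3, 3), dtype=int)
np.add.at(table, (xi, yi), 1)

corners = table[0, 0] + table[0, 2] + table[2, 0] + table[2, 2]
print(table)
print("joint extremes:", corners, "of", n)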

Bill



>
> i think this is the way science really works ... and how it should work
>
You are not talking about science. What you are describing is akin to what
Arthur Andersen did with ENRON's data. They lied and cheated and hurt a lot
of people who failed to ask for all the details, people who trusted the
shiny experts. Those experts were a bunch of yuppies who valued greed and
deception and thought truth was only relative. They are a drop in the bucket
of all the corruption in this world, and in science in particular. What you
are talking about may be the way statistics gets practiced, but it is not
science. Equivocation and spin have been understood since the days of Plato.
Some of your buddies on this list are bad scientists, and Dr Steve is
probably a parasite on children. I do not care how much money you guys make.
If you do not love truth, if you do not refuse to lie, if you sell your
souls for money and fame, you are not scientists.


Bill

>
>
>
>
> .
> .
> =================================================================
> Instructions for joining and leaving this list, remarks about the
> problem of INAPPROPRIATE MESSAGES, and archives are available at:
> .                  http://jse.stat.ncsu.edu/                    .
> =================================================================


