In talk.politics.drugs Szasz <[EMAIL PROTECTED]> wrote:
> Brian Sandle <[EMAIL PROTECTED]> wrote in message 
>news:<[EMAIL PROTECTED]>...
>> I have also put this on sci.stat.edu for any comment. I am not sure if 
>> that is the right group.

And since you mention SPSS I have added that group.

>> 
>> Szasz <[EMAIL PROTECTED]> wrote:
>> > Brian Sandle <[EMAIL PROTECTED]> wrote in message 
>news:<[EMAIL PROTECTED]>...
>>  [...]
>> >> Rank each subject 
>> >> in order for social factors, extent of cocaine use in pregnancy, extent of 
>> >> hyperactivity at age four or more of resulting child. Then go to some 
>> >> source like Castellan (recent edition) which  gives a formula for partial 
>> >> rank correlation with significance. It may not be a certain result, but if 
>> >> it shows something, then think out other means of investigation.
>>  
>> >    At some point, Brian, you have to consider the possibility that the
>> > people who did these studies used the best statistical methods
>> > available,

Or what they used may to some extent have been limited by the analysis 
packages available to them.

>> 
>> Understanding of statistics is not very good, even at PhD level, 
>> yourself, szasz:

>    I understand what is necessary. We'll see how much you know,
> yourself, Brian:

I am learning.

>> 
>>  and simply re-doing the study in different ways until they
>> > get a result you agree with is not the way scientists do business.

>    Which is what you suggested.

Not really. Looking at it the way I suggest could even rule out my 
theory.

I know what you are talking about: using all sorts of packages till one 
shows positive for your agenda, and publishing that.

No, I have not done any analysis on it, just seen what I believe could be 
wrong thinking of Proctor. So I suggest a test. The test will disprove 
rather than prove. If it does not disprove, then, as I said, other means 
of explanation or disproof should be sought.

>>  
>> >    By the way, a partial rank correlation (AKA Pearson product-moment
>> > correlation) is generally considered the basis of all multivariate
>> > statistical techniques.
>> 
>> A partial rank correlation is _not_ also known as Pearson product-moment 
>> correlation.

>    The basic point is that the Pearson product-moment correlation is
> what the partial rank correlation, regression techniques, factor
> analysis, and a whole host of multivariate statistics is based upon.

Yes and drinking is based on swallowing.

> The individual differences between techniques within the multivariate
> family of techniques is irrelevant for the purposes of this
> discussion, as I will explain:

>> 
>> First mistake: The Pearson product-moment correlation is not a rank 
>> correlation.

>    A rank correlation is a form of a regression technique,

Rather, regression can be used to calculate rank correlation.

 which is
> related to the Pearson product-moment correlation. In fact, *all*
> multivariate techniques are to a certain degree based on the Pearson R
> squared.

And all eating and drinking is based on swallowing.

>> 
>> The Pearson product-moment correlation would be used for linear data,

>    Yes indeed. There are also techniques related to the Pearson R that
> can be used to analyze data with non-linear models.

Maybe someone else will make a comment there; I might be a bit boring in 
reply. But for the moment, let's look at some further analysis which 
applies with both linear and non-linear data.

Now first you do a correlation. Your answers are a whole lot of 
numbers, each one showing the strength of correlation calculated for each 
pair of variables.

Next you do the partial correlations. The result for each pair may be the 
same, or it may be greater or less. If it is zero, then the original 
correlation may be considered `spurious', i.e. the relationship shown 
by the ordinary correlation is not one of cause and effect.
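As a concrete illustration of those two steps (made-up numbers, not the 
crack-study data; the variables and the hidden common cause are my own 
invention for the sketch):

```python
import numpy as np

# Made-up data: a hidden common cause `z` drives both `x` and `y`,
# so x and y correlate even though neither causes the other.
rng = np.random.default_rng(0)
z = rng.normal(size=200)
x = z + rng.normal(scale=0.5, size=200)
y = z + rng.normal(scale=0.5, size=200)

# Step 1: ordinary pairwise correlations (the "whole lot of numbers").
R = np.corrcoef([x, y, z])
r_xy, r_xz, r_yz = R[0, 1], R[0, 2], R[1, 2]

# Step 2: first-order partial correlation of x and y, controlling for z:
#   r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# The ordinary correlation comes out strong; the partial one collapses
# toward zero, flagging the x-y relationship as `spurious' in this sense.
```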


   Linkname: PA 765: Partial Correlation
        URL: http://www2.chass.ncsu.edu/garson/pa765/partialr.htm
       size: 290 lines

gives a good look at the processes.

*********
                            Partial Correlation

Overview

   Partial correlation is the correlation of two variables while
   controlling for a third or more other variables. The technique is
   commonly used in "causal" modeling of small models (3 - 5 variables).
   For instance, r[12.34] is the correlation of variables 1 and 2,
   controlling for variables 3 and 4. The researcher compares the
   controlled correlation (ex., r[12.34]) with the original correlation
   (ex., r[12] and if there is no difference, the inference is that the
   control variables have no effect. If the partial correlation
   approaches 0, the inference is that the original correlation is
   spurious -- there is no direct causal link between the two original
   variables because the control variables are either (1) common
   anteceding causes, or (2) intervening variables. Other patterns and
   inferences discussed below have to do with partial control and
   suppression effects.

   Partial correlation still requires meeting all the usual assumptions
   of Pearsonian correlation: linearity of relationships, the same level
   of relationship throughout the range of the independent variable
   ("homoscedasticity"), interval or near-interval data, and data whose
   range is not truncated.

   Partial correlation is common when there is only one control variable
   but is sometimes used when there are two or three. For large models,
   researchers use path analysis or structural equation modeling when
   data are near or at interval level, or use log-linear modeling for
   lower-level data. Newer versions of structural equation modeling
   software allow variables of any type on either side of the equation.
[...]
********
From the FAQ:
********
     * If partial correlation assumptions are met and measures are
       reliable and valid, does upholding a model through partial
       correlation analysis mean that the model is true? 
        No, because other models arranging the same variables in
            different ways may also be found to meet partial correlation
            tests. As with other methodologies, partial correlation
            analysis can rule models out, but not establish them as the
            one and only true models. In addition, of course, all the
            usual validity and reliability issues apply.
*********
So let me say what I am keen to rule out before allowing more weight to
the theory that cocaine does not affect offspring at age 4. (Remember for
the moment, of course, that the research under discussion relates only
to children up to age two. The older-age effect needs further
investigation, also.) 

It has been reported by Proctor that apparent effects of cocaine in
pregnancy on the resulting babies can be explained by social factors
which swamp any effect of cocaine in this case. Though he says, as a
toxicologist, that cocaine has potent effects on people - an enlarged
left side of the heart, for one.

Now I ask whether a hyperactive four-year-old, produced by a cocaine 
pregnancy, could place heavy demands on the family, reduce its income, 
drive the parents to drink, and actually cause the social factors. The 
correlation would be there, but the cause is in the opposite direction. 

So if you eliminate social factors you may be eliminating problem 
four-year-olds, and thereby the effects of pregnancy cocaine. So you get 
no correlation, which suits the agenda of certain sectors.
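My worry can be simulated. In this toy model (invented numbers, nothing 
to do with the actual study's data) hyperactivity is caused by cocaine 
exposure and in turn causes the social problems; controlling for the 
social variable then shrinks the cocaine-hyperactivity correlation, even 
though cocaine really is the cause:

```python
import numpy as np

# Toy model: cocaine -> hyperactivity -> social problems.
rng = np.random.default_rng(1)
cocaine = rng.normal(size=500)
hyper = cocaine + rng.normal(scale=0.5, size=500)
social = hyper + rng.normal(scale=0.5, size=500)

R = np.corrcoef([cocaine, hyper, social])
r_ch, r_cs, r_hs = R[0, 1], R[0, 2], R[1, 2]

# Partial correlation of cocaine and hyperactivity, "controlling" for
# the social variable, which here is a *consequence*, not a confounder.
r_ch_s = (r_ch - r_cs * r_hs) / np.sqrt((1 - r_cs**2) * (1 - r_hs**2))

# r_ch_s comes out well below r_ch: conditioning on a downstream
# variable has partly eliminated a genuine causal relationship.
```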

So I ask for comparison of the partial correlations with the correlations 
to throw some more light, maybe to negate my theory, maybe not. If not, I 
have not proven anything, but have suggested that more looking ought to 
be done.

Homework question: Would it disprove my theory if the partial correlation 
removing cocaine showed zero for any relation between hyperactivity and 
social factors?

More from the FAQ:
*********
   Is there a form of partial association for nominal and ordinal
   variables, analogous to partial r for interval variables?
   
   Yes, and the causal inference logic is the same. Consider the case of
       Yule's Q, a measure of association for nominal data which are
       dichotomies, such as the association of gender with having/not
       having an arrest record, as illustrated in the table below:
********
So we can go to rank correlation compared with partial rank correlation, 
too, if we decide our samples are not of normal (Gaussian) distribution.
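Under those assumptions, one common recipe (my sketch, not a quote from 
Siegel or Castellan) is to rank-transform the data and then plug the 
Spearman coefficients into the same partial-correlation formula. The 
significance figure, as discussed, is the harder part:

```python
import numpy as np

def ranks(a):
    """Replace values by their ranks (1 = smallest); ties ignored here."""
    return a.argsort().argsort().astype(float) + 1.0

# Skewed, non-Gaussian data: a lognormal driver and two noisy copies.
rng = np.random.default_rng(2)
z = rng.lognormal(size=100)
x = z + rng.lognormal(size=100)
y = z + rng.lognormal(size=100)

# Spearman correlations are Pearson correlations of the ranks.
R = np.corrcoef([ranks(x), ranks(y), ranks(z)])
rs_xy, rs_xz, rs_yz = R[0, 1], R[0, 2], R[1, 2]

# Partial rank correlation of x and y given z: the rank coefficients
# plugged into the usual first-order partial formula.
rs_xy_z = (rs_xy - rs_xz * rs_yz) / np.sqrt(
    (1 - rs_xz**2) * (1 - rs_yz**2))
```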

>> which are given scores and are found to be symmetrically spread above and
>> below an average value. So it very often should not be used.

>    Your mistake, among many: and yet a partial rank correlation is
> simply a more complicated version of the same technique, except that
> simply you're proposing to add in more variables to the model (to be
> "partialled out", hence the term "partial" correlation), and of course
> variables that already are obviously irrelevant and contaminating
> variables for the purposes of drawing valid conclusions from the crack
> baby study.

Perhaps someone else may wish to expand a bit more on what I have said 
above to explain to szasz.

>    The fact that you're suggesting a technique that is vague to many
> doesn't fool me: you're simply arguing that since you disagreed with
> the conclusions drawn and the results the conclusions were drawn from,
> you complain that the study essentially didn't allow for contaminating
> variables to swing the results towards your predetermined conclusion,
> i.e., that drugs are panapathogens.

I have explained that, but let me comment on the technique being vague to 
many. You mention below that SPSS is a very popular analysis program. It 
has Pearson's correlation, and Spearman's rank correlation, with 
significance. But not partial rank correlation with significance. Or it 
didn't some 7 years ago. And since a lot of social variables are not 
Gaussian in distribution, the rank correlations will be used. But since 
the partial rank correlation with significance was missing from the 
package, I feel that that may limit the subtlety of the analysis that 
researchers are doing.

>> Second mistake: The Pearson product-moment correlation is not a partial 
>> correlation.

>    Again, Sandle, the Partial Rank Correlation and the Pearson
> product-moment correlation are based on the same math, and are the
> same family of techniques.

>    The basic point, and your towering and continuing mistake: simply
> arguing for another, closely related statistical technique, and one
> less suited for drawing valid conclusions from the data, is not good
> science.

More comment from someone else please.

>> Though a table can be written up showing how the various  
>> variables correlate to one another, nothing is done mathematically to hold 
>> variables constant, which is the essence of partial correlation.

>    Again, the basic point, and your basic mistake: there is nothing to
> suggest anything except that current research suggests that the "crack
> baby" syndrome is a myth. You standing here and using piss-poor
> amateur statistical  jargon and bluster to argue against the raw
> empirical data is simply ludicrous.

I do not think so.

>> 
>> 
>> When data are not evenly spread a _rank_ correlation is thought to be more 
>> robust.

>    So, in essence, we began with a study that wanted to isolate the
> effects of crack on prenatal development. They isolate (e.g.,
> empirically and/or statistically control for) the effects of
> socio-economic variables, so that these variables do not contaminate
> whatever results they get regarding the "crack baby" effect after
> extraneous variables (i.e., variables not related to the
> pharmacological effects of the crack) are controlled for.

Explained above.

>    So, they do that, and now you want to argue for adding in
> additional extraneous variables that these trained scientists knew
> enough to control for, as these extraneous variables would naturally
> detract from their ability to draw conclusions from the data.

You have to `add in' variables to control for them.

>    Follow me so far? OK. So, we know they've done this (based on the
> abstract, and some second-hand info from Dr. Proctor), and yet you
> (Brian) want to argue that they massage the statistics by using
> partial rank correlations

`Massage', no; further analyse, yes.

> (for reasons you still haven't made clear),

I had explained before, in reply to Proctor, as I have repeated above.

> in the hopes that they get a result you might agree with.

In the hopes that mistakes are not being made.

>    Do you understand why I jeer at you?

I understand that this discussion started on a political newsgroup, in 
which a bit of fast talking needs analysis.

>> Data are not given scores, but just put in order from biggest to 
>> littlest. The Spearman rank correlation might then be used. That is not 
>> however a partial correlation. Its table shows how various things are 
>> correlated but does not mathematically hold any variable constant and 
>> so look deeper.

>    Again, you have given no rationale whatsoever for having
> researchers use this kind of technique. Whether correlated variables
> are ranked, held fixed or rotated, it doesn't matter.

Drink is always swallowed no matter what it is.

 The researchers
> knew which variables to control for in their statistical modelling,
> and you do not.

Should be simple enough to look up their paper and thoughts. Do you have 
access?

>> 
>> The next step, a _partial_ rank correlation, was dealt with in Siegel's
>> work on statistical method. But the technique was not advanced 
>> sufficiently to calculate any _significance_ figure.

>    This is all very nice and makes you sound very learned, but is
> essentially irrelevant jargon (again) to the basic point I am
> continuing to raise: the consensus of scientific data is that crack
> baby syndrome in any form is a myth.

They were trying to examine the scenario. Surely the analysis of the 
results then has an effect on consensus. The way you put it, the 
consensus is the agenda you want to be correct, so no more analysis 
should be done.

 Your inane bloviating about
> multivariate techniques you visibly know little about how to apply
> does not detract from a simple fact about science: if you don't like
> the results, do your own studies.

Where does that put theoretical scientists?

>> 
>> Later, Castellan re-did Siegel's book and introduced a significance 
>> calculation for partial rank correlation. He said it was not totally to 
>> be trusted, I think.
>  
>    And yet you suggest this technique with no visible rationale!
> Fascinating.

Though you may not be able to go totally by the significance figure, it 
must be another pointer when summed with other evidence.
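For what it is worth, the usual large-sample approximation (and it is 
only an approximation for rank data, which may be behind Castellan's 
caveat) tests a first-order partial correlation with a t statistic on 
n - 2 - k degrees of freedom, k being the number of variables partialled 
out:

```python
import math

def partial_corr_t(r, n, k=1):
    """Approximate t statistic for a partial correlation r from n cases,
    with k variables partialled out. Returns (t, degrees of freedom)."""
    df = n - 2 - k
    t = r * math.sqrt(df / (1.0 - r * r))
    return t, df

# Example: a partial (rank) correlation of 0.6 from 30 subjects,
# one variable partialled out.
t, df = partial_corr_t(0.6, n=30, k=1)
```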

>> 
>> When I was interested in the work of an MA psychology student I was
>> provided through the newsgroups with a VAX based program which would do
>> the calculations, though I did not find out how to use it on the VAX at
>> that stage.
>> 
>> What is available now? Linux or anything?

>    SPSS is the standard in social sciences here in the United States.

Yes, I see it is on Linux now. The more packages we look at the better.

>> 
>>  The idea that your suggestion is anything more
>> > than a distraction from the main point (e.g., that 'crack baby'
>> > syndrome is essentially a made-up syndrome) is laughable. March on.
>> 
>> I said `crack _baby_' is made up.

>    That's funny, you seem to be arguing for a very similar version of
> the crack baby syndrome, below. A cheap semantic-rhetorical ploy, I
> suppose.

By whom? Circular argument: "Crack pregnancy does not affect babies, 
therefore there is no such thing as crack babies, therefore there is no 
such thing as resulting crack four-year-olds, because to distinguish 
four-year-olds from babies is only semantic."

I call that more of szasz's fast political talk, which we are so used to.

 Anyways, I suppose its only a matter of time until science
> catches up with your advanced thinking on this matter and discovers
> all the things you appear to be so certain of.

Or helps to disprove my theories.

>> I am waiting for Proctor's reply. The 
>> problems do not show till _after_ babyhood, when the developing executive 
>> functioning is trying to call up the relevant areas of the brain for that 
>> time.

>    I've never heard of "developing executive functioning call(ing)
> up... areas of the brain." I'm sure you're trying to say something
> that appears intelligent, here.

There are a lot of organs in the body which are called up at stages of 
maturation. Some, like the thymus, are decommissioned. The brain is not 
just one organ; it is very complex, with all sorts of totally different 
areas with different receptors &c.


