Here's the second re-post I promised. It was originally posted by
Michael Bailey of Northwestern University.

-Stephen

------------------------------------------------------------------------
Stephen Black, Ph.D.                      tel: (819) 822-9600 ext 2470
Department of Psychology                  fax: (819) 822-9661
Bishop's University                    e-mail: [EMAIL PROTECTED]
Lennoxville, QC           
J1M 1Z7                      
Canada     Department web page at http://www.ubishops.ca/ccc/div/soc/psy
------------------------------------------------------------------------

Forwarded message
-------------------------------------------------------------
The following message is from Bruce Rind. His email address is:
[EMAIL PROTECTED]


Mike,

     I wanted to forward to you some of my responses to Ray Fowler.
After the May 12 Family Research Council press conference, he emailed
me that congressmen were using two main methodological criticisms
to attack the study.  Paul J. Fink, he told me, sent these criticisms
to Dr. Laura in a letter.  She apparently relayed them to the
congressmen.
    The first criticism is that 60% of the data in our meta-analysis
came from a single study done 40 years ago that was flawed, making
our paper flawed.  The second is that about 38% of the studies we
included were "unpublished" which, they claim, invalidates the whole
study.
     Now that Congress has condemned our study and the APA has basically
given its blessing to this (and is congratulated in the resolution
for reversing course), I think it is important that fellow researchers
be aware of the outrageous invalidity of these two methodological
criticisms put out by Fink and his colleagues at the "leadership
council."  Fink in interviews for the Philadelphia Inquirer has
called our study "perverse" and "terrible"; David Spiegel, his
collaborator, said in a New York Times interview that our study had serious
methodological flaws and that we "used meta-analysis
the way a drunk uses a lamppost--for support, rather than
illumination."  The "60%" and "unpublished" arguments are central
to their attacks.
    You may put our refutations--and I mean refutations, not merely
answers--on your listservs.
    We are interested in hearing from other researchers about their
reactions to the information we provide below and any other comments
they might have regarding the methodology of our review.

              Here are the refutations that we sent to
              Fowler two months ago in May:
      ______________________________________________________

The following 15 lines (numbered) are a quote from Dr. Paul Fink
in a letter to Dr. Laura "critiquing" our meta-analysis.  Below
these lines, we debunk his critique.

1  Of the 59 studies included in the analysis, over 60% of the data is (sic)
2  drawn from one single study done over 40 years ago.
3  The authors loaded their analysis with data involving primarily mild
4  adult-child interactions involving no physical contact.  Rather than
5  focusing on child sexual abuse, the 1956 study on which they largely relied,
6  asked about college student's encounters with sexual deviants during
7  childhood and adolescence, usually in public places.  Based on the nature of
8  these mild experiences, it is not surprising that the students described
9  little permanent harm.  Nonetheless, the authors of the Rind study
10 generalized these findings to all sexual abuse.
11 It is as if a study that purports to examine the effects of
12 being shot in the head contained a majority of cases in which
13 the marksman missed.  Such research might demonstrate that
14 being shot in the head generally has no serious or lasting
15 effects.

We show below that Fink's criticisms are completely specious.  His
claim that, of the 59 studies we included, over 60% of the data is
(sic) drawn from one single study done over 40 years ago (lines 1,2)
is blatantly false.  His claims that we "loaded" our analysis with
these data (line 4), that we "largely relied" on these data (line 5),
and that we generalized these data to all sexual abuse (line 10)
are similarly blatantly false.  Fink is referring to a study by
Landis (1956).  Here are the facts:

(1) The Landis study was NOT used in any of our meta-analyses, which
     were the primary and most important analyses in the study, from
     which we concluded that sexually abused students were only
     slightly less well adjusted than control students.

(2) We only used the Landis data for self-reported reactions and
     effects.  Regarding self-reported reactions, data from 9 female
     and 9 male samples were combined (see Table 7, p. 36) to get
     overall reactions.  The Landis data made up 35% of the female
     data and 30% of the male data (33% of male and female combined).
     The Landis data were the most negative of all studies; if we had
     been trying to doctor the results in favor of positive reactions,
     we would have calculated the unweighted means for reactions.
     Instead, we used weighted means, giving substantial weight to
     the Landis study.  Below we present the means as presented versus
     the means WITHOUT the Landis study to show how inclusion of the
     Landis study negatively biased the means, which contradicts
     Fink's assertion that we "loaded" our analysis:

         as presented in paper (%)                   WITHOUT Landis (%)
           pos    neut     neg                     pos    neut    neg
    women   11     18       72            women    16      18      66
    men     37     29       33            men      50      25      24

     The table on the right shows the effect of removing Landis, which
     clearly goes against Fink's argument of "loading" to minimize
     reports of harm.  If we had included Landis, but reported
     UNWEIGHTED means, then we would have gotten the following:

             pos    neut    neg
    women     14     18      68
    men       43     27      30

     This shows that using weighted means, as we did, and including
     the Landis study, as we did, gave the highest values of overall
     negative reactions, which contradicts the "loading" imputation.
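     To make the weighted-versus-unweighted distinction concrete, here
     is a small illustrative calculation.  The sample sizes and
     percentages below are placeholders, not the actual Table 7 values;
     the point is only that a weighted mean gives a large-N study such
     as Landis more influence than an unweighted mean does:

         # Illustrative only: the Ns and percentages are hypothetical
         # placeholders, NOT the actual Table 7 (p. 36) data.
         samples = [
             ("Landis",  500, 80.0),   # large N, most negative (placeholder)
             ("Study B", 100, 55.0),
             ("Study C",  60, 40.0),
         ]

         unweighted = sum(p for _, _, p in samples) / len(samples)
         weighted = (sum(n * p for _, n, p in samples)
                     / sum(n for _, n, _ in samples))

         print("unweighted: %.1f%%" % unweighted)  # each study counts equally
         print("weighted:   %.1f%%" % weighted)    # the large Landis N dominates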

(3) In the self-reported effects analyses, we reviewed the 6 male and
     5 female samples that had this information.  Here, the Landis data
     made up 53% of the total N for males and 68% of the total N for
     females (combined = 63%).  Thus, this may be what Fink was
     referring to when he claimed that over 60% of the data is (sic)
     drawn from one single study (see Table 8, p. 37).
         We first examined self-reported negative effects on subjects'
     current sex lives or attitudes.  For males we noted, of the 5
     samples that had data, the percent of reported negative effects
     ranged from 0.4% (Landis) to 16% (Condy).  If we had been
     trying to "load" the overall mean, we would have used the
     weighted mean to give more weight to Landis' very low percentage.
     But we did not do this; instead we used the UNWEIGHTED mean,
     which yielded 8.5% negative reports (using the weighted mean
     to take advantage of Landis' low percentage would have yielded
     4.4%).  If we merely dropped Landis' study, the overall negative
     mean would change trivially from 8.5% to 10.5%.
         In the case of females for negative effects on current sex
     lives or attitudes, only two samples had data: 2.2% (Landis) and
     24% (Fritz et al.).  We gave the UNWEIGHTED mean of 13%, when the
     weighted mean of 3.8% would have "loaded" the results.
         Next, we considered lasting general negative effects.
     Those based on males came from only 3 samples (Fishman 27%,
     Landis 0%, and West & Woodhouse 0%).  We did not give a mean.
     For females, 3 samples had data of lasting effects (Hrabowy 25%,
     Nash & West 20%, and Landis 3%).  We did not give a mean.  What
     we did do was to conclude properly that lasting negative self-
     reported effects occurred for only a minority of students--a
     conclusion that holds INDEPENDENTLY of inclusion of the Landis
     data.
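     As a concrete check on the female "current sex lives or attitudes"
     figures above, the unweighted mean follows directly from the two
     reported percentages (this sketch does not repeat the Table 8 Ns,
     so the weighted mean is only described in a comment):

         # The two percentages are the ones quoted above; the Ns are not
         # repeated here.
         landis_pct, fritz_pct = 2.2, 24.0

         unweighted = (landis_pct + fritz_pct) / 2
         print("unweighted: %.1f%%" % unweighted)  # ~13%, the figure reported

         # A weighted mean would be pulled toward Landis' 2.2%, since the
         # Landis sample supplies about 68% of the female N (Table 8),
         # which is how the 3.8% figure mentioned above arises.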

(4) Fink's point about "loading" our analysis with "primarily
     mild adult-child interactions involving no physical contact"
     (lines 3,4) is also false.  We included all the studies that
     were available at the time, 16 of which included only cases
     of physical contact.  We examined in our meta-analysis whether
     CSA-symptom relations varied as a function of contact vs.
     non-contact CSA.  They did not (see p. 33).  Thus, we were
     not trying to "load" the data, as Fink imputes.  Moreover, we
     established that abuse severity was the same in the college
     samples as in national probability samples (see Table 1, p. 30).

(5) Fink's analogy to being shot in the head versus having the
     marksman miss shows that he is not well read in the child
     sexual abuse literature, in which it has often been claimed
     that non-contact CSA can be just as traumatizing as contact
     CSA.  Thus, it was completely appropriate to examine the
     Landis data.  Fink's analogy is particularly poor, given
     recent school shootings: is Fink implying that the high
     school students in Littleton who narrowly escaped being hit
     by bullets will have no lasting effects?

(6) IN SUMMARY: First, the Landis data played no role in the MOST IMPORTANT
     analyses in the article, which were the meta-analyses, from which
     we derived our most important conclusions.  Second, for self-reported
     reactions, we analyzed the Landis data, which were the most negative
     of all studies, in such a way as to give them maximum impact, which
     contradicts imputations of "loading" our analyses.  Third, for
     self-reported effects, we analyzed the Landis data, which were less
     negative than most other studies, in such a way as to minimize
     their overall impact, which again contradicts Fink's assertion of
     "loading" the data.

(7) CONCLUSION: Fink misrepresented how we analyzed the Landis data.
     Above, we showed how we analyzed the Landis data to do just the
     opposite of "loading" them, as Fink has wrongly characterized it.
     Further, the section of the paper to which Fink is referring
     constitutes relatively minor analyses; the major part of the
     analyses and conclusions in the paper come from the meta-analyses,
     to which Landis's data are completely irrelevant.  Due to Fink's
     misrepresentation of our analyses and his feeding this
     misrepresentation to Dr. Laura and ultimately to the Congress
     with all the grave consequences of media sensationalism and
     political pandering, the question should now turn to why he
     has done this.

In conclusion, we add that our handling of the Landis data MAXIMIZED
the reporting of negative outcomes, rather than MINIMIZING it.  This
is the EXACT OPPOSITE of what our critics have claimed.

  ____________________________________________________________

The next email is our response to the claim that about 38% of the
studies we used were "unpublished"--of course, calling doctoral
dissertations "unpublished" is debatable because they are part of the
public record, being available at the Library of Congress, etc.
The criticism is that these studies were never subjected to peer
review or published, which invalidates our meta-analysis:


    We included 36 published studies along with 23 unpublished studies
(21 doctoral dissertations and 2 master's theses).  The critics cite this
information from p.27, but then deceptively do NOT cite the follow-up
information on p.34, in which we statistically compared results from
the published and unpublished studies.  In comparing the mean effect
sizes (i.e., associations between CSA and symptoms) of the two groups,
we found them NOT to be statistically significantly different at the
conventional .05 level.  The mean published versus unpublished effect
sizes were r=.11 and r=.08, respectively, which are certainly NOT
different in a practical sense.  For comparison, the mean effect size
in national probability samples was r=.09, with which the unpublished
data were completely consistent.
     Moreover, the critics fail to mention our findings regarding the
homogeneity (i.e., consistency) of the effect sizes across the studies
(see p.31).  Of the 54 effect sizes meta-analyzed, all but 3 were
consistent with the mean effect size of r=.09.  The three outliers
were all published studies.  Thus, all unpublished studies were
consistent with the overall trend, demonstrating that they were in
no way anomalous and that they in no way biased the overall results.
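     For readers less familiar with these procedures, the sketch below
shows one standard way to compare group mean effect sizes (via Fisher's
z transform) and to test homogeneity with a Q statistic.  It is a
generic illustration with made-up r and n values, not a re-analysis of
the 54 effect sizes reported above:

    import math

    # Generic illustration: (a) compare mean effect sizes of published
    # vs. unpublished studies, (b) test homogeneity with a Q statistic.
    # The r and n values are made up, NOT taken from the article.
    studies = [
        (0.12, 300, True),    # (effect size r, sample n, published?)
        (0.10, 150, True),
        (0.05, 200, False),
        (0.09, 120, False),
    ]

    def mean_r(items):
        # Weighted mean effect size via Fisher's z, weights n - 3.
        ws = [(n - 3, math.atanh(r)) for r, n, _ in items]
        zbar = sum(w * z for w, z in ws) / sum(w for w, _ in ws)
        return math.tanh(zbar), zbar

    r_pub, _ = mean_r([s for s in studies if s[2]])
    r_unpub, _ = mean_r([s for s in studies if not s[2]])
    print("mean r, published:   %.3f" % r_pub)
    print("mean r, unpublished: %.3f" % r_unpub)

    # Homogeneity: Q = sum of (n_i - 3)*(z_i - zbar)^2, compared to a
    # chi-square distribution with k - 1 degrees of freedom.
    _, zbar_all = mean_r(studies)
    Q = sum((n - 3) * (math.atanh(r) - zbar_all) ** 2
            for r, n, _ in studies)
    print("Q = %.2f on %d df" % (Q, len(studies) - 1))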
     Furthermore, the doctoral dissertations were generally very well
done studies, often better than published studies, because they often
included more measures and better designs, reflecting the supervision of a
group of university professors with PhD's.
    Including unpublished studies is STANDARD practice in conducting
meta-analyses.  Any good meta-analyst attempts to locate unpublished
studies relevant to the issue he or she is reviewing.  This is because
of the "file drawer" problem--i.e., academic journals tend to publish
mainly studies with significant results;
consequently much research on a phenomenon that comes up with nonsignificant
results may go unpublished regardless of the research quality (and the
research quality of the dissertations was generally quite good).  Thus,
as indicated by the file drawer problem, including "unpublished" doctoral
dissertations in all likelihood INCREASED, rather than decreased,
the validity of our overall results.
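     As an aside, one common way to quantify the file drawer problem is
Rosenthal's "fail-safe N"--the number of unretrieved null studies it
would take to overturn a combined result.  The sketch below is purely an
illustration of that concept (it is not a calculation from the article),
with made-up Z values:

    # Illustration only: Rosenthal's fail-safe N, with hypothetical
    # per-study Z scores that are NOT taken from the article.
    def fail_safe_n(z_values, alpha_z=1.645):
        # Number of unretrieved null (Z = 0) studies needed to drop the
        # combined Stouffer Z below the one-tailed .05 criterion.
        total = sum(z_values)
        return (total ** 2) / (alpha_z ** 2) - len(z_values)

    zs = [2.1, 1.8, 2.5, 1.2]          # hypothetical per-study Z scores
    print(round(fail_safe_n(zs), 1))   # null studies needed to overturn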
     Finally, the 36 published studies alone make this review as extensive
as, or more extensive than, previous meta-analyses on CSA (e.g., Jumper
had 26 studies, Neumann et al. had 38 studies).  In terms of assessing
nonclinical samples, our 36 published studies are the most by far ever
employed (only about half of Jumper's and Neumann et al.'s studies were
nonclinical).

SUMMARY: The critics are selective in what information they cite from
our article.  They claim we used a large percentage (38%) of unpublished
studies, but don't bother to mention that unpublished results are
consistent with published results, both statistically and practically.
They also fail to mention that the unpublished studies were almost all
doctoral dissertations that had to go through the rigorous process of
review by groups of university professors with PhD's.
