Hello All.
I guess I should respond to Scott's comments point by point.
Mike, I had thought your very point was that, because most studies of
antidepressants aren't conducted in a strictly double-blind fashion
(because of medication side effects... although you didn't address active
placebo studies), we cannot draw clear-cut conclusions from them. But
Mike, you are now saying that we can conclude with confidence that
antidepressants have no treatment effect. One can't have things both
ways - if the studies are categorically "invalid" (not merely imperfect)
as you asserted in previous messages, then one can't draw conclusions
from them one way or the other. Mike, I don't follow your logic here.
>Since the drugs are for sale, the FDA thinks they work. By your
statement, "we cannot draw clear-cut conclusions from them", we should
logically
conclude that there is no evidence the drugs work. Since the FDA must
make a decision when a drug company makes an application, the FDA should
assume the null hypothesis until there is evidence to support a
treatment effect. My assertion that the drugs are ineffective comes from my
own personal observations of patients who don't get better but endorse
change on the measures. I admit that my personal observations of
depressed people are not a basis for generalization. However, this is
all I have, since none of the studies are properly blinded and valid.
I can't prove the negative. It is the burden of the drug companies to
prove there is an effect before we give them to patients. The drugs are
being given now as if the effect were proven.
Mike, you also never responded to my points or Jim Clark's questions
regarding your earlier claims that "all" of the dependent measures in
antidepressant studies come from either clients or therapists
themselves. When I pointed out (with references to meta-analyses) that
this assertion was false, you merely continued to reiterate your
previous points without acknowledging our criticisms.
>These were just examples of a general point. I will rephrase it: find
a dependent measure that is not influenced by expectation bias. They all
involve someone making a rating of a psychological construct or ratings
of behavior. All the people making the ratings are involved in the study
and influenced by expectations for treatment effectiveness. This
includes parents of children who are experiencing the side effects of
the drugs.
All the investigators have to do is study the expectation bias. Just
ask the subjects after the study is completed to indicate which condition
they thought they were in. This is rarely, if ever, done. Such studies
would go a long way toward explaining the role of cognition in treatment and
placebo. For humans, placebo is always a cognitive manipulation of
expectation.
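(A minimal sketch of how such a blinding check might be analyzed, assuming the
investigators had recorded each subject's post-study guess about their assignment.
The counts and variable names below are invented for illustration, not taken from
any actual study.)

# Hypothetical check of whether the blind held: did subjects guess their
# condition better than chance? Counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: actual assignment (drug, placebo); columns: subject's guess (drug, placebo)
guesses = [[70, 30],   # assigned drug:    70 guessed drug, 30 guessed placebo
           [35, 65]]   # assigned placebo: 35 guessed drug, 65 guessed placebo

chi2, p, dof, expected = chi2_contingency(guesses)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value means the guesses track actual assignment (the blind was broken),
# so expectation bias cannot be ruled out as an explanation of the outcome ratings.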
>Contrast this with a dependent measure that is mostly not influenced
by expectation bias: body weight. Psychologists who study obesity treatment
actually have a dependent measure that is very hard to manipulate by
expectation. If I have an expectation bias that I'm in treatment, it is
still
very hard to lose weight (don't we know). It is very easy to rate my
mood a point or two better on a self-report measure.
>A meta-analysis of 100 unblinded studies is a meta-analysis of 100
poorly designed studies. If every individual study is contaminated by the
same expectation bias, the meta-analysis just adds up that bias. The
meta-analysis should come to the conclusion: "Since none of the studies
were properly blinded, we cannot conclude that there is a treatment
effect." Instead, the possible effects of an expectation confound are
itemized and discussed at length. The lack of blinding is never measured
or considered. It is only in the context of many side effects and
treatment failures that issues like this even reach the surface.
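(To make that point concrete, here is a minimal simulation sketch, not from the
original post: if every study's observed effect equals a true effect of zero plus
the same expectation bias, pooling 100 studies recovers the bias, not the truth.
All numbers are assumed for illustration.)

import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0        # assume no real drug effect
expectation_bias = 0.3   # assumed inflation of ratings from a broken blind
n_studies, n_per_arm = 100, 50

# Each study's observed standardized effect = true effect + shared bias + sampling error
se = np.sqrt(2 / n_per_arm)
observed = true_effect + expectation_bias + rng.normal(0, se, n_studies)

pooled = observed.mean()   # simple equal-weight pooled estimate
print(f"pooled effect across {n_studies} studies: {pooled:.2f}")
# The pooled estimate converges on about 0.3 (the bias), not on the true effect of 0.
# Adding more unblinded studies makes the estimate more precise, not more valid.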
I have to confess that I'm finding this TIPS discussion regarding
antidepressant and therapeutic efficacy increasingly troubling. It
seems to be more of a discussion of ideology than science. It also
seems to be marked by the kind of dichotomous, categorical claims (e.g.,
studies of therapeutic efficacy are "invalid", antidepressants "have no
treatment effect," "there is nothing there," "ECT is pure behavior
therapy," "ECT is a punishment condition," "the Beck Depression
Inventory ... is not a measure of mood") that we would rightly criticize in
our students.
>This is just a veiled reference to my personal characterization of
study findings. My qualifiers are extreme because the research deficits in
this area are extreme. If all the studies are unblinded then none of
the studies are blinded. I don't have to say some studies are unblinded
because the truth is that all are unblinded. Blinding in these studies is
assumed rather than verified, and everyone behaves as if the studies are well
designed. Referring to ECT as a punishment condition is just something
you have never heard before. This is exactly the expectation under which ECT
is presented. It has the same expectation condition as hydrotherapy and
insulin shock: we will keep doing this to you until you endorse change
on the depression measure. After a few seizures, you see the light and
make the expected changes. This is a completely logical interpretation
of the mechanism of treatment that fits our current understanding of the
role of punishment in behavior therapy.
>If the variance on the Beck Depression Inventory is determined by an
extraneous factor like expectation, then the validity of the scale is
compromised. The test now becomes a measure of the extraneous factor
and not the construct it was designed to measure. This is not a new
idea. I recently published a paper on malingering in which much of the
variance of some of the WAIS subtests was predicted by malingering. Those
subtests became measures of the influence of a malingering strategy and
were no longer measures of the cognitive abilities they were designed to
measure.
http://www.learnpsychology.com/papers/mypapers/Williams_Malingering_Factor_ACN.pdf
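(A minimal sketch of the same validity argument, with invented data and hypothetical
variable names: when an extraneous factor such as believed treatment assignment
accounts for as much of a scale's variance as the construct itself, the score partly
measures that factor.)

import numpy as np

rng = np.random.default_rng(1)
n = 200
true_mood = rng.normal(0, 1, n)             # the construct the scale is meant to measure
believes_treated = rng.integers(0, 2, n)    # extraneous factor: expectation of improvement
score = 0.4 * true_mood - 1.0 * believes_treated + rng.normal(0, 0.5, n)

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"variance explained by the construct:         {r_squared(true_mood, score):.2f}")
print(f"variance explained by the extraneous factor: {r_squared(believes_treated, score):.2f}")
# When the second number rivals or exceeds the first, the scale's validity as a measure
# of the construct is compromised, the same logic as the WAIS/malingering example.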
Again, I am somewhat skeptical of many claims of strong
antidepressant efficacy myself, so I have no particular agenda in this
debate. But shouldn't we refrain from drawing extremely strong
conclusions from large, extremely complex bodies of literature that we all
agree are challenging to interpret, given various methodological limitations?
>They are not a challenge to interpret since they are all badly
designed. The field has drawn a conclusion of treatment effect when the
evidence is not there. The study defects are so pervasive that no one
comments on them.
I also worry that this discussion is mixing up epistemic with
ontological assertions. It's one thing to say "I think that studies of
antidepressant medication are inconclusive because of methodological
flaws (and that many people have overstated the strength of evidence for
their efficacy)" but another to say "It's clear that antidepressant
medications don't work." One is an assertion about the evidence for
claim X, the other is an assertion about the verisimilitude of claim X.
These are two entirely different assertions, and Mike wants to be able
to make both of them. I don't think he can.
>This is a repetition of your first comment. I refer to my response above.
In the end, I think it's hard for someone to deny my conclusions if the
research is, in fact, so badly designed. We are all taught to retain the
null hypothesis until there is convincing evidence. I think most people
on the list and others with whom I discuss these issues have not
been confronted with such a pervasive criticism of all psychological
treatment outcome research. The idea that human cognition interacts
with the research design is a novel concept for many researchers. It
does not appear in the research design courses.
My hope is that people in this area simply study blinded and unblinded
conditions. The interaction of human cognition with research design
and the validity of dependent measures is almost completely unstudied.
It sounds like a fascinating topic for psychologists to examine.
Mike Williams