On Mon, Aug 16, 2010 at 11:42 AM, G Money <gm0n3...@gmail.com> wrote:
>
> I don't necessarily have a problem with that...let the companies spend the
> time and money to do the research, and then let the FDA ensure that
> everything is on the up and up.

Personally, I think that all data related to a clinical trial should
be submitted to the FDA. Selective disclosure of data is an invitation
to gloss over the ugly bits, and the ugly bits are what end up killing
people. I think that good researchers with honest intentions can still
feel the push to say, "well, I think this one experiment was just done
wrong, and we didn't see it pop up in any of the other studies, so
we'll toss this one as an outlier." Companies should still be able
to make their case to the FDA about which studies are more important,
useful, etc., and why. But I think the FDA should have all the data.

> I'd be curious how, if...and this might be a big IF...if all 10 trials were
> on the up and up, why 9 would fail, and then suddenly 1 would succeed????
> Shouldn't they either all corroborate, or all refute....if all proper
> variables are accounted for???

Several reasons. One is that study design (for both good and bad
reasons) often changes from one study to another. These changes can be
very small and largely irrelevant, or they can be much more
substantial. One of the most frequent outcomes of a study is an
inconclusive result. You think you see a trend, but it isn't
statistically significant by the standard definitions. Yet, as a
researcher, you have a feeling that there is something going on there
and you just need to tweak things to try and bring it out. It might be
that your measurements weren't fine-grained enough to pick up the
effect, or maybe you think the protocol wasn't followed 100% and you
need to tighten it up. It could be lots of things. So you tweak the
study and do it over again. That's the right thing to do, but when
those studies never see the light of day, no one knows that there were
a whole lot of "didn't show any statistical difference" studies.

The other fundamental reason is basic statistics. When you run a study,
the results get tested against the null hypothesis: the idea that
there is nothing other than randomness going on in the system, that
what you did has no real effect. The test gives you a P-value, which
is the probability of seeing results at least as extreme as yours if
the null hypothesis were true. So if you run a test and get back a
P-value of 0.05, that means pure random variation would produce
results this strong about 5% of the time, even if the thing you are
testing had no inherent effect at all.
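
To put a rough number on that (my back-of-the-envelope arithmetic, not
a figure from any particular trial): if a drug truly does nothing and
each trial independently has a 5% false-positive rate, the chance that
at least one of 10 trials comes back "significant" is 1 - 0.95^10,
which works out to about 40%.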

As a result, if you run the same experiment again and again and again,
you are likely to see the result you want on one of the tests, even if
you don't see it on most of them, just because of inherent statistical
noise.
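
If you want to watch that happen, here's a quick Python sketch (mine,
not data from any actual trial; it assumes scipy is installed, and the
group sizes and trial count are just picked for illustration). It
simulates 10 trials where the "drug" and placebo groups are drawn from
exactly the same distribution, so there is no real effect, and then
checks how many still come back under p < 0.05:

import random
from scipy.stats import ttest_ind

NUM_TRIALS = 10   # mirrors the "10 trials" in the question above
GROUP_SIZE = 50   # arbitrary: 50 patients per arm

significant = 0
for trial in range(1, NUM_TRIALS + 1):
    # Both arms drawn from the same distribution: the "drug" does nothing.
    placebo = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    drug = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    _, p_value = ttest_ind(placebo, drug)
    flag = "  <-- looks 'significant'" if p_value < 0.05 else ""
    print(f"Trial {trial}: p = {p_value:.3f}{flag}")
    if p_value < 0.05:
        significant += 1

print(f"{significant} of {NUM_TRIALS} trials crossed p < 0.05 with no real effect")

Any single run might find nothing, but run it a few times and
"significant" trials pop up even though nothing is actually going on,
which matches the back-of-the-envelope number above.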

Hope that helps,
Judah
