Forgot to mention that the really difficult part is correctly figuring out
the range of those results. A good, well-controlled study will have a very
narrow range. A study with problems (reliability, sample size, etc.) will
have a very wide range. Another way to look at it: if the range of
differences encompasses 0 by any substantial amount, it most likely means
the differences are not meaningful.
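A back-of-envelope version of that check, in Python, with made-up numbers
(not from any real study):

    import math

    # Hypothetical group summaries (invented for illustration)
    mean_a, sd_a, n_a = 5.2, 1.9, 45
    mean_b, sd_b, n_b = 4.4, 2.1, 45

    diff = mean_a - mean_b
    # Standard error of the difference between two independent means
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)

    # Rough 95% range (1.96 is the large-sample z cutoff)
    low, high = diff - 1.96 * se, diff + 1.96 * se
    print("difference = %.2f, 95%% range = (%.2f, %.2f)" % (diff, low, high))

    # The eyeball test: a range that straddles 0 by a substantial
    # amount says the difference probably isn't meaningful.
    print("straddles zero" if low < 0 < high else "clear of zero")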

Speaking of such, I'm prepping a statistical criticism of the latest book
by Charles Murray, author of The Bell Curve. Want to join in?


On Wednesday, February 15, 2012, Larry C. Lyons <larrycly...@gmail.com>
wrote:
> You are not the only one. On my desk at home is a notebook with all my
> notes for the next version of my meta-analysis application. 150 pages and
> counting - most of which are botched formulae for calculating statistical
> power, effect sizes, and converting obtained probability values to effect
> sizes. Makes me wish at times I'd stayed with single case designs.
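For the curious, the most common of those p-to-effect-size conversions is
the r = Z/sqrt(N) approximation. A quick Python sketch, plugging in the
p = 0.011 and 90 people that come up later in this thread (illustrative
only):

    from statistics import NormalDist
    from math import sqrt

    def p_to_r(p, n):
        """Rough two-tailed p value to effect size r, via r = Z / sqrt(N)."""
        z = NormalDist().inv_cdf(1 - p / 2)  # Z equivalent of the p value
        return z / sqrt(n)

    print(round(p_to_r(0.011, 90), 3))  # about 0.27, shy of a medium effect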
>
> 10 words or less? That is really difficult. Can I go for 30?
>
> But you've essentially got the idea. I left out a lot, range estimation
> and correction for error and that sort of thing, but yes.
>
> On Wednesday, February 15, 2012, Dana <dana.tier...@gmail.com> wrote:
>>
>> What's not really -- the meaning of standard deviations? If so, yeah,
>> you're right, I think, but what Maureen and I said is a ... ok,
>> 10-words-or-less version.
>>
>> In this case p = 0.011, so theoretically, if they did everything else
>> right, these results should replicate 99% of the time, and fail to
>> replicate 1% of the time.
>>
>> I realize that it's not a given that the 1% is random or that it won't
>> occur the next time you repeat the experiment, but I think that is a
>> rather fine distinction for our purposes. Kinda like the difference
>> between Springfield and Tyson's Corner, as seen from California, yanno?
>> If I don't have that right then fine, tell me, but if you're going to
>> crank up your statistical powers I'd rather hear an explanation of that
>> leave-one-out thing they did a thousand times, because that part I do
>> not understand at ALL.
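On the leave-one-out thing: it's less exotic than it sounds. You re-run
the whole analysis N times, dropping a different observation (or study)
each time, and watch how much the answer jumps around. Toy version in
Python, with invented data:

    data = [4.1, 5.3, 4.8, 5.0, 12.6, 4.7]  # made-up scores, one outlier

    # Recompute the mean with each point left out in turn
    loo_means = [sum(data[:i] + data[i + 1:]) / (len(data) - 1)
                 for i in range(len(data))]

    print([round(m, 2) for m in loo_means])
    # If dropping one point (here the 12.6) moves the estimate a lot,
    # the result is leaning hard on a single observation.

If they did it a thousand times, presumably they had about a thousand
things to drop, or were resampling; it's the same robustness idea either
way.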
>>
>> On Wed, Feb 15, 2012 at 6:21 PM, Larry C. Lyons <larrycly...@gmail.com>
>> wrote:
>>
>>>
>>> Not really. It depends on the stats that are used. When looking at
>>> statistical results, the way to interpret statistical significance is
>>> as follows. Let's say the researchers found the two groups showed a
>>> significant difference of p < 0.05. This means that if you replicated
>>> the study an infinite number of times, 95% of these results would fall
>>> very close to the difference found in the first study. How meaningful
>>> that spread is depends on the standard error of the studies, and other
>>> factors. It also means that in order to show a significant difference
>>> with a smaller sample you'd need a much larger difference to achieve
>>> statistical significance.
>>>
>>> So you can make very accurate predictions based on fairly small
>>> samples. It all depends on the statistical power of your experiment.
>>> I'm too burned out to really discuss it now, but if interested
>>> Wikipedia has a pretty good explanation of it -
>>> http://en.wikipedia.org/wiki/Statistical_power
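To put numbers on the power point, here's the standard
normal-approximation sample-size formula for a two-group comparison,
sketched in Python (hypothetical effect sizes, 80% power, two-tailed
alpha of .05):

    from statistics import NormalDist
    from math import ceil

    def n_per_group(d, alpha=0.05, power=0.80):
        """Approximate n per group to detect standardized effect size d."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed alpha cutoff
        z_b = NormalDist().inv_cdf(power)          # cutoff for chosen power
        return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

    # Cohen's rough small / medium / large benchmarks
    for d in (0.2, 0.5, 0.8):
        print("d = %.1f: about %d per group" % (d, n_per_group(d)))

Which is the flip side of the small-sample point below: 90 people split
two ways has decent power only if the true effect is medium-to-large.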
>>>
>>> On Wednesday, February 15, 2012, LRS Scout <lrssc...@gmail.com> wrote:
>>> >
>>> > The sampling of 90 people is really really small.
>>> >
>>> > On Wed, Feb 15, 2012 at 7:29 PM, Dana <dana.tier...@gmail.com> wrote:
>>> >
>>> >>
>>> >> feel free to run away, Sam, but you still haven't shown me any
>>> >> basis at all for the crap you've been talking.
>>> >>
>>> >> On Wed, Feb 15, 2012 at 4:18 PM, Sam <sammyc...@gmail.com> wrote:
>>> >>
>>> >> >
>>> >> > I give up and feel the fool for not heeding this advice sooner:
>>> >> >
>>> >> > Don’t argue with idiots. They drag you down to their level and
>>> >> > beat you with experience.
>>> >> >
>>> >> > On Wed, Feb 15, 2012 at 7:07 PM, Dana <dana.tier...@gmail.com>
>>> >> > wrote:
>>> >> > >
>>> >> > >>
>>> >> > >> Yes it is. It's the same study done three times. Two people,
>>> >> > >> 90 people and 28 people.
>>> >> > >>
>>> >> > >
>>> >> > > Ah, here's the heart of the problem. No, Sam, it isn't. It's --
>>> >> > > I'd call it two studies and an experiment I guess -- that tested
>>> >> > > the same hypothesis. According to your nomenclature here, all
>>> >> > > trials for the same drug are a single study. And mutually
>>> >> > > responsible for one another's methodology. And, according to
>>> >> > > you, everything anyone remotely affiliated with them may have
