Rich Ulrich <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>[ posted and e-mailed.]

Ditto.

>On Sat, 29 Dec 2001 16:46:10 -0500, [EMAIL PROTECTED] (Stan Brown)
>wrote:
>> Now we come to the part I'm having conceptual trouble with: "Have 
>> you proven that one gas gives better mileage than the other? If so, 
>> which one is better?"
>> 
>> Now obviously if the two are different then one is better, and if 
>> one is better it's probably B since B had the higher sample mean. 
>
>I want to raise an eyebrow at this earlier statement.

Hmm... Which "earlier statement" do you mean? If two means are 
different then one of them _must_ be larger than the other; that's 
how real numbers work. Can you explain your raised eyebrow a bit 
more specifically? Or is it just the word "proven", about which I 
comment below.

>  We should
>not overlook the chance to teach our budding statisticians:
>*Always*  pay attention to the distinction between random trials 
>or careful controls, on the one hand; and grab-samples on the other.
>[Maybe your teacher asked the question that way, in order to
>lead up to that in class?]

No; this was in a book of homework problems, which is pretty 
standard at the junior college where I teach. Specifically, it was a 
lengthy exercise in using Excel to do the sort of statistical tests 
the students normally do on a TI83.

>The numbers do  not  *prove*  that one gas gives better mileage;
>the mileage was, indeed, better for one gas than another -- for
>reasons yet to be discussed.  Different cars?  drivers?  routes?

All good points for discussion. But I wouldn't focus too much on 
that off-the-cuff word "prove". (I'm not being defensive since I 
didn't write the exercise. :-) My students did understand that 
nothing is ever proved; that there's still a p-value chance of 
getting the sample results you got even if you did perfect random 
selection and the null hypothesis is true. Maybe I'm being
UNDER-scrupulous here, but I think it a pardonable bit of sloppy
language.
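(A quick simulation -- my own sketch, not part of the book's exercise, 
with made-up mileage parameters -- shows what the students understood: 
even when the null hypothesis is true, a p-value below alpha turns up 
about alpha of the time.)

```python
# Sketch only: both "gas" samples come from the SAME distribution,
# so H0 is true, yet p < 0.05 still occurs in roughly 5% of trials.
# The mileage mean/sd here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
trials = 2000
rejections = 0
for _ in range(trials):
    a = rng.normal(30.0, 2.0, size=10)   # hypothetical mpg, gas A
    b = rng.normal(30.0, 2.0, size=10)   # hypothetical mpg, gas B
    _, p = stats.ttest_ind(a, b)         # two-tailed two-sample t test
    if p < alpha:
        rejections += 1

print(rejections / trials)  # close to alpha = 0.05
```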

>> But are we in fact justified in jumping from a two-tailed test (=/=) 
>> to a one-tailed result (>)?
>> 
>> Here we have a tiny p-value, and in fact a one-tailed test gives a 
>> p-value of 0.0001443. But something seems a little smarmy about 
>> first setting out to discover whether there is a difference -- just 
>> a difference, unequal means -- then computing a two-tailed test and 
>> deciding to announce a one-tailed result.
>
>Another small issue.  Why did the  .00014  appear?  

I added that for purposes of posting; the original exercise didn't 
have the students do a one-tailed test at all. It's just half the 
two-tailed p-value, as I'm sure you recognize.
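(To make the halving concrete with invented mileage numbers -- the 
book's data aren't reproduced here -- for the symmetric pooled t 
statistic, the one-tailed p-value is exactly half the two-tailed one.)

```python
# Sketch with made-up data, not the exercise's numbers.
import numpy as np
from scipy import stats

gas_a = np.array([27.1, 28.4, 26.9, 27.8, 28.0])  # hypothetical mpg
gas_b = np.array([29.3, 30.1, 29.8, 28.9, 30.4])  # hypothetical mpg

t, p_two = stats.ttest_ind(gas_a, gas_b)  # two-tailed by default
_, p_one = stats.ttest_ind(gas_a, gas_b, alternative='less')  # H1: mean A < mean B

print(p_one, p_two / 2)  # identical: one-tailed p is half the two-tailed p
```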

>In clinical trials, we observe the difference and then we do attribute
>it to one end.  But it is not the convention to report the one-tailed
>p-level, after the fact.  I think there are editors who would object
>to that, but that is a guess.  Also, for various reasons, our smallest
>p-level for reporting is usually  0.001.

Well, these two p-values are smaller than that: you're talking about a 
significance level of 0.1%, and these were 0.014% and 0.029%.

But my question was not about reporting a smaller p-value; it was 
about first establishing a two-tailed "difference" and then moving 
from that to declaring which side the difference lies on. I think 
A.G. McDowell has disposed of that, however.

-- 
Stan Brown, Oak Road Systems, Cortland County, New York, USA
                                  http://oakroadsystems.com/
"My theory was a perfectly good one. The facts were misleading."
                                   -- /The Lady Vanishes/ (1938)


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
