Re: please help

2001-06-13 Thread S. F. Thomas

Kelly wrote:
 
 I have the gage repeatability & reproducibility (gage R&R) analysis
 done on two instruments. What hypothesis test can I use to test
 whether the repeatability variances (expected sigma values of
 repeatability) of the two instruments are significantly different
 from each other, or to say that one has a lower variance than the
 other?
 Any insight will be greatly appreciated.
 Thanks in advance for your help.

One approach is to form the likelihood function in each case and to
eliminate the nuisance parameters (the means) by marginalization.
Although it is well known that marginalization by maximization will
give misleading answers for both the location and precision of your
estimate of the variances, I have shown how another method, based on
marginalization by the rule of product-sum, avoids the problems known
to exist with the former (see _Fuzziness and Probability_, ACG Press,
1995). This method also avoids the assumptions of the Bayesian
approach -- effectively a method of marginalization by integration --
which have been considered and rejected, with good reason in my
opinion, by those of the classical school.

The product-sum method may be implemented relatively easily within an
extensible stat package such as R, and I would be happy to apply my
implementation of it to your problem if you would send me the two
datasets. Essentially, once the nuisance parameters (the one or more
means) are eliminated, what is left in each case is the (marginal)
likelihood function of the variance. One could then compare directly
the plots of the two variance marginal likelihoods and, if need be,
the likelihood function of their difference, to see how different it
is from zero.

This is not a classicist's answer, but tests of hypothesis and all
that can be obviated if the likelihood function can be directly
manipulated in the way I describe. This has been the whole point of
the Bayesian method, except of course for the inadequate
justification provided not only for its insistent subjectiveness, but
also for treating model parameters as though they were random
variables in their own right. Hope this is helpful.
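
For a rough picture of what the comparison looks like, here is a
minimal R sketch. It does not implement the product-sum rule referred
to above; as a stand-in it uses the standard residual (REML-type)
marginal likelihood of sigma obtained after eliminating each sample's
mean, and the data vectors are synthetic placeholders for the two
instruments' repeatability measurements.

    ## Sketch only -- not the product-sum method described above.
    ## Plots the residual (REML-type) marginal likelihood of sigma for
    ## two samples after eliminating each sample's mean.  x1 and x2 are
    ## synthetic stand-ins for the two instruments' repeatability data.
    marg.loglik <- function(sigma, x) {
      n  <- length(x)
      s2 <- var(x)                      # sample variance, divisor n - 1
      -(n - 1) * log(sigma) - (n - 1) * s2 / (2 * sigma^2)
    }

    set.seed(1)
    x1 <- rnorm(30, mean = 10, sd = 0.8)   # instrument 1 (placeholder)
    x2 <- rnorm(30, mean = 10, sd = 1.2)   # instrument 2 (placeholder)

    sigma <- seq(0.3, 2.5, length.out = 400)
    ll1 <- sapply(sigma, marg.loglik, x = x1)
    ll2 <- sapply(sigma, marg.loglik, x = x2)

    ## Rescale so each curve peaks at 1, then overlay the two curves.
    plot(sigma, exp(ll1 - max(ll1)), type = "l",
         xlab = "sigma", ylab = "relative marginal likelihood")
    lines(sigma, exp(ll2 - max(ll2)), lty = 2)
    legend("topright", legend = c("instrument 1", "instrument 2"),
           lty = 1:2)

The degree to which the two curves overlap gives an informal picture
of how well the data separate the two repeatability sigmas.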

Regards,
S. F. Thomas


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: please help

2001-06-11 Thread Rich Ulrich

On 10 Jun 2001 07:27:55 -0700, [EMAIL PROTECTED] (Kelly) wrote:

 I have the gage repeatability & reproducibility (gage R&R) analysis
 done on two instruments. What hypothesis test can I use to test
 whether the repeatability variances (expected sigma values of
 repeatability) of the two instruments are significantly different
 from each other, or to say that one has a lower variance than the
 other?
 Any insight will be greatly appreciated.
 Thanks in advance for your help.

I am not completely sure I understand, but I will make a guess.

There is hardly any power for comparing two ANOVAs that are
done on different samples, unless you make strong assumptions
about the samples being equivalent in various respects.

If the ANOVAs are on the same sample,
then a Chow test can be used on the improved prediction
if one hypothesis consists of an extra d.f. of prediction.
If the ANOVAs are on separate samples, I wonder if you could
compare the residual variances by the simple variance-ratio
F-test -- well, you could do it, but I don't know what arguments
should be raised against it for your particular case.
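
If you do try the variance-ratio route, the F-test is one line in
base R. A minimal sketch, with hypothetical vectors of repeat
measurements of a single reference part by the two instruments (and
the usual caveat that this F-test is sensitive to non-normality):

    ## Variance-ratio F-test in base R.  instrA and instrB are
    ## hypothetical repeat measurements of one reference part.
    instrA <- c(10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.04)
    instrB <- c(10.10, 9.88, 10.07, 9.93, 10.12, 9.90, 10.06, 9.95)

    var.test(instrA, instrB)                    # H0: equal variances
    var.test(instrB, instrA, alternative = "greater")  # var(B) > var(A)?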

There are criteria resembling the Chow test that are used less
formally for incommensurate ANOVAs (not the same predictors)
 - Akaike's criterion (AIC) and others.

If your measures are done on the same (exact) items, you
might have a paired test: count on how many of the measurements
instrument A gives the closer value.

Finally, if you can do a bunch of separate experiments, you
can test whether  A  or B  does better in more than half of them.
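
Both of the last two suggestions amount to a sign test, which is a
one-liner in R with binom.test; the counts below are purely
hypothetical:

    ## Paired case: of 25 items measured by both instruments, suppose
    ## instrument A gave the closer value on 18 (hypothetical count).
    binom.test(18, 25, p = 0.5)

    ## Separate experiments: suppose A "wins" 8 of 10 (hypothetical).
    binom.test(8, 10, p = 0.5)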

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html





Please help

2001-05-04 Thread Adil Abubakar

My name is Adil Abubakar and I am a student seeking help.  I have a
question; if anyone can help, please respond to
[EMAIL PROTECTED]

Person A did research on a total of 4500 people and got the following
results:

Q. 1.  How many hours do you spend on the web?

       0-7    8-15   15+
       18%    48%    34%

Q. 2.  Do you read a privacy policy before signing on to a web site?
       The answers were:

       1 = Strongly agree   2 = Agree   3 = Neutral
       4 = Disagree         5 = Strongly disagree

       9%    17%    20%    32%    22%   respectively

Another person asked the same questions of 100 people and got the
same results in % terms.  Can it be shown via CI that the result is
consistent with the expectations created by the previous survey?
Also, can it be argued that the subjects have been subjected to the
questions before?

Can it be asserted with statistical significance that, if the survey
is repeated on at least 100 people, the result will be in the same
proximity as the above survey?

Any help y'all can provide will be appreciated; I just need the
different methodologies.

   Thanking you in anticipation

   Adil Abubakar
[EMAIL PROTECTED]








Re: Please help

2001-05-04 Thread Donald Burrill

I rather think the problem is not adequately defined;  but that may 
merely reflect the fact that it's a homework problem, and homework 
problems often require highly simplifying assumptions in order to be 
addressed at all.  See comments below.

On Fri, 4 May 2001, Adil Abubakar wrote:

 My name is Adil Abubakar and I am a student and seek help.  snip
   if anyone can help, please respond to [EMAIL PROTECTED]
 
 Person A did research on a total of 4500 people and got the following
 results:

 Q. 1.  How many hours do you spend on the web?
 0-7 8-15  15+
 18% 48%   34%

 Q. 2.  Do you read a privacy policy before signing on to a web site?
 
  1=Strongly Agree  2=Agree  3=Neutral  4=Disagree  5=Strongly disagree
        9%            17%       20%        32%            22%

If this were a research situation, or intended to reflect practical 
realities, there would also be information about the relationship between 
the answers to Q. 1 and the answers to Q. 2.  This information might be 
in the form of a two-way table of relative frequencies, or (with suitable 
simplifying assumptions on the variables represented by Q.1 and Q.2) as a 
correlation coefficient.  Without _some_ information about the joint 
distribution, I do not see how one can hope to address the questions 
posed below.
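
To make the point concrete: with a real two-way table in hand, one
could examine the Q.1-by-Q.2 relationship directly. The cell counts
below are invented purely for illustration (the marginal percentages
reported above do not determine them); the test of independence is
standard in R:

    ## Illustration only: these joint counts are made up; the reported
    ## marginals do not determine them.  With real joint data one could
    ## test independence of Q.1 and Q.2 like this:
    tab <- matrix(c(20, 35, 40,  60,  45,    # 0-7 hours
                    45, 80, 95, 150, 110,    # 8-15 hours
                    30, 55, 65, 130,  90),   # 15+ hours
                  nrow = 3, byrow = TRUE,
                  dimnames = list(hours  = c("0-7", "8-15", "15+"),
                                  policy = c("SA", "A", "N", "D", "SD")))
    chisq.test(tab)

    ## Or, with suitable coding of the raw responses, a rank correlation:
    ## cor(q1.codes, q2.codes, method = "spearman")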
 
 Another person asked the same questions of 100 people and got the same 
 results in % terms.  Can it be shown via CI that the result is
 consistent with the expectations created by the previous survey?

If the % results were indeed the same (so that all differences in 
corresponding %s were zero), it would not be necessary to use a CI (by 
which I presume you mean confidence interval) to show consistency. 
(HOWEVER, even identical % results do not imply consistency, unless at 
the same time the joint distribution were ALSO identical;  and you do 
not report information on this point.)

OTOH, if the results were merely similar but not identical, you would 
want some means of assessing the strength of evidence that resides in the 
empirical differences.  That in turn depends on the assumptions you're 
willing to make about the two variables:  do you insist on treating the 
responses as (ordered) categories, or would you be willing, at least pro 
tempore, to assign (e.g.) codes 1, 2, 3 to the responses to Q. 1, use the 
codes 1, 2, 3, 4, 5 supplied for Q. 2, and treat those values as though 
they represented approximately equal intervals?
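
As a concrete (if partial) illustration of assessing that strength of
evidence: one could treat the 4500-person percentages as reference
proportions and test the 100-person counts against them with a
goodness-of-fit test, here sketched in R for Q. 2 only. This uses
only the marginal distribution, so it does not address the joint
distribution issue raised above; and if the small-survey percentages
really are identical, the statistic is zero by construction, as noted.

    ## Goodness-of-fit sketch for Q. 2: are the 100-person counts
    ## consistent with the proportions from the 4500-person survey?
    obs   <- c(9, 17, 20, 32, 22)               # counts out of n = 100
    p.ref <- c(0.09, 0.17, 0.20, 0.32, 0.22)    # reference proportions
    chisq.test(obs, p = p.ref)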

 Also can it be argued that the subjects have been subjected to the
 questions before?

Not sure what you mean by this question.  If you know that the Ss have 
indeed been asked these questions previously (are they perhaps a subset 
of the original 4500?), no arguing is needed;  although what this would 
imply about the results is unclear.  If you mean, do the identical (or at 
least consistent) results imply that the Ss must have encountered these 
same questions previously, I do not see how that can be argued, at least 
not without more information than you've so far provided.  Perhaps more 
to the point, why would such an argument be of interest?

 Can it be asserted with statistical significance, that if the survey 
 is repeated on at least 100 people the result will [be] in the same 
 proximity of the above survey??

No.  I suggest you look closely at the definition of statistical 
significance:  the term is quite incompatible with the assertion you 
propose.  (If you don't see that, you might bring a focussed version of 
the question back to the list.  If you do see that, you may still have 
some question that is more or less in the same ball-park as the question 
you've asked here, and you may wish to bring the revised question to our 
attention.)

  any help ... will be appreciated.  Just need the different 
 methodologies. 

Yes;  but for which questions, exactly?
-- DFB.
 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-472-3742  


