Portion 1 of a stream:
16.9
17
15.8
17.1
18.7
18
mean = 17.25
variance = 0.995
Portion 2
18.3
18.5
mean = 18.4
variance = 0.02
The SPSS unequal-variance t-test gives a two-tailed P of 0.037, but the equal-
variance t produces a two-tailed P of 0.174.
An exact test is possible with these data, as there are o
of the
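For readers who want to reproduce those two P values, here is a sketch in Python (not SPSS) of both statistics from the portions above; the variable names are my own:

```python
import math
from statistics import mean, variance

portion1 = [16.9, 17.0, 15.8, 17.1, 18.7, 18.0]
portion2 = [18.3, 18.5]

n1, n2 = len(portion1), len(portion2)
m1, m2 = mean(portion1), mean(portion2)            # 17.25 and 18.4
v1, v2 = variance(portion1), variance(portion2)    # 0.995 and 0.02

# Welch (unequal-variance) t with Satterthwaite degrees of freedom
se2 = v1 / n1 + v2 / n2
t_welch = (m2 - m1) / math.sqrt(se2)
df_welch = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

# classical pooled (equal-variance) t on n1 + n2 - 2 df
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_pooled = (m2 - m1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

t_welch comes out near 2.74 on about 5.5 df (two-tailed P about 0.037), while t_pooled is near 1.54 on 6 df (P about 0.174), matching the SPSS output quoted above.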
> variable does depend on the other variable in some kind of pattern; it is
> just that they are not linearly dependent, hence the almost-zero
> correlation coefficient. So, I am just wondering whether there is any kind of test that
> I could use to test dependency between 2 variab
There is a test based on nonparametric density estimates. You can estimate
joint and marginal densities by nonparametric methods and then test if
f(x,y)=f(x)f(y). You can find some details and references in Pagan & Ullah
"Nonparametric Econometrics".
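Pagan & Ullah's kernel-based statistic is more involved; as a crude, purely illustrative stand-in, one can bin the data and measure how far the empirical joint distribution sits from the product of its marginals (the function name and the binning scheme below are my own):

```python
from collections import Counter

def independence_distance(xs, ys, bins=4):
    """Crude binned check of f(x,y) = f(x)f(y): total absolute gap between
    the empirical joint cell probabilities and the product of the marginals."""
    def cell(v, lo, hi):
        b = int((v - lo) / (hi - lo) * bins)
        return min(b, bins - 1)          # put the maximum into the last bin
    n = len(xs)
    bx = [cell(v, min(xs), max(xs)) for v in xs]
    by = [cell(v, min(ys), max(ys)) for v in ys]
    joint, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum(abs(joint[i, j] / n - px[i] * py[j] / n ** 2)
               for i in range(bins) for j in range(bins))
```

A distance near zero is consistent with independence; a formal test would compare the statistic against its null distribution (e.g. by permuting one variable).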
On Wed, 20 Feb 2002, Chia
> > > If you find that they are uncorrelated and you have a reason to believe
> > > that they may be not independent anyway then you can look for more
> > > advanced tests.
> >
> > Can you give some examples of more advanced tests that can be used to test
> &
Linda <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi!
>
> I have some experimental data collected that can be grouped into 2
> variables, X and Y. One is the dependent variable (Y) and the other is
> an independent variable (
t; advanced tests.
>
> Can you give some examples of more advanced tests that can be used to test
> the dependency of the data when these data are uncorrelated?
You can check for an obvious non-linear (say quadratic) fit.
WHAT is your 'reason to believe that they may be
not indepe
question.
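The "almost-zero correlation yet clearly dependent" situation, and the quadratic check suggested above, can be seen in a tiny Python sketch (the data are made up for illustration):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / sqrt(sxx * syy)

x = [-3, -2, -1, 0, 1, 2, 3]
y = [v * v for v in x]        # y is completely determined by x

r_linear = pearson_r(x, y)                    # 0: no linear association
r_quadratic = pearson_r([v * v for v in x], y)  # 1: exact quadratic fit
```

A near-zero r_linear alongside a large r_quadratic is exactly the "obvious non-linear fit" the reply above suggests checking for.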
> If you find that they are uncorrelated and you have a reason to believe
> that they may be not independent anyway then you can look for more
> advanced tests.
Can you give some examples of more advanced tests that can be used to test
the dependency of the data when these data a
more
advanced tests.
On 20 Feb 2002, Linda wrote:
> Hi!
>
> I have some experimental data collected that can be grouped into 2
> variables, X and Y. One is the dependent variable (Y) and the other is
> an independent variable (X). What test should I use to check whether
> there can be e
Hi!
I have some experimental data collected that can be grouped into 2
variables, X and Y. One is the dependent variable (Y) and the other is
an independent variable (X). What test should I use to check whether
they can be expressed as independent or not?
Thanks..
Linda
are significantly different from each other or not. One of the
>two underlying assumptions of the T-Test is not met (variances
>are NOT equal, but the data are normally
>distributed). What kind of (non?)parametric test exists - instead of the
>T-Test
a
> whether they are significantly different from each other or not. One of the
> two underlying assumptions of the T-Test is not met (variances
> are NOT equal, but the data are normally
> distributed). What kind of (non?)parametric test exists - i
rical sets of data
> whether they are significantly different from each other or not. One of the
> two underlying assumptions of the T-Test is not met (variances
> are NOT equal, but the data are normally
> distributed). What kind of (non?)parametric test d
Excuse the bad grammar or typo noted below... It's been a "long
morning" already, and it's still not 9 am...
:)
Bill
On Fri, 15 Feb 2002, William B. Ware wrote:
> What are your sample sizes? If they are equal or nearly so, the t-test
What are your sample sizes? If they are equal or nearly so, the t-test
is robust with regard to unequal variances.
On the other hand, you could just read the part of the output that reports
results for "equal variances not assumed." You might also consider using
a nonparametric
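The nonparametric option mentioned above would typically be the Mann-Whitney (Wilcoxon rank-sum) test; a bare-bones sketch in Python, using the large-sample normal approximation and no tie correction in the variance (function name is my own):

```python
import math

def mann_whitney(x, y):
    """U statistic (number of (x, y) pairs where x beats y, ties counting
    one half) and its large-sample z score."""
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, (u - mu) / sd
```

With samples this small the exact tables, not the z approximation, should be used; the sketch only shows the mechanics.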
umerical sets of data
> whether they are significantly different from each other or not. One of the
> two underlying assumptions of the T-Test is not met (variances
> are NOT equal, but the data are normally
> distributed). What kind of (non?)parametric test
data
>whether they are significantly different from each other or not. One of the
>two underlying assumptions of the T-Test is not met (variances
>are NOT equal, but the data are normally
>distributed). What kind of (non?)parametric test exists - in
Hello,
would be nice if someone can give me some advice with regard to the
following problem:
I would like to compare the means of two independent numerical sets of data
to see whether they are significantly different from each other or not. One of
the two underlying assumptions of the T-Test
can you be a bit more specific here? F tests AND t tests are used for a
variety of things
give us some context and perhaps we can help
at a minimum of course, one is calling for using a test that involves
looking at the F distribution for critical values ... the other calls for
using a t
The question is: how do I tell the difference when I'm asked for an
F-test or a t-test?
Jan
=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
http://jse.stat.ncsu.edu/
Rich Ulrich wrote:
>
> On Mon, 11 Feb 2002 13:56:46 +0100, "nikolov"
> <[EMAIL PROTECTED]> wrote:
>
> > hello,
> >
> > i want to test the difference between two proportions. The problem is that
> > some elements of these proportions are de
Hola!
For a more robust test, which does not assume equal centers, use the
Fligner-Killeen test.
Kjetil Halvorsen
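For reference, here is a sketch of the Fligner-Killeen statistic as usually defined (normal scores of the ranks of absolute deviations from each group's median, referred to chi-square with k-1 df). The implementation is my own illustration; scipy.stats.fligner offers a ready-made version.

```python
from statistics import NormalDist, mean, median

def fligner_killeen(groups):
    """Fligner-Killeen statistic for homogeneity of scale across groups;
    approximately chi-square(k - 1) under the null of equal scales."""
    absdev = [[abs(x - median(g)) for x in g] for g in groups]
    pooled = sorted(v for g in absdev for v in g)
    N = len(pooled)
    nd = NormalDist()
    def score(v):  # normal score of the mid-rank (mid-rank handles ties)
        lo = pooled.index(v) + 1
        hi = pooled.index(v) + pooled.count(v)
        return nd.inv_cdf(0.5 + (lo + hi) / 2 / (2 * (N + 1)))
    scores = [[score(v) for v in g] for g in absdev]
    flat = [s for g in scores for s in g]
    abar = mean(flat)
    V = sum((s - abar) ** 2 for s in flat) / (N - 1)
    return sum(len(g) * (mean(g) - abar) ** 2 for g in scores) / V
```

Centering each group at its own median is what removes the equal-centers assumption Kjetil mentions.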
Glen Barnett wrote:
>
> Rich Ulrich <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > On Sat, 09 Feb 2002 16:5
nikolov <[EMAIL PROTECTED]> wrote:
> i want to test the difference between two proportions. The problem is that
> some elements of these proportions are dependent (i cannot isolate them).
> That is, the t-statistic does not work. What can i do? Do other kinds of
> tests exis
hello,
i want to test the difference between two proportions. The problem is that
some elements of these proportions are dependent (i cannot isolate them).
That is, the t-statistic does not work. What can i do? Do other kinds of
tests exist? Is there a book or a paper on the subject?
Thank
Rich Ulrich <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Sat, 09 Feb 2002 16:59:34 GMT, Johannes Fichtinger
> <[EMAIL PROTECTED]> wrote:
>
> > Dear NG!
> > I have been searching for a description of the Ans
On Sat, 09 Feb 2002 16:59:34 GMT, Johannes Fichtinger
<[EMAIL PROTECTED]> wrote:
> Dear NG!
> I have been searching up to now for a description of the Ansari-Bradley
> dispersion test for analysing a psychological research project. I am
> searching for a description of this t
Dear NG!
I have been searching up to now for a description of the Ansari-Bradley
dispersion test for analysing a psychological research project. I am
searching for a description of this test, especially a description of how
to use it.
Please, can you tell me how to use the test, or show me a link
"Linda" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I have 1000 observations of 2 RVs from an experiment. X is the
> independent variable and Y is the dependent variable. How do I perform
> the test of whether the following s
Linda <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I have 1000 observations of 2 RVs from an experiment. X is the
> independent variable and Y is the dependent variable. How do I perform
> the test of whether the following statement is
likelihoods. So the chi-square test of independence, or even a linear
correlation a la Pearson, comes to mind. Does this help?
[EMAIL PROTECTED] (Linda) wrote in message
news:<[EMAIL PROTECTED]>...
> I have 1000 observations of 2 RVs from an experiment. X is the
> independent variable
I have 1000 observations of 2 RVs from an experiment. X is the
independent variable and Y is the dependent variable. How do I perform
a test of whether the following statement is true or not?
f(X,Y)=f(X)f(Y)
Linda
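With 1000 observations, one common route is to discretize X and Y and apply the chi-square test of independence, which is exactly a binned check of f(X,Y)=f(X)f(Y); a sketch of the statistic (the function name is mine, and expected counts should be adequate, say at least 5 per cell):

```python
def chi2_independence(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table, testing independence of rows and columns."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))
    return stat, (len(rows) - 1) * (len(cols) - 1)
```

The statistic is compared against the chi-square distribution on the returned df; a table whose cells are exactly proportional to the margins gives a statistic of zero.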
eatment is in Tanaka
"Time Series Analysis".
On 22 Jan 2002, Maand M wrote:
> Hi:
>
> I would like to know where can I read more about Non
> Parametric Unit Root Test for uniform distribution.
> Any book or paper on it?
&g
Shakti Sankhla <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi All:
>
> This is basically not a SAS problem but I believe that many of the list
> members could help.
>
> I am looking for information on Statistical topic call
Hi All:
This is basically not a SAS problem but I believe that many of the list
members could help.
I am looking for information on a statistical topic called the Unique Root
Test.
Any help will be welcomed.
Thanks
Shakti
--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
What is it you wonder about?
pingzhao Hu wrote:
Could someone please help me with a problem I just can't seem to solve. I
can get Dunnett's test results output listing using Proc GLM in SAS but I
cannot get the p-value for the test so that I can output it to a dataset. I
cannot find anything in any SAS documentation that shows
This is just a test -- please ignore!
Jackie Dietz
wrote in message
>> news:<[EMAIL PROTECTED]>...
>> > > I am using nonlinear regression method to find the best parameters
>> > > for my data. I came across a term called "runs test" from the
>> > > Internet. It mentioned that this is to deter
L PROTECTED]>...
> > > I am using nonlinear regression method to find the best parameters for
> > > my data. I came across a term called "runs test" from the Internet. It
> > > mentioned that this is to determine whether my data differ
> > > sig
On 24 Dec 2001, Carol Burris wrote:
> I am a doctoral student who wants to use student performance on a
> criterion test, a state Regents exam, as a dependent variable in a
> quasi-experimental study. The effects of previous achievement can be
> controlled for using a standardiz
I am a doctoral student who wants to use student performance on a
criterion test, a state Regents exam, as a dependent variable in a
quasi-experimental study. The effects of previous achievement can be
controlled for using a standardized test, the Iowa test of Basic
skills. What kind of an
"Glen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> [EMAIL PROTECTED] (Chia C Chong) wrote in message
news:<[EMAIL PROTECTED]>...
> > I am using nonlinear regression method to find the best parameters for
> > my
[EMAIL PROTECTED] (Chia C Chong) wrote in message
news:<[EMAIL PROTECTED]>...
> I am using nonlinear regression method to find the best parameters for
> my data. I came across a term called "runs test" from the Internet. It
> mentioned that this is to determine
I am using a nonlinear regression method to find the best parameters for
my data. I came across a term called "runs test" on the Internet. It
mentioned that this determines whether my data differ
significantly from the equation model I select for the nonlinear
regression. C
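For what it's worth, the runs test usually meant here is the Wald-Wolfowitz test on the signs of the residuals; a sketch under the usual large-sample normal approximation (function name is mine):

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on residual signs. A z far from 0 suggests
    the residuals are not randomly ordered, i.e. the fitted model leaves
    systematic structure behind."""
    signs = [r > 0 for r in residuals if r != 0]   # drop exact zeros
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return runs, (runs - mu) / math.sqrt(var)
```

Too few runs (long stretches of same-sign residuals, z well below 0) is the classic sign of a systematically wrong curve.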
test please ignore
On Fri, 07 Dec 2001 04:59:46 GMT, Richard J Burke
<[EMAIL PROTECTED]> wrote:
> jenny wrote:
>
> > What should I do with the missing values in my data? I need to perform
> > a t test of two samples to test the mean difference between them.
> >
> > How s
[EMAIL PROTECTED] (jenny) wrote in message
news:<[EMAIL PROTECTED]>...
> What should I do with the missing values in my data? I need to perform
> a t test of two samples to test the mean difference between them.
>
> How should I handle them in S-Plus or SAS?
It depends on
What should I do with the missing values in my data? I need to perform
a t test of two samples to test the mean difference between them.
How should I handle them in S-Plus or SAS?
Thanks.
JJ
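One common, if blunt, answer is complete-case analysis: since the two samples are independent, missing values can simply be dropped from each sample separately before running the t-test. This is only innocuous if values are missing completely at random; a sketch (function name is mine):

```python
import math

def complete_cases(sample):
    """Drop missing values (None or NaN) from one sample. For a two-sample
    t-test on independent samples, clean each sample on its own."""
    return [v for v in sample if v is not None and not math.isnan(v)]
```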
Dear Kathy,
You slightly confuse me with all that detail, but if what I gather is
right, and that is that you have two continuous variables (one IV &
one DV), then why don't you use a simple regression analysis? Is there
something I overlooked, or does this appear to solve your query?
Best
Niko Til
Hollander, M. and D. A. Wolfe. 1999. Nonparametric Statistical Methods, 2nd
edition. John Wiley & Sons, New York. 787 p. {Encyclopedic, but not as easy
to read as many of the others cited. The notes on each test provide good
discussions and references to recent adva
le plot (Q-Q plot) and trying to fit both A and B with
> some theoretical distributions (all distributions available in Matlab!!).
> Again, none of the distributions seems to describe them completely. Then I
> was trying to perform the Wilcoxon Rank Sum test. From the data, it see
would tell you?
> and trying to fit both A and B with some theoretical distributions
> (all distributions available in Matlab!!). Again, none of the
> distributions seems to describe them completely. Then I was trying to
> perform the Wilcoxon Rank Sum test.
What hypothesis w
> uniform etc via visualisation. Hence, I proceeded to plot the
> Quantile-Quantile plot (Q-Q plot) and trying to fit both A and B with
> some theoretical distributions (all distributions available in Matlab!!).
> Again, none of the distributions seems to describe them completely. Then
Are all the questions you post related to the same problem?
Why not let us in on what you're actually doing, so we have more
of a clue how to answer your questions?
Glen
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:<9rrv0e$4hk$[EMAIL PROTECTED]>...
> Does anyone know any good reference book about non-parametric statistical
> hypothesis test??
>
> Thanks
>
> CCC
Read more than one. Here are some that
Try "Practical Nonparametric Statistics" by W.J. Conover
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:9rrv0e$4hk$[EMAIL PROTECTED]...
> Does anyone know any good reference book about non-parametric statistical
>
On Thu, 1 Nov 2001 17:00:31 -, "Chia C Chong"
<[EMAIL PROTECTED]> wrote:
:Does anyone know any good reference book about non-parametric statistical
:hypothesis test??
:
:Thanks
:
:CCC
:
:
Try any one of
@BOOK{leach79,
author = {Leach, C},
year = 1979,
title =
Does anyone know a good reference book on non-parametric statistical
hypothesis tests?
Thanks
CCC
Hi,
I have a basic question that I just couldn't find an answer for.
I want to measure the % error introduced by using EWMA as against a
linear average for a *stationary* random process (not necessarily
Normal) over a given (long/short term) time window:
I am using the Chi Squared tes
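For the EWMA side of that comparison: for i.i.d. input with variance sigma^2, the steady-state variance of the EWMA s_t = lam*x_t + (1-lam)*s_{t-1} is sigma^2 * lam/(2-lam), against sigma^2/n for a plain average of n points. A quick simulation check of that formula (the parameters here are purely illustrative):

```python
import random

lam = 0.1
theory = lam / (2 - lam)          # steady-state EWMA variance, sigma^2 = 1

random.seed(1)
s, vals = 0.0, []
for t in range(200_000):
    # exponentially weighted moving average of standard normal input
    s = lam * random.gauss(0.0, 1.0) + (1 - lam) * s
    if t > 2_000:                 # discard burn-in before steady state
        vals.append(s)
m = sum(vals) / len(vals)
simulated = sum((v - m) ** 2 for v in vals) / len(vals)
```

Comparing the two steady-state variances is one way to quantify the % error of EWMA against a windowed linear average; note neither formula needs normality, only stationarity and finite variance.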
Hi,
I am working in SPSS 10. I have two groups, research and control, with pre
and post data for each group. I want to use a significance test to check
whether the difference in progress between pre and post differs
significantly between the research group and the control group. Currently
I have all the data in four
test
- Original Message -
From: Wouter Duyck
To: [EMAIL PROTECTED]
Sent: Thursday, October 18, 2001 1:47 PM
Subject: ANOVA by items
Dear all :-) Suppose i have a factorial design with two
between-subject factors (one factor A of 3 levels and one factor B of 2
e a p value from an F statistic? Or should I be using a chi-square
> test?
It is a 2x2 table. Logistic regression has nothing to offer:
there is no risk of 'over-fitting' the binary prediction, so it
would only give you ancillary statistics that are irrelevant.
And it offers a
Title: RE: Logistic Regression vs Chi Square test in the following scenario
Nick writes:
>Let's say I have two variables: y is the dependent variable, x is the
>independent variable. Both variables are binary and discrete.
>
>I want to see if there is a relation between
is a relation between x and y.
Is it possible to use logistic regression analysis in this case and
generate a p value from an F statistic? Or should I be using a chi-square
test?
Thanks,
Nick
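For two binary variables, the Pearson chi-square on the 2x2 table is the standard answer (logistic regression tests the same association via a likelihood-ratio or Wald statistic, not an F statistic). The 2x2 case has a well-known shortcut formula:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic, on 1 df, for the 2x2 table
    [[a, b], [c, d]]; equals the square of the two-proportion z statistic."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

With small expected counts, Fisher's exact test is the usual fallback.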
Dear Ronald,
as far as I understand, the preference (dominance) of Fisher's test
has a historical/computational background. It was quite cumbersome to
calculate the probabilities for different margins by hand (huge number of
tables)
and, therefore, as long as the computers were not able
eal effects are ever null. A
> one-tailed p value, for a normally distributed statistic, does have a
> real meaning, as I pointed out. But precision of
> estimation--confidence limits--is paramount. Hypothesis testing is
> passe.
>
...
The only use for
t for sure ... C is not good ... and D can't be proved
to be correct
none of the choices is correct ... C is probably the BEST choice but still
not a good one
this might be a good question for assessing an inappropriate objective ...
or, an inappro
In article <[EMAIL PROTECTED]>,
gus gassmann <[EMAIL PROTECTED]> wrote:
>Wendy wrote:
>> Hi,
>> I'm looking for a test statistic for trivariate normality. Does anybody know
>> such a test-statistic, respectively a book/website where I can find one ?
&g
In article <tDza6.239027$[EMAIL PROTECTED]>,
Wendy <[EMAIL PROTECTED]> wrote:
>Hi,
>I'm looking for a test statistic for trivariate normality. Does anybody know
>such a test-statistic, respectively a book/website where I can find one ?
I can give you lots of test sta
Wendy wrote:
> Hi,
>
> I'm looking for a test statistic for trivariate normality. Does anybody know
> such a test-statistic, respectively a book/website where I can find one ?
>
> Thanks !
>
> Wendy
If you transform the components to independence (using the Chol
Hi,
I'm looking for a test statistic for trivariate normality. Does anybody know
such a test-statistic, respectively a book/website where I can find one ?
Thanks !
Wendy
Hi
I have been conducting goodness-of-fit tests using A-D tests, and one thing I
forgot to do beforehand was to find out whether tables of A-D critical values
exist. I have read one book by D'Agostino and Stephens (1986); they outline
distribution-specific A-D test critical values which re
h the
slope of 1 and an intercept of 0.
I construct 2 models
1. MV = ß0 + ß1*pred + err, or ln(MV) = ln ß0 + ß1*ln(pred) + err
2. MV = pred + err, or ln(MV) = ln(pred)
to test the null hypothesis that ß0 = 0 and ß1 = 1, as this is what I
expect.
I
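One way to carry out that test is to fit the OLS line and compute separate t statistics for H0: ß0 = 0 and H0: ß1 = 1 (a joint F test is the more complete treatment). The helper below, with made-up data in the check, is just a sketch:

```python
import math
from statistics import mean

def fit_and_test(pred, mv):
    """OLS fit of mv = b0 + b1 * pred. Returns (b0, b1, t0, t1) where
    t0 tests H0: b0 = 0 and t1 tests H0: b1 = 1, each on n - 2 df."""
    n = len(pred)
    mx, my = mean(pred), mean(mv)
    sxx = sum((x - mx) ** 2 for x in pred)
    b1 = sum((x - mx) * (y - my) for x, y in zip(pred, mv)) / sxx
    b0 = my - b1 * mx
    # residual variance and the standard errors of the coefficients
    s2 = sum((y - b0 - b1 * x) ** 2 for x, y in zip(pred, mv)) / (n - 2)
    se0 = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    se1 = math.sqrt(s2 / sxx)
    return b0, b1, b0 / se0, (b1 - 1) / se1
```

Small |t0| and |t1| (relative to t critical values on n - 2 df) are consistent with the expected slope of 1 and intercept of 0.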
Hi,
I would say that the degrees of freedom do not change when you smooth
the data or change it in any other way.
You could, for example, log the data to get rid of the long tails in a
distribution and then use an F-test.
I guess what you do with the data is independent of the test you
conduct
: [EMAIL PROTECTED]
Subject: Re: Bland and Altman Test
On 1 Jan 2001 09:00:27 -0800, [EMAIL PROTECTED] wrote:
> Dear list members
>
> I have a reference to the Bland-Altman test.
>
> Bland J.M and Altman D.G (1986) Statistical methods for assessing agreement
> betw
Of course, it's of strong advantage to use computers as reference
tools, however, exactly the same tools are frequently used 'against'
patients because of well-known dependencies within countries related
health systems etc. Also, the doctor's final word again is a matter of
his view, education an
On 1 Jan 2001 09:00:27 -0800, [EMAIL PROTECTED] wrote:
> Dear list members
>
> I have a reference to the Bland-Altman test.
>
> Bland J.M and Altman D.G (1986) Statistical methods for assessing agreement
> between two methods of clinical measurement. Lancet, i, 307-10.
>
I am using non-linear regression to fit electrophysiological data (current vs t)
to exponential equations. I am using an F-test on the residual sum of squares
to determine how many components are required. A typical trace will have
several thousand points. Question: If I use an adjacent average
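The F-test being described is presumably the extra-sum-of-squares comparison of nested fits (fewer vs more exponential components); the generic formula, with made-up numbers in the check:

```python
def extra_ss_f(ss_reduced, df_reduced, ss_full, df_full):
    """Extra-sum-of-squares F comparing a reduced model to a fuller nested
    model: F = ((SS_r - SS_f) / (df_r - df_f)) / (SS_f / df_f),
    referred to the F distribution on (df_r - df_f, df_f) df."""
    return ((ss_reduced - ss_full) / (df_reduced - df_full)) / (ss_full / df_full)
```

A caution relevant to the smoothing question: averaging adjacent points reduces the effective number of independent observations, so the residual df entering this formula should reflect the averaged trace, not the raw point count.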
On Fri, 05 Jan 2001 10:44:13 +, "P.G.Hamer"
<[EMAIL PROTECTED]> wrote:
> Rich Ulrich wrote:
>
> > Computers do better than experts in making medical
> > diagnoses when the correct answer has to be from a narrow set.
>
> I think that some of the early systems also were better than humans
> a
On Thu, 04 Jan 2001 14:42:02 -0500, Rich Ulrich <[EMAIL PROTECTED]>
wrote:
>alf>
>> The problem is not the existence of literature, the problem is the
>> content.
> [ snip, ... essentially 'cite the good literature, in great detail' ]
Not exactly -- I was asking for 'the good literature' in the
Rich Ulrich wrote:
> Computers do better than experts in making medical
> diagnoses when the correct answer has to be from a narrow set.
I think that some of the early systems also were better than humans
at identifying the possibility of unusual diagnoses. AFAIR it took the
humans to reach a fi
The paradigm that has
worked well is: comparing the correlations between raters (several
skilled humans plus one computer program). This is not my area, but
as I understand it, the computers do well when the criterion can be
narrowly defined. (And, if the criterion won't change.)
Computers
Dear list members
I have a reference to the Bland-Altman test.
Bland J.M and Altman D.G (1986) Statistical methods for assessing agreement
between two methods of clinical measurement. Lancet, i, 307-10.
I am not able to travel to any university at the present time (slight health
problems
; CANADA
> http://www.uwinnipeg.ca/~clark
> >
>
>
>
>
> What kind of statistical methods are used in screening job applicants?
ne or other
publication in the context of comparisons between clinical judgment
and actuarial predictions, which
(1) is published in an indexed journal;
(2) has a correct translation between scientific hypothesis and
actually tested statistical hypothesis;
(3) shows the correct statistical test
> 'demonstrably valid' one or other day, scientific publication or
> period.
- for instance, what is that supposed to mean?
- that some idiot could offer pseudo-statistical reasons for something, and
that is exactly how Jim's note looks to you?
- If you meant that, please read more carefully
ly valid' one or other day, scientific publication or
period.
Coming from test psychology, you would hesitate because reliability
and validity coefficients usually suffer from 'regression to the
middle' and are too low for individual prognosis or explanation (in the
meaning of Hemp
Hi
On 27 Dec 2000, Jeff Rasmussen wrote:
> >scores, but not in aggregating them). In general, human judgment
> >does not fare all that well relative to actuarial (i.e.,
> >statistical) methods. Interesting that someone posting to a
> >statistical newsgroup would advocate the non-statistical ap
ellectual,
interests, personality, ...
> Do people who apply for a faculty position at your department have to
> take such a psychological test too?
The literature I mentioned is critical of the fact that
psychologists themselves do not follow practices suggested by the
empirical literat
Rich,
You're right, I forgot to mention the biases! I'm on sabbatical so I must have forgotten (or repressed) that rather salient feature.
>By the big ones: sex, race, social class, age, ethnicity.
>By more subtle ones:
>wealth, body language, speech accents, shopping habits.
>And if you don't
you are quite sure it
makes sense. A better and more general approach might be to use
regression and test beta-0 = 0 and/or beta-1 = 1.
_
| |Robert W. Hayden
| | Work: Department of Mathematics
/ |Plymouth State College MSC#29
On 27 Dec 2000 08:18:11 -0800, [EMAIL PROTECTED] (Jeff
Rasmussen) wrote:
< ... >
Jeff >
' When I'm on such committees I do a rank ordering based
on whatever actuarial data is available and know that doing
otherwise is just mucking around with error. Most other faculty
haruspicate via predictors
separately posted to sci.stat.consult, sci.stat.edu
On Mon, 25 Dec 2000 17:51:14 +0800, Wen-Feng Hsiao
<[EMAIL PROTECTED]> wrote:
> Dear all,
>
> Paired t-test helps us to examine whether the paired samples (or the same
> samples) respond differently between two treatments. T
>
>There is a considerable literature on clinical judgment (i.e.,
>interview and human judgement) vs. actuarial predictions (i.e.,
>predictions from demonstrably valid regression equations ...
>human judgment _might_ be used in producing individual predictor
>scores, but not in aggregating them).
innipeg, Manitoba R3B 2E9 [EMAIL PROTECTED]
> CANADA
http://www.uwinnipeg.ca/~clark
>
What kind of statistical methods are used in screening job applicants?
Do people who apply for a faculty positi
Yes, you can perform a paired t-test by hypothesizing a constant, C, in
H0: mu1 - mu2 = C, but whether or not C = 0 does not necessarily have
anything to do with distribution shape.
Jerrold H. Zar
Northern Illinois University
==
>>> Wen-Feng Hsiao <[EMAIL PR
>>> Wen-Feng Hsiao <[EMAIL PROTECTED]> 12/26/00 08:14PM >>>
Dear all,
Paired t-test allows us to examine whether the paired samples (or the same
samples) respond differently between two treatments. The null hypothesis
is H0: mu1=mu2 or mu1-mu2=0. Could this be
Dear all,
Paired t-test allows us to examine whether the paired samples (or the same
samples) respond differently between two treatments. The null hypothesis
is H0: mu1=mu2 or mu1-mu2=0. Could this be extended to test a null
hypothesis with H0: mu1-mu2=C, where C is a constant but unknown?
My
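Testing H0: mu1 - mu2 = C amounts to a one-sample t-test of the within-pair differences against C; a sketch in Python (the data in the check are made up):

```python
import math
from statistics import mean, stdev

def paired_t(x1, x2, C=0.0):
    """Paired t-test of H0: mu1 - mu2 = C, computed as a one-sample t on
    the within-pair differences; t has len(x1) - 1 df."""
    d = [a - b for a, b in zip(x1, x2)]
    return (mean(d) - C) / (stdev(d) / math.sqrt(len(d)))
```

When C itself is unknown, a single test is not available as such; one would instead estimate mu1 - mu2 by the mean difference with a confidence interval.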
Hi
On Tue, 26 Dec 2000, John Uebersax wrote:
> IMHO, psychological tests in this case should not substitute for a
> thorough interview and human judgment.
>
> Just my .02 worth.
There is a considerable literature on clinical judgment (i.e.,
interview and human judgement) vs. actuarial predictio
On Sat, 23 Dec 2000 02:15:54 GMT, T.S. Lim <[EMAIL PROTECTED]>
wrote:
>I was wondering if it's a common practice in Statistics to require job
>applicants to take a psychological test. At the MS/PhD level (in the
>US), I don't think it's common. However, some compani
I've never heard of any statistician position requiring a psychological
test. Even when I worked at the RAND Corporation, where the position
involved some degree of defense-related research, it was not required.
(Frankly, if a firm required such a test, I would take that as a sign
that it i