At 08:03 PM 11/15/01, Radford Neal wrote:
>Radford Neal:
>
> >> The difference is that when dealing with real data, it is possible for
> >> two populations to have the same mean (as assumed by the null), but
> >> different variances. In contrast, when dealing with binary data, if
> >> the m
Jerry Dallal wrote:
> John Kane wrote:
>
> > Very true and I was being deliberately provocative. However I still cannot
> > see penalizing someone for getting the right answer no matter how arrived
> > at.
>
> Problem: Divide 95 by 19.
>
> Student writes 95/19, 9's cancel, leaving 5/1 = 5 .
Radford Neal wrote:
>
> The difference is that when dealing with real data, it is possible for
> two populations to have the same mean (as assumed by the null), but
> different variances. In contrast, when dealing with binary data, if
> the means are the same in the two populations, the varianc
Dennis Roberts wrote:
>
> At 08:51 AM 11/15/01 -0600, jim clark wrote:
>
> >The Ho in the case of means is NOT about the variances, so the
> >analogy breaks down. That is, we are not hypothesizing
> >Ho: sig1^2 = sig2^2, but rather Ho: mu1 = mu2. So there is no
> >direct link between Ho and
I'm not really arguing for using the pooled stdev in this case, I'm just
trying to find out the reasons for significance testing procedures.
I think that what we're discussing here is whether we should use CIs BOTH
for stating effect sizes with errors AND for hypothesis testing. I read
a book by Mi
In article <[EMAIL PROTECTED]>,
dennis roberts <[EMAIL PROTECTED]> wrote:
>in the moore and mccabe book (IPS), in the section on testing for
>differences in population proportions, when it comes to doing a 'z' test
>for significance, they argue for (and say this is commonly done) that the
>sta
dennis roberts wrote:
>
> in the moore and mccabe book (IPS), in the section on testing for
> differences in population proportions, when it comes to doing a 'z' test
> for significance, they argue for (and say this is commonly done) that the
> standard error for the difference in proportions for
At 08:51 AM 11/15/01 -0600, jim clark wrote:
>The Ho in the case of means is NOT about the variances, so the
>analogy breaks down. That is, we are not hypothesizing
>Ho: sig1^2 = sig2^2, but rather Ho: mu1 = mu2. So there is no
>direct link between Ho and the SE, unlike the proportions
>example
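The pooled-SE point running through the posts above can be shown numerically. A minimal Python sketch (the function name and the sample counts are mine, not from any of the posts): under H0: p1 = p2 the binary variance is determined by the common p, so the z test pools, while a confidence interval for p1 - p2 would use the unpooled SE.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for H0: p1 = p2 using the pooled standard error,
    as Moore & McCabe recommend for the significance test."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # common p estimated under H0
    se_pooled = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    # The unpooled SE is what a confidence interval for p1 - p2 would use:
    se_unpooled = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se_pooled, se_pooled, se_unpooled

z, se_p, se_u = two_prop_z(40, 100, 30, 100)  # illustrative counts
```

The two SEs differ only slightly here, but they answer different questions: one is computed under the null, the other around the observed proportions.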
At 04:26 PM 11/15/01 +0100, Rolf Dalin wrote:
>The significance test produces a p-value UNDER THE CONDITION
>that the null is true. In my opinion it does not matter whether we
>know it isn't true. It is just an assumption for the calculations. And
>these calculations do not produce exactly the s
---you wrote
Date: Sat, 10 Nov 2001 18:53:25 +
From: John Kane <[EMAIL PROTECTED]>
Subject: Re: Z Scores and stuff
[EMAIL PROTECTED] wrote:
> Mark
>
> I contacted you directly to offer you some simple advice. I suggested that
> since the question you a
Hi
On 15 Nov 2001, dennis roberts wrote:
> in the moore and mccabe book (IPS), in the section on testing for
> differences in population proportions, when it comes to doing a 'z' test
> for significance, they argue for (and say this is commonly done) that the
> standard error for the differen
Title: RE: diff in proportions
Dennis,
I am not sure about this, but here goes anyway. Since the decision making process is based on Type I error (Critical Point and p-value), and since Type I error is under the assumption that the Null Hypothesis is true, then the "p
> > On Tue, 13 Nov 2001, Wendy (alias Eric Duton?) wrote:
> >
> > > When applying multiple regression on timeseries data, should I check
> > > (similarly to ARIMA-models) for unit roots in the dependent variable
>
> > > and the predictor variables and perform the necessary differencing
> > >
>
Thom Baguley wrote:
>
> Alan McLean wrote:
> > This describes a BAD closed book exam. It also describes a bad open book
> > exam.
>
> Not entirely. I have found that many students still worry about such
> things regardless of the information they have about the exam.
>
> > A good one-hour exa
At 07:42 AM 11/14/01 -0800, Carl Huberty wrote:
>I, too, prefer closed-book tests in statistical methods courses. I also
>like short-answer items, some of which may be multiple-choice
>items. [Please don't gripe that all multiple-choice items assess only
>memory recall; such items, if constru
In article <[EMAIL PROTECTED]>, Carl Lee <[EMAIL PROTECTED]> wrote:
>Using introductory statistics as an example, concepts are built in a certain
>sequence. If students get lost at a certain stage, they will have difficulty
>connecting the later concepts together. Therefore, it is crucial to test
In article <[EMAIL PROTECTED]>,
Alan McLean <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:
>> In article <[EMAIL PROTECTED]>,
>> Thom Baguley <[EMAIL PROTECTED]> wrote:
>> >Glen wrote:
>> >> As a student I *always* preferred closed book exams. If I know the
>> >> material I don't need the book,
the problem with any exam ... given in any format ... is the extent to
which you can INFER what the examinee knows or does not know from their
responses
in the case of recognition tests ... where precreated answers are given and
you make a choice ... it is very difficult to infer anything BUT
Alan McLean wrote:
> This describes a BAD closed book exam. It also describes a bad open book
> exam.
Not entirely. I have found that many students still worry about such
things regardless of the information they have about the exam.
> A good one-hour exam would have
> > three, or at most fo
Herman Rubin wrote:
> >Yes. Also, closed book exams tend to be easier because the range of
> >questions is more restricted. I have found them a way to avoid
> >students spending most of their time memorizing near-useless material.
>
> On the contrary, closed book exams emphasize memorizing
> near
On Sun, 11 Nov 2001 12:09:41 -0600
jim clark <[EMAIL PROTECTED]> wrote:
> Here are the relevant parts of a program I use to generate and
> solve z-distribution problems. I believe the value produced as p
> is the cumulative probability below z1. The values are quite
> precise and should agree
Students also confuse histograms with time series graphs. They describe
a graph as, for example, 'starting low, increasing then decreasing
again'. It's easy enough to see how they get this approach from their
school maths. It's much more difficult to get them to see a histogram as
rather more like
Using introductory statistics as an example, concepts are built in a certain
sequence. If students get lost at a certain stage, they will have difficulty
connecting the later concepts together. Therefore, it is crucial to test the
understanding of the connection (or relationship) among related con
On Wed, 14 Nov 2001, Alan McLean wrote in part:
> Herman Rubin wrote:
> >
> > A good exam would be one which someone who has merely
> > memorized the book would fail, and one who understands
> > the concepts but has forgotten all the formulas would
> > do extremely well on.
>
> Since to underst
On Tue, 13 Nov 2001, Wendy (alias Eric Duton?) wrote:
> When applying multiple regression on timeseries data, should I check
> (similarly to ARIMA-models) for unit roots in the dependent variable
> and the predictor variables and perform the necessary differencing
>
> OR
>
> could I simply st
Dear all,
In light of the very interesting and highly appreciated response I
received in my mailbox, allow me to attempt to be more clear. First I
should say that I am not aware of the deep details of the study (it is
indeed someone else's and I am not trying to cover up my errors).
Ss are put i
On 12 Nov 2001 11:41:45 -0800, [EMAIL PROTECTED] (Carl Huberty)
wrote:
> It would be greatly appreciated if I could get references for the six topics
> mentioned in the message below. I assume that Conover (1999) discusses the
> first topic. But beyond that I am at a loss. Thanks in advance.
>
Herman Rubin wrote:
>
> In article <[EMAIL PROTECTED]>,
> Thom Baguley <[EMAIL PROTECTED]> wrote:
> >Glen wrote:
> >> As a student I *always* preferred closed book exams. If I know the
> >> material I don't need the book, and if I don't know the material,
> >> the book isn't going to help in the
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:<9sk4p9$1e9$[EMAIL PROTECTED]>...
> Any recommendation for books on Clustering Algorithm??
Two suggestions:
Anderberg, M.R. (1973), Cluster Analysis for Applications, New York:
Academic Press, Inc.
Hartigan, J.A. (1975),
John Kane wrote:
> Very true and I was being deliberately provocative. However I still cannot
> see penalizing someone for getting the right answer no matter how arrived
> at.
Problem: Divide 95 by 19.
Student writes 95/19, 9's cancel, leaving 5/1 = 5 .
How much credit do you award?
===
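Jerry Dallal's 95/19 example is one of the classic "anomalous cancellations": the method is nonsense, but the answer happens to be right. A short Python sketch (mine, not from the thread) enumerates every two-digit case of the same pattern:

```python
from fractions import Fraction

# All two-digit "anomalous cancellations" of the 95/19 kind:
# strike the shared digit (tens digit of the numerator, units digit
# of the denominator) and the bogus method still gives the right value.
hits = []
for num in range(10, 100):
    for den in range(10, 100):
        a, b = divmod(num, 10)   # num = 10a + b
        c, d = divmod(den, 10)   # den = 10c + d
        if num != den and a == d and Fraction(num, den) == Fraction(b, c):
            hits.append((num, den))
```

Only four such pairs exist (64/16, 65/26, 95/19, 98/49), which is precisely why crediting the right answer regardless of method is risky: the method fails on almost every other input.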
In article <[EMAIL PROTECTED]>,
Stan Brown <[EMAIL PROTECTED]> wrote:
>John Kane <[EMAIL PROTECTED]> wrote in sci.stat.edu:
.
>I don't think I ever said the answer is not important; if I did say
>so I didn't mean to. The right answer is important, but aft
In article <[EMAIL PROTECTED]>,
Thom Baguley <[EMAIL PROTECTED]> wrote:
>Glen wrote:
>> As a student I *always* preferred closed book exams. If I know the
>> material I don't need the book, and if I don't know the material,
>> the book isn't going to help in the exam enough anyway. For open
>Yes
On 12 Nov 2001, Niko Tiliopoulos wrote:
> I am acting as the stats advisor for my unit in the psychology
> department of the University of Edinburgh, UK. Last week a colleague
> of mine presented me with the following issue, and I am not quite sure
> how to respond:
>
> She is running a psych
Herman Rubin wrote:
> In article <[EMAIL PROTECTED]>,
> John Kane <[EMAIL PROTECTED]> wrote:
> >Herman Rubin wrote:
>
> >> In article <[EMAIL PROTECTED]>,
> >> John Kane <[EMAIL PROTECTED]> wrote:
> >> >Stan Brown wrote:
>
> >> >> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >> >>
Stan Brown wrote:
> John Kane <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >So you are saying that getting the right answer is not important?
>
> No, of course it's important. But getting the right answer for the
> wrong reasons is bad, since one may not be so lucky next time when,
> say, calcula
homepage2: http://shawneelink.com/~millerwg
- Original Message -
From: "Alan Miller" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, November 11, 2001 4:20 PM
Subject: Re: Plese give full path URL for SPSS or MINITAB or Pace2000
downloading
> Pete
On Sun, 11 Nov 2001 01:30:27 +1100, "David Muir"
<[EMAIL PROTECTED]> wrote:
> Presently the Gaming Industry of Australia is attempting to define various
> new 'definitions of Standard Deviation'...in a concept to define infield
> metrics for the analysis of machines in terms which imply whether a
i think you are asking the wrong question ... because, as far as i know ...
there is only really one standard deviation concept ... square root of the
variance (average of squared deviations around the mean in a set of data) ...
perhaps what you are really interested in is HOW should VARIABILIT
"No Spam Mapson" wrote:
The OED cites the following use of metric as a noun:
1921 Proc. R. Soc. A. XCIX. 104 "In the non-Euclidean
geometry of Riemann, the metric is defined by certain quantities ...
>>>
>>> A good example of bad usage: *what* metric, *what* q
Glen wrote:
> As a student I *always* preferred closed book exams. If I know the
> material I don't need the book, and if I don't know the material,
> the book isn't going to help in the exam enough anyway. For open
Yes. Also, closed book exams tend to be easier because the range of
questions is
Mark T wrote:
> On Fri, 09 Nov 2001 10:13:21 -0500
> Rich Ulrich <[EMAIL PROTECTED]> wrote:
>
> > On Thu, 8 Nov 2001 18:31:57 +, Mark T <[EMAIL PROTECTED]>
> > wrote:
> >
> > > Hi,
> > >
> > > What are the formulae for calculating the mean to z, larger proportion and
>smaller proportion of a
of the Z table and then put it into a Word
> > document and load that onto the PalmPilot. That way he could quickly refer
> > to the table at any time.
> >
> > I assume Mark can write the code for entering the values.
>
> Yes
>
> >
> > Here is the code,
John Kane <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>So you are saying that getting the right answer is not important?
No, of course it's important. But getting the right answer for the
wrong reasons is bad, since one may not be so lucky next time when,
say, calculating a 99% confidence inter
In article <[EMAIL PROTECTED]>,
John Kane <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:
>> In article <[EMAIL PROTECTED]>,
>> John Kane <[EMAIL PROTECTED]> wrote:
>> >Stan Brown wrote:
>> >> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>> >> >Test for understanding, not for imitat
> document and load that onto the PalmPilot. That way he could quickly refer
> to the table at any time.
>
> I assume Mark can write the code for entering the values.
Yes
>
> Here is the code, I hope it helps.
Thank you very much. This is exactly what I wanted.
> Seems a
=StandDevAway*(-1)
> if SSize=1 then
> kk=abs(Top)
> print " The score entered is ";
> print using fnform$(kk);abs(Top);
> print " less than the Mean."
> end if
> end if
>
> print " For a Z value of "
Stan Brown wrote:
> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >Test for understanding, not for imitation of robots. Give
> >a few multi-part problems, and be sure to give partial credit.
>
> Excellent advice. I do (try to) test for understanding, by posing
> problems in real-worl
In article <[EMAIL PROTECTED]>,
Gus Gassmann <[EMAIL PROTECTED]> wrote:
>"J. Williams" wrote:
>> When I taught undergraduate statistics in a previous lifetime, I would
>> distribute copies of the mid-term and final examinations minus the
>> data sets one week prior. Students could study the act
[EMAIL PROTECTED] wrote:
> Mark
>
> I contacted you directly to offer you some simple advice. I suggested that
> since the question you asked is very basic and that it is covered in even the
> most basic books on statistics that you go to the nearest university library
> and browse through BUSIN
Mark
I contacted you directly to offer you some simple advice. I suggested that
since the question you asked is very basic and that it is covered in even the
most basic books on statistics that you go to the nearest university library
and browse through BUSINESS STATISTICS texts. I find that
ing like "Given a value x of a variable X, which has a
> known mean, how does one convert x to z?"
No, that's not what I said. Re-read my original post and try again. Or don't.
> (Your language admits of
> several other possible meanings, but I'll leave it to you
You persist in repeating your original request in your original phrasing,
with no elaboration(s) that might resolve the ambiguities therein.
On Sat, 10 Nov 2001, Mark T wrote:
> On Fri, 09 Nov 2001 Rich Ulrich <[EMAIL PROTECTED]> wrote:
>
> > On Thu, 8 Nov 2001 Mark T <[EMAIL PROTECTED]> wro
On Fri, 09 Nov 2001 10:13:21 -0500
Rich Ulrich <[EMAIL PROTECTED]> wrote:
> On Thu, 8 Nov 2001 18:31:57 +, Mark T <[EMAIL PROTECTED]>
> wrote:
>
> > Hi,
> >
> > What are the formulae for calculating the mean to z, larger proportion and smaller
>proportion of a z-score (standardised score)
Kristen Parker wrote:
> The person above was asking a legitimate question.
Nonsense. That person was posting a homework problem,
with no additional commentary whatsoever.
> If you are unwilling to be helpful that's OK,
> but don't be such a jerk about it either.
There are plenty of other Newsgr
Chris Olsen wrote:
> First of all, I have no clue how one would define grading on the curve.
> ...
> My preferred method is to construct tests & quizzes in a way that gives
> an approximately normal distribution, weight their z-scores, and sum
> to a result.
Sounds as though you have a pretty go
>>> The OED cites the following use of metric as a noun:
>>> 1921 Proc. R. Soc. A. XCIX. 104 "In the non-Euclidean
>>> geometry of Riemann, the metric is defined by certain quantities ...
>>
>> A good example of bad usage: *what* metric, *what* quantities?
>> The reader sho
On Thu, 8 Nov 2001 18:31:57 +, Mark T <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> What are the formulae for calculating the mean to z, larger proportion and smaller
>proportion of a z-score (standardised score) on a standard normal distribution? I
>know about tables listing them all, but I want t
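The quantities Mark is asking about all reduce to the standard normal CDF, which can be written with the error function: Phi(z) = (1 + erf(z/sqrt(2)))/2. A small Python sketch (the function names are mine):

```python
import math

def phi(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_table_columns(z):
    """The three columns of a classic z table for a given z."""
    p = phi(z)
    mean_to_z = abs(p - 0.5)     # area between the mean and z
    larger = max(p, 1.0 - p)     # larger portion
    smaller = min(p, 1.0 - p)    # smaller portion
    return mean_to_z, larger, smaller

m, big, small = z_table_columns(1.0)
```

For z = 1.0 this reproduces the familiar table values 0.3413 / 0.8413 / 0.1587.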
At 02:47 AM 11/9/01 -0800, George wrote:
>I got the book (Plagues and Peoples), and I found other interesting
>sources as well. To quote McGinnis and Foege, "Actual Causes of Death in
>the United States." Journal of the American Medical Association 270
>(18):2207-2212
>
>U. S. 1990 data:
>Tobacco deat
I got the book (Plagues and Peoples), and I found other interesting
sources as well. To quote McGinnis and Foege, "Actual Causes of Death in
the United States." Journal of the American Medical Association 270
(18):2207-2212
U. S. 1990 data:
Tobacco deaths: 400,000
Diet/ low activity pattern deaths
I had to comment on the thread. I've been involved in teaching since 1958
and have taught at many levels (maybe too many). I tried the open book
approach and believed at one time it was a good method, but I always wondered
if it really was the best way to go. I tried take-home exams but was
Mark T wrote:
> Hi,
>
> What are the formulae for calculating the mean to z, larger proportion and smaller
>proportion of a z-score (standardised score) on a standard normal distribution? I
>know about tables listing them all, but I want to know how to work it out for myself
>:o)
At the risk
Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>I *do*
>allow one sheet of notes (both sides) for each exam. They're
>cumulative. At any exam, students may bring the sheets for all
>previous exams plus a new one for the current exam.
>
>Students report learning as much if not more from
Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>Test for understanding, not for imitation of robots. Give
>a few multi-part problems, and be sure to give partial credit.
Excellent advice. I do (try to) test for understanding, by posing
problems in real-world terms and seeing if the stu
Gus Gassmann <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>I much prefer Herman Rubin's suggestion
>of open book, open notes. The problem I have encountered quite
>frequently, however, is that many students don't bother to study,
>because they "can always look it up during the exam". This creates
>e
J. Williams wrote in
sci.stat.edu:
>When I taught undergraduate statistics in a previous lifetime, I would
>distribute copies of the mid-term and final examinations minus the
>data sets one week prior. Students could study the actual exam
>together, apart, or however best fit their mode.
This
Jerry Dallal <[EMAIL PROTECTED]> wrote in message
news:<[EMAIL PROTECTED]>...
> Students report learning as much if not more from preparing what
> they call "cheat sheets" (I refer to them as "reference notes") than
> from any other class activity. I had one PhD student tell me last
> year that
Patrick,
If z = xy, then yes, E[z] = E[xy] = 0.
HOWEVER... E[xyz] = E[x^2*y^2].
--
T. Arthur Wheeler
MathCraft Consulting
Columbus, OH 43017
"Patrick Agin" <[EMAIL PROTECTED]> wrote in message
news:W5aF7.1435$[EMAIL PROTECTED]...
>
> Thank you very much Andrew
yes, I mean the cumulative distribution function.
if I integrate from 0 to 2 (for the first piece)
I get: (1/8)(x^2) + C,   0 <= x < 2
if I integrate from 2 to 4 (for the second piece)
I get: -(1/8)(x^2) + x + C,   2 <= x <= 4
I think I am doing something wrong because the integral is greater
than 1.
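The constants of integration are what fix this: they are pinned down by requiring F(0) = 0 and continuity at x = 2 (so F(2) = 1/2), which forces C = -1 (not 0) on the second piece, and then F(4) = 1 as it must. A Python sketch of the resulting CDF, assuming the triangular density from the same thread, f(x) = x/4 on [0, 2] and f(x) = 1 - x/4 on [2, 4]:

```python
def F(x):
    """CDF of the triangular density f(x) = x/4 on [0, 2],
    f(x) = 1 - x/4 on [2, 4]; constants fixed by F(0) = 0
    and continuity at x = 2, which gives C = -1 on the second piece."""
    if x < 0:
        return 0.0
    if x < 2:
        return x * x / 8.0               # first piece: C = 0
    if x <= 4:
        return -x * x / 8.0 + x - 1.0    # second piece: C = -1, so F(4) = 1
    return 1.0
```

With the wrong constant (C = 0) the second piece would exceed 1, which is exactly the symptom described above.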
Thank you very much Andrew for your reply,
I thought of this possibility before sending the post, but my reasoning was:
If cor(x,y)=0, it implies that cov(x,y)=0 => E[(x-mean(x))(y-mean(y))]=0
but if mean(x)=mean(y)=0, then E[xy]=0.
So if z=x*y, E[z]=E[xy]=0, isn't it? Am I wrong?
Patrick
"And
Hi
On 2 Nov 2001, Donald Burrill wrote:
> On Fri, 2 Nov 2001, jim clark wrote:
> > I would hate to ressurect a debate from sometime in the past
> > year, but the chi-squared is a non-directional (commonly referred
> > to as two-tailed) test, although it is true that you only
> > consider one end
On 3 Nov 2001, Gilbert wrote:
> If I have a density function defined as:
>
> f(x) = (1/4)x        0 <= x <= 2
> f(x) = -(1/4)x + 1   2 <= x <= 4
> f(x) = 0             elsewhere
>
> (so the density function is a triangle of height (1/2))
>
> how do I find the distribution
> I am interested in the following expression and conditions under which it
> equals 0:
> E(x*y*z) where x,y and z are random variables and E(.) denotes expectation.
>
> Here, x and y have mean 0 and the correlation between x and y is also zero.
>
> Are these two conditions *sufficient* to ensur
Eugene asked "Can someone point me to current views on the
use of the continuity correctionfor the normal approximation to the binomial
and Poisson?"
It is easy to demonstrate that use of the
correction for continuity when estimating binomial probabilities by
using the normal CDF results i
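The effect of the continuity correction can be made concrete with one binomial tail. A Python sketch (the n = 20, p = 0.5, P(X <= 12) example is mine, not from the post):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(X <= 12) for X ~ Binomial(n=20, p=0.5), exactly and by approximation.
n, p, k = 20, 0.5, 12
exact = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

mu, sigma = n * p, math.sqrt(n * p * (1 - p))
plain = phi((k - mu) / sigma)             # no continuity correction
corrected = phi((k + 0.5 - mu) / sigma)   # with the +0.5 correction
```

Here the corrected approximation lands within a few ten-thousandths of the exact tail, while the uncorrected one is off by about 0.05.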
Hollander, M. and D. A. Wolfe. 1999. Nonparametric Statistical Methods, 2nd
edition. John Wiley & Sons, New York. 787 p. {Encyclopedic, but not as easy
to read as many of the others cited. The notes on each test provide good
discussions and references to recent advances}
===
At 05:06 PM 11/2/01 -0500, Wuensch, Karl L wrote:
> Dennis wrote: "it is NOT correct to say that the p value (as
>traditionally calculated) represents the probability of finding a result
>LIKE WE FOUND ... if the null were true? that p would be ½ of what is
>calculated."
>
>
Dennis wrote: "it is NOT correct to say that the p value (as
traditionally calculated) represents the probability of finding a result
LIKE WE FOUND ... if the null were true? that p would be ½ of what is
calculated."
Jones and Tukey (A sensible formulation of the signific
>
>--
>
>Date: Thu, 1 Nov 2001 12:24:29 -0500
>From: "Andrew E. Schulman" <[EMAIL PROTECTED]>
>Subject: Re: inducing rank correlations
>
> > Now, lets say I specify a target correlation matrix as follows:
>
>
[EMAIL PROTECTED] (dennis roberts) wrote
> most software will compute p values (say for a typical two sample t test of
> means) by taking the obtained t test statistic ... making it both + and -
> ... finding the two end tail areas in the relevant t distribution ... and
> report that as p
>
Jon Miller wrote:
> Stan Brown wrote:
>
> > You assume that it was my section that performed worse! (That's true,
> > but I carefully avoided saying so.)
> >
> > Section A (mine) meets at 8 am, Section B at 2 pm. Not only does the
> > time of day quite possibly have an effect, but since most peop
Chia C Chong wrote:
>
> I am a beginner in the statistical analysis and hypothesis. I have 2
> variables (A and B) from an experiment that was observed for a certain
> period time. I need to form a statistical model that will model these two
> variables. As an initial step, I plot the histogram
Stan Brown wrote:
> Jill Binker <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >Even assuming the test yields a good measure of how well the students know
> >the material (which should be investigated, rather than assumed), it isn't
> >telling you whether students have learned more from the class
Gus Gassmann wrote:
> Stan Brown wrote:
>
> > Another instructor and I gave the same exam to our sections of a
> > course. Here's a summary of the results:
> >
> > Section A: n=20, mean=56.1, median=52.5, standard dev=20.1
> > Section B: n=23 mean=73.0, median=70.0, standard dev=21.6
> >
> > Now
On Thu, 1 Nov 2001, Chia C Chong wrote:
> I am a beginner in the statistical analysis and hypothesis. I have 2
> variables (A and B) from an experiment that was observed for a certain
> period time. I need to form a statistical model that will model these
> two variables.
Seems to me you're
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:<9rsn26$98h$[EMAIL PROTECTED]>...
> I am a beginner in the statistical analysis and hypothesis. I have 2
> variables (A and B) from an experiment that was observed for a certain
> period time. I need to form a statistical model that will mo
Are all the questions you post related to the same problem?
Why not let us in on what you're actually doing, so we have more
of a clue how to answer your questions?
Glen
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:<9rrv0e$4hk$[EMAIL PROTECTED]>...
> Does anyone know any good reference book about non-parametric statistical
> hypothesis test??
>
> Thanks
>
> CCC
Read more than one. Here are some that I got
some value from, though I do have argum
"Rich Ulrich" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Tue, 30 Oct 2001 21:10:02 -, "Chia C Chong"
> <[EMAIL PROTECTED]> wrote:
>
> [ ... ]
> >
> > The observations were numbers. To be specified, the 2 variables are DELAY
> > and ANGLE. So, basica
On Tue, 30 Oct 2001 21:10:02 -, "Chia C Chong"
<[EMAIL PROTECTED]> wrote:
[ ... ]
>
> The observations were numbers. To be specified, the 2 variables are DELAY
> and ANGLE. So, basically I am looking into some raw measurement data
> captured in the real environment and after post-proceesing
Try "Practical Nonparametric Statistics" by W.J. Conover
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:9rrv0e$4hk$[EMAIL PROTECTED]...
> Does anyone know any good reference book about non-parametric statistical
> hypothesis test??
>
> Thanks
>
> CCC
>
>
On Thu, 1 Nov 2001 17:00:31 -, "Chia C Chong"
<[EMAIL PROTECTED]> wrote:
:Does anyone know any good reference book about non-parametric statistical
:hypothesis test??
:
:Thanks
:
:CCC
:
:
Try any one of
@BOOK{leach79,
author = {Leach, C},
year = 1979,
title = {Introduction to statis
> Now, let's say I specify a target correlation matrix as follows:
>
>
> A B C
> A 1
> B 1 1
> C 1 -1 1
>
> The problem with the above matrix is that we want large values of 'A' to
> be paired with large values of 'B' and also large values of 'A' to
> be paired with large values of 'C'.
> BUT
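The inconsistency in that target matrix is visible algebraically: with r(A,B) = 1, r(A,C) = 1 and r(B,C) = -1 the determinant is negative, so the matrix is not positive semidefinite and no joint distribution can realize those rank correlations. A small stdlib-only Python check:

```python
# Target rank-correlation matrix from the post (rows/cols A, B, C):
R = [[1.0,  1.0,  1.0],
     [1.0,  1.0, -1.0],
     [1.0, -1.0,  1.0]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(R)   # -4: negative, so R cannot be a valid correlation matrix
```

Intuitively: if A tracks B perfectly and A tracks C perfectly, B and C cannot be perfectly opposed.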
[ I have rearranged Zar's note.] After this one,
> >>> Harold W Kerster <[EMAIL PROTECTED]> 10/29/01 04:31PM >>>
> If you define the range as max - min, you get zero, not one. What
> definition are you using?
On 29 Oct 2001 16:11:15 -0800, [EMAIL PROTECTED] (Jerrold Zar)
wrote:
> I was r
Data mining, by and large, seems to use fairly conventional
multivariate stats tools along with a bunch of clustering procedures.
In addition there is a lot of use of neural nets (mostly as a lazy man's
tool or a last resort, but occasionally sensibly). Data prep.
(including transformations) seem
On Wed, 31 Oct 2001, Glen Barnett wrote, in response to my comment:
> > On Sun, 28 Oct 2001, Melady Preece wrote:
> >
MP> Hi. I want to compare the percentage of correct identifications (taste
MP> test) to the percentage that would be correct by chance 50%? (only two
MP> items being tasted). C
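The comparison against 50% chance is an exact binomial (sign) test. A Python sketch (the 15-correct-of-20-tastings numbers are illustrative, not from Melady's data):

```python
import math

def binom_p_upper(k, n, p=0.5):
    """Exact one-sided p-value P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# e.g. 15 correct identifications out of 20 tastings vs. 50% chance
pval = binom_p_upper(15, 20)
```

For small n this exact tail is preferable to the normal approximation the textbook z test would give.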
Donald Burrill <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Sun, 28 Oct 2001, Melady Preece wrote:
>
> > Hi. I want to compare the percentage of correct identifications (taste
> > test) to the percentage that would be correct by chance 50%? (only two
>
Glen Barnett <[EMAIL PROTECTED]> wrote in message
news:9rndu1$gqq$[EMAIL PROTECTED]...
> I'd probably suggest not trying to group the data and do a chi-squared
measure
> of association (you're throwing away the ordering, where most of the
information
> will be), exce
Chia C Chong <[EMAIL PROTECTED]> wrote in message
news:9rn4vc$8v2$[EMAIL PROTECTED]...
>
> "Glen" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > "Chia C Chong" <[EMAIL PROTECTED]> wrote in message
> news:<9rjs94$lht$[EMAIL PROTECTED]>...
> > > I have 2 var
On 29 Oct 2001 08:01:13 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
I have a Ph.D. in economics, equivalent of 3 semesters of calculus,
plus experience in stochastic calculus. Finally, took excellent
senior-level math stats sequence at NC State.
Matt.
>At 02:08 PM 10/29/01 +, Jason Ow
In reviewing some not-yet-deleted email, I came across this one, and have
no record of its error(s) having been corrected.
On Sat, 29 Sep 2001, John Jackson wrote:
> How do you describe the data that does not reside in the area
> described by the confidence interval?
>
> For example, you have a tw