Re: The False Placebo Effect

2001-05-29 Thread Elliot Cramer

Robert J. MacG. Dawson <[EMAIL PROTECTED]> wrote:
YES

: Elliot Cramer wrote:

:> I believe the point of the Danes 
:> was that a placebo should be used in
:> research but that physicians should 
:   ?"not"?
:> think that they can "cure" people with   
:> placebos;  I agree.

:   -Robert Dawson




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: The False Placebo Effect

2001-05-28 Thread Elliot Cramer

J. Williams  wrote:
: Correct me if I'm wrong, but as I understand it, you have the ability
: to alter your diastolic reading by +/-  20 mm Hg for 3 minutes 
No; I said I raised it once.  I doubt that it lasted long.  All sorts of
things raise blood pressure temporarily.  I'm told that meditation lowers
it and I wouldn't be surprised.  BP is notoriously variable.  

: I doubt if there
: would be a statistically significant difference between a placebo
: treatment and a control (no-treatment) vis a vis the diastolic reading
I disagree.  Of course there NEVER is NO TREATMENT; you just don't know
what else is going on.  A randomized placebo study controls for something
one thinks is important and randomizes everything else, exactly the
concept that Fisher first introduced.

: As I understand your position, you maintain the diastolic readings may
: be subjective as well and can be "willed" up or down even in a
: controlled lab setting.  
Certainly up, and probably down (but I haven't done a controlled experiment).

I believe the point of the Danes was that a placebo should be used in
research but that physicians should not think that they can "cure" people
with placebos;  I agree.





Re: The False Placebo Effect

2001-05-27 Thread Elliot Cramer



On Sun, 27 May 2001, Rich Ulrich wrote:

> > I don't see how RTM can explain the average change in a prepost design
> 
>  - explanation:  whole experiment is conducted on patients
> who are at their *worst*  because the flare-up is what sent 
> them to a doctor.
ok

>  - I'm not sure what that last phrase means... "both "
> 30% or so of acutely depressed patients will get quite a bit better.
depressions are self-limiting;  people get better unless they kill
themselves

> The experience of being in a research trial, by the way, seems 
> to produce a placebo effect, according to what people have told me.
> (I think that careful scientists attribute that one to the extra time
> and attention given to those subjects.)
This is the historic psychological explanation.  The interest in any
experiment is not a comparison with what the S's were before but
with what they would have been like absent the intervention, i.e., with a
placebo (or alternate treatment).






Re: The False Placebo Effect

2001-05-27 Thread Elliot Cramer

J. Williams  wrote:
: My hunch
: is the placebo group would not differ significantly on the diastolic
: reading from the no-treatment group.  Even though the placebo patients
: "think" they are being treated, I wager they can't "fake" a diastolic
: reading.  

It isn't a question of faking.  A basic principle of experimental design
going back to Fisher is to control the important variables that might
affect your results.  Giving someone a pill with the expectation that it
will help them is such a variable.





Re: The False Placebo Effect

2001-05-26 Thread Elliot Cramer

J. Williams  wrote:
In article <[EMAIL PROTECTED]> you wrote:

:>: do you suppose a person receiving a placebo can actually
:>: change his/her  diastolic reading?
:>
:>sure;  I raised mine 20 points yesterday just thinking about someone
:>misusing statistics.  cholesterol is another thing.
:>just sitting for 3 minutes before testing will lower it

: Are you sure you're not thinking about your systolic reading?
No; I raised both. I've been checking it regularly and it has been
averaging  131/72.  I  measured it  180/90.  an hour later it was back
down


: I seriously doubt if someone misusing statistics
: could hike your diastolic reading by 20 mm Hg :-))  If so, get
: treatment---fast--before you stroke out.

I take statistics seriously.
I believe that the standards you quote are for average readings over
time.  I doubt that I'm in danger.






Re: The False Placebo Effect

2001-05-25 Thread Elliot Cramer

J. Williams  wrote:
: On 25 May 2001 19:39:50 GMT, Elliot Cramer <[EMAIL PROTECTED]>
: wrote:

: do you suppose a person receiving a placebo can actually
: change his/her  diastolic reading? 

sure;  I raised mine 20 points yesterday just thinking about someone
misusing statistics.  cholesterol is another thing.

just sitting for 3 minutes before testing will lower it






Re: The False Placebo Effect

2001-05-25 Thread Elliot Cramer

Rich Ulrich <[EMAIL PROTECTED]> wrote:
:  - I was a bit surprised by the newspaper coverage.   I tend to 
: forget that most people, including scientists, do *not*  blame
: regression-to-the-mean, as the FIRST suspicious cause 
: whenever there is a pre-post design:  because they have 
: scarce heard of it.

I don't see how RTM can explain the average change in a pre-post design;
those above the pre population mean will tend to be closer to the post
population mean, but this doesn't say anything about the average change.
Any depression study is apt to show both a placebo AND a no-treatment
effect after 6 weeks.
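The claim about RTM can be checked with a small simulation (the numbers are mine, illustrative only, not from any study): when pre and post scores are noisy measures of a stable trait, the population average change is zero, yet a group selected for extreme pre scores regresses toward the mean.

```python
import random

# Illustrative simulation: pre and post are noisy measurements of a
# stable trait, so the population average change is zero, yet subjects
# selected for high pre scores move back toward the mean.
random.seed(1)
true_mean = 50.0
pre, post = [], []
for _ in range(100_000):
    trait = random.gauss(true_mean, 10)      # stable individual level
    pre.append(trait + random.gauss(0, 5))   # noisy pre measurement
    post.append(trait + random.gauss(0, 5))  # noisy post measurement

avg_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
high = [(a, b) for a, b in zip(pre, post) if a > true_mean + 10]
high_change = sum(b - a for a, b in high) / len(high)
print(round(avg_change, 2))   # near zero: no average change overall
print(round(high_change, 2))  # negative: the high-scoring group regresses
```

So RTM moves the selected subgroup, not the pre-post average of everyone.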






Re: The False Placebo Effect

2001-05-25 Thread Elliot Cramer

I am not impressed.  I don't think much of people who compare placebo with
no treatment;  seems stupid to me.  I would expect a "placebo" effect in
any case in which the evaluation is a human judgement or in which one's
expectation could reasonably be expected to affect a measured response.
Thus I think you could easily get an effect in a blood pressure
measurement but not in cholesterol.

Much ado about nothing.






Re: Intepreting MANOVA and legitimacy of ANOVA

2001-05-22 Thread Elliot Cramer

auda <[EMAIL PROTECTED]> wrote:
: Hi, all,
: In my experiment, two dependent variables were measured (say, DV1 and DV2).
: I found that when analyzed separately with ANOVA, the independent variable
: (say, IV, with two levels IV_1 and IV_2) modulated DV1 and DV2 differentially:

I don't have a clue as to what you are talking about.  ANOVA tests
interactions, main effects, and contrasts.  You have factors with levels;
that's it.





Re: 2x2 tables in epi. Why Fisher test?

2001-05-14 Thread Elliot Cramer

In sci.stat.math Juha Puranen <[EMAIL PROTECTED]> wrote:
: When N is small this is not true.  Here is a small example by Survo

the example is irrelevant; there are different tests of the same
hypothesis, e.g. do a t test with only the first 10 observations.  Both
tests are valid; the large-n test is more powerful.





Re: additional variance explained (SPSS)

2001-05-14 Thread Elliot Cramer

Dianne Worth <[EMAIL PROTECTED]> wrote:

: I have a multiple regression y=a+b1+b2+b3+b4+b5.  My Adj. R-sq is .403.  

you can't decompose adjusted R-sqs.  The only additive decomposition (and
the only decomposition that makes sense) is the stepwise decomposition of
R-sq,
adding additional variables in a specified order.  This answers a
well-defined question: how much does a set of variables add to a model
given another set of variables?  There are no other questions that can be
answered by regression tests.  The various SAS tests are all special cases
and DO NOT test the same hypothesis for a particular effect test.
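The stepwise decomposition can be sketched numerically (simulated data, illustrative only; using numpy): add predictors one at a time in a fixed order and record the increment each contributes; the increments telescope to the full-model R².

```python
import numpy as np

# Decompose R^2 by adding predictors in a fixed order; the increments
# are additive and sum to the full-model R^2 (adjusted R^2 has no such
# decomposition).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)

def r_squared(Xk, y):
    X1 = np.column_stack([np.ones(len(y)), Xk])  # add intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

increments, prev = [], 0.0
for k in range(1, 4):
    r2 = r_squared(X[:, :k], y)  # model with the first k predictors
    increments.append(r2 - prev)
    prev = r2

print(sum(increments))  # equals the full-model R^2
```

Note that the increments depend on the order chosen; each answers "what does this variable add given the ones before it."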





Re: 2x2 tables in epi. Why Fisher test?

2001-05-13 Thread Elliot Cramer

In sci.stat.consult Juha Puranen <[EMAIL PROTECTED]> wrote:
:> 
:>  Please clarify what is meant by "the distribution does not
:> involve [the fixed marginals]".  I am not clear on this:
:> the Fisher test statistic (hypergeometric upper tail probability)
:> certainly *does* depend on the fixed marginals in this
:> case -- they appear in every term in that tail sum.
sorry;  didn't say it right

: Usually the assumptions for Fisher's exact test are not true.

: What you can fix  are the row margins, or column margins or grand total
These aren't assumptions, any more than specific fixed x values are
assumptions in linear regression.

Kendall and Stuart say (under the exact test of independence in a 2x2
table):

"We may now demonstrate the remarkable result, first given by Tocher
(1950), that the exact test based on the Case I probabilities actually
gives UMPU tests for Cases II and III."

The probability statements for Case I (fixed marginals) are valid
conditional on the marginals for every set of marginals, do not involve
the nuisance parameters for Cases II and III, and thus are valid
unconditionally for all three cases.

This is exactly analogous to the regression model
y = bx + e,
where you derive the t test for b conditional on the specific x values you
observe, treating them as fixed.  The statistic (a function of the
x's) has the same t distribution regardless of what x values you observe,
even if they happen to be sampled from ANY probability distribution.
Thus the regression test for fixed x values is valid for random x values.
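A minimal sketch of the Case I computation (all margins treated as fixed), for a one-sided test on an illustrative 2x2 table; the function name and the table are mine, not from the thread:

```python
import math

# One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
# computed from the hypergeometric distribution with all margins fixed.
def fisher_one_sided(a, b, c, d):
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def hyper(k):
        # P(k successes in row 1 given the fixed margins)
        return (math.comb(col1, k) * math.comb(n - col1, row1 - k)
                / math.comb(n, row1))
    # upper tail: tables at least as extreme as the observed cell a
    return sum(hyper(k) for k in range(a, min(row1, col1) + 1))

print(fisher_one_sided(8, 2, 1, 5))
```

The same p-value applies whichever sampling scheme (Case I, II, or III) generated the table, which is the point of the Tocher result quoted above.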





Re: 2x2 tables in epi. Why Fisher test?

2001-05-10 Thread Elliot Cramer

In sci.stat.consult Ronald Bloom <[EMAIL PROTECTED]> wrote:
Herman, as usual, is absolutely correct; the validity of the Fisher test
is analogous to the validity of regression tests, which are derived
conditional on x but, since the distribution does not involve x, are valid
unconditionally even if the x's are random.


Incidentally, if one randomizes to get an exact p value, the Fisher test
is uniformly most powerful. Herman can tell us if this is for all three
cases.





Re: Simple ? on standardized regression coeff.

2001-04-24 Thread Elliot Cramer

In sci.stat.consult d.u. <[EMAIL PROTECTED]> wrote:
: I now think that the betas would have to be within [-1,+1]. Suppose you do a
: standardized regression with response Y, and have p variables (X matrix) already in.

you're wrong.
With 1 x variable, b = r*(sy/sx), so the standardized coefficient is r,
which is between -1 and 1; but with 2 or more variables everything is
partialed, and while a partial r is between -1 and 1, the partial sigmas
are not.

see Kendall and Stuart

I think this is a counter example:

 y  x1  x2
 2   1   1
-1   0  -1
-1  -1   0





Re: ANCOVA vs. sequential regression

2001-04-23 Thread Elliot Cramer

Paul Swank <[EMAIL PROTECTED]> wrote:
: An interaction is always a test of parallel
: lines whether it is factorial ANOVA, ANCOVA, regression, or profile analysis.

Not really.  Interaction was invented by R. A. Fisher for ANOVA, where
there are no lines.  That's like saying that ANOVA is regression.  It
isn't, and many people have screwed up by thinking it is.






Re: ANCOVA vs. sequential regression

2001-04-21 Thread Elliot Cramer

William B. Ware <[EMAIL PROTECTED]> wrote:
: sequential/hierarchical regression as you note below... however, ANCOVA
: has at least two assumptions that your situation does not meet.  First, it
: assumes that assignment to treatment condition is random.  Second, it
: assumes that the measurement on the covariate is independent of
: treatment.  That is, the covariate should be measured before the treatment
: is implemented.  Thus, I believe that you should implement the
: hierarchical regression... but I'm not certain what question you are

They aren't assumptions, but they do affect interpretations.  Either way
it is ANCOVA, which will answer a question; write the model comparison and
you'll see that.  Whether it's the question you want to answer is another
matter.





Re: Simple ? on standardized regression coeff.

2001-04-17 Thread Elliot Cramer

In sci.stat.consult d.u. <[EMAIL PROTECTED]> wrote:
: Hi everyone. In the case of standardized regression coefficients (beta),
: do they have a range that's like a correlation coefficient's? In other
: words, must they be within (-1,+1)? And why if they do? Thanks!

Only for 1 x variable, where it is r.  In other cases it can be anything.
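A small numerical check of this (the correlations are hypothetical, chosen by me to show the effect; using numpy): with correlated predictors, the standardized coefficients solve R_xx * beta = r_xy and can fall well outside [-1, 1].

```python
import numpy as np

# Hypothetical correlations (not from this thread): two highly
# correlated predictors.  With collinearity the standardized
# coefficients escape the [-1, 1] range of a simple correlation.
Rxx = np.array([[1.0, 0.9],
                [0.9, 1.0]])   # predictor intercorrelations
rxy = np.array([0.9, 0.5])     # correlations of each x with y
betas = np.linalg.solve(Rxx, rxy)
print(betas)                   # roughly [2.37, -1.63]
```

The second coefficient even flips sign relative to its zero-order correlation with y, a classic suppression pattern.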







Re: normal approx. to binomial

2001-04-09 Thread Elliot Cramer

James Ankeny <[EMAIL PROTECTED]> wrote:
:  My question is, are they saying that the sampling
: distribution of a binomial rv is approximately normal for large n?
: 
It's a special case of the CLT: for a binary variable with probability p,
the sum of n observations is binomial, and its distribution is
approximately normal for large n.
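As a quick check of the approximation (n and p are illustrative), compare an exact binomial tail probability with its continuity-corrected normal counterpart:

```python
import math

# Compare an exact Binomial(n, p) tail probability with the normal
# approximation suggested by the CLT (with continuity correction).
n, p = 100, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

def binom_cdf(k):
    # exact P(X <= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

exact = binom_cdf(35)
approx = norm_cdf((35 + 0.5 - mu) / sigma)  # continuity correction
print(round(exact, 4), round(approx, 4))    # the two agree closely
```

The usual rule of thumb is that the approximation is serviceable when np and n(1-p) are both reasonably large.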






Re: Statistics teacher/professional Needed $$$$$$$$$$$$$

2001-03-31 Thread Elliot Cramer

Marina G. Roussou <[EMAIL PROTECTED]> wrote:
: A Statistics teacher/tutor/professional is needed to complete an 11 lesson
: assignment paper. Each lesson comprises approximately 5-10 questions.

what is this for???







Re: stan error of r

2001-03-29 Thread Elliot Cramer

Elliot Cramer <[EMAIL PROTECTED]> wrote:
: dennis roberts <[EMAIL PROTECTED]> wrote:
: : anyone know off hand quickly ... what the formula might be for the standard 
: : error for r would be IF the population rho value is something OTHER than zero?

correction: the variance is
 (1/n)*(1-rho^2)^2
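Written out, the large-sample result behind this (with the standard error as its square root) is:

```latex
\operatorname{Var}(r) \approx \frac{(1-\rho^2)^2}{n},
\qquad
\operatorname{SE}(r) \approx \frac{1-\rho^2}{\sqrt{n}}
```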





Re: stan error of r

2001-03-29 Thread Elliot Cramer

dennis roberts <[EMAIL PROTECTED]> wrote:
: anyone know off hand quickly ... what the formula might be for the standard 
: error for r would be IF the population rho value is something OTHER than zero?

It's (1/n)*(1-rho^2)^2 





Re: Data Reduction

2001-03-27 Thread Elliot Cramer

Dianne Worth <[EMAIL PROTECTED]> wrote:
: Question:  Should I perform Principal Components and then Factor 
: Analysis to determine the new constructs?  I (barely) use SAS and can never 

probably not






Re: Most Common Mistake In Statistical Inference

2001-03-22 Thread Elliot Cramer



On Fri, 23 Mar 2001, Alan McLean wrote:

> The second sentence here ensures that generalisability to a population
> IS an issue for statistics. And a big issue, usually overlooked.
> 
It is not a statistical issue with a non-random sample; it is a matter of
experimental judgement.

> For that matter, many applications of statistics do use sampling, not
> random assignment (market surveys, for example) and in these
> applications Dennis' observtion is spot on.

I was referring to inferential statistics rather than estimating
probabilities






Re: Most Common Mistake In Statistical Inference

2001-03-22 Thread Elliot Cramer

Given random assignment, the generalizability of results to a population
is not an issue for statistics.  It's a question of what a plausible
population is, given the procedure for obtaining subjects.


On Thu, 22 Mar 2001, dennis roberts wrote:
> 
> using and interpreting inference procedures under the assumption of SRS 
>  simple random samples ... when they just can't be
> 






Re: Most Common Mistake In Statistical Inference

2001-03-21 Thread Elliot Cramer

W. D. Allen Sr. <[EMAIL PROTECTED]> wrote:

: Either the Chi Square or S-K test, as appropriate, should be conducted to
: determine normality before interpreting population percentages using
: standard deviations.

I don't understand why one would want to use the normal distribution for
interpreting population percentages;  I've never wanted to.









Re: misuses of statistics

2001-03-16 Thread Elliot Cramer

On Fri, 16 Mar 2001, Rich Ulrich wrote:

> Elliot,
> 
> It appears to me that Arnold Barnett is guilty 
> of a serious misuse of statistical argument.
> 
> I don't think readers are apt to be misled by the media
> reports;  there is a very LOW rate of capital punishment 
> in the US, so the likelihoods are (indeed) essentially 
> the same as Odds Ratios.

This is not a RAW difference which, I'm sure you agree, is not relevant to
anything.  It's the same problem as comparing university salaries between
men and women.  Men are much more likely to be full professors for
historical reasons (40 years ago few women got PhDs).  Faculty rank is
relevant to salary, as are a host of other variables, and must be taken
into account.  In the death penalty situation the nature of the homicide
must be taken into account (you don't get the death penalty for running
over someone with a car).  The odds ratio quoted is for a model that
includes a host of relevant variables IN ADDITION to race.  I don't think
that there is ANY good statistical evidence of racial discrimination in
the death penalty, but that's another issue.  The point here is that there
are many situations (such as the one illustrated) for which a death
penalty has a high probability, which is VERY different from the odds
ratio.

> 
> When I saw mention of these data a few years ago, my first tendency
> was to doubt the "what-if."  P[death sentence] = 0.99?  not
> generally  Rates of executions are low, as I said earlier.
Not for serial killers, or for the type of homicides which lead to the
death penalty.

> 
>  - Now, the author is asserting that 1% versus 4%  is far different
> from 99% versus 96%.  Statisticians should be leery of that.  
> 
NO; I think he is asserting that 20% vs 80% is far different

>  - the judges and journalists missed the word; they missed the math
> that would have made the word important; so they ended up with the
> right conclusion.
> 
I don't think they ended up with the right conclusion at all.  Heinous
murderers tend to get the death penalty whether they murder blacks or
whites.

The point of the article is that the Supreme Court apparently understood
the odds ratio to be a probability ratio.  The US district court did not
make this mistake and issued a devastating critique of the Baldus study,
which used linear regression instead of logistic regression, among other
things.  It was VERY inadequate in dealing with the nature of the crime,
which is the most important consideration in the death penalty.

Interestingly, most murders are within race; blacks murder blacks and
whites murder whites.  Baldus finds no discrimination based on race of the
murderer, only of the victim.






misuses of statistics

2001-03-15 Thread Elliot Cramer

Someone had wanted a source for examples;  I found this looking up Arnold
Barnett in Google.com.  He has other interesting examples.

From http://209.58.177.220/articles/oct94/barnett.html
Arnold Barnett

The Odds of Execution
A powerful example of the first problem arose in 1987, when the
U.S. Supreme Court issued its controversial McClesky v. Kemp ruling
concerning racial discrimination in the imposition of the death
penalty. The Court was presented with an extensive study of Georgia death
sentencing, the main finding of which was explained by the New York Times
as follows: "Other things being as equal as statisticians can make them,
someone who killed a white person in Georgia was four times as likely to
receive a death sentence as someone who killed a black." 
The Supreme Court understood the study the same way. Its majority opinion
noted that "even after taking account of 39 nonracial variables,
defendants charged with killing white victims were 4.3 times as likely to
receive a death sentence as defendants charged with killing blacks." 

But the Supreme Court, the New York Times, and countless other newspapers
and commentators were laboring under a major misconception. In fact, the
statistical study in McClesky v. Kemp never reached the "factor of
four" conclusion so widely attributed to it. What the analyst did conclude
was that the odds of a death sentence in a white-victim case were 4.3
times the odds in a black-victim case. The difference between
"likelihood" and "odds" (defined as the likelihood that an event will
happen divided by the likelihood that it will not) might seem like a
semantic quibble, but it is of major importance in understanding the
results. 

The likelihood, or probability, of drawing a diamond from a deck of cards,
for instance, is 1 in 4, or 0.25. The odds are, by definition, 0.25/0.75,
or 0.33. Now consider the likelihood of drawing any red card (heart or
diamond) from the deck. This probability is 0.5, which corresponds to
odds of 0.5/0.5, or 1.0. In other words, a doubling of probability
from 0.25 to 0.5 results in a tripling of the odds. 

The death penalty analysis suffered from a similar, but much more serious,
distortion. Consider an extremely aggravated homicide, such as the torture
and killing of a kidnapped stranger by a prison escapee. Represent as PW
the probability that a guilty defendant would be sentenced to death if the
victim were white, and as PB the probability that the defendant would
receive the death sentence if the victim were black. Under the "4.3 times
as likely" interpretation of the study, the two values would be related by
the equation:

PW = 4.3 x PB

If, in this extreme killing, the probability of a death sentence is very
high, such that PW = 0.99 (that is, 99 percent), then it would follow that
PB = 0.99/4.3 = 0.23. In other words, even the hideous murder of a black
would be unlikely to evoke a death sentence. Such a disparity would
rightly be considered extremely troubling. 

But under the "4.3 times the odds" rule that reflects the study's actual
findings, the discrepancy between PW and PB would be far less
alarming. This yields the equation:

PW / (1 - PW) = 4.3 x [PB / (1 - PB)]

If PW = 0.99, the odds ratio in a white-victim case is 0.99/0.01; in other
words, a death sentence is 99 times as likely as the alternative. But even
after being cut by a factor of 4.3, the odds ratio in the case of a black
victim would take the revised value of 99/4.3 = 23, meaning that the
perpetrator would be 23 times as likely as not to be sentenced to
death. That is:

PB / (1 - PB) = 23

Work out the algebra and you find that PB = 0.96. In other words, while a
death sentence is almost inevitable when the murder victim is white, it is
also so when the victim is black - a result that few readers of the "four
times as likely" statistic would infer. While not all Georgia killings are
so aggravated that PW = 0.99, the quoted study found that the heavy
majority of capital verdicts came up in circumstances when PW, and thus
PB, is very high. 
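Barnett's arithmetic can be reproduced in a few lines (the function names are mine, not the article's): convert the white-victim probability to odds, cut by the 4.3 odds ratio, and convert back.

```python
# Reproducing the article's calculation: an odds ratio of 4.3 applied
# to a high baseline probability barely moves the probability.
def prob_from_odds(odds):
    return odds / (1 + odds)

def black_victim_prob(pw, odds_ratio=4.3):
    odds_w = pw / (1 - pw)        # 0.99 -> odds of 99
    odds_b = odds_w / odds_ratio  # cut by the odds ratio -> about 23
    return prob_from_odds(odds_b)

print(round(black_victim_prob(0.99), 2))  # 0.96, as in the article
```

Under the mistaken "4.3 times as likely" reading the same input would give 0.99/4.3 = 0.23, which is the whole distortion the article describes.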

None of this is to deny that there is some evidence of race-of-victim
disparity in sentencing. The point is that the improper interchange of two
apparently similar words greatly exaggerated the general understanding of
the degree of disparity. Blame for the confusion should presumably be
shared by the judges and the journalists who made the mistake and the
researchers who did too little to prevent it. 

(Despite its uncritical acceptance of an overstated racial disparity, the
Supreme Court's McClesky v. Kemp decision upheld Georgia's death
penalty. The court concluded that a defendant must show race prejudice in
his or her own case to have the death sentence countermanded as
discriminatory.) 






Re: Avoiding Linear Dependencies in Artificial Data Sets

2001-03-12 Thread Elliot Cramer

I'm not clear on what your design is, but it seems that
the problem is in the between-S effect, not within.  Note that you only
have 4 df within and 4 dependent variables.







Re: multivariate normality

2001-03-08 Thread Elliot Cramer

yogab <[EMAIL PROTECTED]> wrote:
: in particular or comments about mutivariate testing ? or any
: better way to do mutivariate normality testing ?

why do you want to test it?





Re: algorithm cross correlation

2001-02-15 Thread Elliot Cramer

Hanke <[EMAIL PROTECTED]> wrote:
: Does anyone know an algorithm for cross-correlation between two time
: series

how about something like

Do 10 i = 1, n-2
   r1(i) = corr(a(1), b(i), n-i+1)
10 r2(i) = corr(b(1), a(i), n-i+1)

where corr(x, y, m) computes the r for the m observations starting at x
and y
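A runnable version of the same idea, sketched in Python (names and data are illustrative): correlate the two series at each lag, shifting one against the other.

```python
# Lagged cross-correlation between two equal-length series.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cross_corr(a, b, max_lag):
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:               # b shifted back relative to a
            out[lag] = corr(a[:len(a) - lag], b[lag:])
        else:                      # a shifted back relative to b
            out[lag] = corr(a[-lag:], b[:len(b) + lag])
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
b = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # b = a + 1
print(cross_corr(a, b, 2)[0])       # 1.0 at lag zero
```

Because fewer observations enter the correlation at larger lags, estimates at the extreme lags are noisier, which is why max_lag is usually kept small relative to the series length.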





Re: Recommend multiple regression text please

2001-02-07 Thread Elliot Cramer

In sci.stat.edu Jim Kroger <[EMAIL PROTECTED]> wrote:

I think you need a statistician rather than a book





Re: Levels of measurement.

2001-02-05 Thread Elliot Cramer

Rich Ulrich <[EMAIL PROTECTED]> wrote:
: I agree, you have been thinking about it "too much."  
MUCH too much

: I think you have to take Stevens's hierarchy of scaling more lightly.
even to the point of forgetting it


