Linda wrote in message <[EMAIL PROTECTED]>...
>I want to generate a series of random variables, X, with an exponential
>PDF with a given mean, MU. However, I only want X to lie within some
>specified lower and upper limits, say between 0 and 150, i.e. reject
>anything outside this range. Does anyone ha
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Linda) wrote:
>I want to generate a series of random variables, X, with an exponential
>PDF with a given mean, MU. However, I only want X to lie within some
>specified lower and upper limits, say between 0 and 150, i.e. reject
>anything outside this
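One standard answer, sketched below under my own naming (MU = 50 is an illustrative mean; the [0, 150] range is from the question): instead of rejecting out-of-range draws, map uniforms through the exponential CDF restricted to the allowed interval, which gives the same distribution with no wasted draws.

```python
import numpy as np

def truncated_exponential(mu, lo, hi, size, rng=None):
    """Draw exponential(mean=mu) variates restricted to [lo, hi].

    Uses the inverse-CDF trick: map Uniform(0,1) onto the slice of the
    exponential CDF between F(lo) and F(hi), equivalent in distribution
    to rejecting values outside [lo, hi].
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    f_lo = 1.0 - np.exp(-lo / mu)   # CDF at the lower limit
    f_hi = 1.0 - np.exp(-hi / mu)   # CDF at the upper limit
    return -mu * np.log(1.0 - (f_lo + u * (f_hi - f_lo)))

x = truncated_exponential(mu=50.0, lo=0.0, hi=150.0, size=100_000, rng=0)
```

A plain rejection loop gives identical results; the inverse-CDF form is just cheaper when the truncation region has low probability.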
George Marsaglia wrote in message
<0l7b8.42092$[EMAIL PROTECTED]>...
>
.. chunk deleted
>
>The Monty Python method is not quite as fast as the Ziggurat.
>
>Some may think that Alan Miller's somewhat vague reference to
>a source for the ziggurat article suggests disdain. The source is
>Journ
Art Kendall <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I tend to be more concerned with the "apparent randomness" of the results than with
> the speed of the algorithm.
>
> As a thought experiment, what is the cumulative time difference in a run using th
[ snip, previous problem]
>
> This is similar to a problem I have come across: the measurement of a
> serum value against exposure.
> My theory is that they are correlated. But the data says that they
> have an R^2 of 0.02 even though the p-value for the beta is p=1E-40
> (ie. zero).
>
> As y
and will you not, by this approach, wind up making a _lot_ of pairwise comparisons,
with all the implications that have recently been discussed even here at edstat?
Messing with weakly formed data rarely strengthens it. I love some transformations, but
take them for what they are.
Jay
Thomas
In my book, if they simply happen to have been
recorded or measured in the wrong units/scale -- so they
will be fixed by transformation. You probably ought to be
re-thinking your whole scientific hypothesis and test, if
your problem is worse than that.
TS>
> If the data are approxima
On Thu, 14 Feb 2002 23:48:02 +0100, "Matthias" <[EMAIL PROTECTED]>
wrote:
> Hello,
>
> It would be nice if someone could give me some advice with regard to the
> following problem:
>
> I would like to compare the means of two independent numerical sets of data
> whether they are significantly differ
In SPSS output, ignore the lines for equal variances and use the lines for
unequal variances.
Matthias wrote:
> Hello,
>
> It would be nice if someone could give me some advice with regard to the
> following problem:
>
> I would like to compare the means of two independent numerical sets of data
> w
I tend to be more concerned with the "apparent randomness" of the results than with
the speed of the algorithm.
As a thought experiment, what is the cumulative time difference in a run using the
fastest vs the slowest algorithm? A whole minute? A second? A fractional second?
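The thought experiment is easy to run. A rough sketch (timings are machine-dependent, and the two generators here are just convenient stand-ins for "fast" and "slow"):

```python
import time
import numpy as np

def time_generator(draw, n=1_000_000):
    """Seconds taken to produce n variates with the given callable."""
    t0 = time.perf_counter()
    draw(n)
    return time.perf_counter() - t0

rng = np.random.default_rng(1)
# Library exponential generator vs a naive inverse-transform, -log(U).
t_builtin = time_generator(lambda n: rng.exponential(size=n))
t_inverse = time_generator(lambda n: -np.log(rng.uniform(size=n)))
# The "cumulative difference" for a million draws, in seconds:
gap = abs(t_builtin - t_inverse)
```

On typical hardware both calls finish in tens of milliseconds, so for most runs the cumulative difference is indeed a fraction of a second.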
Glen wrote:
> "A
Excuse the bad grammar or typo noted below... It's been a "long
morning" already, and it's still not 9 am...
:)
Bill
On Fri, 15 Feb 2002, William B. Ware wrote:
> What are your samples sizes? If there are equal or nearly so, the t-test
*they*
> is robust wit
Marsaglia's ziggurat and MCW1019 generators are
available in the R package SuppDists. The gcc
compiler was used.
George Marsaglia wrote:
>
> Glen <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > "Alan Miller" <[EMAIL PROTECTED]> wrote in message
> news:
What are your samples sizes? If there are equal or nearly so, the t-test
is robust with regard to unequal variances.
On the other hand, you could just read the part of the output that reports
results for "equal variances not assumed." You might also consider using
a nonparametric procedure such
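For readers without that output handy: the "equal variances not assumed" line is Welch's t with Satterthwaite degrees of freedom. A numpy-only sketch (the two samples below are invented purely for illustration):

```python
import numpy as np

def welch_t(x, y):
    """Welch's t statistic and Satterthwaite df (no equal-variance assumption)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny   # squared SEs of each mean
    t = (x.mean() - y.mean()) / np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (nx - 1) + vy ** 2 / (ny - 1))
    return t, df

t, df = welch_t([19.8, 20.4, 19.6, 17.8, 18.5, 18.9, 18.3, 18.9],
                [28.2, 26.6, 20.1, 23.3, 25.2, 22.1, 17.7, 27.6])
```

The df comes out fractional and smaller than the pooled n1 + n2 - 2, which is exactly the "more conservative" adjustment the unequal-variance line reports.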
Glen <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "Alan Miller" <[EMAIL PROTECTED]> wrote in message
news:...
> > The fastest way to generate random normals and exponentials is to use George
> > Marsaglia's ziggurat algorithm.
Glen wrote in message ...
>"Alan Miller" <[EMAIL PROTECTED]> wrote in message
news:...
>> The fastest way to generate random normals and exponentials is to use
George
>> Marsaglia's ziggurat algorithm.
>
>I've seen both ziggurat and Monty Python approaches claimed as
There's a multiple comparison procedure called Games-Howell that is similar to the
Aspin-Welch-Satterthwaite statistic in that it has no assumption about variances.
-Original Message-
From: "Thomas Souers" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: Thu, 14 Feb 2002 16:47:05 -0800
"Matthias" <[EMAIL PROTECTED]> wrote in message
news:...
> Hello,
>
> It would be nice if someone could give me some advice with regard to the
> following problem:
>
> I would like to compare the means of two independent numerical sets of data
> whether they are signifi
it's called the Behrens-Fisher problem ... there is nothing that says that
population variances HAVE to be equal
essentially what you do is to be a bit more conservative in your degrees of
freedom ... most software packages do this as the default ... or at least
give you the choice between mak
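The "more conservative degrees of freedom" most packages compute is the Welch-Satterthwaite approximation; stated here from memory, so check your package's manual:

```latex
\nu \;\approx\;
\frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^{2}}
     {\dfrac{\left(s_1^2/n_1\right)^{2}}{n_1 - 1}
      + \dfrac{\left(s_2^2/n_2\right)^{2}}{n_2 - 1}}
```

The resulting nu is generally fractional and never exceeds the pooled df n1 + n2 - 2.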
On 13 Feb 2002 09:48:41 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
> At 09:21 AM 2/13/02 -0600, Mike Granaas wrote:
> >On Fri, 8 Feb 2002, Thomas Souers wrote:
> > >
> > > 2) Secondly, are contrasts used primarily as planned comparisons? If
> > so, why?
> > >
> >
> >I would second those wh
In article <[EMAIL PROTECTED]>,
Shahram Hosseini <[EMAIL PROTECTED]> wrote:
>Hi everybody,
>The discrete random process n(t), uniformly distributed in the interval
>(-0.5, 0.5), is filtered by a first-order AR system to generate the
>sequence s(t)=a*s(t-1)+n(t). What is the probability density func
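The stationary density of such a uniform-driven AR(1) has, as far as I know, no simple closed form in general, but it is easy to explore by simulation; a sketch under my own parameter choices (a = 0.9 is arbitrary):

```python
import numpy as np

def ar1_uniform(a, n, burn=1_000, seed=0):
    """Simulate s(t) = a*s(t-1) + n(t) with n(t) ~ Uniform(-0.5, 0.5)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-0.5, 0.5, size=n + burn)
    s = np.empty(n + burn)
    s[0] = noise[0]
    for t in range(1, n + burn):
        s[t] = a * s[t - 1] + noise[t]
    return s[burn:]          # drop the transient so the sample is ~stationary

s = ar1_uniform(a=0.9, n=200_000)
```

The first two moments do have closed forms: mean 0 and variance Var(n)/(1 - a^2) = (1/12)/(1 - a^2), which the simulated sample should reproduce.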
Rishabh Gupta <[EMAIL PROTECTED]> wrote in message
news:a4eje9$ip8$[EMAIL PROTECTED]...
> Hi All,
> I'm a research student at the Department Of Electronics, University Of
> York, UK. I'm working on a project related to music analysis and
> classification.
===
Hi all,
I received numerous replies to my query. I can't thank everyone
individually, so I want to thank everyone who has replied. I am now looking
through the information and links that you have provided.
Many Thanks For All Your Help!!
Rishabh
"Rishabh Gupta" <[EMAIL PROTECTED]> wrote in me
can you be a bit more specific here? F tests AND t tests are used for a
variety of things
give us some context and perhaps we can help
at a minimum of course, one is calling for using a test that involves
looking at the F distribution for critical values ... the other calls for
using a t dis
"Richard Wright" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Genres are presumably groups. So linear combinations of variables that
> best separate the genres would be more effectively found by linear
> canonical variates analysis (aka discriminant analysis)
You might consider a form of PLS - your measurements may be highly correlated,
and only a very few can do you any good. You have a great many output vars,
and few enough inputs.
Jay
Rishabh Gupta wrote:
> Hi All,
> I'm a research student at the Department Of Electronics, University Of
> Yo
Maybe [Levin, Alex, working paper, http://www.gloriamundi.org/var/wps.html]
will help you?
Bests,
AL
"Chia C Chong" <[EMAIL PROTECTED]> wrote in message
news:a4f57c$su1$[EMAIL PROTECTED]...
> Hi!
>
> I want to generate a set of random numbers from a joint PDF,f(
Genres are presumably groups. So linear combinations of variables that
best separate the genres would be more effectively found by linear
canonical variates analysis (aka discriminant analysis).
Richard Wright
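For a two-group illustration of this suggestion, Fisher's linear discriminant can be written in a few lines of numpy; the "genre" data below are synthetic stand-ins:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher's linear discriminant for two groups: the weight vector
    w = Sw^{-1} (mean1 - mean2) maximizing between-group separation
    relative to within-group scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-group scatter matrix (sum of squared deviations)
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1)
    return np.linalg.solve(Sw, m1 - m2)

rng = np.random.default_rng(2)
genre_a = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # toy feature vectors
genre_b = rng.normal([2.0, 0.5], 1.0, size=(200, 2))
w = fisher_direction(genre_a, genre_b)
scores_a, scores_b = genre_a @ w, genre_b @ w          # 1-D discriminant scores
```

With more than two genres, canonical variates generalize this to the leading eigenvectors of Sw^{-1} Sb.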
On Thu, 14 Feb 2002 03:18:48 GMT, "Jim Snow" <[EMAIL PROTECTED]>
wrote:
snipped
>
"Rishabh Gupta" <[EMAIL PROTECTED]> wrote in message
news:a4eje9$ip8$[EMAIL PROTECTED]...
> Hi All,
> I'm a research student at the Department Of Electronics, University Of
> York, UK. I'm working on a project related to music analysis and
> classification. I am at
Chia C Chong wrote in message ...
>Hi!
>
>I want to generate a set of random numbers from a joint PDF,f(A,B) in which
>f(A,B)=f(A|B)f(B). f(A|B) is a Gaussian PDF with zero mean, MU, and
>stdev, SIGMA, varying with B according to a Weibull equation, and f(B) is an
>exponential PDF. How can I do that?
>
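Since f(A,B) factors as f(A|B)f(B), one can sample B first and then A given B. A sketch; the Weibull-type link between SIGMA and B is a placeholder (with made-up shape/scale constants), because the original equation is not quoted:

```python
import numpy as np

def sample_joint(n, mu_b, seed=0, shape=1.5, scale=2.0):
    """Sample (A, B) with B ~ Exponential(mean=mu_b) and
    A | B ~ Normal(0, sigma(B)); sigma(B) here follows a Weibull-CDF
    curve in B (shape/scale are hypothetical placeholders)."""
    rng = np.random.default_rng(seed)
    b = rng.exponential(mu_b, size=n)                # draw B first
    sigma = 1.0 - np.exp(-(b / scale) ** shape)      # placeholder Weibull link
    a = rng.normal(0.0, sigma)                       # then A conditional on B
    return a, b

a, b = sample_joint(100_000, mu_b=3.0)
```

Any functional form for SIGMA(B) drops into the same two-step scheme unchanged.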
Rich Ulrich wrote:
>
> On Mon, 11 Feb 2002 13:56:46 +0100, "nikolov"
> <[EMAIL PROTECTED]> wrote:
>
> > hello,
> >
> > I want to test the difference between two proportions. The problem is that
> > some elements of these proportions are dependent (I cannot isolate them).
> > That is, the t-stat
Classification is a specialized field. Go to
http://www.pitt.edu/~csna/
and click on
Although this is the Classification Society of North America, members of the
British Classification Society also follow it.
SPSS should be able to handle what you want to do. However, you need
face-to-face consul
In sci.stat.math Rishabh Gupta <[EMAIL PROTECTED]> wrote:
[ snip ]
It seems that you are new to the field of pattern recognition.
In that case, you may want to check out the classic book
"Pattern Classification" by Duda, Hart and Stork.
There is a second edition that came out in 2001. It is a c
"Rishabh Gupta" <[EMAIL PROTECTED]> wrote in
news:a4eje9$ip8$[EMAIL PROTECTED]:
> Hi All,
> I'm a research student at the Department Of Electronics, University Of
> York, UK. I'm working on a project related to music analysis and
> classification. I am at t
"Robert J. MacG. Dawson" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> "Wuensch, Karl L" wrote:
> >
> > How about simply using the M.A.D.? No, not the mad spouse who noticed she
> > was getting short-shrimped, rather the mean absolute deviation of individual
> > shrimp fro
Thank you. I will search for it.
max
---
"kjetil halvorsen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> A book which seems to comply with your requirements is
> P Whittle's "Probability based on expectation", in its 4th edition from
> Springer. Kjeti
At 09:21 AM 2/13/02 -0600, Mike Granaas wrote:
>On Fri, 8 Feb 2002, Thomas Souers wrote:
> >
> > 2) Secondly, are contrasts used primarily as planned comparisons? If
> so, why?
> >
>
>I would second those who've already indicated that planned comparisons are
>superior in answering theoretical que
Thomas Souers wrote:
>
> Hello, I have two questions regarding multiple comparison tests for a one-way ANOVA
>(fixed effects model).
>
> 1) Consider the "Protected LSD test," where we first use the F statistic to test the
>hypothesis of equality of factor level means. Here we have a type I err
On Fri, 8 Feb 2002, Thomas Souers wrote:
>
> 2) Secondly, are contrasts used primarily as planned comparisons? If so, why?
>
I would second those who've already indicated that planned comparisons are
superior in answering theoretical questions and add a couple of comments:
1) an omnibus test
A book which seems to comply with your requirements is
P Whittle's "Probability based on expectation", in its 4th edition from
Springer. Kjetil Halvorsen
maximus wrote:
>
> Thank you. I will look for the book and others with advanced level of
> difficulty.
>
> max
>
> -
>
> "Rich Ulric
Hola!
For a more robust test, which does not assume equal centers, use the
Fligner-Killeen test.
Kjetil Halvorsen
Glen Barnett wrote:
>
> Rich Ulrich <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > On Sat, 09 Feb 2002 16:59:34 GMT, Johannes Fichtinger
> > <[
[EMAIL PROTECTED] wrote:
>
> WHY WAIT FOR THE SUNDAY PAPER?
> >
Huh... and here I thought the poster managed to get him/herself
excommunicated!
> low-fat vegan diet" would be close). However, the incidence of heterozygous
> familial hypercholesterolemia is only 1:500,000, so this exposure contributes
> little to the variance in serum cholesterol in the population; its r^2 would
> be small.
>
> -Jay
Thanks,
This is similar to a problem
In article ,
Alex Levin <[EMAIL PROTECTED]> wrote:
>Hi all,
>Does anybody know if a Generalized Gamma (GG) distribution is infinitely
>divisible for ALL power parameters Nu: -inf < Nu < +inf? (A GG r.v. is the
>Nu-th power of a Gamma-distributed r.v. G(Alpha,Beta):
>GG(Nu,Alpha,Beta)=
Is there any optimality or other reason for the choice of the two distances
below? There are surely many other possibilities (e.g. Mallows' distance),
which, however, might not be as appropriate, but at the moment I do not see
any reasoning. Could you please comment/advise on this?
TIA
Robert N
nikolov <[EMAIL PROTECTED]> wrote:
> I want to test the difference between two proportions. The problem is that
> some elements of these proportions are dependent (I cannot isolate them).
> That is, the t-statistic does not work. What can I do? Do other kinds of
> tests exist? Is there a book o
J. Random Loser in Dnepropetrowsk wrote:
>
> The "Listsoft & Co" company offers save your money.
> We prepositionals the softwere.
Ah. That really fills me with confidence.
> The are :
>
> 1. MS WINDOWS 2000 PROFESSIONAL + (SERVICE PACK 2)- 1C
Herman Rubin wrote:
>
>
> I would tend to reject any book which does data analysis;
> I consider cookbook statistics to be putting a loaded gun
> in the hands of someone who is totally ignorant about
> guns; not necessarily an idiot, as the idiot cannot learn.
> For data analysis, change "gu
Thank you. I will look for the book and others with advanced level of
difficulty.
max
-
"Rich Ulrich" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Sat, 9 Feb 2002 01:17:14 +0900, "maximus" <[EMAIL PROTECTED]> wrote:
>
> > It may seem odd with the qu
Rich Ulrich <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Sat, 09 Feb 2002 16:59:34 GMT, Johannes Fichtinger
> <[EMAIL PROTECTED]> wrote:
>
> > Dear NG!
> > I have been searching for a description of the Ansari-Bradley dispersion
test up to now for
> > ana
The "Listsoft & Co" company offers save your money.
We prepositionals the softwere. The are :
1. MS WINDOWS 2000 PROFESSIONAL + (SERVICE PACK 2)- 1CD -$15
2. MS WINDOWS 2000 SERVER + (SERVICE PACK 2)- 1CD -$15
3. MS WINDOWS 98 SE 1CD -$13
4. MS WINDOWS MIL
On Sat, 9 Feb 2002 01:17:14 +0900, "maximus" <[EMAIL PROTECTED]> wrote:
> It may seem odd with the question in the title, but I want to read and have
> some more
> practice with (applied) math with expectation/variance, which is in many
> forms, for example
> with max/min, integration (inside or
On Sat, 09 Feb 2002 16:59:34 GMT, Johannes Fichtinger
<[EMAIL PROTECTED]> wrote:
> Dear NG!
> I have been searching for a description of the Ansari-Bradley dispersion test
> up to now for analysing a psychological research. I am searching for a
> description of this test, specially a descrip
It would be very important to get a good background in logic and
epistemology. A wide liberal arts background that taught critical
thinking in general would be invaluable.
In order to put statistics in perspective, a good self-teaching effort
would be to scan the abstracts for the Joint Statist
In article ,
Michael Hochster <[EMAIL PROTECTED]> wrote:
>Here are my thoughts on this. The most important mathematical
>requirements are calculus, real analysis, and linear algebra.
>You need to know these topics thoroughly. Whatever
>textbooks are used for under
Here are my thoughts on this. The most important mathematical
requirements are calculus, real analysis, and linear algebra.
You need to know these topics thoroughly. Whatever
textbooks are used for undergraduate math majors wherever you
are are probably fine. You also need to know non-measure
t
Wuzzy <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> > And that sounds impossible. I suspect a programming error.
> >
> > -Jay
>
> you're right i programmed a food database incorrectly but i've redone
> it and yep the correlation was only 0.20 for kcal or so.
picture..
Much like the "gold standard" method of deattenuation... it didn't work.
It is interesting to re-assign food frequencies to people by using
that which is predicted by 24hr...
Anyway, it was fun to try..
> And that sounds impossible. I suspect a programming error.
>
> -Jay
You're right, I programmed a food database incorrectly, but I've redone
it and yep, the correlation was only 0.20 for kcal or so.
It is hard to program a database *into* another database; easy to make
errors..
i've made many err
First of all, thank you all for replying to my original question. Out
of curiosity, at what textbook level should one's understanding of
analysis, linear algebra, statistics, probability, etc. be upon
entering a typical PhD program? I am trying to gauge which gaps in
my background I need to fil
Wuzzy <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi Rich, okay i'll post the reason why I ask:
>
> It is because I am validating a 24hr dietary recall questionnaire
> using
> a food frequency questionnaire:
It doesn't make sense to do that.
> Amazingly I
Well,
if this had happened in my house, and my wife observed what I was doing, the
statistic to look at would be spousal homicide in North Carolina,
2002.
reg
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Karl L. Wuensch
Hi
On 8 Feb 2002, Thomas Souers wrote:
> 2) Secondly, are contrasts used primarily as planned
> comparisons? If so, why?
There are a great many possible contrasts even with a relatively
small number of means. If you examine the data and then decide
what contrasts to do, then you have in some i
At 10:37 AM 2/8/02 -0800, Thomas Souers wrote:
>2) Secondly, are contrasts used primarily as planned comparisons? If so, why?
well, in the typical rather complex study ... all pairs of possible mean
differences (as one example) are NOT equally important to the testing of
your theory or notions
You have to keep in mind that the LSD is concerned with familywise error
rate, which is the probability that you will make at least one
type I error in your set of conclusions. For the familywise error rate, 3
errors are no worse than 1.
Suppose that you have three groups. If the omnibus null is t
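The familywise rate is easy to see by simulation. A sketch with three equal-mean groups and unadjusted pairwise z-tests (group size and repetition count are arbitrary choices of mine):

```python
import numpy as np

def familywise_rate(n_groups=3, n_per=50, reps=20_000, seed=3):
    """Estimate P(at least one pairwise z-test rejects) when all group
    means are truly equal -- the familywise type I error rate."""
    rng = np.random.default_rng(seed)
    crit = 1.959964                       # two-sided 5% normal cutoff
    hits = 0
    for _ in range(reps):
        # Group means under H0: each ~ N(0, 1/n_per)
        means = rng.normal(0, 1 / np.sqrt(n_per), size=n_groups)
        se = np.sqrt(2 / n_per)           # SE of a difference of two means
        diffs = [abs(means[i] - means[j]) / se
                 for i in range(n_groups) for j in range(i + 1, n_groups)]
        hits += max(diffs) > crit
    return hits / reps

rate = familywise_rate()
```

The estimate lands well above the per-test 5%, approaching the 1 - 0.95^3 (about 14%) figure for three independent tests; the shared group means make the comparisons correlated, which pulls it down slightly.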
Hi Rich, okay i'll post the reason why I ask:
It is because I am validating a 24hr dietary recall questionnaire
using
a food frequency questionnaire:
As someone else pointed out, I got an error, and also a perfect correlation
for Pearson's.
It is much more complicated than this, but that is the scoo
On 5 Feb 2002 18:01:15 -0800, [EMAIL PROTECTED] (Wuzzy) wrote:
> > You made a model with the "exact same exposure in different units",
> > which is something that no one would do,
>
> Hehe, translation is don't post messages until you've thought them
> through.
>
> Anyway, turns out that the a
In article <[EMAIL PROTECTED]>,
Wuzzy <[EMAIL PROTECTED]> wrote:
>Is it possible that multicollinearity can force a correlation that
>does not exist?
>I have a very large sample of n=5,000
>and have found that
>disease= exposure + exposure + exposure + exposure R^2=0.45
>where all 4 exposures a
In article ,
Francis Dermot Sweeney <[EMAIL PROTECTED]> wrote:
>If I have two normal distributions N(m1, s1) and N(m2, s2), what is a
>good measure of the distance between them? I was thinking of something
>like a K-S distance like max|phi1-phi2|. I know it probably
I'm using 3.1 and it
> opens the 4.0 workfile no problem but when I hit the estimate button
> (via FIML) it returns: error, near singular matrix. As far as I know
> the workfile was unchanged from the one that worked fine in 4.0. Do I
> need to create a new workfile in 3.1 and re-enter the equ
On Thu, 7 Feb 2002 13:20:44 +0100, "Anna Axmon"
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> does anyone know if there is a textbook on case-cross over design?
>
"case crossover design" (in quotes) gets 258 hits reported in google.
I did not notice a textbook review, but those should lead you
to wh
than some vague references to "an appropriate" method. Can someone
> recommend an approachable reference?
On careful re-reading, this is not the question that I expected
from its first line.
Now I gather that the precise lack of uniformity does not bother
you for its lack
JJ Diamond <[EMAIL PROTECTED]> wrote:
: sensitivity analysis is not the same as sensitivity and specificity
: from epidemiology. these latter terms are used when describing the
: characteristics of a diagnostic test and they ultimately relate to the
: utility of a test for diagnosis. my memory s
tivity analysis. Further info can be obtained from P Armitage & G
> >Berry, "Statistical Methods in Medical Research", Blackwell Scientific
> >Publications. Judy Conn
> >
> > > -Original Message-
> > > From: Rich Ulrich [SMTP:[EMAIL PR
Francis Dermot Sweeney wrote:
> If I have two normal distributions N(m1, s1) and N(m2, s2), what is a
> good measure of the distance between them? I was thinking of something
> like a K-S distance like max|phi1-phi2|. I know it probably depends on what I
> want it for, or what
Seems, as you have said, it depends on what you want to do with it.
If there is considerable overlap, then whatever distance you use will have
some of both distributions included ... if there is essentially no overlap
... then any pair of values ... one from each ... will reflect a real difference
of
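Two closed-form choices for univariate normals, for concreteness (formulas written from memory, so worth double-checking against a reference): KL divergence, which is asymmetric, and Hellinger distance, which is a symmetric metric bounded by 1.

```python
import numpy as np

def kl_normal(m1, s1, m2, s2):
    """KL divergence KL( N(m1, s1^2) || N(m2, s2^2) ), closed form."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def hellinger_normal(m1, s1, m2, s2):
    """Hellinger distance between two univariate normals, in [0, 1]."""
    # bc is the Bhattacharyya coefficient (overlap measure in (0, 1])
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-((m1 - m2) ** 2) / (4 * (s1**2 + s2**2)))
    return np.sqrt(1 - bc)
```

Both reduce to 0 for identical distributions and grow with mean separation and variance mismatch, which matches the overlap intuition above.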
I think of when talking
>sensitivity analysis. Further info can be obtained from P Armitage & G
>Berry, "Statistical Methods in Medical Research", Blackwell Scientific
>Publications. Judy Conn
>
> > -Original Message-
> > From: Rich Ulrich [SMTP:[E
I can't help it. The last paragraph in this post absolutely _demands_ a
response.
Wuzzy wrote:
> > You made a model with the "exact same exposure in different units",
> > which is something that no one would do,
>
> Hehe, translation is don't post messages until you've thought them
> through.
>
", Blackwell Scientific
Publications. Judy Conn
> -Original Message-
> From: Rich Ulrich [SMTP:[EMAIL PROTECTED]]
> Sent: Wednesday, February 06, 2002 9:55 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Sensitivity Analysis
>
> On 31 Jan 2002 10:06:36 -0800, [EMAI
On 31 Jan 2002 10:06:36 -0800, [EMAIL PROTECTED]
(Christopher J. Mecklin) wrote:
> I had a colleague (a biologist) ask me about sensitivity analysis. I am
> not familiar with the technique (above and beyond knowing that the
> technique exists). What books/articles/websites/etc. would be good s
To: [EMAIL PROTECTED]
Date sent: 5 Feb 2002 18:15:00 -0800
From: [EMAIL PROTECTED] (Wuzzy)
Organization: http://groups.google.com/
Subject: Re: can multicollinearity force a correlation?
> In my own defense:
>
> I
In my own defense:
I was asking a simple question:
will highly correlated predictors cause a spuriously high R^2?
My answer to my own question is "no", they can't..
No-one here was able to give me this answer, and I believe it is
correct: if your sample is large enough (as mine is), then "no",
multicolline
> You made a model with the "exact same exposure in different units",
> which is something that no one would do,
Hehe, translation is don't post messages until you've thought them
through.
Anyway, turns out that the answer to my question is "No"..
Multicollinearity cannot force a correlation.
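That conclusion is cheap to check numerically: regress an unrelated outcome on four nearly identical predictors and watch R^2 stay near zero. A sketch (n = 5,000 to mirror the poster's sample size; all data simulated):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(4)
n = 5_000
base = rng.normal(size=n)
# Four almost-collinear "exposures": the same variable plus tiny noise
X = np.column_stack([base + 0.01 * rng.normal(size=n) for _ in range(4)])
y = rng.normal(size=n)            # outcome unrelated to the exposures
r2 = r_squared(X, y)
```

Collinearity inflates the variance of individual coefficients, not R^2; R^2 is bounded by how much of y the column space of X can explain, however redundant the columns are.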
Would you please post the 5 * 5 R matrix?
Wuzzy wrote:
> Is it possible that multicollinearity can force a correlation that
> does not exist?
>
> I have a very large sample of n=5,000
> and have found that
>
> disease= exposure + exposure + exposure + exposure R^2=0.45
>
> where all 4 exposures
On 4 Feb 2002 16:14:11 -0800, [EMAIL PROTECTED] (Wuzzy) wrote:
> >
> > In biostatistical studies, either version of beta is pretty worthless.
> > Generally speaking.
>
> If I may be permitted to infer a reason:
> if you have
>
> bodyweight= -a(drug) - b(exercise) + food
>
> Then the standar
RE: can multicollinearity force a correlation?
> Is it possible that multicollinearity can force a correlation that
> does not exist?
>
> I have a very large sample of n=5,000
> and have found that
>
> disease= exposure + exposure + exposure + exposure R^2=
On 5 Feb 2002 08:28:05 -0800, [EMAIL PROTECTED] (Wuzzy) wrote:
> Is it possible that multicollinearity can force a correlation that
> does not exist?
>
> I have a very large sample of n=5,000
> and have found that
>
> disease= exposure + exposure + exposure + exposure R^2=0.45
>
> where all 4
Cengiz:
I'd say pure and applied mathematics by which I mean real analysis,
linear algebra and numerical methods.
--
Rodney Sparapani Medical College of Wisconsin
Sr. Biostatistician Patient Care & Outcomes Research (PCOR)
[EMAIL PROTECTED] http://www.mcw.edu/
I'm curious to know why you're using the same exact exposure in different
units. I've included a dichotomized version of a continuous exposure
variable to look at potential threshold effects, but I've never heard of
anyone doing what you've described.
At 08:28 AM 2/5/02 -0800, Wuzzy wrote:
>
In article <[EMAIL PROTECTED]>,
Roland Pesch <[EMAIL PROTECTED]> wrote:
>HI,
>I'm trying to perform factor analysis on mosses from 1028 moss
>monitoring sites, each of which was chemically analysed for 20 heavy
>metal elements. All of these samples do not follow a normal distribution
>pattern, th
The word "prove" has at least two meanings. One is to test (as in
"the proof of the pudding is in the eating"). It is not unreasonable
to guess that Darwin might have been using the word in this somewhat
archaic sense.
Thom
Stu wrote:
>
> > Was Darwin's statement "It has been experimentally pro
> Was Darwin's statement "It has been experimentally proved that if a plot of
> ground be sown with one species of grass, and a similar plot be sown with
> several distinct genera of grasses, a greater number of plants and a greater
> weight of dry herbage can be raised", a valid statement?
I emp
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> 2001 Internet poll of outstanding MLM sites; be sure to browse patiently:
>
> http://www.kelvin.13800.com
> New Profit business information network
> http://www.kelvin.uurr.com
> Everybody-Earns business information network
> http://www.kelvin.17951.com
> Jinyingtong business information network
> http://www.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Wuzzy
Sent: Monday, February 04, 2002 4:14 PM
To: [EMAIL PROTECTED]
Subject: Re: Interpreting mutliple regression Beta is only way?
...
I've heard of "ridge regression"; will try
Jennifer Golbeck wrote:
>
> i hope someone can help me with this. i have finished a computer science
> study that examines swarming behavior. my claim is that the swarming
> algorithm that i use produces a gaussian distribution - on a grid, the
> frequency that each area is visited is recorded. g
>
> In biostatistical studies, either version of beta is pretty worthless.
> Generally speaking.
If I may be permitted to infer a reason:
if you have
bodyweight= -a(drug) - b(exercise) + food
Then the standardized coefficients will affect bodyweight but they
will also affect each other. The
"R. Stegers" <[EMAIL PROTECTED]> wrote in message news:...
> I'm trying to understand some medical paper and they used both Log-rank and
> Mantel-Haenszel. Could anybody briefly explain what is measured by these
> specific tests? Why are they used (in general) and what
Wuzzy wrote:
>
> > Walter Willett has a whole chapter on this subject in his book Nutritional
> > Epidemiology. It should be considered required reading before attempting to
> > model anything that has to do with diet.
>
> Thanks this is a really good book, not just for ppl wanting to study
> n
Or maybe I didn't understand Don's response to Jan. Pressing ever
onward, though
I had suggested using
> DEFINITION: There is a *relationship* between the vari-
> ables x and y if for at least one pair of values x'
> and x" of x
>
> E(y|x') ~= E(y|x").
On 3 Feb 2002 03:34:35 -0800, [EMAIL PROTECTED] wrote:
[ snip, previous examples ... ]
>
> Asked about statistics practice problems. Search took 0.35 seconds
> Got about 945,000. hits.
>
> Nuff said,
- Right idea, sloppy technique; I would say in my critique.
Google hits 966,000 for me,