Product/Company) and calculate the different correlations depending on
whether the thing is rated 1st, 2nd, 3rd, or all together!
The different correlation values are no surprise to me, but they
make the use of correlations for this doubtful.
What, then, is the cause of these results?
(1) The small N's
(2
My tentative conclusion is that your 2% effect really
is a small one; it should be difficult to discern among
likely artifacts; and therefore, it is hardly worth mentioning
I agree; to me it makes sense as well: fasting insulin should have more
to do with error and genetics than with food
On 18 Feb 2002 16:29:27 -0800, [EMAIL PROTECTED] (Wuzzy) wrote:
You should take note that R^2 is *not* a very good measure
of 'effect size.'
Hi Rich, you asked to see my data,
- I don't remember doing that -
I've posted the visual at
http://www.accessv.com/~joemende/insulin2.gif
Apologies, I also forgot to divide the KCAL in food by 31, as this
represents kcal. It seems to me logical to advise decreasing food
intake and increasing physical activity to improve insulin
sensitivity. I would probably avoid reporting the
You should take note that R^2 is *not* a very good measure
of 'effect size.'
Hi Rich, you asked to see my data; I've posted the visual at the
following location: http://www.accessv.com/~joemende/insulin2.gif Note
that the r^2 is low despite the fact that it agrees with common sense:
Insulin
Wuzzy wrote:
http://www.accessv.com/~joemende/insulin2.gif
Apologies, I also forgot to divide the KCAL in food by 31, as this
represents kcal. It seems to me logical to advise decreasing food
intake and increasing physical activity to improve insulin
sensitivity. I would probably
[ snip, previous problem]
This is similar to a problem I have come across: the measurement of a
serum value against exposure.
My theory is that they are correlated. But the data says that they
have an R^2 of 0.02 even though the p-value for the beta is p=1E-40
(i.e., effectively zero).
As you
low-fat vegan diet would be close). However, the incidence of heterozygous
familial hypercholesterolemia is only 1:500,000, so this exposure contributes
little to the variance in serum cholesterol in the population; its r^2 would
be small.
-Jay
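Both points are easy to see in a quick simulation; a minimal sketch (the
sample size and effect size are invented for illustration, not taken from
the thread): a predictor that explains about 2% of the variance gives a
tiny R^2 and yet, with a large n, an astronomically small p-value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000                                   # large sample, invented for illustration
exposure = rng.normal(size=n)
# a weak but real effect: exposure explains only ~2% of the variance
serum = 0.15 * exposure + rng.normal(size=n)

slope, intercept, r, p, se = stats.linregress(exposure, serum)
print(f"R^2 = {r**2:.3f}, p = {p:.1e}")    # R^2 near 0.02, p astronomically small

With n this large, even a beta that explains almost none of the variance is
estimated precisely enough that the null cannot survive, which is exactly
the R^2=0.02, p=1E-40 pattern described above.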
Thanks,
This is similar to a problem I have
And that sounds impossible. I suspect a programming error.
-Jay
You're right, I programmed a food database incorrectly, but I've redone
it, and yep, the correlation was only about 0.20 for kcal.
It is hard to program a database *into* another database; it is easy to make
errors.
I've made many
Jay Tanzman [EMAIL PROTECTED] wrote in message
news:a42e88$1bthp5$[EMAIL PROTECTED]...
Wuzzy [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
It is because I am validating a 24hr dietary recall questionnaire
using a food frequency questionnaire:
It was
Wuzzy [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
And that sounds impossible. I suspect a programming error.
-Jay
You're right, I programmed a food database incorrectly, but I've redone
it, and yep, the correlation was only 0.20 for kc
ly I got a perfect correlation between the two; you would think
that the 24hr would be at least a bit attenuated, but I got a perfect
correlation, or an error
And that sounds impossible. I suspect a programming error.
-Jay
In article [EMAIL PROTECTED],
Wuzzy [EMAIL PROTECTED] wrote:
Is it possible that multicollinearity can force a correlation that
does not exist?
I have a very large sample of n=5,000
and have found that
disease= exposure + exposure + exposure + exposure R^2=0.45
where all 4 exposures
to my question is no.
Well, I think I speak for several statisticians when I say that
we still don't know what you refer to as 'multicollinearity'. Do you
mean 100%, as in your question? What *are* you asking?
Multicollinearity cannot force a correlation. It turns out that ONE
Hi Rich, okay, I'll post the reason why I ask:
It is because I am validating a 24hr dietary recall questionnaire
using a food frequency questionnaire.
As someone else pointed out, I got an error, and also a perfect correlation
for Pearson's.
It is much more complicated than
To: [EMAIL PROTECTED]
Date sent: 5 Feb 2002 18:15:00 -0800
From: [EMAIL PROTECTED] (Wuzzy)
Organization: http://groups.google.com/
Subject: Re: can multicollinearity force a correlation?
In my own defense:
I was asking a simple
Anyway, it turns out that the answer to my question is no.
Multicollinearity cannot force a correlation. It turns out that ONE
of the variables *was* correlated, with R^2=0.45, and so
multicollinearity had no effect on the overall R^2.
I'm sure no-one is interested in my data as it has nothing to do
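What the thread converged on can be checked directly. A sketch with
invented numbers (calibrated so a single copy of the exposure already
gives R^2 near 0.45), showing that duplicating a predictor in different
units cannot raise R^2, because the extra columns carry no new information:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
exposure = rng.normal(50, 10, size=n)          # exposure in ug/dL, say
disease = 0.9 * exposure + rng.normal(0, 10, size=n)

X_one = exposure.reshape(-1, 1)                # one copy of the exposure
# the same exposure in four different unit scales (e.g. ug/dL, mg/dL, ...)
X_four = np.column_stack([exposure * k for k in (1.0, 0.1, 10.0, 0.001)])

r2_one = LinearRegression().fit(X_one, disease).score(X_one, disease)
r2_four = LinearRegression().fit(X_four, disease).score(X_four, disease)
print(r2_one, r2_four)                         # essentially identical, about 0.45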
Is it possible that multicollinearity can force a correlation that
does not exist?
I have a very large sample of n=5,000
and have found that
disease= exposure + exposure + exposure + exposure R^2=0.45
where all 4 exposures are the exact same exposure in different units
like ug/dL or mg/dL
Title: RE: can multicollinearity force a correlation?
Is it possible that multicollinearity can force a correlation that
does not exist?
I have a very large sample of n=5,000
and have found that
disease= exposure + exposure + exposure + exposure R^2=0.45
where all 4 exposures
You made a model with the exact same exposure in different units,
which is something that no one would do.
Hehe; translation: don't post messages until you've thought them
through.
Anyway, it turns out that the answer to my question is no.
Multicollinearity cannot force a correlation
In my own defense:
I was asking a simple question:
will highly correlated variables cause a spuriously high R^2?
My answer to my own question is no, it can't.
No one here was able to give me this answer, and I believe it is
correct: if your sample is large enough (as mine is), then no,
Isn't it the same as getting the variance of the product of the independent,
uncorrelated variables A and B?
Y.
John Smith [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
If I have 3 variables defined as follows:
A, B as independent, uncorrelated values of 0 or
If I have 3 variables defined as follows:
A, B as independent, uncorrelated values of 0 or 1
C defined as the logical AND of A and B, such that C=1 if and only if both
A and B = 1, and C=0 otherwise.
Example
A=1, B=0 then C=0
A=0, B=1 then C=0
A=0, B=0 then C=0
A=1, B=1 then C=1
My question is, what is
[Minitab character plot omitted; the C16 axis runs from 30 to 80]
MTB > corr c16 c17
Pearson correlation of C16 and C17 = -0.005
P-Value = 0.959
Then I did the sum
The answer is E(CA)=EA*EB. This is why:
You have C=A*B. Therefore, E(CA)=E((A**2)*B)=E(A*B)=EA*EB.
The second to last equality holds because A**2=A, and the last one is
correct because A and B are independent.
Vadim
[EMAIL PROTECTED] (John Smith) wrote:
If I have 3 variables defined as
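Vadim's identity is also easy to spot-check numerically; a small sketch
with arbitrary, invented success probabilities:

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
pA, pB = 0.3, 0.7                    # arbitrary Bernoulli parameters
A = rng.binomial(1, pA, size=n)
B = rng.binomial(1, pB, size=n)      # independent of A
C = A * B                            # logical AND for 0/1 variables

print(np.mean(C * A))                # empirical E(CA)
print(pA * pB)                       # EA*EB = 0.21; the two should agree closely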
m of 2-D joint PDF f(X,Y)=f(Y|X)f(X). I would
like to test the correlation between them, to see whether they are
correlated or not. Do I simply find the correlation coefficient between
these two variables, or are there other ways that I could use to test
correlation?
If you're interested in ass
their
correlation by simply using the Pearson correlation coefficient (r)
formula. However, the r value was very small (i.e., r=0.08). This means that X
and Y are not linearly dependent, which is expected from the conditional and
marginal PDFs that I have derived. So, does this mean in my case finding the r
value
Hi!
I have 2 random variables (X and Y) obtained from some experiments. I have
expressed these 2 RVs in terms of the 2-D joint PDF f(X,Y)=f(Y|X)f(X). I would
like to test the correlation between them, to see whether they are
correlated or not. Do I simply find the correlation coefficient between
janne [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
I have a correlation formula I can't get to work. And we must use this
formula on the test. Let me give you an example: let's say X and Y
are:
x  y
1 68
2 91
3 102
3 107
4 105
4
the sum of the crossproducts.
Stephen Clark wrote:
janne [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
I have a correlation formula I can't get to work. And we must use this
formula on the test. Let me give you an example: let's say X and Y
are:
x
In sci.stat.consult janne [EMAIL PROTECTED] wrote:
: I have a correlation formula I can't get to work. And we must use this
: formula on the test. Let me give you an example: let's say X and Y
If you don't know what x (with a line above) MEANS, you need to STUDY your
text. Also your instructor
sum of deviations around a mean always = 0
X - Xbar
1 - 3.5 = -2.5
2 - 3.5 = -1.5
3 - 3.5 = -0.5
3 - 3.5 = -0.5
4 - 3.5 = 0.5
4 - 3.5 = 0.5
5 - 3.5 = 1.5
6 - 3.5 = 2.5
sum = 0
As you see, the answer is zero. What do I do wrong? And the same with
Y - Ybar (Y with a line above): it turns out to be zero. Please help me and
tell me how I
janne [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
How do I do the first (X - Xbar, i.e. X with a line above)? I have tried to take
X - Xbar
*snip*
0
As you see the answer is zero. What do I do wrong?
You calculate
SUM[(x - x_bar)] * SUM[(y - y_bar)]
instead of SUM[(x - x_bar) * (y - y_bar)], the sum of the crossproducts.
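In code, the difference between the wrong and the right computation is one
line. A sketch using janne's own numbers (the final r is my own arithmetic,
roughly 0.94, so check it yourself):

import math

x = [1, 2, 3, 3, 4, 4, 5, 6]
y = [68, 91, 102, 107, 105, 114, 115, 127]
x_bar = sum(x) / len(x)          # 3.5
y_bar = sum(y) / len(y)          # 103.625

# WRONG: each deviation column sums to zero, so the product is 0 * 0
print(sum(xi - x_bar for xi in x) * sum(yi - y_bar for yi in y))

# RIGHT: multiply the paired deviations first, then sum (the crossproducts)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))   # 188.5
sxx = sum((xi - x_bar) ** 2 for xi in x)                         # 18.0
syy = sum((yi - y_bar) ** 2 for yi in y)                         # 2227.875
print(sxy / math.sqrt(sxx * syy))                                # about 0.94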
I have a correlation formula I can't get to work. And we must use this
formula on the test. Let me give you an example: let's say X and Y
are:
x  y
1 68
2 91
3 102
3 107
4 105
4 114
5 115
6 127
--  ---
28  829
Xbar = 3.5 and Ybar = 103.625
Now to my
In article [EMAIL PROTECTED],
janne [EMAIL PROTECTED] wrote:
I have a correlation formula I can't get to work. And we must use this
formula on the test. Let me give you an example: let's say X and Y
are:
[example omitted]
As you see the answer is zero. What do I do wrong?
Nothing. For any
Hello,
I'm doing a project that requires me to utilize both the Spearman and Pearson
correlation formulas. Does anyone know a good software program that will let me
do so? I have tried using Winks but it does not do the job too well. Any and
all help is greatly appreciated.
Thank you in advance
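Almost any general-purpose statistics environment handles this; for
instance, a few lines of Python with scipy (sketched here with made-up
data) give both coefficients and their p-values:

from scipy import stats

x = [1.2, 2.4, 3.1, 4.8, 5.0, 6.7]     # made-up sample data
y = [2.0, 2.9, 3.3, 5.1, 4.8, 7.2]

r, p_r = stats.pearsonr(x, y)
rho, p_rho = stats.spearmanr(x, y)
print("Pearson r =", r, "p =", p_r)
print("Spearman rho =", rho, "p =", p_rho)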
Andrew Morse wrote:
Who was the first to say Correlation does not imply causation in so many
words? I know that the idea dates back to David Hume, but Hume did his
work about a century before the term correlation acquired its modern
statistical meaning.
It certainly wasn't Hume, whose argument
In article [EMAIL PROTECTED],
EugeneGall [EMAIL PROTECTED] wrote:
Andrew Morse wrote:
Who was the first to say Correlation does not imply causation in so many
words? I know that the idea dates back to David Hume, but Hume did his
work about a century before the term correlation acquired its
Andrew Morse wrote:
Who was the first to say Correlation does not imply causation in so many
words? I know that the idea dates back to David Hume, but Hume did his
work about a century before the term correlation acquired its modern
statistical meaning.
It certainly wasn't Hume, whose argument
Hi
On 6 Dec 2001, David Heiser wrote:
Most of the focus is on structural equation modeling (SEM). For
statisticians, a quick referral to Jim Steiger's article Driving Fast in
Reverse in JASA March 2001, p331-p338 (if you have it around) is a quick
discourse on SEM and the inherent problems
Jay
Whether we can get causal inferences out of correlation and equations has
been a dispute between two camps:
For causation: Clark Glymour (Philosopher), Pearl (Computer scientist),
James Woodward (Philosopher)
Against: Nancy Cartwright (Economist and philosopher), David Freedman
the
value of some form of correlation coefficient? I certainly agree with
that.
- - -
The observation that a correlation exists is one point in an inductive
argument about causation. The entire argument shows causation. If we
cannot show some form of correlation, an important element of a causal
statistic).
Does correlation (phi is not equal to zero) imply causation in this case?
That is, can I conclude that turning the lights on affects my ability to
read fine print?
I modify my experiment such that Y is now the reading on an
instrument that measures the intensity of light
Classic study: Correlation between local stork population and local births.
-Original Message-
From: Stu [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 06, 2001 1:08 AM
To: [EMAIL PROTECTED]
Subject: Re: When does correlation imply causation?
My favorite original example
computed
with dichotomous data). Phi = .5. I test and reject the null hypothesis
that phi is zero in the population (using chi-square as the test statistic).
Does correlation (phi is not equal to zero) imply causation in this case?
That is, can I conclude that turning the lights on affects my ability
Stu wrote:
Silvert, Henry wrote:
Might I go one step further and point out that correlation does not establish
a causal relationship primarily because it does not point to directionality,
at least not without a working hypothesis and some background support.
Absolutely. Without both
On Wed, 5 Dec 2001, Karl L. Wuensch wrote:
much stuff snipped
So why is it that many persons believe that one can make causal inferences
with confidence from the results of two-group t tests and ANOVA, but not from
the results of correlation/regression techniques? I believe
correlation NEVER implies causation ...
and i agree with mike totally
At 09:01 AM 12/5/01 -0600, Mike Granaas wrote:
We really need to emphasize over and over that it is the manner in which
you collect the data and not the statistical technique that allows one to
make causal inferences
At 07:36 AM 12/5/01 -0500, Karl L. Wuensch wrote:
Accordingly, I argue that correlation is a necessary but not a
sufficient condition to make causal inferences with reasonable
confidence. Also necessary is an appropriate method of data
collection. To make such causal
Dennis warns the problem with this is ... does higher correlation mean MORE
cause? lower r mean LESS cause?
in what sense can one think of cause being more or less? you HAVE to think that
way IF you want to use the r value AS an INDEX MEASURE of cause ...
Dennis is not going to like this, since he
If correlation (in the broad sense) remains after taking into account
(controlling, rendering unlikely) plausible rival hypotheses, it does
imply (support, suggest, indicate, make plausible) causation.
In experimental studies, active manipulation of independent variables,
and random assignment to conditions
Correlations: dosage, res1
Pearson correlation of dosage and res1 = 1.000
P-Value = *
but, for another set of data we get
MTB > plot c3 c1
[Minitab character plot of res2 omitted]
... and observe the same result
we want to say that the impact to the FACE ... CAUSED the person on the
left to fall down
but, did it? in a sense, this is like a perfect correlation in that ...
when the person swung to the RIGHT .. the person on the left NEVER fell
backwards and down
with an r = 1, karl
On 5 Dec 2001 08:52:41 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
correlation NEVER implies causation ...
That is true
- in the strong sense, and
- in formal logic, and
- as a famous quotation among researchers.
(And, reported as wrongly contrasted to 'ANOVA'.)
Or, correlation always
Dennis Roberts [EMAIL PROTECTED] wrote in sci.stat.edu:
personally, i think it is dangerous in ANY case to say that r = cause ...
Hear, hear!
My favorite original example is the correlation between number of
annual murders in a city and number of books in its libraries.
Students have no trouble seeing that the two are going to have a
fairly high correlation coefficient(*), but murders don't make
people read and books don't make
Might I go one step further and point out that correlation does not establish
a causal relationship primarily because it does not point to directionality,
at least not without a working hypothesis and some background support.
Henry M. Silvert Ph.D.
Research Statistician
The Conference Board
845
Hi
On 3 Dec 2001, Karl L. Wuensch wrote:
I think that phrase has created much misunderstanding. I try
to convince my students that correlation is necessary but not
sufficient for establishing a causal relationship.
And I teach that NEITHER presence NOR absence of _simple_
correlation can
Nice example. Perhaps I should have said partial (and not necessarily
linear) correlation.
- Original Message -
From: jim clark [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 04, 2001 1:37 PM
Subject: Re: Who said Correlation does not imply causation.
Hi
On 3 Dec
[EMAIL PROTECTED] (Andrew Morse) wrote in message
news:[EMAIL PROTECTED]...
Who was the first to say Correlation does not imply causation in so many
words? I know that the idea dates back to David Hume, but Hume did his
work about a century before the term correlation acquired its modern
Andrew Morse [EMAIL PROTECTED] wrote:
Who was the first to say Correlation does not imply causation in so many
Kendall and Stuart says ...Yule (1926) frightened statisticians by adducing
cases of very high correlations which were obviously not causal..., in
time-series data, as it happens
Silvert, Henry wrote:
Might I go one step further and point out that correlation does not establish
a causal relationship primarily because it does not point to directionality,
at least not without a working hypothesis and some background support.
Absolutely. Without
I think that phrase has created much
misunderstanding. I try to convince my students that correlation is
necessary but not sufficient for establishing a causal
relationship.
Karl L. Wuensch, Department of Psychology, East Carolina University,
Greenville, NC 27858-4353. Voice: 252-328-4102
I don't recall who coined that phrase. However, it is frequently misused.
Sometimes it is used to put down bad researchers who use correlational
methods (including ordinary regression) and good researchers who use ANOVA
methods. Sometimes it is used to mean that if there is correlation
Who was the first to say Correlation does not imply causation in so many
words? I know that the idea dates back to David Hume, but Hume did his
work about a century before the term correlation acquired its modern
statistical meaning. I've seen many sources that credit Karl Pearson with
banishing
subj? I find books only with a case of regular distributive law
Any references to online resources are welcome
Gardburyb [EMAIL PROTECTED] wrote:
: Hi all,
: I'm new to the group. I'm doing my dissertation, and I am doing a canonical
: correlation analysis. My question is, what is the best way to compare canonical
The test of parallelism in mancova is an equivalent test
Elliot Cramer wrote:
Gardburyb [EMAIL PROTECTED] wrote:
: Hi all,
: I'm new to the group. I'm doing my dissertation, and I am doing a canonical
: correlation analysis. My question is, what is the best way to compare canonical
The test of parallelism in mancova is an equivalent test
This is in reference
to what test (what computer program, or what textbook)?
I'd like to ask a follow-up question then. MANCOVA uses least squares as
its objective function to estimate relationships, while canonical
correlation uses a different objective function. They don't seem
equivalent
are in a
particular order, or if that is randomized - between patients, or
within a patient.
Is the balance test pass-fail, or does it produce a score
on some scale (as 'correlation' implies)?
Is there a 'Gold Standard' for performance? Does a
performance, after going 'beyond' learning, post
Dickin
Sent: Monday, July 23, 2001 10:08 PM
To: [EMAIL PROTECTED]
Subject: Interclass Correlation??
I am trying to determine the reliability of a balance test for individuals
with Alzheimer's disease. The test involves six different conditions, with
each condition consisting of three trials (6 x 3
I am trying to determine the reliability of a balance test for individuals
with Alzheimer's disease. The test involves six different conditions, with
each condition consisting of three trials (6 x 3). Each individual has
performed the complete test twice, which gives me 6 trials for each of the 6
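One common way to quantify test-retest reliability here, sketched below
under assumptions the post does not confirm (the scores and the two-column
design are invented): treat the repeated administrations as repeated
measurements per subject and compute a one-way intraclass correlation from
the ANOVA mean squares.

import numpy as np

# rows = subjects, columns = repeated administrations (invented scores)
scores = np.array([[4.1, 4.3],
                   [3.2, 3.0],
                   [5.0, 4.7],
                   [2.8, 3.1],
                   [4.5, 4.4]])
n, k = scores.shape
grand_mean = scores.mean()
row_means = scores.mean(axis=1)

msb = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)          # between-subject MS
msw = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))   # within-subject MS
icc = (msb - msw) / (msb + (k - 1) * msw)                          # one-way ICC(1,1)
print(icc)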
Dear Members,
How can I construct a confidence interval about Pearson correlation using
standard error and t value? What is the formula?
Regards,
Alexandre Moura.
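The usual formula goes through Fisher's z transformation rather than a t
value: z = atanh(r) is approximately normal with standard error
1/sqrt(n-3), so you build the interval on the z scale and back-transform.
A minimal sketch:

import math
from statistics import NormalDist

def pearson_ci(r, n, alpha=0.05):
    # Fisher z = atanh(r) = 0.5*ln((1+r)/(1-r)), approx. normal, SE = 1/sqrt(n-3)
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)      # back-transform to the r scale

print(pearson_ci(r=0.45, n=100))             # roughly (0.28, 0.59)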
Matti Overmark wrote:
I have fitted a 3rd degree curve to a sample (least squares method), and
I want to compare this particular R2 with that of
a (similarly) fitted 2nd degree polynomial.
I can assure you that the 3rd degree polynomial will fit as well or
better than the 2nd degree polynomial,
Hi group!
I'm new to this group, so... just so you know.
I have fitted a 3rd degree curve to a sample (least squares method), and
I want to compare this particular R2 with that of
a (similarly) fitted 2nd degree polynomial.
I want to see which of the two models is the best.
Any suggestion of a good
Judd & McClelland, _Data Analysis: A Model Comparison Approach_, chapter 8.
MG
On 18 Jun 2001, Matti Overmark wrote:
Hi group!
I'm new to this group, so... just so you know.
I have fitted a 3rd degree curve to a sample (least squares method), and
I want to compare this particular R2 with
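Since the 2nd degree model is nested in the 3rd degree one, the standard
comparison is an F test on the extra term rather than eyeballing the two
R2 values (the kind of model comparison Judd & McClelland's chapter
covers). A sketch with simulated, invented data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.5, size=n)   # truly quadratic

def r_squared(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

r2_quad, r2_cubic = r_squared(2), r_squared(3)
# F test for the single added cubic term; the full model has 3 slopes + intercept
f = (r2_cubic - r2_quad) / ((1.0 - r2_cubic) / (n - 4))
p = stats.f.sf(f, 1, n - 4)
print(r2_quad, r2_cubic, p)   # cubic R2 is never smaller; p says whether the gain is real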
the square terms, or all the product
terms? What is the 1_2 inner product? How should the complex result be
interpreted--what exactly is the meaning?
| Can the linear correlation coefficient equation (Pearson's) be used for
| calculating the correlation coefficient of complex variables
If it makes sense to represent your data as 4 column vectors, say y1 and y2
for the first set and x1 and x2 for the second, then you may consider to
calculate the canonical correlation between (y1,y2) and (x1,x2).
Jos Jansen
"Peter J. Wahle" [EMAIL PROTECTED] wrote in message
WcQA6.
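If it helps, canonical correlation is available off the shelf; a minimal
sketch with invented data, using scikit-learn (one option among several):

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n = 200
Y = rng.normal(size=(n, 2))                                  # columns y1, y2
X = Y @ rng.normal(size=(2, 2)) + rng.normal(size=(n, 2))    # columns x1, x2, related to Y

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)                             # first pair of canonical variates
print(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])                 # first canonical correlation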
for
independence and need some sort of measure of correlation.
The cross product is another vector.
Two vectors are orthogonal if their dot product is zero.
This is not what I'm looking for.
I have two sets of 2-D vectors that I need to
determine their correlation or dependence.
If I remember correctly two vectors are independent if their cross product
Does anyone know an algorithm for cross-correlation between two time
series, to identify leading and lagging indicators and the size of the lag?
Do you know an algorithm in C or f77? SAS and SPSS just calculate this and
quickly do it for each lag and lead, but I need an algorithm for my own
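The brute-force version is short enough to port to C or f77 by hand:
compute the correlation of the two series at each candidate lag and take
the lag with the largest absolute value. A sketch with invented data (the
series names are made up):

import numpy as np

def lagged_correlations(x, y, max_lag):
    # Pearson correlation of x[t] with y[t+lag] for each lag in [-max_lag, max_lag];
    # a positive best lag means x leads y
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(5)
leader = rng.normal(size=300)
follower = np.roll(leader, 3) + 0.3 * rng.normal(size=300)   # trails the leader by 3 steps

corrs = lagged_correlations(leader, follower, max_lag=10)
best = max(corrs, key=lambda lag: abs(corrs[lag]))
print(best, corrs[best])                                     # best lag should come out near 3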
Hi there. A colleague needs a recent estimate of the height/weight
correlation of adults (age 18 or over). I've searched the various major
websites (CDC, etc.), and one huge dataset had weight/height but not
according to the specs my colleague needed. Any datasets that you can
recommend
I don't know Matlab, but:
I create two random signals (each 100 points from gaussian distribution
from -1 to 1)
You mean 100 IID Gaussian-distributed points, indexed by values from -1
to 1? (I'm assuming this, anyway.)
and find the maximum cross-correlation value (either
On Mon, 30 Oct 2000 18:54:12 -0800, "G. Anthony Reina" [EMAIL PROTECTED]
wrote:
I'm having a problem concerning cross-correlation and was hoping someone
could help explain.
Here's what I'm doing:
I create two random signals (each 100 points from gaussian distribution
fr
I'm having a problem concerning cross-correlation and was hoping someone
could help explain.
Here's what I'm doing:
I create two random signals (each 100 points from gaussian distribution
from -1 to 1) and find the maximum cross-correlation value (either
negative or positive, whichever has
use of them): 1) It
demonstrates that a correlation problem in which one variable is dichotomous
is equivalent to a two-group mean-difference problem."
You all may find this hard to believe, but, in my experience, a large
proportion of social scientists have the delusion that if you conduct a
tra
in the continuous variable is caused by
alteration of the dichotomous variable), but if you analyze the same
variables with a correlation analysis you cannot make a causal inference. I
show my students and colleagues the equivalence of testing the null that the
point biserial is zero and testing
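That equivalence is easy to demonstrate numerically; a sketch with
invented data showing that the pooled two-group t test and the
point-biserial correlation (Pearson r with a 0/1 variable) give the same
p-value:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
group = np.repeat([0, 1], 50)                        # dichotomous variable
score = 10 + 2 * group + rng.normal(0, 3, size=100)  # continuous variable

t, p_t = stats.ttest_ind(score[group == 0], score[group == 1])
r, p_r = stats.pearsonr(group, score)                # point-biserial correlation
print(p_t, p_r)                                      # identical up to rounding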
, fn_different(t).
I'm interested in performing some sort of correlation/statistical analysis
of the data that can tell me how the parts of the data in fn(t) that are
varying (i.e., fn_different(t)) are dependent on each other, i.e., are the
parts statistically independent or not, and if not, with which distribution do
h varying fn(t)? Or is that part of what is
to be inferred from the data?
I'm interested in performing some sort of correlation/statistical
analysis of the data that can tell me how the parts of the data in
fn(t) that are varying (i.e., fn_different(t)) are dependent on
each other, i.e., are
I think I understand your question better now than in my first reply:
In article [EMAIL PROTECTED],
castlemaster [EMAIL PROTECTED] wrote:
there is a test of the model, with chi-sq value, d.f., and p-value provided
for the polyserial correlation coefficients in PRELIS 2.
Is it a test of close fit
Hi,
there is a test of the model, with chi-sq value, d.f., and p-value provided
for the polyserial correlation coefficients in PRELIS 2.
Is it a test of close fit or a test of significance?
Does a test of significance exist?
Any reference?
Many thanks
In article [EMAIL PROTECTED] (Donald Burrill) wrote:
Sounds like a prediction or calibration kind of problem. As Joe Ward
pointed out, raw regression coefficients, and standard errors of
measurement, are more stable than correlation coefficients.
Yes, that's right. Regression
; and even if you don't, readers will.
We want to report the correlation between two methods of measurement
for about 20-30 different response variables. It isn't expected that
the methods will give the same individual or average values, but the
degree of linear correlation is useful in future large studi