On 28/05/2010 9:24 AM, (Ted Harding) wrote:
An experiment:
sort(c("AACD", "A CD"))
# [1] "AACD" "A CD"
sort(c("ABCD", "A CD"))
# [1] "ABCD" "A CD"
sort(c("ACCD", "A CD"))
# [1] "ACCD" "A CD"
sort(c("ADCD", "A CD"))
# [1] "A CD" "ADCD"
sort(c("AECD", "A CD"))
# [1] "A CD" "AECD"
## (with results for AFCD, ...
On 28-May-10 14:37:39, Duncan Murdoch wrote:
On 28/05/2010 9:24 AM, (Ted Harding) wrote:
An experiment:
sort(c("AACD", "A CD"))
# [1] "AACD" "A CD"
sort(c("ABCD", "A CD"))
# [1] "ABCD" "A CD"
sort(c("ACCD", "A CD"))
# [1] "ACCD" "A CD"
sort(c("ADCD", "A CD"))
# [1] "A CD" "ADCD"
sort(c("AECD", "A
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Ted Harding
Sent: Friday, May 28, 2010 1:15 PM
To: r-help@r-project.org
Cc: carslaw
Subject: Re: [R] difference in sort order linux/Windows (R.2.11.0)
On 28-May-10 14:37:39
Linux problem solved! (For me at any rate). Thanks to some hints
from my Linux contacts it transpires that the problem with
sort <<EOT
ABCD
A CD
EOT
# ABCD
# A CD
sort <<EOT
ADCD
A CD
EOT
# A CD
# ADCD
arises because, by default, the space is ignored in sorting. Therefore
in the first case it sorted
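The locale dependence is easy to reproduce from within R itself. A minimal sketch, forcing the C locale (byte-by-byte collation, in which the space is not ignored) and restoring the original collation afterwards:

```r
# Default (e.g. en_US) collation ignores the space, so "ABCD" sorts before
# "A CD"; in the C locale the space (ASCII 32) sorts before every letter.
old <- Sys.getlocale("LC_COLLATE")
Sys.setlocale("LC_COLLATE", "C")
sort(c("ABCD", "A CD"))
# [1] "A CD" "ABCD"
sort(c("ADCD", "A CD"))
# [1] "A CD" "ADCD"
Sys.setlocale("LC_COLLATE", old)  # restore the session's collation
```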
I was looking for a function which would take the difference along a vector?
a <- c(1,12,23,44,15,28,7,8,9,10)
if I set the number difference to 3 would return
43
2
5
-37
-7
-19
3
or do I need to write my own function for this.
--
View this message in context:
?diff and look at argument 'lag'.
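As the replies say, diff() with the lag argument does this directly. A quick sketch with the vector from the question (note the second difference is 15 - 12 = 3, not 2 as listed in the post):

```r
a <- c(1, 12, 23, 44, 15, 28, 7, 8, 9, 10)
diff(a, lag = 3)  # each element minus the one 3 positions earlier
# [1]  43   3   5 -37  -7 -19   3
```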
On 2010-05-12 13:06, Clark Johnston wrote:
I was looking for a function which would take the difference along a vector?
a <- c(1,12,23,44,15,28,7,8,9,10)
if I set the number difference to 3 would return
43
2
5
-37
-7
-19
3
or do I need to write my own
help.search("difference")
would lead you to
?diff, see the lag argument
Clark Johnston wrote:
I was looking for a function which would take the difference along a vector?
a <- c(1,12,23,44,15,28,7,8,9,10)
if I set the number difference to 3 would return
43
2
5
-37
-7
-19
3
or do I need to
On May 12, 2010, at 2:06 PM, Clark Johnston wrote:
I was looking for a function which would take the difference along a vector?
a <- c(1,12,23,44,15,28,7,8,9,10)
if I set the number difference to 3 would return
43
2
5
-37
-7
-19
3
or do I need to write my own function for this.
Try this:
n <- 3
apply(embed(a, n + 1)[, c(n + 1, 1)], 1, diff)
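For reference, the embed() approach (with the columns ordered so the later value comes second, since embed() stores the most recent value in column 1) agrees with diff(a, lag = n):

```r
a <- c(1, 12, 23, 44, 15, 28, 7, 8, 9, 10)
n <- 3
# embed(a, n + 1) puts a[i+n] in column 1 and a[i] in column n + 1,
# so reverse the two columns before applying diff() row-wise
res <- apply(embed(a, n + 1)[, c(n + 1, 1)], 1, diff)
identical(res, diff(a, lag = n))
# [1] TRUE
```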
On Wed, May 12, 2010 at 4:06 PM, Clark Johnston clarks...@clarktx.comwrote:
I was looking for a function which would take the difference along a
vector?
a <- c(1,12,23,44,15,28,7,8,9,10)
if I set the number difference to 3 would
Hi Karine,
time1 <- as.POSIXct("2007-02-21 05:19:00")
time2 <- as.POSIXct("2007-02-20 14:21:53")
difftime(time1, time2)
should get you started.
-Ista
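difftime() also takes a units argument, which is handy when you want the answer in a particular scale. A small sketch (tz = "UTC" added here just to make the result reproducible):

```r
time1 <- as.POSIXct("2007-02-21 05:19:00", tz = "UTC")
time2 <- as.POSIXct("2007-02-20 14:21:53", tz = "UTC")
difftime(time1, time2, units = "hours")
# Time difference of 14.95194 hours
difftime(time1, time2, units = "mins")
# Time difference of 897.1167 mins
```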
On Tue, Feb 23, 2010 at 9:48 AM, karine heerah karine.hee...@hotmail.fr wrote:
Hi,
I have date and time in a format like this: 2007-02-21
dd = as.POSIXlt(c("2007-02-21 05:19:00", "2007-02-20 14:21:53"),
format="%Y-%m-%d %H:%M:%S")
dd[1] - dd[2]
b
On Tue, Feb 23, 2010 at 2:48 PM, karine heerah karine.hee...@hotmail.fr wrote:
Hi,
I have date and time in a format like this: 2007-02-21 05:19:00.
Do you know which function I can use to
I think I have an answer: SPSS uses absolute deviations from the _mean_ in
Levene's test.
(See calculation in
http://www.uvm.edu/~dhowell/gradstat/psych340/Lectures/Anova/anova2.html)
R uses absolute deviations from the _median_ (R help).
So the difference.
Ravi
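Since Levene's statistic is just a one-way ANOVA on absolute deviations from a group center, both versions can be reproduced in base R. A sketch on chickwts that should recover the two F values from this thread (0.7493 for the median/Brown-Forsythe variant, 0.987 for the SPSS mean-centered variant):

```r
# Levene's test = one-way ANOVA on absolute deviations from a group center.
# SPSS centers on the group mean; R's levene.test defaults to the group
# median (the Brown-Forsythe variant), hence the different F values.
data(chickwts)
dev_mean   <- with(chickwts, abs(weight - ave(weight, feed, FUN = mean)))
dev_median <- with(chickwts, abs(weight - ave(weight, feed, FUN = median)))
anova(lm(dev_mean   ~ feed, data = chickwts))[1, "F value"]  # ~0.987  (SPSS)
anova(lm(dev_median ~ feed, data = chickwts))[1, "F value"]  # ~0.7493 (R)
```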
I forgot to reply to Peter Ehler's question: I am using Levene's test in the
car package.
Ravi
--
View this message in context:
http://n4.nabble.com/Difference-in-Levene-s-test-between-R-and-SPSS-tp1555725p1556016.html
Sent from the R help mailing list archive at Nabble.com.
Hello,
I notice that when I do Levene's test to test equality of variances across
levels of a factor, I get different answers in R and SPSS 16.
e.g.: For the chickwts data, in R, levene.test(weight, feed) gives
F=0.7493, p=0.5896.
SPSS 16 gives F=0.987, p=0.432
Why this difference? Which
Ravi Kulkarni wrote:
Hello,
I notice that when I do Levene's test to test equality of variances across
levels of a factor, I get different answers in R and SPSS 16.
e.g.: For the chickwts data, in R, levene.test(weight, feed) gives
F=0.7493, p=0.5896.
SPSS 16 gives F=0.987, p=0.432
Why
Hi everybody,
From my experience (which is limited), it seems to me that '
(apostrophe) and " (quotation marks) are the same in R: they are both
used for strings. The only difference would be taste.
But I've recently read about the difference between = and <- and I
thought that there might be a
On Wed, 27 Jan 2010 14:15:28 +0100 Ivan Calandra ivan.calan...@uni-
hamburg.de wrote:
From my experience (which is limited), it seems to me that '
(apostrophe) and " (quotation marks) are the same in R: they are both
used for strings. The only difference would be taste.
But I've recently
For example, subtracting 1:2 from the rows of a two-column matrix:
t(apply(matrix(1:6,ncol=2), MARGIN=1, function(y) y - 1:2))
     [,1] [,2]
[1,]    0    2
[2,]    1    3
[3,]    2    4
sweep(matrix(1:6,ncol=2), MARGIN=2, 1:2, FUN="-")
     [,1] [,2]
[1,]    0    2
[2,]    1    3
[3,]    2    4
Is
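The two approaches give identical results, which is easy to confirm; a third equivalent relies on column-major recycling of a full-length vector:

```r
m <- matrix(1:6, ncol = 2)
a1 <- t(apply(m, MARGIN = 1, function(y) y - 1:2))  # row-wise subtraction
a2 <- sweep(m, MARGIN = 2, 1:2, FUN = "-")          # sweep 1:2 out of the columns
identical(a1, a2)
# [1] TRUE
a3 <- m - rep(1:2, each = nrow(m))  # recycle an expanded vector column-wise
identical(a2, a3)
# [1] TRUE
```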
On Wed, 16 Dec 2009, Levi Waldron wrote:
For example, subtracting 1:2 from the rows of a two-column matrix:
t(apply(matrix(1:6,ncol=2), MARGIN=1, function(y) y - 1:2))
     [,1] [,2]
[1,]    0    2
[2,]    1    3
[3,]    2    4
sweep(matrix(1:6,ncol=2), MARGIN=2, 1:2, FUN="-")
     [,1] [,2]
[1,]
Hi
A quick question. Standard errors reported by gee/yags differs from the ones in
geeglm (geepack).
require(gee)
require(geepack)
require(yags)
mm <- gee(breaks ~ tension, id=wool, data=warpbreaks,
corstr="exchangeable")
mm2 <- geeglm(breaks ~ tension, id=wool, data=warpbreaks,
Yes, thanks, that works perfectly!
great command
b.
jholtman wrote:
Try this:
x <- read.table(textConnection("ID YEAR
+ 13 2007
+ 15 2003
+ 15 2006
+ 15 2008
+ 21 2006
+ 21 2007"), header=TRUE)
x$diff <- ave(x$YEAR, x$ID, FUN=function(a) c(diff(a), NA))
x
  ID YEAR diff
1 13
Dear R user,
I'd like to calculate the difference of two rows, where ID is the same.
eg.: I've got the following dataframe:
ID YEAR
13 2007
15 2003
15 2006
15 2008
21 2006
21 2007
and I'd like to get the difference, like this:
ID YEAR diff
13 2007 NA
15 2003 3
15 2006 2
15
You want to use tapply
?tapply
This is a simple example
dat = data.frame(a=sample(1:10,100,T),b=rnorm(100,0,1))
tapply(dat$b,dat$a,mean)
Hope that helps,
Sam
On Wed, Nov 25, 2009 at 11:55 AM, clion birt...@hotmail.com wrote:
Dear R user,
I'd like to calculate the difference of two rows,
Try this:
x <- read.table(textConnection("ID YEAR
+ 13 2007
+ 15 2003
+ 15 2006
+ 15 2008
+ 21 2006
+ 21 2007"), header=TRUE)
x$diff <- ave(x$YEAR, x$ID, FUN=function(a) c(diff(a), NA))
x
  ID YEAR diff
1 13 2007   NA
2 15 2003    3
3 15 2006    2
4 15 2008   NA
5 21 2006    1
6 21 2007   NA
On
I think your problem is with plotting, not with naming.
Tell the list what kind of plot you're doing
(with example code, of course) and where you need
to see names on the plot.
(What do you have in mind when you say names for
the whole matrix? There are row names, and
column names, and
Hi all...
I built a matrix binding vectors with rbind, and have something like this:
         [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
CLS 3.877328 4.087636 4.720890 4.038361 3.402942 2.786285 2.671222 3.276419
ORD      NaN      NaN      NaN      NaN 5.770780 5.901113
You should be doing
colnames(tester) <-
c("uno","dos","tres","cuatro","cinco","seis","siete","ocho")
On Tue, Jun 30, 2009 at 7:08 PM, Germán Bonilla germa...@gmail.com wrote:
Hi all...
I built a matrix binding vectors with rbind, and have something like this:
         [,1]     [,2]     [,3]     [,4]     [,5]
Hi, i have this file
pressure,k,eps,zeta,f,velocity:0,velocity:1,velocity:2,vtkValidPointMask,Point
Coordinates:0,Point Coordinates:1,Point Coordinates:2,vtkOriginalIndices
0.150545,0.000575811,0.0231277,0.000339049,-0.0193008,0.00318629,-6.24066e-07,5.39599e-05,^A,7,0,0,0
my goal is to compute this variable
tauw = 0.00095* velocity:0 (of the second row: 0.00367781) / (0.0003035 - 0)
and
tauw1 = 0.00095* velocity:0 (of the third row : 0.232017) / (0.0003035 - 0)
and put tauw and tauw1 in a new variable
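Assuming the file has been read into a data frame (call it df; the name and the renamed velocity0 column are hypothetical, since ":" is not valid in R column names and read.table would mangle "velocity:0" anyway), the two values might be computed like this, using the numbers quoted in the post:

```r
# Hypothetical stand-in for the imported file: only the column we need,
# with the post's values for rows 1-3 of "velocity:0" (renamed velocity0).
df <- data.frame(velocity0 = c(0.00318629, 0.00367781, 0.232017))
tauw  <- 0.00095 * df$velocity0[2] / (0.0003035 - 0)  # second row
tauw1 <- 0.00095 * df$velocity0[3] / (0.0003035 - 0)  # third row
result <- c(tauw = tauw, tauw1 = tauw1)               # both in one variable
result
```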
2009/6/17 Carletto Rossi nuovo...@gmail.com
Hi, i have
tdf <- read.table(textConnection("pressure , k , eps , zeta , f ,
velocity0 , velocity1 , velocity2 , vtkValidPointMask ,
PointCoordinates0 , PointCoordinates1 , PointCoordinates2\n
0.150545 , 0.000575811 , 0.0231277, 0.000339049, -0.0193008,
0.00318629, -6.24066e-07, 5.39599e-05, A, 7, 0, 0,
On Jun 17, 2009, at 5:49 PM, Carletto Rossi wrote:
my goal is to compute this variable
tauw = 0.00095* velocity:0 (of the second row: 0.00367781) /
(0.0003035 - 0)
I'm sorry, you have exceeded your daily quota of homework help. Read
some introductory material and come back after you
On 21/03/2009, at 3:19 AM, Ravi Varadhan wrote:
snip
I also tried a number of other things including changing the
family, and parameters in
loess.control, but to no avail. I looked at the Fortran codes
from both loess and gam.
They are daunting, to say the least. They are dense,
lawnboy34 wrote:
I am having trouble with the difference between default graphic settings
on
my client machine and the instance of R on our company's server. I created
a
script locally that output graphs, but when I run it on the server the
output graphs have titles running past the
of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvarad...@jhmi.edu
- Original Message -
From: Kevin E. Thorpe kevin.tho...@utoronto.ca
Date: Thursday, March 19, 2009 8:23 pm
Subject: Re: [R] Difference between gam() and loess().
To: Rolf Turner r.tur...@auckland.ac.nz
Cc: R-help
Rolf Turner wrote:
It seems that in general
gam(y~lo(x)) # gam() from the gam package.
and
loess(y~x)
give slightly different results
at the Fortran.
I guess one simple parameter change may not quite do it. :-)
Kevin
- Original Message -
From: Kevin E. Thorpe kevin.tho...@utoronto.ca
Date: Thursday, March 19, 2009 8:23 pm
Subject: Re: [R] Difference between gam() and loess().
To: Rolf Turner r.tur
It seems that in general
gam(y~lo(x)) # gam() from the gam package.
and
loess(y~x)
give slightly different results (in respect of the predicted/fitted
values).
Most noticeable at the endpoints of the range of x.
Can anyone enlighten me about the reason for this difference?
Hello,
I am having trouble with the difference between default graphic settings on
my client machine and the instance of R on our company's server. I created a
script locally that output graphs, but when I run it on the server the
output graphs have titles running past the margins, legends
Rolf Turner wrote:
It seems that in general
gam(y~lo(x)) # gam() from the gam package.
and
loess(y~x)
give slightly different results (in respect of the predicted/fitted
values).
Most noticeable at the endpoints of the range of x.
Can anyone enlighten me about the reason for this
Edna Bell wrote:
Dear R Gurus:
What is the difference between a Primitive and a Generic, please?
They are talking about different things (though both look like
functions): generic talks about the user interface, primitive talks
about the internal implementation.
A generic is a
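The distinction is easy to see directly in R. A small illustrative sketch (the describe generic is made up for the example): a generic dispatches on the class of its argument via UseMethod, while a primitive such as sum is implemented in C inside the interpreter and has no R-level body.

```r
describe <- function(x) UseMethod("describe")   # an S3 generic
describe.default <- function(x) "something else"
describe.numeric <- function(x) "numbers"
describe(1:3)    # integer vectors dispatch via the implicit "numeric" class
# [1] "numbers"
is.primitive(sum)       # TRUE: sum has no R-level body
is.primitive(describe)  # FALSE: an ordinary R closure
```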
Dear R Gurus:
What is the difference between a Primitive and a Generic, please?
Thanks,
Edna Bell
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
Hi,
thanks for the link.
In the bottom part of the relevant section, you say:
Standard advice is to avoid using '=' when you mean '<-'
Is this a formal, generally accepted (R community) advice, or does it
reflect you personal opinion?
Note I am not asking this question as to criticize by
Since this topic came up, I've been thinking that
that sentence needs more work.
The standard is not from me -- I'm a bit more
agnostic than the statement although I personally
always use '<-'. I'm thinking a revised version
might be something along the lines of:
Standard advice from most
On Mon, 23 Feb 2009, Patrick Burns wrote:
Since this topic came up, I've been thinking that
that sentence needs more work.
The standard is not from me -- I'm a bit more
agnostic than the statement although I personally
always use '<-'. I'm thinking a revised version
might be something along
Thomas Lumley wrote:
Although it's probably true that most long-time R users use <-, this
is at least in part because a long-time R user would initially have
had to use <-, since = wasn't available in the distant past.
I would say that it's entirely a matter of taste -- the things that
Wacek Kusnierczyk Waclaw.Marcin.Kusnierczyk at idi.ntnu.no writes:
Thomas Lumley wrote:
Although it's probably true that most long-time R users use <-, this
is at least in part because a long-time R user would initially have
had to use <-, since = wasn't available in the distant past.
Ken Knoblauch wrote:
Wacek Kusnierczyk Waclaw.Marcin.Kusnierczyk at idi.ntnu.no writes:
Thomas Lumley wrote:
Although it's probably true that most long-time R users use <-, this
is at least in part because a long-time R user would initially have
had to use <-, since = wasn't
It's easier to read. Better machine-human interaction.
ergonomic: (esp. of workplace design) intended to provide optimum
comfort and to avoid stress or injury.
Quoting Wacek Kusnierczyk waclaw.marcin.kusnierc...@idi.ntnu.no:
Ken Knoblauch wrote:
Wacek Kusnierczyk
Hi,
Both operators <- and = can be used to make an assignment. My question
is: Is there a semantic difference between these two? Some time ago, I
remember I have read that because of some reason, one should be given
preference over the other - but I cannot remember the source, nor the
Patrick Burns wrote:
'The R Inferno' page 78 is one source you can
look at.
Patrick Burns
wow .. nice! .. thanks for posting this reference.
Esmail
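Beyond style, there is one genuine semantic difference worth a sketch: in a function call, name = value matches an argument and creates no variable, while name <- value is an assignment in the caller whose value is then passed.

```r
median(x = 1:10)        # argument matching only: no variable x is created
exists("x")
# [1] FALSE
median(x <- c(2, 4, 6)) # assigns x in the calling environment, then passes it
x
# [1] 2 4 6
```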
On Wed, 18 Feb 2009, jjh21 wrote:
Hello,
I know that two possible approaches to dealing with clustered data would be
GEE or a robust cluster covariance matrix from a standard regression. What
are the differences between these two methods, or are they doing the same
thing? Thanks.
There are
Hello,
I know that two possible approaches to dealing with clustered data would be
GEE or a robust cluster covariance matrix from a standard regression. What
are the differences between these two methods, or are they doing the same
thing? Thanks.
Hi all,
For my research I have to use a Multinomial Probit model. I saw that
there are two packages, that include a method to estimate my
parameters. The first one is the MNP-package of Imai and van Dyk. The
second one is part of the bayesm-package of Rossi.
The results for both packages are not
Rolf Turner wrote:
On 3/02/2009, at 12:45 PM, David Epstein wrote:
I'm sure I've read about the difference between a[[i]] and a[i] in R,
but I
cannot recall what I read. Even more disturbing is the fact that I don't
know how to search the newsgroup for this. All the different
combinations
I think the thing that escaped me for quite a while was tracking down
the syntax to specify elements of a list element:
x[[5]][3:7] to get items from within the 5th element of x.
Seems IIRC there are some types of variables for which the form
x$thing[3:7] fails and others for which it
I'm sure I've read about the difference between a[[i]] and a[i] in R, but I
cannot recall what I read. Even more disturbing is the fact that I don't
know how to search the newsgroup for this. All the different combinations I
tried were declared not to be valid search syntax.
1. What sort of
On 02/02/2009 6:45 PM, David Epstein wrote:
I'm sure I've read about the difference between a[[i]] and a[i] in R, but I
cannot recall what I read. Even more disturbing is the fact that I don't
know how to search the newsgroup for this. All the different combinations I
tried were declared not to
Hi,
what exactly is the difference between the computation of intercept
and slope coefficents in a standard bivariate regression via the lm()
function and the line() function?
Dear R users,
I used the sm.density function in the sm package and kde2d() in the MASS package
to estimate the bivariate density. Then I calculated the Kullback-Leibler
divergence measure between a distribution and each of the estimated
densities, but the answers are different. Is there any
Hello
I have 2 data frames DF1 and DF2 where DF2 is a subset of DF1:
DF1= data.frame(V1=1:6, V2= letters[1:6])
DF2= data.frame(V1=1:3, V2= letters[1:3])
How do I create a new data frame of the difference between DF1 and DF2
newDF=data.frame(V1=4:6, V2= letters[4:6])
In my real data, the rows are
Hi Joseph,
Try this:
DF1[!DF1$V1%in%DF2$V1,]
subset(DF1,!V1%in%DF2$V1)
HTH,
Jorge
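With the example frames from the question, either expression returns exactly the rows of DF1 whose V1 is absent from DF2:

```r
DF1 <- data.frame(V1 = 1:6, V2 = letters[1:6])
DF2 <- data.frame(V1 = 1:3, V2 = letters[1:3])
newDF <- DF1[!DF1$V1 %in% DF2$V1, ]  # keep rows whose V1 is not in DF2
newDF
#   V1 V2
# 4  4  d
# 5  5  e
# 6  6  f
```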
On Sun, Sep 14, 2008 at 12:49 PM, joseph [EMAIL PROTECTED] wrote:
Hello
I have 2 data frames DF1 and DF2 where DF2 is a subset of DF1:
DF1= data.frame(V1=1:6, V2= letters[1:6])
DF2= data.frame(V1=1:3, V2=
PROTECTED]
To: joseph [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, September 14, 2008 10:07:48 AM
Subject: RE: [R] difference of two data frames
Hi: If you mean a dataframe of the rows in DF1 that are not in DF2 , then I
think below will work for the letters, which , according to what I'm
]
To: joseph [EMAIL PROTECTED]
Cc: r-help@r-project.org
Sent: Sunday, September 14, 2008 10:23:33 AM
Subject: Re: [R] difference of two data frames
Hi Joseph,
Try this:
DF1[!DF1$V1%in%DF2$V1,]
subset(DF1,!V1%in%DF2$V1)
HTH,
Jorge
On Sun, Sep 14, 2008 at 12:49 PM, joseph [EMAIL PROTECTED] wrote
Velez [EMAIL PROTECTED]
To: joseph [EMAIL PROTECTED]
Cc: r-help@r-project.org
Sent: Sunday, September 14, 2008 10:23:33 AM
Subject: Re: [R] difference of two data frames
Hi Joseph,
Try this:
DF1[!DF1$V1%in%DF2$V1,]
subset(DF1,!V1%in%DF2$V1)
HTH,
Jorge
On Sun, Sep 14, 2008 at 12:49 PM, joseph
be considered when
calculating the difference.
- Original Message
From: Jorge Ivan Velez [EMAIL PROTECTED]
To: joseph [EMAIL PROTECTED]
Sent: Sunday, September 14, 2008 11:14:11 AM
Subject: Re: [R] difference of two data frames
Hi Joseph,
I'm not sure if I understood your point, but try
I thought the difference was too big as well, so I tried both breslow and
efron with the same differing result, and exact runs forever, which is
strange as I'm only using one dependent variable here. It could be that
n ~ 50.000, and I haven't got the most powerful computer either. I'm not
aiming to get equal
My apologies for asking slightly about SPSS in addition to R...
Could not find an exact answer in the archives on whether R and SPSS may
give different p-vals when output for coeffs and conf-intervals are the
same.
Anyway, a colleague and I are doing a very simple coxreg analysis and
get the same
Kåre Edvardsen wrote:
My apologies for asking slightly about SPSS in addition to R...
Could not find an exact answer in the archives on whether R and SPSS may
give different p-vals when output for coeffs and conf-intervals are the
same.
Anyway, a colleague and I are doing a very simple coxreg
That is a larger difference in p-values than I would expect due to
numerical differences and stopping criteria. My guess is that you are
running across the different approximations for tied failure times. If
so, you will get better agreement with SPSS by using method="breslow" in
coxph().
Dear all,
It appears that MASS::polr() and Design::lrm() return the same point
estimates but different st.errs when fitting proportional odds models,
grade <- c(4,4,2,4,3,2,3,1,3,3,2,2,3,3,2,4,2,4,5,2,1,4,1,2,5,3,4,2,2,1)
Dear Vito
No, you are not wrong, but you should center score prior to model estimation:
summary(fm1 <- polr(factor(grade) ~ I(score - mean(score))))
which gives the same standard errors as does lrm. Now the intercepts
refer to the mean score rather than some potentially unrealistic score of
0.
You can
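The same effect is easy to see with plain lm(): centering the predictor leaves the slope and its standard error untouched but changes the intercept and its standard error, because the intercept now refers to the mean of x instead of an extrapolated x = 0. A sketch with simulated data:

```r
set.seed(1)
x <- rnorm(50, mean = 100, sd = 10)   # predictor far from 0
y <- 2 + 0.5 * x + rnorm(50)
s1 <- summary(lm(y ~ x))$coefficients
s2 <- summary(lm(y ~ I(x - mean(x))))$coefficients
s1[2, 1:2]  # slope and its SE
s2[2, 1:2]  # identical slope and SE after centering
s1[1, 2]    # intercept SE: large, extrapolated to x = 0
s2[1, 2]    # intercept SE: small, evaluated at the mean of x
```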
Hi Vito
(question to the authors of MASS below)
2008/6/30 vito muggeo [EMAIL PROTECTED]:
Dear Haubo,
many thanks for your reply.
Yes you are right, by scaling the score, I get the same results.
However it sounds strange to me. I understand that the SE and/or t-ratio of
the intercepts depend
Thank you for those details; the only optimization routine I've come across
outside of CRAN is:
http://www.stat.umn.edu/geyer/trust/
Personally I only use nlminb for the estimation of Time Series models, which
typically have well defined limits for the elements of the parameter vector
- so in
I believe nlminb() performs *constrained* optimization, whereas nlm() is for
*unconstrained* optimization.
So I guess nlm() is for solving min(f[a,b]), and nlminb() min(f[a,b]) given
a+b = c
FYI I think optim() also does constrained optimization; well, I've used it for
min(f[a,b]) given a = a* and
nlminb provides unconstrained optimization and optimization subject to
box constraints (i.e. upper and/or lower constraints on individual
elements of the parameter vector). The nlm function provides
unconstrained optimization.
I created the nlminb function because I was unable to get reliable
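A minimal sketch of the distinction described above: nlm() is unconstrained only, while nlminb() accepts lower/upper box constraints on individual parameters.

```r
f <- function(x) (x - 2)^2       # unconstrained minimum at x = 2
nlm(f, p = 0)$estimate           # ~2: nlm is unconstrained only
nlminb(start = 0, objective = f)$par             # ~2: unconstrained
nlminb(start = 0, objective = f, upper = 1)$par  # 1: the box constraint binds
```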
Dear R-users,
I use Sweave for quite a long time but I still wonder what is the difference
between .Snw and .Rnw files. Why these two file extensions ? I searched the
web without success.
Thanks for your answers.
Delphine Fontaine
On 6/2/2008 7:25 AM, Delphine Fontaine wrote:
Dear R-users,
I use Sweave for quite a long time but I still wonder what is the difference
between .Snw and .Rnw files. Why these two file extensions ? I searched the
web without success.
Thanks for your answers.
There is no difference from the
8:51 PM
To: [EMAIL PROTECTED]
Subject: Re: [R] difference between 2 ecdfs
In article
[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
hi,
a) i have something like:
ecdfgrp1 <- ecdf(subset(mydata, TMT_GRP==1)$Y)
ecdfgrp2 <- ecdf(subset(mydata, TMT_GRP==2)$Y)
how can i plot the difference
it to
*look like* a step function. Any suggestion?
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of David Winsemius
Sent: Friday, March 21, 2008 8:51 PM
To: [EMAIL PROTECTED]
Subject: Re: [R] difference between 2 ecdfs
hi,
a) i have something like:
ecdfgrp1 <- ecdf(subset(mydata, TMT_GRP==1)$Y)
ecdfgrp2 <- ecdf(subset(mydata, TMT_GRP==2)$Y)
how can i plot the difference between these 2 step functions?
i could begin with ecdfrefl <- function(x) { ecdfgrp2(x) - ecdfgrp1(x) } ...
what next?
b) if i have a vector with
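Picking up part (a): since ecdf objects are callable functions, the difference function sketched in the question can simply be evaluated on a grid and drawn with type = "s". A self-contained sketch with simulated stand-ins for the two groups (evaluating at the pooled data points, where the steps occur):

```r
set.seed(42)
y1 <- rnorm(100)             # stand-in for subset(mydata, TMT_GRP==1)$Y
y2 <- rnorm(100, mean = 1)   # stand-in for subset(mydata, TMT_GRP==2)$Y
ecdfgrp1 <- ecdf(y1)
ecdfgrp2 <- ecdf(y2)
ecdfdiff <- function(x) ecdfgrp2(x) - ecdfgrp1(x)
xs <- sort(unique(c(y1, y2)))        # all step locations of either ECDF
plot(xs, ecdfdiff(xs), type = "s",   # draw the difference as a step function
     xlab = "y", ylab = "ECDF2 - ECDF1")
```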
In article
[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
hi,
a) i have something like:
ecdfgrp1 <- ecdf(subset(mydata, TMT_GRP==1)$Y)
ecdfgrp2 <- ecdf(subset(mydata, TMT_GRP==2)$Y)
how can i plot the difference between these 2 step functions?
i could begin with
I am running lrm() with a single factor. I then run anova() on the fitted
model to obtain a p-value associated with having that factor in the model.
I am noticing that the Model L.R. in the lrm results is almost the same
as the Chi-Square in the anova results, but not quite; the latter value
is
Quoting Frank E Harrell Jr [EMAIL PROTECTED]:
anova (anova.Design) computes Wald statistics. When the log-likelihood
is very quadratic, these statistics will be very close to log-likelihood
ratio chi-square statistics. In general LR chi-square tests are better;
we use Wald tests for speed.
[EMAIL PROTECTED] wrote:
Quoting Frank E Harrell Jr [EMAIL PROTECTED]:
anova (anova.Design) computes Wald statistics. When the log-likelihood
is very quadratic, these statistics will be very close to log-likelihood
ratio chi-square statistics. In general LR chi-square tests are better;
we
I am running lrm() with a single factor. I then run anova() on the fitted
model to obtain a p-value associated with having that factor in the model.
I am noticing that the Model L.R. in the lrm results is almost the same
as the Chi-Square in the anova results, but not quite; the latter value
is
[EMAIL PROTECTED] wrote:
I am running lrm() with a single factor. I then run anova() on the fitted
model to obtain a p-value associated with having that factor in the model.
I am noticing that the Model L.R. in the lrm results is almost the same
as the Chi-Square in the anova results, but
Hello all. I'm currently working with mixed models, and have noticed
a curious difference between the nlme and lmer packages. While I
realize that model selection with mixed models is a tricky issue, the
two packages currently produce different AIC scores for the same
model, but they
Hallo,
fit12 <- lmFit(qrg[,1:2])
t12 <- toptable(fit12, adjust="fdr", number=25, genelist=qrg$genes[,1])
t12
           ID     logFC         t      P.Value  adj.P.Val        B
522   PLAU_OP -6.836144 -8.420414 5.589416e-05 0.01212520 2.054965
1555 CD44_WIZ -6.569622 -8.227938 6.510169e-05 0.01212520
To: r-help@r-project.org
Subject: [R] Difference between P.Value and adj.P.Value
Hallo,
fit12 <- lmFit(qrg[,1:2])
t12 <- toptable(fit12, adjust="fdr", number=25, genelist=qrg$genes[,1])
t12
           ID     logFC         t      P.Value  adj.P.Val        B
522   PLAU_OP -6.836144 -8.420414 5.589416e-05
Hi,
I have fitted a model using a glm() approach and using a gls() approach
(but without correcting for spatially autocorrelated errors). I have
noticed that although these models are the same (as they should be), the
AIC value differs between glm() and gls(). Can anyone tell me why they
differ?
On Tue, 27 Nov 2007, Geertje Van der Heijden wrote:
I have fitted a model using a glm() approach and using a gls() approach
(but without correcting for spatially autocorrelated errors). I have
noticed that although these models are the same (as they should be), the
AIC value differs between
Dear Prof. Ripley,
Thanks for your response! I used the REML method. If I estimate the gls
models using ML estimation, the AIC values are equal.
Many thanks,
Geertje
On Tue, 27 Nov 2007, Geertje Van der Heijden wrote:
I have fitted a model using a glm() approach and using a gls()
approach
There are several different formulae for AIC. They are all monotone
transformations of the basic penalized log likelihood or likelihood.
Thus when compared over different models the maximum (or minimum)
occurs at the same specification. If you use the exact same
estimation technique in different
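Whatever constant convention a package uses, R's own convention is AIC = -2*logLik + 2*npar, which is easy to verify from the logLik object; a sketch on the built-in cars data:

```r
fit <- lm(dist ~ speed, data = cars)
ll <- logLik(fit)
AIC(fit)
-2 * as.numeric(ll) + 2 * attr(ll, "df")  # same value: -2*logLik + 2*npar
```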