[R] standard error for regression coefficients corresponding to factor levels

2017-03-16 Thread li li
Hi all,
  I have the following data called "data1". After fitting the ancova model
with different slopes and intercepts for each region, I calculated the
regression coefficients and the corresponding standard errors. The standard
errors (for the intercepts and for the slopes) are all the same across the
different regions. Is something wrong?
  I know the SE is related to (X^T X)^-1, where X is the design matrix. So does
this happen whenever each factor level has the same set of values for
"week"?
 Thanks.
 Hanna



> mod <- lm(response ~ region*week, data1)
> tmp <- coef(summary(mod))
> res <- matrix(NA, 5, 4)
> res[1,1:2] <- tmp[1,1:2]
> res[2:5,1] <- tmp[1,1]+tmp[2:5,1]
> res[2:5,2] <- sqrt(tmp[2:5,2]^2-tmp[1,2]^2)
> res[1,3:4] <- tmp[6,1:2]
> res[2:5,3] <- tmp[6,1]+tmp[7:10,1]
> res[2:5,4] <- sqrt(tmp[7:10,2]^2-tmp[6,2]^2)
> colnames(res) <- c("intercept", "intercept SE", "slope", "slope SE")
> rownames(res) <- letters[1:5]
> res
   intercept intercept SE        slope   slope SE
a 0.18404464   0.08976301 -0.018629310 0.01385073
b 0.17605666   0.08976301 -0.022393789 0.01385073
c 0.16754130   0.08976301 -0.022367770 0.01385073
d 0.12554452   0.08976301 -0.017464385 0.01385073
e 0.06153256   0.08976301  0.007714685 0.01385073
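
For what it's worth, a more direct way to get the same per-region estimates and
standard errors is the cell-means parameterization, which avoids the manual
sqrt() manipulation above (a minimal sketch using the data1 listed below). The
identical SEs are expected here: every region has the same week values and the
same number of observations, so each region contributes an identical block to
(X^T X)^-1.

## sketch: one intercept and one slope per region, with SEs, in a single table
mod2 <- lm(response ~ 0 + region + region:week, data = data1)
coef(summary(mod2))   # rows regiona..regione are the intercepts,
                      # rows regiona:week..regione:week are the slopes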







> data1
    week region     response
5      3      c  0.057325067
6      6      c  0.066723632
7      9      c -0.025317808
12     3      d  0.024692613
13     6      d  0.021761492
14     9      d -0.099820335
19     3      c  0.119559235
20     6      c -0.054456186
21     9      c  0.078811180
26     3      d  0.091667189
27     6      d -0.053400777
28     9      d  0.090754363
33     3      c  0.163818085
34     6      c  0.008959741
35     9      c -0.115410852
40     3      d  0.193920693
41     6      d -0.087738914
42     9      d  0.004987542
47     3      a  0.121332285
48     6      a -0.020202707
49     9      a  0.037295785
54     3      b  0.214304603
55     6      b -0.052346480
56     9      b  0.082501222
61     3      a  0.053540767
62     6      a -0.019182819
63     9      a -0.057629113
68     3      b  0.068592791
69     6      b -0.123298216
70     9      b -0.230671818
75     3      a  0.330741562
76     6      a  0.013902905
77     9      a  0.190620360
82     3      b  0.151002874
83     6      b  0.086177696
84     9      b  0.178982656
89     3      e  0.062974799
90     6      e  0.062035391
91     9      e  0.206200831
96     3      e  0.123102197
97     6      e  0.040181790
98     9      e  0.121332285
103    3      e  0.147557564
104    6      e  0.062035391
105    9      e  0.144965770



[R] Standard error of quasi-Poisson regression coefficients in mgcv

2015-01-24 Thread YuHao
Hi, I am analyzing the relationship between animal density and several
environmental factors using GAM in mgcv. I fitted a Poisson model and a
quasi-Poisson model separately because there is overdispersion. In my model
there is a categorical predictor variable "Year". However, when I compared the
outputs from the Poisson and quasi-Poisson models, I noticed that the standard
errors of "Year" from the quasi-Poisson model were even a little smaller than
those from the Poisson model. Based on my understanding, the standard errors of
quasi-Poisson regression coefficients should be larger than those from the
Poisson model because of the dispersion. Could anybody help me figure out what
I was doing wrong? Thank you very much!

Hao
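
In case it helps anyone comparing two such fits, here is a minimal sketch of the
kind of check involved; `dat`, `count`, `x` and `Year` are placeholders standing
in for the poster's actual data and model:

library(mgcv)
m_pois  <- gam(count ~ s(x) + Year, family = poisson,      data = dat)
m_quasi <- gam(count ~ s(x) + Year, family = quasipoisson, data = dat)

se_p <- summary(m_pois )$p.table[, "Std. Error"]
se_q <- summary(m_quasi)$p.table[, "Std. Error"]
cbind(poisson = se_p, quasipoisson = se_q, ratio = se_q / se_p)
## for a fixed smooth the ratio would be roughly sqrt(dispersion) > 1 under
## overdispersion; mgcv re-selects the smoothing parameters under each family,
## so the "Year" SEs need not scale exactly and can occasionally come out smaller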
  


Re: [R] standard error of survfit.coxph()

2014-07-21 Thread array chip


Dear Terry/All,

I was trying to use your explanation of the standard error estimate from
survfit.coxph() to verify the standard error estimates for the log(-log(S))
method, but couldn't get the estimates to come out right. Here is an example
using the lung dataset:

> fit<-coxph(Surv(time,status)~wt.loss,lung)
> surv<-survfit(fit,newdata=lung[2:5,])$surv

> logstd<-survfit(fit,newdata=lung[2:5,])$std.err  
> logstd.time<-survfit(fit,newdata=lung[2:5,])$time


    ## std error of cumulative hazard at time=60

> logstd<-logstd[logstd.time==60,]
> logstd
[1] 0.01965858 0.01965858 0.01941871 0.01969248

    ## survival S at months 60

> surv<-surv[logstd.time==60,]  
> surv
    2 3 4 5 
0.9249238 0.9249238 0.9253038 0.9263392

Since survfit()$std.err is for the cumulative hazard H = -log(S), based on your
explanation below the variance for the log-log method should be (1/S^2) var(H):

> loglogstd<-sqrt(1/surv^2*(logstd^2))
> loglogstd

 2  3  4  5 
0.02125427 0.02125427 0.02098631 0.02125839

    ## the upper confidence interval should be
> exp(-exp(log(-log(surv))-qnorm(0.975)*loglogstd))
    2
 3 4 5 
0.9278737 0.9278737 0.9282031 0.9292363

But this is different from the output using summary.survfit:

> summary(survfit(fit,newdata=lung[2:5,],conf.type='log-log'),times=60)$upper[1,]
    2 3 4 5 
0.9534814 0.9534814 0.9535646 0.9548478

Can you please suggest what I did wrong in the calculation?

Thanks very much!

John




 From: "Therneau, Terry M., Ph.D." 

Sent: Monday, June 30, 2014 6:04 AM
Subject: Re:  standard error of survfit.coxph()


1. The computations "behind the scenes" produce the variance of the cumulative 
hazard. 
This is true for both an ordinary Kaplan-Meier and a Cox model.  
Transformations to other 
scales are done using simple Taylor series.

   H = cumulative hazard = -log(S);  S = survival
   var(H) = var(log(S))  = the starting point
   S = exp(log(S)), so var(S) is approx [deriv of exp(x)]^2 * var(log(S)) = S^2 var(H)
   var(log(-log(S))) is approx (1/S^2) var(H)

2. At the time it was written, summary.survfit was used only for printing out 
the survival 
curve at selected times, and the audience for the printout wanted std(S).   
True, that was 
20 years ago, but I don't recall anyone ever asking for summary to do anything 
else.  Your 
request is not a bad idea.
   Note however that the primary impact of using the log(S), S or log(-log(S))
scale is on the confidence intervals, and they do appear per request in the
summary output.

Terry T.


On 06/28/2014 05:00 AM, r-help-requ...@r-project.org wrote:
> Message: 9
> Date: Fri, 27 Jun 2014 12:39:29 -0700

> To:"r-help@r-project.org"  
> Subject: [R] standard error of survfit.coxph()
> Message-ID:
>     <1403897969.91269.yahoomail...@web122906.mail.ne1.yahoo.com>
> Content-Type: text/plain
>
> Hi, can anyone help me to understand the standard errors printed in the 
> output of survfit.coxph()?
>
> time<-sample(1:15,100,replace=T)
>
> status<-as.numeric(runif(100,0,1)<0.2)
> x<-rnorm(100,10,2)
>
> fit<-coxph(Surv(time,status)~x)
>     ### method 1
>
> survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log')$std.err
>
>              [,1]        [,2]        [,3]        [,4]       [,5]
>  [1,] 0.000000000 0.000000000 0.000000000 0.000000000 0.00000000
>  [2,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [3,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [4,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [5,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [6,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [7,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [8,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [9,] 0.036852571 0.037159980 0.036186931 0.034645002 0.06485394
> [10,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [11,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [12,] 0.055452631 0.056018832 0.054236881 0.051586391 0.10800413
> [13,] 0.070665160 0.071363749 0.069208056 0.066655730 0.14976433
> [14,] 0.124140400 0.125564637 0.121281571 0.118002021 0.30971860
> [15,] 0.173132357 0.175309455 0.168821266 0.164860523 0.46393111
>
> survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log')$time
>  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
>
> ??? ### method 2
>
> summary(survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,], 
> conf.type='log'),time=10)$std.err
>
> 

Re: [R] standard error of survfit.coxph()

2014-07-21 Thread array chip
Terry, I figured out that variance of log(-log(S)) should be (1/H^2)var(H), not 
(1/S^2)var(H)!

Thanks

John
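
For the archive, a small sketch of that corrected calculation on the lung
example above (assuming, as discussed in this thread, that survfit()$std.err is
the standard error of the cumulative hazard; exact behaviour may differ across
survival package versions):

library(survival)
fit <- coxph(Surv(time, status) ~ wt.loss, data = lung)
sf  <- survfit(fit, newdata = lung[2:5, ])
i   <- sf$time == 60
S   <- sf$surv[i, ]                 # survival at t = 60
seH <- sf$std.err[i, ]              # SE of the cumulative hazard H
H   <- -log(S)
se.loglog <- seH / H                # delta method: var(log(H)) ~ var(H)/H^2
exp(-exp(log(H) - qnorm(0.975) * se.loglog))   # upper log-log limit
## compare with:
## summary(survfit(fit, newdata = lung[2:5, ], conf.type = "log-log"),
##         times = 60)$upper[1, ]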






e...@mayo.edu>; "r-help@r-project.org"  
Sent: Monday, July 21, 2014 11:41 AM
Subject: Re:  standard error of survfit.coxph()



Dear Terry,

I was trying to use your explanation of the standard error estimate from
survfit.coxph() to verify the standard error estimates for the log(-log(S))
method, but couldn't get the estimates to come out right. Here is an example
using the lung dataset:

> fit<-coxph(Surv(time,status)~wt.loss,lung)
> surv<-survfit(fit,newdata=lung[2:5,])$surv

> logstd<-survfit(fit,newdata=lung[2:5,])$std.err  
> logstd.time<-survfit(fit,newdata=lung[2:5,])$time


    ## std error of cumulative hazard at time=60

> logstd<-logstd[logstd.time==60,]
> logstd
[1] 0.01965858 0.01965858 0.01941871 0.01969248

    ## survival S at months 60

> surv<-surv[logstd.time==60,]  
> surv
    2 3 4 5 
0.9249238 0.9249238 0.9253038 0.9263392

Since survfit()$std.err is for the cumulative hazard H = -log(S), based on your
explanation below the variance for the log-log method should be (1/S^2) var(H):

> loglogstd<-sqrt(1/surv^2*(logstd^2))
> loglogstd

 2  3  4  5 
0.02125427 0.02125427 0.02098631 0.02125839

    ## the upper confidence interval should be
> exp(-exp(log(-log(surv))-qnorm(0.975)*loglogstd))
    2
 3 4 5 
0.9278737 0.9278737 0.9282031 0.9292363

But this is different from the output using summary.survfit:

> summary(survfit(fit,newdata=lung[2:5,],conf.type='log-log'),times=60)$upper[1,]
    2 3 4 5 
0.9534814 0.9534814 0.9535646 0.9548478

Can you please suggest what I did wrong in the calculation?

Thanks very much!

John





To: "Therneau, Terry M., Ph.D." ; "r-help@r-project.org" 
 
Sent: Monday, June 30, 2014 10:46 AM
Subject: Re:  standard error of survfit.coxph()



[[elided Yahoo spam]]

John




 From: "Therneau, Terry M., Ph.D." 

Sent: Monday, June 30, 2014 6:04 AM
Subject: Re:  standard error of survfit.coxph()


1. The computations "behind the scenes" produce the variance of the cumulative 
hazard. 
This is true for both an ordinary Kaplan-Meier and a Cox model.  
Transformations to other 
scales are done using simple Taylor series.

   H = cumulative hazard = -log(S);  S = survival
   var(H) = var(log(S))  = the starting point
   S = exp(log(S)), so var(S) is approx [deriv of exp(x)]^2 * var(log(S)) = S^2 var(H)
   var(log(-log(S))) is approx (1/S^2) var(H)

2. At the time it was written, summary.survfit was used only for
 printing out the survival 
curve at selected times, and the audience for the printout wanted std(S).   
True, that was 
20 years ago, but I don't recall anyone ever asking for summary to do anything 
else.  Your 
request is not a bad idea.
   Note however that the primary impact of using the log(S), S or log(-log(S))
scale is on the confidence intervals, and they do appear per request in the
summary output.

Terry T.


On 06/28/2014 05:00 AM, r-help-requ...@r-project.org wrote:
> Message: 9
> Date: Fri, 27 Jun 2014 12:39:29 -0700

> To:"r-help@r-project.org"  
> Subject: [R] standard error of survfit.coxph()
> Message-ID:
>     <1403897969.91269.yahoomail...@web122906.mail.ne1.yahoo.com>
> Content-Type: text/plain
>
> Hi, can anyone help me to understand the standard errors printed in the 
> output of survfit.coxph()?
>
> time<-sample(1:15,100,replace=T)
>
> status<-as.numeric(runif(100,0,1)<0.2)
> x<-rnorm(100,10,2)
>
> fit<-coxph(Surv(time,status)~x)
>     ### method 1
>
> survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log')$std.err
>
>              [,1]        [,2]        [,3]        [,4]       [,5]
>  [1,] 0.000000000 0.000000000 0.000000000 0.000000000 0.00000000
>  [2,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [3,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [4,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [5,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [6,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [7,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [8,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [9,] 0.036852571 0.037159980 0.036186931 0.034645002 0.06485394
> [10,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [11,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [12,] 0.055452631 0.056018832 0.054236881 0.051586391 0.10800413
> [13,] 0.070665160 0.071363749 0.069208056 


Re: [R] standard error of survfit.coxph()

2014-06-30 Thread array chip
Thank you Terry for the explanation!

John




 From: "Therneau, Terry M., Ph.D." 

Sent: Monday, June 30, 2014 6:04 AM
Subject: Re:  standard error of survfit.coxph()


1. The computations "behind the scenes" produce the variance of the cumulative 
hazard. 
This is true for both an ordinary Kaplan-Meier and a Cox model.  
Transformations to other 
scales are done using simple Taylor series.

   H = cumulative hazard = -log(S);  S = survival
   var(H) = var(log(S))  = the starting point
   S = exp(log(S)), so var(S) is approx [deriv of exp(x)]^2 * var(log(S)) = S^2 var(H)
   var(log(-log(S))) is approx (1/S^2) var(H)

2. At the time it was written, summary.survfit was used only for printing out 
the survival 
curve at selected times, and the audience for the printout wanted std(S).   
True, that was 
20 years ago, but I don't recall anyone ever asking for summary to do anything 
else.  Your 
request is not a bad idea.
   Note however that the primary impact of using the log(S), S or log(-log(S))
scale is on the confidence intervals, and they do appear per request in the
summary output.

Terry T.


On 06/28/2014 05:00 AM, r-help-requ...@r-project.org wrote:
> Message: 9
> Date: Fri, 27 Jun 2014 12:39:29 -0700

> To:"r-help@r-project.org"  
> Subject: [R] standard error of survfit.coxph()
> Message-ID:
>     <1403897969.91269.yahoomail...@web122906.mail.ne1.yahoo.com>
> Content-Type: text/plain
>
> Hi, can anyone help me to understand the standard errors printed in the 
> output of survfit.coxph()?
>
> time<-sample(1:15,100,replace=T)
>
> status<-as.numeric(runif(100,0,1)<0.2)
> x<-rnorm(100,10,2)
>
> fit<-coxph(Surv(time,status)~x)
>     ### method 1
>
> survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log')$std.err
>
>              [,1]        [,2]        [,3]        [,4]       [,5]
>  [1,] 0.000000000 0.000000000 0.000000000 0.000000000 0.00000000
>  [2,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [3,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
>  [4,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [5,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [6,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
>  [7,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [8,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
>  [9,] 0.036852571 0.037159980 0.036186931 0.034645002 0.06485394
> [10,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [11,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
> [12,] 0.055452631 0.056018832 0.054236881 0.051586391 0.10800413
> [13,] 0.070665160 0.071363749 0.069208056 0.066655730 0.14976433
> [14,] 0.124140400 0.125564637 0.121281571 0.118002021 0.30971860
> [15,] 0.173132357 0.175309455 0.168821266 0.164860523 0.46393111
>
> survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log')$time
>  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
>
>     ### method 2
>
> summary(survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
> conf.type='log'),time=10)$std.err
>
>               1          2          3          4          5
> [1,] 0.04061384 0.04106186 0.03963184 0.03715246 0.06867532
>
> By reading the help of ?survfit.object and ?summary.survfit, the standard 
> error provided in the output of method 1 (survfit()) was for cumulative 
> hazard-log(survival), while the standard error provided in the output of 
> method 2 (summary.survfit()) was for survival itself, regardless of how you 
> choose the value for "conf.type" ('log', 'log-log' or 'plain'). This explains 
> why the standard error output is different between method 1 (10th row) and 
> method 2.
>
> My question is how do I get standard error estimates for log(-log(survival))?
>
> Thanks!
>
> John


Re: [R] standard error of survfit.coxph()

2014-06-30 Thread Therneau, Terry M., Ph.D.
1. The computations "behind the scenes" produce the variance of the cumulative hazard. 
This is true for both an ordinary Kaplan-Meier and a Cox model.  Transformations to other 
scales are done using simple Taylor series.


  H = cumulative hazard = -log(S);  S = survival
  var(H) = var(log(S))  = the starting point
  S = exp(log(S)), so var(S) is approx [deriv of exp(x)]^2 * var(log(S)) = S^2 var(H)
  var(log(-log(S))) is approx (1/S^2) var(H)

2. At the time it was written, summary.survfit was used only for printing out the survival 
curve at selected times, and the audience for the printout wanted std(S).   True, that was 
20 years ago, but I don't recall anyone ever asking for summary to do anything else.  Your 
request is not a bad idea.
  Note however that the primary impact of using the log(S), S or log(-log(S)) scale is on
the confidence intervals, and they do appear per request in the summary output.


Terry T.


On 06/28/2014 05:00 AM, r-help-requ...@r-project.org wrote:

Message: 9
Date: Fri, 27 Jun 2014 12:39:29 -0700
From: array chip
To:"r-help@r-project.org"  
Subject: [R] standard error of survfit.coxph()
Message-ID:
<1403897969.91269.yahoomail...@web122906.mail.ne1.yahoo.com>
Content-Type: text/plain

Hi, can anyone help me to understand the standard errors printed in the output 
of survfit.coxph()?

time<-sample(1:15,100,replace=T)

status<-as.numeric(runif(100,0,1)<0.2)
x<-rnorm(100,10,2)

fit<-coxph(Surv(time,status)~x)
    ### method 1

survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
conf.type='log')$std.err

             [,1]        [,2]        [,3]        [,4]       [,5]
 [1,] 0.000000000 0.000000000 0.000000000 0.000000000 0.00000000
 [2,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
 [3,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
 [4,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [5,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [6,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [7,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
 [8,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
 [9,] 0.036852571 0.037159980 0.036186931 0.034645002 0.06485394
[10,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
[11,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
[12,] 0.055452631 0.056018832 0.054236881 0.051586391 0.10800413
[13,] 0.070665160 0.071363749 0.069208056 0.066655730 0.14976433
[14,] 0.124140400 0.125564637 0.121281571 0.118002021 0.30971860
[15,] 0.173132357 0.175309455 0.168821266 0.164860523 0.46393111

survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
conf.type='log')$time
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15

    ### method 2

summary(survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,],
conf.type='log'),time=10)$std.err

          1          2          3          4          5
[1,] 0.04061384 0.04106186 0.03963184 0.03715246 0.06867532

By reading the help of ?survfit.object and ?summary.survfit, the standard error provided 
in the output of method 1 (survfit()) was for cumulative hazard-log(survival), while the 
standard error provided in the output of method 2 (summary.survfit()) was for survival 
itself, regardless of how you choose the value for "conf.type" ('log', 
'log-log' or 'plain'). This explains why the standard error output is different between 
method 1 (10th row) and method 2.

My question is how do I get standard error estimates for log(-log(survival))?

Thanks!

John




[R] standard error of survfit.coxph()

2014-06-27 Thread array chip
Hi, can anyone help me to understand the standard errors printed in the output 
of survfit.coxph()?

time<-sample(1:15,100,replace=T)

status<-as.numeric(runif(100,0,1)<0.2)
x<-rnorm(100,10,2)

fit<-coxph(Surv(time,status)~x)
    ### method 1

survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,], 
conf.type='log')$std.err

             [,1]        [,2]        [,3]        [,4]       [,5]
 [1,] 0.000000000 0.000000000 0.000000000 0.000000000 0.00000000
 [2,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
 [3,] 0.008627644 0.008567253 0.008773699 0.009354788 0.01481819
 [4,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [5,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [6,] 0.013800603 0.013767977 0.013889971 0.014379928 0.02353371
 [7,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
 [8,] 0.030226811 0.030423883 0.029806263 0.028918817 0.05191161
 [9,] 0.036852571 0.037159980 0.036186931 0.034645002 0.06485394
[10,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
[11,] 0.044181716 0.044621159 0.043221145 0.040872939 0.07931028
[12,] 0.055452631 0.056018832 0.054236881 0.051586391 0.10800413
[13,] 0.070665160 0.071363749 0.069208056 0.066655730 0.14976433
[14,] 0.124140400 0.125564637 0.121281571 0.118002021 0.30971860
[15,] 0.173132357 0.175309455 0.168821266 0.164860523 0.46393111

survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,], 
conf.type='log')$time
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15

    ### method 2

summary(survfit(fit, newdata=data.frame(time=time,status=status,x=x)[1:5,], 
conf.type='log'),time=10)$std.err

  1  2  3  4  5
[1,] 0.04061384 0.04106186 0.03963184 0.03715246 0.06867532

From reading the help for ?survfit.object and ?summary.survfit, the standard
error in the output of method 1 (survfit()) is for the cumulative hazard,
-log(survival), while the standard error in the output of method 2
(summary.survfit()) is for the survival itself, regardless of how you choose the
value of "conf.type" ('log', 'log-log' or 'plain'). This explains why the
standard error output differs between method 1 (10th row) and method 2.

My question is how do I get standard error estimates for log(-log(survival))?

Thanks!

John


Re: [R] Standard Error and P-Value from cor()

2013-04-18 Thread Pascal Oettli
Hi,

?cor.test

Regards,
Pascal



2013/4/19 Gundala Viswanath 

> Is there a native way to produce SE of correlation in  R's cor() functions
> and p-value from T-test?
>
> As explained in this web
> http://www.sjsu.edu/faculty/gerstman/StatPrimer/correlation.pdf
> (page 14.6)
>
> The standard error is sqrt((1-r^2)/(n-2)), where n- is the number of
> sample.
>
> - G.V.
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



[R] Standard Error and P-Value from cor()

2013-04-18 Thread Gundala Viswanath
Is there a native way to produce SE of correlation in  R's cor() functions
and p-value from T-test?

As explained in this web
http://www.sjsu.edu/faculty/gerstman/StatPrimer/correlation.pdf
(page 14.6)

The standard error is sqrt((1-r^2)/(n-2)), where n- is the number of sample.

- G.V.
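
A small worked sketch of that formula, checked against cor.test() (x, y and n
here are made-up example data, not the poster's):

set.seed(1)
n <- 30
x <- rnorm(n)
y <- 0.5 * x + rnorm(n)

r  <- cor(x, y)
se <- sqrt((1 - r^2) / (n - 2))     # SE of r, as in the StatPrimer formula
t  <- r / se                        # t statistic on n - 2 df
p  <- 2 * pt(-abs(t), df = n - 2)
c(r = r, se = se, t = t, p = p)
cor.test(x, y)$p.value              # should agree with p above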



[R] Standard error of normalmixEM fit?

2013-03-31 Thread Stat Tistician
I fitted a mixture density of two Gaussians to my data. I now want to
calculate the standard errors of the estimates via the boot.se command of
the mixtools package. My question now is whether the output is correct. It
seems a bit odd to me, so is what I am doing correct, and can I rely on
the values?

My data: http://s000.tinyupload.com/?file_id=09285782882980618119

My code:

normalmix<-normalmixEM(dat, k=2, lambda=c(0.99024, 1-0.99024), fast=FALSE,
                       maxit=1, epsilon=1e-16, maxrestarts=1000)
normalmix$loglik
normalmix$lambda

 se<-boot.se(normalmix,B=1000)

  se$lambda.se
  se$mu.se
  se$sigma.se

final results:

lambdahat = 0.990238663

mu1hat= -0.00115

mu2hat= 0.040176949

sigma1hat= 0.012220305

sigma2hat= 0.003247102



My problem now is - and that's why I feel uncomfortable about relying on the
values - that the output of boot.se(normalmix) varies quite strongly between
runs. So without changing the code, rerunning it (with the same normalmix, so
normalmix is not refitted) gives different estimates of the standard
error. I increased the default value of B from 100 to 1000. The manual
says nothing about any other source of randomness. So where does it come
from? What should I do now?
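
For what it's worth, the run-to-run variation comes from the bootstrap
resampling inside boot.se() itself; fixing the RNG seed makes it reproducible
(a minimal sketch, reusing the `normalmix` object fitted above):

library(mixtools)
set.seed(1)
se_a <- boot.se(normalmix, B = 1000)
set.seed(1)
se_b <- boot.se(normalmix, B = 1000)
all.equal(se_a$lambda.se, se_b$lambda.se)   # same seed, same resamples
## with different seeds, the B = 1000 estimates should still agree to within
## their own Monte Carlo error; if they do not, increase B further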



Re: [R] standard error very high in maximum likelihood fitting

2013-02-12 Thread Ben Bolker
Abu Naser  hotmail.com> writes:

> I have been trying to fit my data (only right censored)
>  with gumbel distribution using fitdistrplus. I am
> getting very high standard error. I have been wondering why.
> The followings are the outputs:
> 
> fit1=fitdistcens(dr0, "gumbel", start=list(a=99, b=0.6), 
> optim.method= "L-BFGS-B", lower = 0.0,
> upper = Inf)
> > summary(fit1)

  [snip]

>     estimate Std. Error
> a 97.8260371    3115.09
> b  0.1115094     173.79
> Loglikelihood:  -9.749883e-10   AIC:  4   BIC:  21.21178 

 [snip]

 Impossible to say without a reproducible example.  It seems 
very suspicious that your log-likelihood is almost exactly zero.
Do you have a very small data set?



[R] standard error very high in maximum likelihood fitting

2013-02-12 Thread Abu Naser









 Dear all,

I have been trying to fit my data (only right censored) with a Gumbel
distribution using fitdistrplus. I am getting very high standard errors, and I
have been wondering why.
The following is the output:

fit1=fitdistcens(dr0, "gumbel", start=list(a=99, b=0.6), optim.method= 
"L-BFGS-B", lower = 0.0, upper = Inf)
> summary(fit1)
FITTING OF THE DISTRIBUTION ' gumbel ' BY MAXIMUM LIKELIHOOD ON CENSORED DATA 
PARAMETERS
    estimate Std. Error
a 97.8260371    3115.09
b  0.1115094     173.79
Loglikelihood:  -9.749883e-10   AIC:  4   BIC:  21.21178 
Correlation matrix:
   a  b
a  1.000 -0.6693773
b -0.6693773  1.000
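
As an aside for anyone trying to reproduce this: "gumbel" is not a built-in
distribution, so fitdistcens() needs dgumbel()/pgumbel() defined in the
workspace. A hypothetical minimal setup (dr0 stands for the poster's
censored-data frame with left/right columns):

library(fitdistrplus)
## Gumbel (maximum) distribution with location a and scale b
dgumbel <- function(x, a, b) 1/b * exp(-(x - a)/b) * exp(-exp(-(x - a)/b))
pgumbel <- function(q, a, b) exp(-exp(-(q - a)/b))

fit1 <- fitdistcens(dr0, "gumbel", start = list(a = 99, b = 0.6))
summary(fit1)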
  


Re: [R] standard error for quantile

2012-11-12 Thread Thomas Lumley
On Tue, Nov 13, 2012 at 11:59 AM, Rolf Turner wrote:

>
> My apologies for returning to this issue after such a considerable
> length of time ... but I wanted to check the result in Cramer's book,
> and only yesterday managed to get myself organised to go the
> library and  check it out.
>
> What bothers me is what happens when f(Q.p) = 0.  The formula
> that you give --- which is exactly the same as that which appears
> in Cramer, page 369, would appear to imply that the variance is
> infinite when f(Q.p) = 0.  This doesn't feel right to me.
>

That analysis is only first order, so really all it says is that n *
variance goes to infinity rather than to a finite constant as when
f(Q.p)>0.  I expect that if you looked at different sample sizes you'd find
that the variance eventually decreases more slowly than 1/n, perhaps like
n^(-2/3) or something similar.

For p=0.51, the asymptotics probably aren't going to work until n is large
enough that Q.50 is well out in the tails of the normal distribution
centered around Q.51.


> I did a wee experiment with f(x) = 15*x^2*(1-x^2)/4 for -1 <= x <= 1,
> which makes f(Q.50) = f(0) = 0.
>

With this density you can write down the density of the median or other
order statistic and thus write down an integral that gives the exact
variance.  Better still, it's a polynomial, so you could evaluate the
integral exactly.


-thomas

-- 
Thomas Lumley
Professor of Biostatistics
University of Auckland
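
A sketch of that exact calculation for Rolf's density f(x) = 15*x^2*(1-x^2)/4 on
[-1, 1], done for an odd sample size (n = 999, so the median is the single 500th
order statistic; Rolf's n = 1000 median averages two order statistics, which is
slightly messier), with the integral evaluated numerically rather than exactly:

f  <- function(x) 15 * x^2 * (1 - x^2) / 4
Fx <- function(x) 0.5 + (5 * x^3 - 3 * x^5) / 4      # CDF of f on [-1, 1]
n <- 999; k <- 500                                    # 500th order statistic = median
dmed <- function(x) {                                 # density of the k-th order statistic
  lg <- lchoose(n, k) + log(k) +
        (k - 1) * log(Fx(x)) + (n - k) * log(1 - Fx(x)) + log(f(x))
  exp(lg)
}
## the median's mean is 0 by symmetry, so its variance is E[X^2]
integrate(function(x) x^2 * dmed(x), -1, 1)           # compare with Rolf's simulated value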



Re: [R] standard error for quantile

2012-11-12 Thread Rolf Turner


My apologies for returning to this issue after such a considerable
length of time ... but I wanted to check the result in Cramer's book,
and only yesterday managed to get myself organised to go the
library and  check it out.

What bothers me is what happens when f(Q.p) = 0.  The formula
that you give --- which is exactly the same as that which appears
in Cramer, page 369, would appear to imply that the variance is
infinite when f(Q.p) = 0.  This doesn't feel right to me.

I did a wee experiment with f(x) = 15*x^2*(1-x^2)/4 for -1 <= x <= 1,
which makes f(Q.50) = f(0) = 0.

I simulated 1000 samples of size 1000 from this distribution, and
calculated the variance of the empirical quantiles for prob=0.5, 0.51,
and 0.75.

For prob =0.75 the variance of the empirical quantiles was 0.0002274762,
and the formula gave 0.0002266639  --- very nice agreement.

For prob = 0.51 the empirical variance was 0.03743684 and the formula
gave 0.01167684 --- which is pretty much out to lunch.

For prob = 0.50 the empirical variance was 0.04545603 and the formula
of course gives Inf.

Cramer does not (as far as I can see) mention anything about the necessity
for f(Q.p) to be non-zero.  Note that f(Q.51) = 0.1462919, which is not all
*that* close to 0, but still the resulting answer from the formula is pretty
crummy.  With a sample size of 1000 I would have thought (naive young
thing that I am) that the asymptotics would have well and truly kicked in.

Any thoughts on this?  Is there anything that one can do in instances where
f(Q.p) = 0?  Just curious ..

cheers,

Rolf

P. S. The reference for Cramer's book is "Mathematical Methods of 
Statistics",

by Harald Cramer, Princeton University Press, 1961.

Note that Cramer should really have an acute accent on the "e" in all of
the above.

R.

P^2. S.  The histograms of the prob = 0.5 and prob = 0.51 empirical 
quantiles

are *very* strongly bimodal.  Nothing like Gaussian.

R.



On 31/10/12 06:41, (Ted Harding) wrote:




The general asymptotic result for the pth quantile (0





Re: [R] standard error for quantile

2012-10-31 Thread Ted Harding
[see in-line below]

On 31-Oct-2012 10:26:14 PIKAL Petr wrote:
> Hi Ted
> 
>> -Original Message-
>> From: ted@deb [mailto:ted@deb] On Behalf Of Ted Harding
>> Sent: Tuesday, October 30, 2012 6:41 PM
>> To: r-help@r-project.org
> 
> 
> 
>> 
>> The general asymptotic result for the pth quantile (0 < p < 1) from a
>> sample of size n is that it is asymptotically Normally distributed with
>> mean the pth quantile Q.p of the parent distribution and
>> 
>>   var(X.p) = p*(1-p)/(n*f(Q.p)^2)
>> 
>> where f(x) is the probability density function of the parent
>> distribution.
> 
> So if I understand correctly p*(1-p) is biggest when p=0.5 and decreases with
> smaller or bigger p. The var(X.p) then depends on ratio to parent
> distribution at this p probability. For lognorm distribution and 200 values
> the resulting var is
> 
>> (0.5*(1-.5))/(200*qlnorm(.5, log(200), log(2))^2)
> [1] 3.125e-08
>> (0.1*(1-.1))/(200*qlnorm(.1, log(200), log(2))^2)
> [1] 6.648497e-08
> 
> so 0.1 var is slightly bigger than 0.5 var. For different distributions this
> can be reversed as Jim pointed out.
> 
> Did I manage to understand?
> 
> Thank you very much.
> Regards
> Petr 

Yes, it looks as though you understand! As a further illustration,
here is some R code applied to examples where the parent distribution
is uniform or Normal. For each case, the results are stated as
first: simulated; second: by the formula. It can be seen that for
n=200 the formula and the simulations are close.
Ted.

###
## Test of formula for var(quantile)
varQ <- function(p,n,f.p) {
  p*(1-p)/(n*(f.p^2))
}

## Test 1: Uniform (0,1), n = 200
n <- 200
## Pick one of (a), (b), (c):
## (a)# p <- 0.50 ; q <- 100 ; f.p <- 1
## (b)# p <- 0.25 ; q <-  50 ; f.p <- 1
## (c)# p <- 0.10 ; q <-  25 ; f.p <- 1
Nsim <- 1000
Qs   <- numeric(Nsim)
for( i in (1:Nsim) ){
  Qs[i] <- sort(runif(n))[q]
}
var(Qs)
varQ(p,n,f.p)
## (a) 0.001239982
## 0.00125
## (b) 0.0008877879
## 0.0009375
## (c) 0.0005619348
## 0.00045

## Test 2: N(0,1), n = 200
n <- 200
## Pick one of (a), (b), (c):
## (a)# p <- 0.50 ; q <- 100 ; f.p <- dnorm(qnorm(0.50))
## (b)# p <- 0.25 ; q <-  50 ; f.p <- dnorm(qnorm(0.25))
## (c)# p <- 0.10 ; q <-  20 ; f.p <- dnorm(qnorm(0.10))
Nsim <- 1000
Qs   <- numeric(Nsim)
for( i in (1:Nsim) ){
  Qs[i] <- sort(rnorm(n))[q]
}
var(Qs)
varQ(p,n,f.p)
## (a) 0.007633568
## 0.007853982
## (b) 0.009370099
## 0.009283837
## (c) 0.01420517
## 0.01461055
###

>> This is not necessarily very helpful for small sample sizes (depending
>> on the parent distribution).
>> 
>> However, it is possible to obtain a general result giving an exact
>> confidence interval for Q.p given the entire ordered sample, though
>> there is only a restricted set of confidence levels to which it
>> applies.
>> 
>> If you'd like more detail about the above, I could write up derivations
>> and make the write-up available.
>> 
>> Hoping this helps,
>> Ted.
>> 
>> -
>> E-Mail: (Ted Harding) 
>> Date: 30-Oct-2012  Time: 17:40:55
>> This message was sent by XFMail
>> -
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-
E-Mail: (Ted Harding) 
Date: 31-Oct-2012  Time: 18:10:16
This message was sent by XFMail



Re: [R] standard error for quantile

2012-10-31 Thread Roger Koenker
The rank test inversion option that you are trying to use won't
work with only one coefficient, and therefore with univariate
quantiles.  If you use summary(rq(rnorm(50) ~ 1, tau = .9), cov = TRUE)
you will have better luck.

url:   www.econ.uiuc.edu/~roger     Roger Koenker
email  rkoen...@uiuc.edu            Department of Economics
vox:   217-333-4558                 University of Illinois
fax:   217-244-6678                 Urbana, IL 61801
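
As a concrete version of that suggestion, e.g. for Petr's lognormal example
(a sketch; se = "boot" asks summary.rq for bootstrap standard errors):

library(quantreg)
set.seed(1)
x <- rlnorm(200, log(200), log(2))
summary(rq(x ~ 1, tau = c(.10, .50, .99)), se = "boot")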

On Oct 30, 2012, at 9:42 AM, Koenker, Roger W wrote:

> Petr,
> 
> You can  do:
> 
> require(quantreg)
> summary(rq(x ~ 1, tau = c(.10,.50,.99)))
> 
> 
> url:www.econ.uiuc.edu/~rogerRoger Koenker
> emailrkoen...@uiuc.eduDepartment of Economics
> vox: 217-333-4558University of Illinois
> fax:   217-244-6678Urbana, IL 61801
> 
> On Oct 30, 2012, at 9:37 AM, Bert Gunter wrote:
> 
>> Petr:
>> 
>> 1. Not an R question.
>> 
>> 2. You want the distribution of order statistics. Search on that. It's
>> basically binomial/beta.
>> 
>> -- Bert
>> 
>> On Tue, Oct 30, 2012 at 6:46 AM, PIKAL Petr  wrote:
>>> Dear all
>>> 
>>> I have a question about quantiles standard error, partly practical
>>> partly theoretical. I know that
>>> 
>>> x<-rlnorm(10, log(200), log(2))
>>> quantile(x, c(.10,.5,.99))
>>> 
>>> computes quantiles but I would like to know if there is any function to
>>> find standard error (or any dispersion measure) of these estimated
>>> values.
>>> 
>>> And here is a theoretical one. I feel that when I compute median from
>>> given set of values it will have lower standard error then 0.1 quantile
>>> computed from the same set of values.
>>> 
>>> Is it true? If yes can you point me to some reasoning?
>>> 
>>> Thanks for all answers.
>>> Regards
>>> Petr
>>> 
>>> PS.
>>> I found mcmcse package which shall compute the standard error but which
>>> I could not make to work probably because I do not have recent R-devel
>>> version installed
>>> 
>>> Error in eval(expr, envir, enclos) :
>>> could not find function ".getNamespace"
>>> Error : unable to load R code in package 'mcmcse'
>>> Error: package/namespace load failed for 'mcmcse'
>>> 
>>> Maybe I will also something find in quantreg package, but I did not
>>> went through it yet.
>>> 
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>> 
>> 
>> 
>> -- 
>> 
>> Bert Gunter
>> Genentech Nonclinical Biostatistics
>> 
>> Internal Contact Info:
>> Phone: 467-7374
>> Website:
>> http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
>> 
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



Re: [R] standard error for quantile

2012-10-31 Thread PIKAL Petr
Hi Roger

Thanks for your answer. I tried it with partial success. I got values for tau 
but I did not manage to evaluate errors or variances of these values.

from help page this shall compute median 

rq(rnorm(50) ~ 1, ci=FALSE)

and I assumed some kind of confidence interval would be computed when ci=TRUE, but to no
avail.

In the meantime I have received several other answers which helped me to understand 
the topic.

Thanks again.

Regards
Petr

> -Original Message-
> From: Roger Koenker [mailto:rkoen...@illinois.edu]
> Sent: Tuesday, October 30, 2012 3:42 PM
> To: PIKAL Petr
> Cc: r-help@r-project.org help
> Subject: Re: [R] standard error for quantile
> 
> Petr,
> 
> You can  do:
> 
> require(quantreg)
> summary(rq(x ~ 1, tau = c(.10,.50,.99)))
> 
> 
> url:www.econ.uiuc.edu/~rogerRoger Koenker
> emailrkoen...@uiuc.eduDepartment of Economics
> vox: 217-333-4558University of Illinois
> fax:   217-244-6678Urbana, IL 61801
> 
> On Oct 30, 2012, at 9:37 AM, Bert Gunter wrote:
> 
> > Petr:
> >
> > 1. Not an R question.
> >
> > 2. You want the distribution of order statistics. Search on that.
> It's
> > basically binomial/beta.
> >
> > -- Bert
> >
> > On Tue, Oct 30, 2012 at 6:46 AM, PIKAL Petr 
> wrote:
> >> Dear all
> >>
> >> I have a question about quantiles standard error, partly practical
> >> partly theoretical. I know that
> >>
> >> x<-rlnorm(10, log(200), log(2))
> >> quantile(x, c(.10,.5,.99))
> >>
> >> computes quantiles but I would like to know if there is any function
> >> to find standard error (or any dispersion measure) of these
> estimated
> >> values.
> >>
> >> And here is a theoretical one. I feel that when I compute median
> from
> >> given set of values it will have lower standard error then 0.1
> >> quantile computed from the same set of values.
> >>
> >> Is it true? If yes can you point me to some reasoning?
> >>
> >> Thanks for all answers.
> >> Regards
> >> Petr
> >>
> >> PS.
> >> I found mcmcse package which shall compute the standard error but
> >> which I could not make to work probably because I do not have recent
> >> R-devel version installed
> >>
> >> Error in eval(expr, envir, enclos) :
> >>  could not find function ".getNamespace"
> >> Error : unable to load R code in package 'mcmcse'
> >> Error: package/namespace load failed for 'mcmcse'
> >>
> >> Maybe I will also something find in quantreg package, but I did not
> >> went through it yet.
> >>
> >> __
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >
> >
> >
> > --
> >
> > Bert Gunter
> > Genentech Nonclinical Biostatistics
> >
> > Internal Contact Info:
> > Phone: 467-7374
> > Website:
> > http://pharmadevelopment.roche.com/index/pdb/pdb-functional-
> groups/pdb
> > -biostatistics/pdb-ncb-home.htm
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.



Re: [R] standard error for quantile

2012-10-31 Thread PIKAL Petr
Hi Ted

> -Original Message-
> From: ted@deb [mailto:ted@deb] On Behalf Of Ted Harding
> Sent: Tuesday, October 30, 2012 6:41 PM
> To: r-help@r-project.org



> 
> The general asymptotic result for the pth quantile (0 < p < 1) from a sample of size n is that it is asymptotically Normally distributed with
> mean the pth quantile Q.p of the parent distribution and
> 
>   var(X.p) = p*(1-p)/(n*f(Q.p)^2)
> 
> where f(x) is the probability density function of the parent
> distribution.

So if I understand correctly p*(1-p) is biggest when p=0.5 and decreases with 
smaller or bigger p. The var(X.p) then also depends on the parent
distribution at this probability p. For a lognormal distribution and 200 values
var is

> (0.5*(1-.5))/(200*qlnorm(.5, log(200), log(2))^2)
[1] 3.125e-08
> (0.1*(1-.1))/(200*qlnorm(.1, log(200), log(2))^2)
[1] 6.648497e-08

so 0.1 var is slightly bigger than 0.5 var. For different distributions this 
can be reversed as Jim pointed out.

Did I manage to understand?

Thank you very much.
Regards
Petr 


> 
> This is not necessarily very helpful for small sample sizes (depending
> on the parent distribution).
> 
> However, it is possible to obtain a general result giving an exact
> confidence interval for Q.p given the entire ordered sample, though
> there is only a restricted set of confidence levels to which it
> applies.
> 
> If you'd like more detail about the above, I could write up derivations
> and make the write-up available.
> 
> Hoping this helps,
> Ted.
> 
> -
> E-Mail: (Ted Harding) 
> Date: 30-Oct-2012  Time: 17:40:55
> This message was sent by XFMail
> -
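
For reference, a small sketch evaluating the formula for this lognormal case
(f(Q.p) being the parent density, dlnorm, evaluated at the quantile qlnorm) and
comparing it with simulation:

set.seed(42)
n <- 200; p <- 0.5
fQ <- dlnorm(qlnorm(p, log(200), log(2)), log(200), log(2))
p * (1 - p) / (n * fQ^2)                                  # asymptotic var of the sample median
var(replicate(2000, quantile(rlnorm(n, log(200), log(2)), p)))   # simulated var
## the two values should be broadly similar for n = 200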



Re: [R] standard error for quantile

2012-10-31 Thread PIKAL Petr
Hi Bert


> -Original Message-
> From: Bert Gunter [mailto:gunter.ber...@gene.com]
> Sent: Tuesday, October 30, 2012 3:37 PM
> To: PIKAL Petr
> Cc: r-help@r-project.org
> Subject: Re: [R] standard error for quantile
> 
>  Petr:
> 
> 1. Not an R question.

Partly. I also asked for pointers to R functions that can compute such a
standard error, which is in my humble opinion a valid R question.

> 
> 2. You want the distribution of order statistics. Search on that. It's
> basically binomial/beta.

Hm. I looked at some web info, which is quite good for a trained statistician but
at the edge of my understanding as a chemist (and sometimes beyond :-).

Thanks anyway.
Regards
Petr

> 
> -- Bert
> 
> On Tue, Oct 30, 2012 at 6:46 AM, PIKAL Petr 
> wrote:
> > Dear all
> >
> > I have a question about quantiles standard error, partly practical
> > partly theoretical. I know that
> >
> > x<-rlnorm(10, log(200), log(2))
> > quantile(x, c(.10,.5,.99))
> >
> > computes quantiles but I would like to know if there is any function
> > to find standard error (or any dispersion measure) of these estimated
> > values.
> >
> > And here is a theoretical one. I feel that when I compute median from
> > given set of values it will have lower standard error then 0.1
> > quantile computed from the same set of values.
> >
> > Is it true? If yes can you point me to some reasoning?
> >
> > Thanks for all answers.
> > Regards
> > Petr
> >
> > PS.
> > I found mcmcse package which shall compute the standard error but
> > which I could not make to work probably because I do not have recent
> > R-devel version installed
> >
> > Error in eval(expr, envir, enclos) :
> >   could not find function ".getNamespace"
> > Error : unable to load R code in package 'mcmcse'
> > Error: package/namespace load failed for 'mcmcse'
> >
> > Maybe I will also something find in quantreg package, but I did not
> > went through it yet.
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> 
> 
> --
> 
> Bert Gunter
> Genentech Nonclinical Biostatistics
> 
> Internal Contact Info:
> Phone: 467-7374
> Website:
> http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-
> biostatistics/pdb-ncb-home.htm


Re: [R] standard error for quantile

2012-10-31 Thread PIKAL Petr
Thanks Jim.

After reinstalling the new R version, all the mentioned packages work. I tested various
functions, which revealed that on my lognormal data there is no big difference between
the error of the median and that of the 10% quantile. I also found a function for
computing the quantile SE in the Hmisc package.

Petr


> -Original Message-
> From: Jim Lemon [mailto:j...@bitwrit.com.au]
> Sent: Wednesday, October 31, 2012 9:56 AM
> To: PIKAL Petr
> Cc: r-help@r-project.org
> Subject: Re: [R] standard error for quantile
> 
> On 10/31/2012 12:46 AM, PIKAL Petr wrote:
> > Dear all
> >
> > I have a question about quantiles standard error, partly practical
> > partly theoretical. I know that
> >
> > x<-rlnorm(10, log(200), log(2))
> > quantile(x, c(.10,.5,.99))
> >
> > computes quantiles but I would like to know if there is any function
> > to find standard error (or any dispersion measure) of these estimated
> > values.
> >
> > And here is a theoretical one. I feel that when I compute median from
> > given set of values it will have lower standard error then 0.1
> > quantile computed from the same set of values.
> >
> > Is it true? If yes can you point me to some reasoning?
> >
> Hi Petr,
> Using a resampling method, it depends upon the distribution of the
> values. If you have a "love-hate" distribution (bimodal and heavily
> weighted toward extreme values), the median standard error can be
> larger. Try this:
> 
> x<-sample(-5:5,1000,TRUE,
>   prob=c(0.2,0.1,0.05,0.04,0.03,0.02,0.03,0.04,0.05,0.1,0.2))
> x<-ifelse(x<0,x+runif(1000),x-runif(1000))
> hist(x)
> mcse.q(x, 0.1)
> $est
> [1] -3.481419
> 
> $se
> [1] 0.06887319
> 
> mcse.q(x, 0.5)
> $est
> [1] 1.088475
> 
> $se
> [1] 0.3440115
> 
>  > mcse.q(x, 0.1)
> $est
> [1] -3.481419
> 
> $se
> [1] 0.06887319
> 
> Jim



Re: [R] standard error for quantile

2012-10-31 Thread Jim Lemon

On 10/31/2012 12:46 AM, PIKAL Petr wrote:

Dear all

I have a question about quantiles standard error, partly practical
partly theoretical. I know that

x<-rlnorm(10, log(200), log(2))
quantile(x, c(.10,.5,.99))

computes quantiles but I would like to know if there is any function to
find standard error (or any dispersion measure) of these estimated
values.

And here is a theoretical one. I feel that when I compute median from
given set of values it will have lower standard error then 0.1 quantile
computed from the same set of values.

Is it true? If yes can you point me to some reasoning?


Hi Petr,
Using a resampling method, it depends upon the distribution of the 
values. If you have a "love-hate" distribution (bimodal and heavily 
weighted toward extreme values), the median standard error can be 
larger. Try this:


library(mcmcse)   # provides mcse.q()
x<-sample(-5:5,1000,TRUE,
 prob=c(0.2,0.1,0.05,0.04,0.03,0.02,0.03,0.04,0.05,0.1,0.2))
x<-ifelse(x<0,x+runif(1000),x-runif(1000))
hist(x)
mcse.q(x, 0.1)
$est
[1] -3.481419

$se
[1] 0.06887319

mcse.q(x, 0.5)
$est
[1] 1.088475

$se
[1] 0.3440115

> mcse.q(x, 0.1)
$est
[1] -3.481419

$se
[1] 0.06887319

Jim



Re: [R] standard error for quantile

2012-10-30 Thread Ted Harding
On 30-Oct-2012 13:46:17 PIKAL Petr wrote:
> Dear all
> 
> I have a question about quantiles standard error, partly practical
> partly theoretical. I know that
> 
> x<-rlnorm(10, log(200), log(2))
> quantile(x, c(.10,.5,.99))
> 
> computes quantiles but I would like to know if there is any function to
> find standard error (or any dispersion measure) of these estimated
> values.
> 
> And here is a theoretical one. I feel that when I compute median from
> given set of values it will have lower standard error then 0.1 quantile
> computed from the same set of values.
> 
> Is it true? If yes can you point me to some reasoning?
> 
> Thanks for all answers.
> Regards
> Petr
> ["PS" deleted]

The general asymptotic result for the pth quantile (0 < p < 1) from a
sample of size n is that it is asymptotically Normally distributed with
mean the pth quantile Q.p of the parent distribution and

  var(X.p) = p*(1-p)/(n*f(Q.p)^2)

where f(x) is the probability density function of the parent distribution.

This is not necessarily very helpful for small sample sizes (depending
on the parent distribution).

However, it is possible to obtain a general result giving an exact
confidence interval for Q.p given the entire ordered sample, though
there is only a restricted set of confidence levels to which it applies.

If you'd like more detail about the above, I could write up derivations
and make the write-up available.

Hoping this helps,
Ted.

-
E-Mail: (Ted Harding) 
Date: 30-Oct-2012  Time: 17:40:55
This message was sent by XFMail



Re: [R] standard error for quantile

2012-10-30 Thread Roger Koenker
Petr,

You can  do:

require(quantreg)
summary(rq(x ~ 1, tau = c(.10,.50,.99)))


url:   www.econ.uiuc.edu/~roger     Roger Koenker
email  rkoen...@uiuc.edu            Department of Economics
vox:   217-333-4558                 University of Illinois
fax:   217-244-6678                 Urbana, IL 61801

On Oct 30, 2012, at 9:37 AM, Bert Gunter wrote:

> Petr:
> 
> 1. Not an R question.
> 
> 2. You want the distribution of order statistics. Search on that. It's
> basically binomial/beta.
> 
> -- Bert
> 
> On Tue, Oct 30, 2012 at 6:46 AM, PIKAL Petr  wrote:
>> Dear all
>> 
>> I have a question about quantiles standard error, partly practical
>> partly theoretical. I know that
>> 
>> x<-rlnorm(10, log(200), log(2))
>> quantile(x, c(.10,.5,.99))
>> 
>> computes quantiles but I would like to know if there is any function to
>> find standard error (or any dispersion measure) of these estimated
>> values.
>> 
>> And here is a theoretical one. I feel that when I compute median from
>> given set of values it will have lower standard error then 0.1 quantile
>> computed from the same set of values.
>> 
>> Is it true? If yes can you point me to some reasoning?
>> 
>> Thanks for all answers.
>> Regards
>> Petr
>> 
>> PS.
>> I found mcmcse package which shall compute the standard error but which
>> I could not make to work probably because I do not have recent R-devel
>> version installed
>> 
>> Error in eval(expr, envir, enclos) :
>>  could not find function ".getNamespace"
>> Error : unable to load R code in package 'mcmcse'
>> Error: package/namespace load failed for 'mcmcse'
>> 
>> Maybe I will also something find in quantreg package, but I did not
>> went through it yet.
>> 
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> 
> 
> 
> -- 
> 
> Bert Gunter
> Genentech Nonclinical Biostatistics
> 
> Internal Contact Info:
> Phone: 467-7374
> Website:
> http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for quantile

2012-10-30 Thread Bert Gunter
 Petr:

1. Not an R question.

2. You want the distribution of order statistics. Search on that. It's
basically binomial/beta.

-- Bert
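
A minimal sketch of that binomial/order-statistic argument, assuming an i.i.d.
sample and an approximate 95% interval for the pth quantile:

x <- rlnorm(100, log(200), log(2))
p <- 0.5
n <- length(x)
# the number of observations below the true pth quantile is Binomial(n, p),
# so order statistics at binomial quantiles bracket it
lo <- qbinom(0.025, n, p)        # rank of the lower limit
hi <- qbinom(0.975, n, p) + 1    # rank of the upper limit
sort(x)[c(lo, hi)]               # approximate 95% CI for the pth quantile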

On Tue, Oct 30, 2012 at 6:46 AM, PIKAL Petr  wrote:
> Dear all
>
> I have a question about quantiles standard error, partly practical
> partly theoretical. I know that
>
> x<-rlnorm(10, log(200), log(2))
> quantile(x, c(.10,.5,.99))
>
> computes quantiles but I would like to know if there is any function to
> find standard error (or any dispersion measure) of these estimated
> values.
>
> And here is a theoretical one. I feel that when I compute median from
> given set of values it will have lower standard error then 0.1 quantile
> computed from the same set of values.
>
> Is it true? If yes can you point me to some reasoning?
>
> Thanks for all answers.
> Regards
> Petr
>
> PS.
> I found mcmcse package which shall compute the standard error but which
> I could not make to work probably because I do not have recent R-devel
> version installed
>
> Error in eval(expr, envir, enclos) :
>   could not find function ".getNamespace"
> Error : unable to load R code in package 'mcmcse'
> Error: package/namespace load failed for 'mcmcse'
>
> Maybe I will also something find in quantreg package, but I did not
> went through it yet.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 

Bert Gunter
Genentech Nonclinical Biostatistics

Internal Contact Info:
Phone: 467-7374
Website:
http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for quantile

2012-10-30 Thread PIKAL Petr
Dear all

I have a question about quantiles standard error, partly practical
partly theoretical. I know that

x<-rlnorm(10, log(200), log(2))
quantile(x, c(.10,.5,.99))

computes quantiles but I would like to know if there is any function to
find standard error (or any dispersion measure) of these estimated
values.

And here is a theoretical one. I feel that when I compute median from
given set of values it will have lower standard error then 0.1 quantile
computed from the same set of values.

Is it true? If yes can you point me to some reasoning?

Thanks for all answers.
Regards
Petr

PS.
I found mcmcse package which shall compute the standard error but which
I could not make to work probably because I do not have recent R-devel
version installed

Error in eval(expr, envir, enclos) :
  could not find function ".getNamespace"
Error : unable to load R code in package 'mcmcse'
Error: package/namespace load failed for 'mcmcse'

Maybe I will also something find in quantreg package, but I did not
went through it yet.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error

2012-05-28 Thread David Winsemius


On May 28, 2012, at 5:20 AM, Christopher Kelvin wrote:


Dear all,
 I want to determine the standard error or the mean squared error  
for the parameter estimate for beta and eta based on the real data.

 Any help on how to obtain these estimated errors.


library(survival)
d <- data.frame(ob=c(149971, 70808, 133518, 145658, 175701, 50960,  
126606, 82329), state=1)

s <- Surv(d$ob,d$state)
sr <- survreg(s~1,dist="weibull")
beta<-1/sr$scale
p1=(beta)
p1
eta<-exp(sr$coefficients[1])
b=(eta)
b


The usual approach is to rely on the normality of the parameters on  
the "Weibull scale" and then back transform coef +/- 1.96*se(coef)


You get these with

summary(sr)
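
Continuing the code above, a sketch of that back-transform, assuming
summary(sr)$table carries the usual Value / Std. Error columns with rows
"(Intercept)" and "Log(scale)":

tab <- summary(sr)$table
est <- tab[, "Value"]; se <- tab[, "Std. Error"]
# eta = exp(intercept): back-transform the endpoints of estimate +/- 1.96*se
exp(est["(Intercept)"] + c(-1.96, 1.96) * se["(Intercept)"])
# beta = 1/scale = exp(-Log(scale)); negating swaps the endpoints, so reverse
rev(exp(-(est["Log(scale)"] + c(-1.96, 1.96) * se["Log(scale)"])))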

--
David.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error

2012-05-28 Thread Christopher Kelvin
Dear all,
 I want to determine the standard error or the mean squared error for the 
parameter estimate for beta and eta based on the real data.
 Any help on how to obtain these estimated errors.


library(survival)
d <- data.frame(ob=c(149971, 70808, 133518, 145658, 175701, 50960, 126606, 
82329), state=1)
s <- Surv(d$ob,d$state)
sr <- survreg(s~1,dist="weibull")
beta<-1/sr$scale
p1=(beta)
p1
eta<-exp(sr$coefficients[1])
b=(eta)
b


Thank you
Chris Guure
Researcher
Institute for Mathematical Research
UPM

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error

2012-04-22 Thread Christopher Kelvin
Hello,
I have tried obtaining the value of standard error from the code below but I
get different values when I compare it with the
standard error obtained from the hessian matrix. Can somebody help me out?
Thank you

n=100;rr=1000
p1=1.2;b=1.5
sq11=sq21=0
for (i in 1:rr){
t<-rweibull(n,shape=p1,scale=b)
meantrue<-gamma(1+(1/p1))*b
meantrue
d<-meantrue/0.40
cen<- runif(n,min=0,max=d)
s<-ifelse(t<=cen,1,0)
q<-c(t,cen)

z<-function(data, p){ 
beta<-p[1]
eta<-p[2]
log1<-(n*sum(s)*log(p[1])-n*sum(s)*(p[1])*log(p[2])+sum(s)*(p[1]-1)*sum(log(t))-n*sum((t/(p[2]))^(p[1])))
return(-log1)
}

start <- c(1,1)
zz<-optim(start,fn=z,data=q,hessian=T)
zz
m1<-zz$par[2]
p<-zz$par[1]

sq11<-sq11+(1/rr*(sum((q-m1)^2)))
sq21<-sq21+(1/rr*(sum((q-Lm1)^2)))

}

se11<-sqrt(sq11)/(n-1)
se11
se21<-sqrt(sq21)/(n-1)
se21

f<-solve(zz$hessian)
se<-sqrt(diag(f))
se


Chris Guure
Researcher
Institute for Mathematical Research
UPM

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error

2012-04-22 Thread Christopher Kelvin
Hello,
I have tried obtaining the value of standard error from the code below but I
get different values when I compare it with the
standard error obtained from the hessian matrix. Can somebody help me out?
Thank you

n=100;rr=1000
p1=1.2;b=1.5
sq11=sq21=0
for (i in 1:rr){
t<-rweibull(n,shape=p1,scale=b)
meantrue<-gamma(1+(1/p1))*b
meantrue
d<-meantrue/0.40
cen<- runif(n,min=0,max=d)
s<-ifelse(t<=cen,1,0)
q<-c(t,cen)

z<-function(data, p){ 
beta<-p[1]
eta<-p[2]
log1<-(n*sum(s)*log(p[1])-n*sum(s)*(p[1])*log(p[2])+sum(s)*(p[1]-1)*sum(log(t))-n*sum((t/(p[2]))^(p[1])))
return(-log1)
}

start <- c(1,1)
zz<-optim(start,fn=z,data=q,hessian=T)
zz
m1<-zz$par[2]
p<-zz$par[1]

sq11<-sq11+(1/rr*(sum((q-m1)^2)))
sq21<-sq21+(1/rr*(sum((q-Lm1)^2)))

}

se11<-sqrt(sq11)/(rr-1)
se11
se21<-sqrt(sq21)/(rr-1)
se21

f<-solve(zz$hessian)
se<-sqrt(diag(f))
se


Chris Guure
Researcher
Institute for Mathematical Research
UPM


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error terms from gfcure

2012-03-27 Thread Bonnett, Laura
Dear R-help,

I am using R 2.14.1 on Windows 7 with the 'gfcure' package (cure rate model).
I have included the treatment variable in the cure part of the model as shown 
below:


ref_treat <-
gfcure(Surv(rem.Remtime,rem.Rcens)~1,~1+strata(drpa)+factor(treat(delcure)),data=delcure,dist="loglogistic")

From that I can obtain the coefficients, standard errors etc. as per
alternative models (with covariates only fitted to the survival part of the
model, say).

> summary(ref_treat)

However, only one standard error is output:

Log-logistic mixture model

The maximum loglikelihood is -927.0449

Terms in the accelerated failure time model:
Coefficients  Std.err  z-score   p-value
Log(scale) -0.894528   0.0236 -37.8324 0.000
(Intercept) 6.929351   0.0151 460.4157 0.000

Terms in the logistic model:
Coefficients  Std.err  z-score   p-value
(Intercept) 2.542726
strata(drpa)drpa=2 18.76
factor(treat(delcure))2 0.184192
factor(treat(delcure))3 0.472809
factor(treat(delcure))4 0.255565 953.6876   0.0003 0.9997862
factor(treat(delcure))5 0.401713
Warning message:
In sqrt(diag(solve(object$infomat))) : NaNs produced


Can anyone explain why this is the case?

Very many thanks,
Laura

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for lda()

2012-02-09 Thread David Winsemius


On Feb 9, 2012, at 6:30 PM, array chip wrote:


David, thanks for your response, hope this stirs more...

Ok, a simple code:

y<-as.factor(rnorm(100)>0.5)
x1<-rnorm(100)
x2<-rnorm(100)
obj<-glm(y~x1+x2,family=binomial)
predict(obj,type='response',se.fit=T)

predict(obj,...) will give predicted probabilities in the "fit"  
element; and the associated estimated standard errors in the  
"se.fit" element (if I understand correctly). The predicted  
probability from logistic regression is ultimately a function of y  
and thus a standard error of it should be able to be computed. So  
one of my questions is whether we can use normal approximation to  
construct 95% CI for the predicted probabilities using standard  
errors, because I am not sure if probabilities would follow normal  
distribution?


Wouldn't it be a binomial distribution if you're dealing with
classification?




Now, if we try lda():

library(MASS)
obj2<-lda(y~x1+x2)
predict(obj2)

where predict(obj2) produces posterior probabilities, the predicted  
class, etc. My question is whether it's possible to produce standard  
errors for these posterior probabilities?


The heuristic I use in situations like this: If the authors didn't  
think this was a desirable feature, they probably had sensible reasons  
for _not_ including it (or they decided that another method, such as  
logistic regression, was better). I cannot think of a good metric for  
probability along the line perpendicular to the "line of maximal  
discrimination" for which I confess I cannot remember the accepted  
name. If I were asked to come up with an estimate I would probably  
revert to a bootstrap strategy.
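
A minimal sketch of such a bootstrap strategy, assuming the y, x1, x2 objects
simulated above (illustrative only):

library(MASS)
dat <- data.frame(y, x1, x2)
B <- 200
post <- matrix(NA, B, nrow(dat))       # bootstrap posteriors for class "TRUE"
for (b in 1:B) {
  idx <- sample(nrow(dat), replace = TRUE)
  fit <- lda(y ~ x1 + x2, data = dat[idx, ])
  post[b, ] <- predict(fit, newdata = dat)$posterior[, "TRUE"]
}
apply(post, 2, sd)                     # bootstrap SE of each posterior probability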





Thanks again.

John


From: David Winsemius 
To: array chip 
Cc: "r-help@r-project.org" 
Sent: Thursday, February 9, 2012 2:59 PM
Subject: Re: [R] standard error for lda()


On Feb 9, 2012, at 4:45 PM, array chip wrote:

> Hi, didn't hear any response yet. want to give it another try..  
appreciate any suggestions.

>

My problem after reading this the first time was that I didn't agree  
with the premise that logistic regression would provide a standard  
error for a probability. It provides a standard error around an  
estimated coefficient value. And then you provided no further  
details or code to create a simulation, and there didn't seem much  
point in trying to teach you statistical terminology that you were  
throwing around in a manner that seems rather cavalier,
admittedly this being a very particular reaction from this non- 
expert audience of one.



> John
>
>
> ________
>
> To: "r-help@r-project.org" 
> Sent: Wednesday, February 8, 2012 12:11 PM
> Subject: [R] standard error for lda()
>
> Hi, I am wondering if it is possible to get an estimate of  
standard error of the predicted posterior probability from LDA using  
lda() from MASS? Logistic regression using glm() would generate a  
standard error for predicted probability with se.fit=T argument in  
predict(), so would it make sense to get standard error for  
posterior probability from lda() and how?

>
> Another question about standard error estimate from glm(): is it  
ok to calculate 95% CI for the predicted probability using the  
standard error based on normal approximation, i.e.
predicted_probability +/- 1.96 * standard_error?

>
> Thanks
>
> John
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT





David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for lda()

2012-02-09 Thread array chip
David, thanks for your response, hope this stirs more...

Ok, a simple code:

y<-as.factor(rnorm(100)>0.5)

x1<-rnorm(100)
x2<-rnorm(100)
obj<-glm(y~x1+x2,family=binomial)
predict(obj,type='response',se.fit=T)

predict(obj,...) will give predicted probabilities in the "fit" element; and 
the associated estimated standard errors in the "se.fit" element (if I 
understand correctly). The predicted probability from logistic regression is 
ultimately a function of y and thus a standard error of it should be able to be 
computed. So one of my questions is whether we can use normal approximation to 
construct 95% CI for the predicted probabilities using standard errors, because 
I am not sure if probabilities would follow normal distribution?

Now, if we try lda():

library(MASS)
obj2<-lda(y~x1+x2)

predict(obj2)

where predict(obj2) produces posterior probabilities, the predicted class, etc. 
My question is whether it's possible to produce standard errors for these 
posterior probabilities?

Thanks again.

John




 From: David Winsemius 

Cc: "r-help@r-project.org"  
Sent: Thursday, February 9, 2012 2:59 PM
Subject: Re: [R] standard error for lda()


On Feb 9, 2012, at 4:45 PM, array chip wrote:

> Hi, didn't hear any response yet. want to give it another try.. appreciate 
> any suggestions.
> 

My problem after reading this the first time was that I didn't agree with the 
premise that logistic regression would provide a standard error for a 
probability. It provides a standard error around an estimated coefficient 
value. And then you provided no further details or code to create a simulation, 
and there didn't seem much point in trying to teach you statistical terminology 
that you were throwing around in a manner that seems rather cavalier,
admittedly this being a very particular reaction from this non-expert audience 
of one.


> John
> 
> 
> ____
> 
> To: "r-help@r-project.org" 
> Sent: Wednesday, February 8, 2012 12:11 PM
> Subject: [R] standard error for lda()
> 
> Hi, I am wondering if it is possible to get an estimate of standard error of 
> the predicted posterior probability from LDA using lda() from MASS? Logistic 
> regression using glm() would generate a standard error for predicted 
> probability with se.fit=T argument in predict(), so would it make sense to 
> get standard error for posterior probability from lda() and how?
> 
> Another question about standard error estimate from glm(): is it ok to 
> calculate 95% CI for the predicted probability using the standard error based 
> on normal approximation, i.e. predicted_probability +/- 1.96 * standard_error?
> 
> Thanks
> 
> John
>     [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>     [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for lda()

2012-02-09 Thread David Winsemius


On Feb 9, 2012, at 4:45 PM, array chip wrote:

Hi, didn't hear any response yet. want to give it another try..  
appreciate any suggestions.




My problem after reading this the first time was that I didn't agree  
with the premise that logistic regression would provide a standard  
error for a probability. It provides a standard error around an  
estimated coefficient value. And then you provided no further details  
or code to create a simulation, and there didn't seem much point in  
trying to teach you statistical terminology that you were throwing
around in a manner that seems rather cavalier, admittedly this
being a very particular reaction from this non-expert audience of one.




John




To: "r-help@r-project.org" 
Sent: Wednesday, February 8, 2012 12:11 PM
Subject: [R] standard error for lda()

Hi, I am wondering if it is possible to get an estimate of standard  
error of the predicted posterior probability from LDA using lda()  
from MASS? Logistic regression using glm() would generate a standard  
error for predicted probability with se.fit=T argument in predict(),  
so would it make sense to get standard error for posterior  
probability from lda() and how?


Another question about standard error estimate from glm(): is it ok  
to calculate 95% CI for the predicted probability using the standard  
error based on normal approximation, i.e. predicted_probability +/-
1.96 * standard_error?


Thanks

John
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for lda()

2012-02-09 Thread array chip
Hi, didn't hear any response yet. want to give it another try.. appreciate any 
suggestions.

John




To: "r-help@r-project.org"  
Sent: Wednesday, February 8, 2012 12:11 PM
Subject: [R] standard error for lda()

Hi, I am wondering if it is possible to get an estimate of standard error of 
the predicted posterior probability from LDA using lda() from MASS? Logistic 
regression using glm() would generate a standard error for predicted 
probability with se.fit=T argument in predict(), so would it make sense to get 
standard error for posterior probability from lda() and how?

Another question about standard error estimate from glm(): is it ok to 
calculate 95% CI for the predicted probability using the standard error based 
on normal approximation, i.e. predicted_probability +/- 1.96 * standard_error?

Thanks

John
    [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for lda()

2012-02-08 Thread array chip
Hi, I am wondering if it is possible to get an estimate of standard error of 
the predicted posterior probability from LDA using lda() from MASS? Logistic 
regression using glm() would generate a standard error for predicted 
probability with se.fit=T argument in predict(), so would it make sense to get 
standard error for posterior probability from lda() and how?

Another question about standard error estimate from glm(): is it ok to 
calculate 95% CI for the predicted probability using the standard error based 
on normal approximation, i.e. predicted_probability +/- 1.96 * standard_error?

Thanks

John
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error bars on bar plots

2011-08-12 Thread Mikhail Titov
Or ?barplot2

barplot2(means,main="Proportion of time spent in AMU
sector",xlab="Treatments",ylab="Proportion of
time",names.arg=c("Solitary","Size-matched conspecific","Sub-adult
conspecific"),cex.names=0.85,axis.lty=1,ylim=c(0,0.4),
 plot.ci=TRUE,ci.l=means,ci.u=means+halfSE)

Mikhail


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf Of Uwe Ligges
> Sent: Friday, August 12, 2011 12:04 PM
> To: Christopher Crooks
> Cc: r-help@r-project.org
> Subject: Re: [R] Standard error bars on bar plots
> 
> 
> 
> On 12.08.2011 18:43, Christopher Crooks wrote:
> > Hi,
> > I know there have been numerous posts about this but I am unable to find
> one, or at least carry out one, that gives me the plot I want.
> > I have managed to add the error bars to the plot, but they end up not
> aligned with the centre of the bars themselves.
> >
> > Here is my script:
> >
> >
> > means<-c(0.13528,0.082829167,0.2757625)
> > SE<-c(0.036364714,0.039582896,0.06867608)
> > halfSE<-c(0.018182357,0.019791448,0.03433804)
> >
> > barx<-barplot(means,main="Proportion of time spent in AMU
> sector",xlab="Treatments",ylab="Proportion of
> time",names.arg=c("Solitary","Size-matched conspecific","Sub-adult
> conspecific"),cex.names=0.85,axis.lty=1,ylim=c(0,0.4))
> > library(gplots)
> > plotCI(x=means,uiw=halfSE,lty=1,gap=0,add=TRUE)
> 
> 
> plotCI(x=barx, y=means,uiw=halfSE,lty=1,gap=0,add=TRUE)
> 
> Uwe Ligges
> 
> > Thanks for any suggestions or help you may have,
> > CJ
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error bars on bar plots

2011-08-12 Thread Uwe Ligges



On 12.08.2011 18:43, Christopher Crooks wrote:

Hi,
I know there have been numerous posts about this but I am unable to find one, 
or at least carry out one, that gives me the plot I want.
I have managed to add the error bars to the plot, but they end up not aligned 
with the centre of the bars themselves.

Here is my script:


means<-c(0.13528,0.082829167,0.2757625)
SE<-c(0.036364714,0.039582896,0.06867608)
halfSE<-c(0.018182357,0.019791448,0.03433804)

barx<-barplot(means,main="Proportion of time spent in AMU sector",xlab="Treatments",ylab="Proportion of 
time",names.arg=c("Solitary","Size-matched conspecific","Sub-adult 
conspecific"),cex.names=0.85,axis.lty=1,ylim=c(0,0.4))
library(gplots)
plotCI(x=means,uiw=halfSE,lty=1,gap=0,add=TRUE)



plotCI(x=barx, y=means,uiw=halfSE,lty=1,gap=0,add=TRUE)

Uwe Ligges


Thanks for any suggestions or help you may have,
CJ
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error bars on bar plots

2011-08-12 Thread Christopher Crooks
Hi,
I know there have been numerous posts about this but I am unable to find one, 
or at least carry out one, that gives me the plot I want. 
I have managed to add the error bars to the plot, but they end up not aligned 
with the centre of the bars themselves. 

Here is my script: 


means<-c(0.13528,0.082829167,0.2757625)
SE<-c(0.036364714,0.039582896,0.06867608)
halfSE<-c(0.018182357,0.019791448,0.03433804)

barx<-barplot(means,main="Proportion of time spent in AMU 
sector",xlab="Treatments",ylab="Proportion of 
time",names.arg=c("Solitary","Size-matched conspecific","Sub-adult 
conspecific"),cex.names=0.85,axis.lty=1,ylim=c(0,0.4))
library(gplots)
plotCI(x=means,uiw=halfSE,lty=1,gap=0,add=TRUE)

Thanks for any suggestions or help you may have,
CJ
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of exp(coef) from coxph

2011-07-25 Thread David Winsemius


On Jul 23, 2011, at 1:41 AM, Ehsan Karim wrote:



Dear List,
Must be a silly question, but I was wondering whether there is a  
direct way of calculating "standard error of a HR or exp(coef)" from  
coxph objects
x <- coxph(Surv(time, status) ~ age + inst, lung)
> x
        coef exp(coef) se(coef)     z    p
age   0.0190      1.02  0.00925  2.06 0.04
inst -0.0104      0.99  0.01028 -1.01 0.31




?predict.coxph

(The third example is precisely what you are requesting, at least  
under the assumption that you are considering a discrete predictor or  
want the HR for a one unit change in a continuous one.)
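
As a quick cross-check, a delta-method sketch (not the predict.coxph route):
se(exp(coef)) is approximately exp(coef) * se(coef).

library(survival)
x <- coxph(Surv(time, status) ~ age + inst, data = lung)
exp(coef(x)) * sqrt(diag(vcov(x)))   # approximate standard errors of the hazard ratios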




cheers,
Ehsan



David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error of exp(coef) from coxph

2011-07-22 Thread Ehsan Karim

Dear List,
Must be a silly question, but I was wondering whether there is a direct way of 
calculating "standard error of a HR or exp(coef)" from coxph objects
x <- coxph(Surv(time, status) ~ age + inst, lung)
> x
        coef exp(coef) se(coef)     z    p
age   0.0190      1.02  0.00925  2.06 0.04
inst -0.0104      0.99  0.01028 -1.01 0.31

cheers,
Ehsan 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error lines within xyplot trellis plots

2011-05-23 Thread e-mail tom.pienkowski
Hi

I'm using the lattice function to plot catch data for two types of catch
between two communities over time using the xyplot. I've used
*"smooth"*when specifying
* type=c(  )* . Here is a sample of my code below:

> xyplot(ns+ rc~wk|location, type=c("p", "smooth"), xlab="Week",
+ main="Native shrimp and Crayfish catches within Frenchman and Punches",
+ ylab="Native shrimp and Crayfish catch (Kg)")

'ns' and 'rc' refer to the two catch groups, 'wk' refers to the week and
'location' refers to the two communities in which data were collected.

This works fine, however I'm also trying to create lines to show the
standard error on the same graph.

I've tried messing around with plot.CI etc with no success, any suggestions
would be much appretiated!

Tom

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard Error for Cointegration Results

2011-04-03 Thread Kin Ming Wong

Dear Sir/Madam,
 
I have used ca.jo in urca package to identify the cointegration and cajorls to 
estimate the vecm. Althought both return the coefficients for long run 
relationship (or ect1 in cajorls), I am unable to find the standard error and t 
statistics.
 
I spend some weeks to search around. I did find some similar enquiries before 
and answer provided Prof. Pfaff is to use vec2var. However, I still can't find 
corresponding info. after using vec2var function. Highly appreciate if someone 
could advise by using a step by step with codes. For example, in the paper of 
""VAR, SVAR and SVEC Models: Implementation Within R Package vars", how to get 
the t-statistics / standard error for the beta in table 5, p.25?
 
Thank you in advance for your effort in addressing my problem.
 
Yours,
Aries 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of forecast

2010-09-28 Thread Ista Zahn
Hi,

predict(model_fit, se.fit=TRUE)

see ?predict.lm for details.

-Ista

On Tue, Sep 28, 2010 at 12:16 PM, Brima  wrote:
>
> Hi all,
>
> This is very basic but for a starter nothing is. I have a simple linear
> regression I am using to predict some values and I need the standard error
> of the prediction (forecast). What's the easiest/best way of getting this
> error?
>
> Best regards
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Standard-error-of-forecast-tp2717125p2717125.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error of forecast

2010-09-28 Thread Brima

Hi all,

This is very basic but for a starter nothing is. I have a simple linear
regression I am using to predict some values and I need the standard error
of the prediction (forecast). What's the easiest/best way of getting this
error?

Best regards
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Standard-error-of-forecast-tp2717125p2717125.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard Error for difference in predicted probabilities

2010-09-24 Thread Darin A. England
I think you can use the bootstrap to obtain the std error. My
attempt for your problem and data is below. I would be interested if
anyone can point out a problem with this approach.
Darin

y=rbinom(100,1,.4)
x1=rnorm(100, 3, 2)
x2=rbinom(100, 1, .7) 

diff <- vector(mode="numeric", length=200)
for (i in 1:200) {
idx <- sample(1:length(y), length(y), replace=T)
mod=glm(y[idx] ~ x1[idx] + x2[idx], family=binomial)
pred=predict(mod, newdata=data.frame(cbind(x1=rep(mean(x1[idx]),
100), x2=x2[idx])), type="response")
diff[i]=unique(pred)[1]-unique(pred)[2]
}
sd(diff)


On Fri, Sep 24, 2010 at 09:17:29AM -0400, Andrew Miles wrote:
> Is there a way to estimate the standard error for the difference in  
> predicted probabilities obtained from a logistic regression model?
> 
> For example, this code gives the difference for the predicted  
> probability of when x2==1 vs. when x2==0, holding x1 constant at its  
> mean:
> 
> y=rbinom(100,1,.4)
> x1=rnorm(100, 3, 2)
> x2=rbinom(100, 1, .7)
> mod=glm(y ~ x1 + x2, family=binomial)
> pred=predict(mod, newdata=data.frame(cbind(x1=rep(mean(x1), 100),  
> x2)), type="response")
> diff=unique(pred)[1]-unique(pred)[2]
> diff
> 
> I know that predict() will output SE's for each predicted value, but  
> how do I generate a SE for the difference in those predicted values?
> 
> Thanks in advance!
> 
> Andrew Miles
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard Error for difference in predicted probabilities

2010-09-24 Thread Andrew Miles
Is there a way to estimate the standard error for the difference in  
predicted probabilities obtained from a logistic regression model?

For example, this code gives the difference for the predicted  
probability of when x2==1 vs. when x2==0, holding x1 constant at its  
mean:

y=rbinom(100,1,.4)
x1=rnorm(100, 3, 2)
x2=rbinom(100, 1, .7)
mod=glm(y ~ x1 + x2, family=binomial)
pred=predict(mod, newdata=data.frame(cbind(x1=rep(mean(x1), 100),  
x2)), type="response")
diff=unique(pred)[1]-unique(pred)[2]
diff

I know that predict() will output SE's for each predicted value, but  
how do I generate a SE for the difference in those predicted values?

Thanks in advance!

Andrew Miles


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of difference for mixed effects

2010-09-16 Thread ONKELINX, Thierry
Dear Tony,

A mixed models is not a good idea if you have only two levels for sites and two 
times two for phase.

The first problem is a mathematical one. You are estimating variances based on
only two and four values, which is very small, hence you will not get reliable
estimates of the variance.

The second problem is more conceptual. Your question indicates that you are
interested in the effect of specific levels of your random effects. The
philosophy of mixed models is that you only care about the variability due to
the random effects, not about the effects of specific levels.

IMHO a simple linear model will do.

lm(log(Salinity) ~ log(flow)*sites/phase)
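
The coefficient table of that fit then carries the standard errors directly; a
sketch, assuming a data frame d with Salinity, flow, sites and phase columns:

fit <- lm(log(Salinity) ~ log(flow) * sites/phase, data = d)
summary(fit)$coefficients   # estimates together with their standard errors
confint(fit)                # or the corresponding confidence intervals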

HTH,

Thierry


ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey
  

> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Meissner, Tony (DFW)
> Sent: Thursday, 16 September 2010 2:58
> To: r-help@r-project.org
> Subject: [R] standard error of difference for mixed effects
> 
> I have a dataset relating the effects of engineering works on 
> the level of salinity in a river before and after the works.  
> I have modelled this using linear mixed effects models to 
> determine the significance and level of the response to 
> the works.  I am wanting to calculate the se of difference at 
> several points. I am not sure of how to calculate the 
> difference and which estimate of se I should be using e.g. is 
> it the random effects or the fixed effects estimates and what 
> formula to use e.g. the one for calculating se using linear models?
> 
> To assist here is one of the models I have formulated
> 
>   log(Salinity) ~ log(flow), random=~log(flow)|sites/phase
> 
> where sites is a factor with levels upstream & downstream, 
> and phase is a factor with two levels before and after.
> 
> Any help would be appreciated.
> 
> Tschüß
> Tony Meissner
> Principal Scientist - Monitoring
> Department for Water | Level 3 28 Vaughan Terrace Berri SA 
> 5343 T 8595 2209  | M 0401 124 971 E 
> tony.meiss...@sa.gov.au
> 
> Mon | Tue | Wed | Thurs | Fri
> 
> www.waterforgood.sa.gov.au | www.sa.gov.au
> 
> 
> 


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of difference for mixed effects

2010-09-15 Thread Dennis Murphy
Hi:

Why is phase nested within sites? Shouldn't sites and phase be crossed?
After all, there is one intervention (works) that purportedly affects the
river both upstream and downstream. Moreover, why is log(flow) being treated
as both a fixed and random effect?

Just curious,
Dennis

On Wed, Sep 15, 2010 at 5:58 PM, Meissner, Tony (DFW) <
tony.meiss...@sa.gov.au> wrote:

> I have a dataset relating the effects of engineering works on the level of
> salinity in a river before and after the works.  I have modelled this using
> linear mixed effects models to determine the significance and level of
> the response to the works.  I am wanting to calculate the se of difference
> at several points. I am not sure of how to calculate the difference and
> which estimate of se I should be using e.g. is it the random effects or the
> fixed effects estimates and what formula to use e.g. the one for calculating
> se using linear models?
>
> To assist here is one of the models I have formulated
>
>  log(Salinity) ~ log(flow), random=~log(flow)|sites/phase
>
> where sites is a factor with levels upstream & downstream, and phase is a
> factor with two levels before and after.
>
> Any help would be appreciated.
>
> Tschüß
> Tony Meissner
> Principal Scientist - Monitoring
> Department for Water | Level 3 28 Vaughan Terrace Berri SA 5343
> T 8595 2209  | M 0401 124 971
> E tony.meiss...@sa.gov.au
>
> Mon | Tue | Wed | Thurs | Fri
>
> www.waterforgood.sa.gov.au |
> www.sa.gov.au
>
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error of difference for mixed effects

2010-09-15 Thread Meissner, Tony (DFW)
I have a dataset relating the effects of engineering works on the level of 
salinity in a river before and after the works.  I have modelled this using 
linear mixed effects models to determine if the significance and level of the 
response to the works.  I am wanting to calculate the se of difference at 
several points. I am not sure of how to calculate the difference and which 
estimate of se I should be using e.g. is it the random effects or the fixed 
effects estimates and what formula to use e.g. the one for calculating se using 
linear models?

To assist here is one of the models I have formulated

  log(Salinity) ~ log(flow), random=~log(flow)|sites/phase

where sites is a factor with levels upstream & downstream, and phase is a 
factor with two levels before and after.

Any help would be appreciated.

Tschüß
Tony Meissner
Principal Scientist - Monitoring
Department for Water | Level 3 28 Vaughan Terrace Berri SA 5343
T 8595 2209  | M 0401 124 971
E tony.meiss...@sa.gov.au

Mon | Tue | Wed | Thurs | Fri

www.waterforgood.sa.gov.au | 
www.sa.gov.au


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for predicted mean count from ZIP

2010-07-30 Thread Brian S Cade
Does anyone have code for computing the standard error of the predicted 
mean count from a zero-inflated Poisson regression model estimated by the 
zeroinfl() function from the pscl package (and yes, we've checked with A. 
Z. already)?

Thank you

Brian

Brian S. Cade, PhD

U. S. Geological Survey
Fort Collins Science Center
2150 Centre Ave., Bldg. C
Fort Collins, CO  80526-8818

email:  brian_c...@usgs.gov
tel:  970 226-9326
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of Binary logistic regression coefficient.

2010-07-27 Thread Joshua Wiley
Hi,

Just to extend the excellent suggestions, if you are interested in the
odds ratio, you can just use exp():

#Odds Ratio
exp(fit4$coefficients)

#Confidence interval around OR
exp(confint(fit4))

To give you an idea graphically of the log odds (or logit) look at:

p <- seq(0, 1, by = .001)
plot(y = log(p / (1 - p) ), x = p, type = "l")

Cheers,

Josh

On Tue, Jul 27, 2010 at 8:40 AM, Bessy  wrote:
>
> Dear all,
>
> I am struggling with the calculation of standard error of the coefficient in
> Binary logistic regression analysis.
>
> I built a binary logistic regression model as follows and got confused since
> the calculation of standard error of coefficients of X1, X2 and X3 are not
> the same as the Linear regression.
>
>> fit4 <-glm(Y~X1+X2+X3,data=d4,family=binomial("logit"))
> Warning message:
> In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
> etastart,  :
>  fitted probabilities numerically 0 or 1 occurred
>> summary(fit4)
>
> Call:
> glm(formula = Y ~ X1 + X2 + X3, family = binomial("logit"), data = d4)
>
> Deviance Residuals:
>          Min             1Q         Median             3Q            Max
> -1.641483e+00  -8.421161e-05   0.00e+00   1.349398e-03   1.417550e+00
>
> Coefficients:
>               Estimate     Std. Error  z value Pr(>|z|)
> (Intercept) -10.1534523  10.8397717 -0.93669 0.348921
> X1            0.3312469   0.3007324  1.10147 0.270693
> X2            0.1808757   0.1069222  1.69166 0.090711 .
> X3            5.0874665   5.0820163  1.00107 0.316792
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> (Dispersion parameter for binomial family taken to be 1)
>
>    Null deviance: 91.4954278  on 65  degrees of freedom
> Residual deviance:  5.8129055  on 62  degrees of freedom
> AIC: 13.812906
>
> Number of Fisher Scoring iterations: 12
>
>
> Could somebody suggest the calculation of standard error of X1, X2 and X3 in
> the output of my model, please?
>
> Any suggestions will be really appreciated.
>
> Kind Regards
>
> Bessy
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/standard-error-of-Binary-logistic-regression-coefficient-tp2303716p2303716.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Joshua Wiley
Ph.D. Student, Health Psychology
University of California, Los Angeles
http://www.joshuawiley.com/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of Binary logistic regression coefficient.

2010-07-27 Thread Achim Zeileis

On Tue, 27 Jul 2010, John Sorkin wrote:


Do not worry about the SE. The SE listed on the output is the SE of the log 
odds. You can use the estimate (beta) and SE from the listing to compute a 
confidence interval (CI) as follows:
CI: exp(beta - 1.96*SE) to exp(beta + 1.96*SE)


The standard errors can be computed by using the vcov() method:
  sqrt(diag(vcov(glm_object)))

Confidence intervals can be computed using the confint() method:
  confint(glm_object)

hth,
Z


John
John Sorkin
Chief Biostatistics and Informatics
Univ. of Maryland School of Medicine
Division of Gerontology and Geriatric Medicine
jsor...@grecc.umaryland.edu
-Original Message-
From: Bessy 
To:  

Sent: 7/27/2010 11:40:33 AM
Subject: [R] standard error of Binary logistic regression coefficient.


Dear all,

I am struggling with the calculation of standard error of the coefficient in
Binary logistic regression analysis.

I built a binary logistic regression model as follows and got confused since
the calculation of standard error of coefficients of X1, X2 and X3 are not
the same as the Linear regression.


fit4 <-glm(Y~X1+X2+X3,data=d4,family=binomial("logit"))

Warning message:
In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart,  :
 fitted probabilities numerically 0 or 1 occurred

summary(fit4)


Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial("logit"), data = d4)

Deviance Residuals:
          Min             1Q         Median             3Q            Max
-1.641483e+00  -8.421161e-05   0.00e+00   1.349398e-03   1.417550e+00

Coefficients:
  Estimate Std. Error  z value Pr(>|z|)
(Intercept) -10.1534523  10.8397717 -0.93669 0.348921
X1            0.3312469   0.3007324  1.10147 0.270693
X2            0.1808757   0.1069222  1.69166 0.090711 .
X3            5.0874665   5.0820163  1.00107 0.316792
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

   Null deviance: 91.4954278  on 65  degrees of freedom
Residual deviance:  5.8129055  on 62  degrees of freedom
AIC: 13.812906

Number of Fisher Scoring iterations: 12


Could somebody suggest the calculation of standard error of X1, X2 and X3 in
the output of my model, please?

Any suggestions will be really appreciated.

Kind Regards

Bessy

--
View this message in context: 
http://r.789695.n4.nabble.com/standard-error-of-Binary-logistic-regression-coefficient-tp2303716p2303716.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error of Binary logistic regression coefficient.

2010-07-27 Thread John Sorkin
Do not worry about the SE. The SE listed on the output is the SE of the log 
odds. You can use the estimate (beta) and SE from the listing to compute a 
confidence interval (CI) as follows:
CI: exp(beta - 1.96*SE) to exp(beta + 1.96*SE)
John
John Sorkin
Chief Biostatistics and Informatics
Univ. of Maryland School of Medicine
Division of Gerontology and Geriatric Medicine
jsor...@grecc.umaryland.edu 
-Original Message-
From: Bessy 
To:  

Sent: 7/27/2010 11:40:33 AM
Subject: [R] standard error of Binary logistic regression coefficient.


Dear all,

I am struggling with the calculation of standard error of the coefficient in
Binary logistic regression analysis.

I built a binary logistic regression model as follows and got confused since
the calculation of standard error of coefficients of X1, X2 and X3 are not
the same as the Linear regression.

> fit4 <-glm(Y~X1+X2+X3,data=d4,family=binomial("logit"))
Warning message:
In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart,  :
  fitted probabilities numerically 0 or 1 occurred
> summary(fit4)

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial("logit"), data = d4)

Deviance Residuals: 
          Min             1Q         Median             3Q            Max
-1.641483e+00  -8.421161e-05   0.00e+00   1.349398e-03   1.417550e+00  

Coefficients:
   Estimate Std. Error  z value Pr(>|z|)  
(Intercept) -10.1534523  10.8397717 -0.93669 0.348921  
X1            0.3312469   0.3007324  1.10147 0.270693
X2            0.1808757   0.1069222  1.69166 0.090711 .
X3            5.0874665   5.0820163  1.00107 0.316792
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 91.4954278  on 65  degrees of freedom
Residual deviance:  5.8129055  on 62  degrees of freedom
AIC: 13.812906

Number of Fisher Scoring iterations: 12


Could somebody suggest the calculation of standard error of X1, X2 and X3 in
the output of my model, please?

Any suggestions will be really appreciated.

Kind Regards

Bessy

-- 
View this message in context: 
http://r.789695.n4.nabble.com/standard-error-of-Binary-logistic-regression-coefficient-tp2303716p2303716.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error of Binary logistic regression coefficient.

2010-07-27 Thread Bessy

Dear all,

I am struggling with the calculation of standard error of the coefficient in
Binary logistic regression analysis.

I built a binary logistic regression model as follows and got confused since
the calculation of standard error of coefficients of X1, X2 and X3 are not
the same as the Linear regression.

> fit4 <-glm(Y~X1+X2+X3,data=d4,family=binomial("logit"))
Warning message:
In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart,  :
  fitted probabilities numerically 0 or 1 occurred
> summary(fit4)

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial("logit"), data = d4)

Deviance Residuals: 
          Min             1Q         Median             3Q            Max
-1.641483e+00  -8.421161e-05   0.00e+00   1.349398e-03   1.417550e+00  

Coefficients:
   Estimate Std. Error  z value Pr(>|z|)  
(Intercept) -10.1534523  10.8397717 -0.93669 0.348921  
X1            0.3312469   0.3007324  1.10147 0.270693
X2            0.1808757   0.1069222  1.69166 0.090711 .
X3            5.0874665   5.0820163  1.00107 0.316792
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 91.4954278  on 65  degrees of freedom
Residual deviance:  5.8129055  on 62  degrees of freedom
AIC: 13.812906

Number of Fisher Scoring iterations: 12


Could somebody suggest the calculation of standard error of X1, X2 and X3 in
the output of my model, please?

Any suggestions will be really appreciated.

Kind Regards

Bessy

-- 
View this message in context: 
http://r.789695.n4.nabble.com/standard-error-of-Binary-logistic-regression-coefficient-tp2303716p2303716.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard Error for individual patient survival with survfit and summary.survfit

2010-07-16 Thread Terry Therneau
> Can anyone tell me what is the difference between these two standard 
> errors and how should I interpret the confidence intervals and std.err
> given these differences?

help(survfit.object) will give you the answer.  The std in the object is
for the cumulative hazard, the printout uses a Taylor series argument to
compute the se of the survival.

(There are several reasons for this choice, including that se(survival) is
not well defined by the standard formulas when S=0.)

Terry Therneau
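
A small numerical check of that statement, using the s.fit and sum.s.fit objects
from the original post below (a sketch, not the survival package's own code): the
std.err stored in the survfit object is for the cumulative hazard, and the
Taylor-series (delta-method) standard error of the survival itself is
approximately S times that.

s.fit$surv * s.fit$std.err   # e.g. 0.978 * 0.0246 = 0.0240 at the first time point ...
sum.s.fit$std.err            # ... which is what summary.survfit prints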

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard Error for individual patient survival with survfit and summary.survfit

2010-07-15 Thread Charles M. Rowland
 I am using the coxph, survfit and summary.survfit functions to calculate an 
estimate of predicted survival with confidence interval for future patients 
based on the survival distribution of an existing cohort of subjects.  I am 
trying to understand the calculation and interpretation of the std.err and 
confidence intervals printed by the summary.survfit function.

Using the default confidence interval type of "log", I would have expected the 
confidence intervals to be calculated as
exp( log(surv) +,- Z*std.err )
where z is the appropriate quantile of the normal distribution.   This does 
seem to be the case but using the std.err found in the survfit object rather 
than the std.err that is printed by and returned with the summary.survfit 
function.

Can anyone tell me what is the difference between these two standard errors and 
how should I interpret the confidence intervals and std.err given these 
differences?

An example using R version 2.11.1 with survival package version 2.35-8 on 
Windows operating system is below

Thanks,

Charley Rowland
Celera, Corp.

Example:

> fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)   # Fit cox model to existing data
> s.fit=survfit(fit,newdata=data.frame(age=60))               # Calculate predicted survival for a new patient
> sum.s.fit=summary(s.fit)                                    # Print summary of predicted survival estimates.
> sum.s.fit
Call: survfit(formula = fit, newdata = data.frame(age = 60))

 time n.risk n.event survival std.err lower 95% CI upper 95% CI
   59     26       1    0.978  0.0240        0.932        1.000
  115     25       1    0.952  0.0390        0.878        1.000
  156     24       1    0.917  0.0556        0.814        1.000
  268     23       1    0.880  0.0704        0.752        1.000
  329     22       1    0.818  0.0884        0.662        1.000
  353     21       1    0.760  0.0991        0.588        0.981
  365     20       1    0.698  0.1079        0.516        0.945
  431     17       1    0.623  0.1187        0.429        0.905
  464     15       1    0.549  0.1248        0.352        0.858
  475     14       1    0.480  0.1267        0.286        0.805
  563     12       1    0.382  0.1332        0.193        0.757
  638     11       1    0.297  0.1292        0.127        0.697

### The confidence intervals extracted from the survfit and summary.survfit 
objects are identical and agree with what is printed by summary
> s.fit$lower
 [1] 0.9318402 0.8784979 0.8144399 0.7522512 0.6616253 0.5881502 0.5159010 
0.4287437 0.3519573 0.2857688 0.1932689 0.1267205
> sum.s.fit$lower
 [1] 0.9318402 0.8784979 0.8144399 0.7522512 0.6616253 0.5881502 0.5159010 
0.4287437 0.3519573 0.2857688 0.1932689 0.1267205

###  However, the standard errors extracted from the survfit and 
summary.survfit objects are different
> s.fit$std.err      ### Note the std.err from survfit does not agree with what is printed by the summary.survfit function
 [1] 0.02459050 0.04099162 0.06065735 0.07997872 0.10806000 0.13052489 
0.15448058 0.19052784 0.22719704 0.26416491 0.34826243 0.43480448
> sum.s.fit$std.err  ### Note the std.err from the summary object agrees with what is printed by summary.survfit
 [1] 0.02404586 0.03902366 0.05563833 0.07037450 0.08836045 0.09914814 
0.10787837 0.11866803 0.12481970 0.12669151 0.13320179 0.12919546

###  The (lower) confidence interval printed by the summary.survfit function 
appears to be based on the stderr contained in the survfit object  (not std.err 
printed in summary)
> zval <- qnorm(1- (1-.95)/2, 0,1)
> exp( log(s.fit$surv) - zval*s.fit$std.err)
 [1] 0.9318402 0.8784979 0.8144399 0.7522512 0.6616253 0.5881502 0.5159010 
0.4287437 0.3519573 0.2857688 0.1932689 0.1267205


> exp( log(s.fit$surv) - zval*sum.s.fit$std.err)
 [1] 0.9328355 0.8818930 0.8224912 0.7665456 0.6876705 0.6254552 0.5652418 
0.4935883 0.4301637 0.3741385 0.2945926 0.2306651



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of regression coefficient

2010-06-13 Thread Erik Iverson

Josh B wrote:

Hi all,

This should be a very simple question for you, whereas it is proving devilish 
for me.

How do I output the STANDARD ERROR of the regression coefficient (i.e., the 
standard error of b) from a simple linear regression?



The first 'See Also' in ?lm is for ?summary.lm, which will give you what you 
want.  So simply pass your lm object to summary.
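
A minimal sketch using the example data from the question below (the data are
the ones shipped with ?lm):

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
weight <- c(ctl, trt)
fit <- lm(weight ~ group)
coef(summary(fit))                  # full coefficient table, as a matrix
coef(summary(fit))[, "Std. Error"]  # just the standard errors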


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error of regression coefficient

2010-06-13 Thread Josh B
Hi all,

This should be a very simple question for you, whereas it is proving devilish 
for me.

How do I output the STANDARD ERROR of the regression coefficient (i.e., the 
standard error of b) from a simple linear regression?

Consider this data, taken directly from ?lm:

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)

The problem is that neither of these lines of code seem to show me what I'm 
looking for:
> lm(weight ~ group)

Call:
lm(formula = weight ~ group)

Coefficients:
(Intercept) groupTrt  
  5.032   -0.371

> anova(lm.D9 <- lm(weight ~ group))
Analysis of Variance Table

Response: weight
          Df Sum Sq Mean Sq F value Pr(>F)
group      1 0.6882  0.6882  1.4191  0.249
Residuals 18 8.7292  0.4850

So what's the secret, then?

Thanks very much R users!
Josh



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for the estimated value (lmer fitted model)

2009-11-18 Thread willow1980

Dear R users,
I want to draw standard error lines for the predicted regression line
estimated by logistic regression using lmer. I have two predictors: cafr and
its quadratic form I(cafr^2), where cafr is a variable centered around the
mean of original variable. Now, the estimated value from the fitted model
will be,
(model@X) %*% fixef(model)
In the logit scale, the mean sum of square from fitted model will be, 
sesample=sqrt(sum(resid(model)^2)/(n-p-1)), where p is the degrees of
freedom used for fitting.
Could someone make a judgement if it is reasonable to calculate standard
error of the estimated value by
sesample*sqrt(vector %*% ginv(t(model@X) %*% model@X) %*% t(vector))
, where vector is the (1,cafr,I(cafr^2)) which representing empirical data
vector at considered point.
If this is correct, I think I can use this method to draw standard error
line. Otherwise, would you please suggest a reasonable one?
Thank you very much for your attention!
Yours sincerely, Jianghua
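
One conventional alternative (a sketch only, and it ignores the uncertainty in
the random effects): take the estimated covariance matrix of the fixed effects
from vcov() and use the delta method directly, se(x'beta) = sqrt(x' V x), instead
of a residual-based variance. With current lme4 the fixed-effects design matrix
is obtained with model.matrix() rather than the old model@X slot; 'model' below
is the lmer fit from the post.

X      <- model.matrix(model)                     # fixed-effects design matrix
eta    <- X %*% fixef(model)                      # fitted values on the logit scale
se.eta <- sqrt(diag(X %*% vcov(model) %*% t(X)))  # delta-method se of each fitted logit
upper  <- eta + 2 * se.eta                        # approximate bands, logit scale
lower  <- eta - 2 * se.eta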
-- 
View this message in context: 
http://old.nabble.com/standard-error-for-the-estimated-value-%28lmer-fitted-model%29-tp26414507p26414507.html
Sent from the R help mailing list archive at Nabble.com.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error associated with correlation coefficient

2009-08-30 Thread Sunil Suchindran
Here are some options for confidence intervals.
#by hand

sample_r <- .5
n_sample <- 100
df <- n_sample-1
fisher_z <- 0.5*(log(1+sample_r)-log(1-sample_r))
se_z <- 1/sqrt(n_sample-3)
t_crit <- qt(.975,df ,lower.tail=TRUE)
z_lci <- fisher_z - (t_crit * se_z)
z_uci <- fisher_z + (t_crit * se_z)

(r_lci <- tanh(z_lci))
(r_uci <- tanh(z_uci))
# Compare to the CIr function in psychometric library
#that uses standard normal instead of t

library(psychometric)
CIr(sample_r,n_sample)


And see

http://www.stat.umn.edu/geyer/5601/examp/bse.html

for bootstrap options.
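
If what is wanted is a standard error for r itself rather than a confidence
interval, the usual large-sample formula under bivariate normality (the one
behind the t-test of a correlation) is se(r) = sqrt((1 - r^2)/(n - 2)); a sketch
with the same sample_r and n_sample as above:

se_r <- sqrt((1 - sample_r^2) / (n_sample - 2))
se_r
# equivalently, this is the std. error of the slope in a regression of the
# standardized y on the standardized x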

On Thu, Aug 27, 2009 at 3:03 PM, Donald Braman  wrote:

> I want the standard error associated with a correlation.  I can calculate
> using cor & var, but am wondering if there are libraries that already
> provide this function.
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error associated with correlation coefficient

2009-08-27 Thread Donald Braman
I want the standard error associated with a correlation.  I can calculate
using cor & var, but am wondering if there are libraries that already
provide this function.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of Median in MASS library

2009-07-29 Thread Greg Snow
Well, the MASS package is support software for a book; the details for most of the 
functions in the package are described in the book. So if you really want to know, 
either look at the code or read the book.

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of Lisa Wang
> Sent: Wednesday, July 29, 2009 12:05 PM
> To: r-help@r-project.org
> Subject: [R] Standard error of Median in MASS library
> 
> Dear All,
> 
> I wonder which function in MASS library calculates and output the
> standard error of median.
> 
> Thank you in advance for your help
> 
> Lisa Wang
> 
> Biostatistics, Princess Margaret hospital,
> toronto, On
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error of Median in MASS library

2009-07-29 Thread Lisa Wang
Dear All,

I wonder which function in MASS library calculates and output the
standard error of median. 

Thank you in advance for your help

Lisa Wang

Biostatistics, Princess Margaret hospital,
toronto, On

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error of median in MASS library

2009-07-29 Thread Lisa Wang
Hello all,

I wonder which function in MASS library is calculating the standard
error of median?

thank you very much in advance,

Lisa

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error and p-value for the estimated parameter in AR model

2009-06-22 Thread FMH
Dear All,

I used an AR(1) model to describe the process of the stationary residuals and 
fitted it with the 'ar' command in R. From the results, I tried to extract 
the standard error and p-value for the estimated parameter, but unfortunately 
I could not find any way to extract them from the output. 

What I did was assign the residuals to the 'residual' object in R and 
call the 'ar' function as below. 

> residual <- residuals
> ar(residual, aic = TRUE,  method = "mle", order.max = 1) 

Could someone help me to extract the stadard error and the p-value for the 
estimated parameter, please?

Thank you

Fir
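
For what it's worth, with method = "mle" (and a selected order greater than
zero) the object returned by ar() includes an asy.var.coef component, the
asymptotic-theory variance matrix of the AR coefficients, so a rough
normal-approximation test can be assembled by hand. A sketch, assuming the
'residual' series from the post:

fit  <- ar(residual, aic = TRUE, method = "mle", order.max = 1)
se   <- sqrt(diag(fit$asy.var.coef))   # asymptotic std. errors of the AR coefficients
zval <- fit$ar / se
pval <- 2 * pnorm(-abs(zval))          # normal-approximation p-values
cbind(coef = fit$ar, se = se, z = zval, p = pval)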


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error beta glm

2009-06-12 Thread Ian Fiske


Ricardo Arias Brito wrote:
> 
> Dear All,
> 
> Is it possible to calculate the std. error for a glm as for lm, using
> cov(hat beta) = phi * solve(t(X) %*% hat W %*% X)
> in R? What are hat W and phi in the glm output?
> 
> y <- rpois(20, 4)
> fit.glm <- glm(y ~ x, family = poisson)
> summary(fit.glm)
> 
> I fitted the glm using the contrast contr.sum and need to compute
> the standard error for the last level of the factor.
>  
> best wishes for all,
> 
> Ricardo.
> 

Does sqrt(diag(vcov(fit.glm))) not give you what you need?

Ian
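
As for the "what are hat W and phi" part of the question: W is the diagonal
matrix of working weights from the final IRLS iteration (available as
fit.glm$weights) and phi is the dispersion that summary() uses (fixed at 1 for
the poisson and binomial families). A sketch that reproduces vcov() by hand;
note that x was never defined in the original post, so a made-up covariate is
used here:

x <- rnorm(20)                              # hypothetical covariate
y <- rpois(20, 4)
fit.glm <- glm(y ~ x, family = poisson)
X   <- model.matrix(fit.glm)
W   <- diag(fit.glm$weights)                # IRLS working weights
phi <- summary(fit.glm)$dispersion          # 1 for the poisson family
sqrt(diag(phi * solve(t(X) %*% W %*% X)))   # "by hand"
sqrt(diag(vcov(fit.glm)))                   # what summary.glm reports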
-- 
View this message in context: 
http://www.nabble.com/standard-error-beta-glm-tp23987737p24008764.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error beta glm

2009-06-11 Thread Ricardo Arias Brito

Dear All,

The std. error of the estimated coefficients 
obtained by the summary.lm function can be calculated
as:

y=rnorm(20)
x=y+rnorm(20)
fit <- lm(y ~ x)
summary(fit)

sqrt(  sum(fit$resid**2)/fit$df.resid  * 
solve(t(model.matrix(fit))%*%model.matrix(fit))  )

Is it possible to calculate the std. error for a glm as for lm, using
cov(hat beta) = phi * solve(t(X) %*% hat W %*% X)
in R? What are hat W and phi in the glm output?

y <- rpois(20, 4)
fit.glm <- glm(y ~ x, family = poisson)
summary(fit.glm)


I fitted the glm using the contrast contr.sum and need to compute
the standard error for the last level of the factor.
 

best wishes for all,

Ricardo.














__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for median

2009-03-07 Thread Johannes Huesing
Ralph Scherer  [Sat, Mar 07, 2009 at 10:34:28PM CET]:
> Dear Prof. Ripley,
> 
> thank you for your fast answer.
> But which book do you mean?
> I can't find an MASS book.

Try

library(MASS)
citation(package="MASS")


-- 
Johannes Hüsing   There is something fascinating about science. 
  One gets such wholesale returns of conjecture 
mailto:johan...@huesing.name  from such a trifling investment of fact.  
  
http://derwisch.wikidot.com (Mark Twain, "Life on the Mississippi")

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for median

2009-03-07 Thread Ralph Scherer
Ok, found it.
Thanks.


Am Samstag, den 07.03.2009, 22:34 +0100 schrieb Ralph Scherer:

> Dear Prof. Ripley,
> 
> thank you for your fast answer.
> But which book do you mean?
> I can't find an MASS book.
> Do you mean the R book?
> 
> Best wishes,
> Ralph
> 
> 
> 
> Am Samstag, den 07.03.2009, 21:24 + schrieb Prof Brian Ripley: 
> 
> > It is an example (both via asymptotic theory and the bootstrap) in 
> > chapter 5 of MASS (the book).  The functions used are in the scripts 
> > of the MASS package, but you will need to understand the theory being 
> > used as described in the book.
> > 
> > On Sat, 7 Mar 2009, Ralph Scherer wrote:
> > 
> > > Dear all,
> > >
> > > is it possible to estimate a standard error for the median?
> > > And is there a function in R for this?
> > > I want to use it to describe a skewed distribution.
> > >
> > > Thanks in advance,
> > > Ralph
> > >
> > >   [[alternative HTML version deleted]]
> > >
> > > __
> > > R-help@r-project.org mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide 
> > > http://www.R-project.org/posting-guide.html
> > > and provide commented, minimal, self-contained, reproducible code.
> > >
> > 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for median

2009-03-07 Thread Ralph Scherer
Dear Prof. Ripley,

thank you for your fast answer.
But which book do you mean?
I can't find an MASS book.
Do you mean the R book?

Best wishes,
Ralph



Am Samstag, den 07.03.2009, 21:24 + schrieb Prof Brian Ripley:

> It is an example (both via asymptotic theory and the bootstrap) in 
> chapter 5 of MASS (the book).  The functions used are in the scripts 
> of the MASS package, but you will need to understand the theory being 
> used as described in the book.
> 
> On Sat, 7 Mar 2009, Ralph Scherer wrote:
> 
> > Dear all,
> >
> > is it possible to estimate a standard error for the median?
> > And is there a function in R for this?
> > I want to use it to describe a skewed distribution.
> >
> > Thanks in advance,
> > Ralph
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] standard error for median

2009-03-07 Thread Prof Brian Ripley
It is an example (both via asymptotic theory and the bootstrap) in 
chapter 5 of MASS (the book).  The functions used are in the scripts 
of the MASS package, but you will need to understand the theory being 
used as described in the book.
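
For readers without the book, the two routes it covers look roughly like this
(a sketch, not the MASS scripts themselves): the asymptotic approximation
se(median) ~ 1/(2 f(m) sqrt(n)), with the density f at the median m estimated
from the data, and a bootstrap.

set.seed(1)
x <- rexp(200)                          # some skewed sample (made-up data)
m <- median(x)

# 1. asymptotic: estimate f(m) with a kernel density estimate
d.est <- density(x)
f.hat <- approx(d.est$x, d.est$y, xout = m)$y
1 / (2 * f.hat * sqrt(length(x)))

# 2. bootstrap
library(boot)
b <- boot(x, function(d, i) median(d[i]), R = 1000)
sd(b$t[, 1])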


On Sat, 7 Mar 2009, Ralph Scherer wrote:


Dear all,

is it possible to estimate a standard error for the median?
And is there a function in R for this?
I want to use it to describe a skewed distribution.

Thanks in advance,
Ralph

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error for median

2009-03-07 Thread Ralph Scherer
Dear all,

is it possible to estimate a standard error for the median?
And is there a function in R for this?
I want to use it to describe a skewed distribution.

Thanks in advance,
Ralph

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] standard error of logit parameters

2009-01-28 Thread Bomee Park

Hi everyone.

I am now estimating the parameters of a logit model and trying to  
get the estimates by maximizing the log-likelihood.
The nlm function works nicely for minimizing -(log_likelihood) and  
returns the parameter estimates that minimize it, along with the  
gradients, but I don't have any clue how I can get the standard  
errors for the parameters.


Any help will be greatly appreciated.
Thanks.
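
The usual route with nlm is to request the Hessian of the negative
log-likelihood at the optimum and invert it (the observed information); the
square roots of the diagonal are the standard errors. A sketch, where negloglik
and start stand in for your own objective function and starting values:

out <- nlm(negloglik, start, hessian = TRUE)
vc  <- solve(out$hessian)    # estimated covariance matrix of the parameters
se  <- sqrt(diag(vc))        # standard errors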

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of mean for aov

2008-12-14 Thread JS . Augustyn
Thanks for all of your help, David, I finally got it. Here's some generic  
syntax in case it helps someone else down the road (using a 4-way ANOVA  
with repeated measures on all factors):

# LOAD DATA
data <- read.table("PATH\\datafile.txt")

# RUN THE ANOVA
data.aov <- aov(y ~ factor(x1)*factor(x2)*factor(x3)*factor(x4) +  
Error(factor(id)/(factor(x1)*factor(x2)*factor(x3)*factor(x4))),data)

# TO PRINT ALL MEANS BY EFFECT
print(model.tables(data.aov,"means"),digits=3)

# TO GET SEs FOR FACTOR x1
attach(data)
foo <- data.frame(tapply(y,list(id,x1),mean))

mean(foo) # ANOTHER WAY TO GET MEANS, LIMITED TO FACTOR x1

# COMPUTE SEs FOR x1
sd(foo)/sqrt(nrow(foo))

SEs for an interaction can be had by:
foo <- data.frame(tapply(y,list(id,x1,x2),mean))
sd(foo)/sqrt(nrow(foo))

David, thank you again for all of the help!

Cheers,

Jason
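
(One small caveat for later readers: in current versions of R, mean() and sd()
no longer operate column-wise on data frames, so the last two steps above would
be written as below.)

colMeans(foo)                          # per-level means
sapply(foo, sd) / sqrt(nrow(foo))      # per-level standard errors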







[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of mean for aov

2008-12-13 Thread David Winsemius


On Dec 13, 2008, at 2:15 PM, js.augus...@gmail.com wrote:


> Does not this give you what you need?
> model.tables(rawfixtimedata.aov,"means", se=TRUE)

I tried that, but get an error:
SEs for type 'means' are not yet implemented


I don't get that error. Using the example and this call

model.tables(npk.aov,"means", se=TRUE)

 I get tables and then:

Standard errors for differences of means
        block     N     P     K   N:P   N:K   P:K
        2.779 1.604 1.604 1.604 2.269 2.269 2.269
replic.     4    12    12    12     6     6     6

Is your version of R current?



Maybe I'm not using the correct terminology to describe what I need  
to do. Using the main effect of Marking as an example, I have the  
following mean fixation times for each of 12 subjects:



> txt <-"Sub Absent Present
+ 1 1278 586
+ 2 2410 571
+ 3 408  477
+ 4 645  371
+ 5 265  415
+ 6 4871 354
+ 7 1878 790
+ 8 6064 592
+ 9 761  363
+ 101073 566
+ 111043 383
+ 121170 290"

> marking.t <- read.table(textConnection(txt), header=TRUE,  
row.names="Sub")

> marking.t
   Absent Present
1    1278     586
2    2410     571
3     408     477
4     645     371
5     265     415
6    4871     354
7    1878     790
8    6064     592
9     761     363
10   1073     566
11   1043     383
12   1170     290

That list copied into R produces this pair of means

> mean(marking.t)
   Absent   Present
1822.1667  479.8333







The means for markings present and absent, respectively, as reported  
by both R and SPSS were:

factor(marking)
Present Absent
480 1822







The standard errors for these means, SE(x) = SD(x)/sqrt(n), should be:
Present Absent
41.42 525.55


> sd(marking.t)/sqrt(nrow(marking.t))
   Absent   Present
525.58353  41.44599






Which is what SPSS gives. I need to know how to get R to compute the  
same values.


Any suggestions?

Thanks,

Jason





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of mean for aov

2008-12-13 Thread JS . Augustyn
> Does not this give you what you need?
> model.tables(rawfixtimedata.aov,"means", se=TRUE)

I tried that, but get an error:
SEs for type 'means' are not yet implemented

Maybe I'm not using the correct terminology to describe what I need to do.  
Using the main effect of Marking as an example, I have the following mean  
fixation times for each of 12 subjects:

Sub Absent Present
1 1278 586
2 2410 571
3 408 477
4 645 371
5 265 415
6 4871 354
7 1878 790
8 6064 592
9 761 363
10 1073 566
11 1043 383
12 1170 290

The means for markings present and absent, respectively, as reported by  
both R and SPSS were:
factor(marking)
Present Absent
480 1822

The standard errors for these means, SE(x) = SD(x)/sqrt(n), should be:
Present Absent
41.42 525.55

Which is what SPSS gives. I need to know how to get R to compute the same  
values.

Any suggestions?

Thanks,

Jason




Re: [R] Standard error of mean for aov

2008-12-13 Thread David Winsemius


On Dec 13, 2008, at 11:37 AM, Jason Augustyn wrote:

Hi David, thanks for the quick response. I did look at the help  
files for model.tables and se.contrast and neither seemed  
appropriate. I probably wasn't clear enough in my original email, so  
here's more information:


I'm analyzing data from a psychology experiment on how people  
allocate visual attention when walking over difficult terrain. In  
the experiment subjects walked on a treadmill for 30 minutes while  
performing an attention-demanding reaction time task. In one  
condition they could walk freely, whereas in another condition they  
had to avoid markings placed on the treadmill belt to simulate  
obstacles. The stimuli for the reaction time task were placed either  
at eye-level or near the ground.


The dependent measure I'm working with comes from an eye-tracking  
system that we ran while subjects walked, which provided data on the  
amount of time subjects looked at the reaction time stimuli versus  
the treadmill belt. We divided the fixation time data sets for each  
subject into three time bins to look at changes in fixation behavior  
over time.


So the full design of the study is a 2x2x2x3 with repeated measures  
on all factors. The factors were:

Markings (Levels: present, absent)
Reaction time task "Position" (Levels: eye-level, ground-level)
Eye fixation "Plane": (Levels: RT stimuli, treadmill belt)
Time bin (Levels: 1,2,3)

A call to aov yields main effects of Markings, Position, and Plane,  
as well as a Markings*Plane interaction. For comparison purposes I  
ran the same analysis in SPSS and got equivalent ANOVA results, so  
I'm confident the model has been set up properly in R.


Now, what I want to get are means and standard errors for the main  
effects and interaction to generate figures for publication using  
other software. As stated in my initial post, I got the means using  
model.tables and they are correct as compared with the SPSS output.  
However, I cannot get the standard errors for the means. I've tried  
various things in R and cannot get values that correspond to the  
SPSS output.


My understanding is that the se's are for the effects, i.e. on  
parameter estimates for differences, rather than for the means  
themselves. One gets these (at least in the example on the help page)  
with:


model.tables(npk.aov, "means", se = TRUE)

Does not this give you what you need?

model.tables(rawfixtimedata.aov,"means", se=TRUE)

I am not sure what you are referring to when you ask for  se's for the  
"means" in the presence of interactions. How are you going to  
partition the cases? Would one case contribute to both the main "mean"  
and to any or all the interaction "means" in which it might be involved?


--
David Winsemius


Again, I assume there is an R function that can get me the values I  
need and would hugely appreciate any pointers. In case you're  
wondering why I'm bothering with running the analyses in R given  
that I already have them done in SPSS, I'm just generally interested  
in learning to use R to have an additional analysis tool in my  
toolkit.


Thanks for any help!

Cheers,

Jason




On Sat, Dec 13, 2008 at 9:04 AM, David Winsemius wrote:


On Dec 12, 2008, at 10:59 PM, js.augus...@gmail.com wrote:

Hi all,

I'm quite new to R and have a very basic question regarding how one  
gets
the standard error of the mean for factor levels under aov. I was  
able to

get the factor level means using:

summary(print(model.tables(rawfixtimedata.aov,"means"),digits=3)),

where rawfixtimedata.aov is my aov model. It doesn't appear that  
there is
an equivalent function to get the standard errors for the factor  
levels.


I searched through the help archives and documentation but could not  
find
anything that would help resolve my problem. I'm sure there is a  
trivial

solution, but I would sincerely appreciate having someone more expert
dispel my ignorance.


 Have you looked at the help page for model.tables? ... and perhaps ? 
se.contrast


There are arguments to that function that result in standard errors  
for _effects_. If standard errors on the contrasts are not what you  
wanted, then perhaps a full example would help.


--
David Winsemius


Cheers,

Jason Augustyn

   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of mean for aov

2008-12-13 Thread Jason Augustyn
Hi David, thanks for the quick response. I did look at the help files for
model.tables and se.contrast and neither seemed appropriate. I probably
wasn't clear enough in my original email, so here's more information:
I'm analyzing data from a psychology experiment on how people allocate
visual attention when walking over difficult terrain. In the experiment
subjects walked on a treadmill for 30 minutes while performing an
attention-demanding reaction time task. In one condition they could walk
freely, whereas in another condition they had to avoid markings placed on
the treadmill belt to simulate obstacles. The stimuli for the reaction time
task were placed either at eye-level or near the ground.

The dependent measure I'm working with comes from an eye-tracking system
that we ran while subjects walked, which provided data on the amount of time
subjects looked at the reaction time stimuli versus the treadmill belt. We
divided the fixation time data sets for each subject into three time bins to
look at changes in fixation behavior over time.

So the full design of the study is a 2x2x2x3 with repeated measures on all
factors. The factors were:
Markings (Levels: present, absent)
Reaction time task "Position" (Levels: eye-level, ground-level)
Eye fixation "Plane": (Levels: RT stimuli, treadmill belt)
Time bin (Levels: 1,2,3)

A call to aov yields main effects of Markings, Position, and Plane, as well
as a Markings*Plane interaction. For comparison purposes I ran the same
analysis in SPSS and got equivalent ANOVA results, so I'm confident the
model has been set up properly in R.

Now, what I want to get are means and standard errors for the main effects
and interaction to generate figures for publication using other software. As
stated in my initial post, I got the means using model.tables and they are
correct as compared with the SPSS output. However, I cannot get the standard
errors for the means. I've tried various things in R and cannot get values
that correspond to the SPSS output.

Again, I assume there is an R function that can get me the values I need and
would hugely appreciate any pointers. In case you're wondering why I'm
bothering with running the analyses in R given that I already have them done
in SPSS, I'm just generally interested in learning to use R to have an
additional analysis tool in my toolkit.

Thanks for any help!

Cheers,

Jason




On Sat, Dec 13, 2008 at 9:04 AM, David Winsemius wrote:

>
> On Dec 12, 2008, at 10:59 PM, js.augus...@gmail.com wrote:
>
>  Hi all,
>>
>> I'm quite new to R and have a very basic question regarding how one gets
>> the standard error of the mean for factor levels under aov. I was able to
>> get the factor level means using:
>>
>> summary(print(model.tables(rawfixtimedata.aov,"means"),digits=3)),
>>
>> where rawfixtimedata.aov is my aov model. It doesn't appear that there is
>> an equivalent function to get the standard errors for the factor levels.
>>
>> I searched through the help archives and documentation but could not find
>> anything that would help resolve my problem. I'm sure there is a trivial
>> solution, but I would sincerely appreciate having someone more expert
>> dispel my ignorance.
>>
>>
>  Have you looked at the help page for model.tables? ... and perhaps
> ?se.contrast
>
> There are arguments to that function that result in standard errors for
> _effects_. If standard errors on the contrasts are not what you wanted, then
> perhaps a full example would help.
>
> --
> David Winsemius
>
>
>  Cheers,
>>
>> Jason Augustyn
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of mean for aov

2008-12-13 Thread David Winsemius


On Dec 12, 2008, at 10:59 PM, js.augus...@gmail.com wrote:


Hi all,

I'm quite new to R and have a very basic question regarding how one  
gets
the standard error of the mean for factor levels under aov. I was  
able to

get the factor level means using:

summary(print(model.tables(rawfixtimedata.aov,"means"),digits=3)),

where rawfixtimedata.aov is my aov model. It doesn't appear that  
there is
an equivalent function to get the standard errors for the factor  
levels.


I searched through the help archives and documentation but could not  
find
anything that would help resolve my problem. I'm sure there is a  
trivial

solution, but I would sincerely appreciate having someone more expert
dispel my ignorance.



 Have you looked at the help page for model.tables? ... and perhaps ? 
se.contrast


There are arguments to that function that result in standard errors  
for _effects_. If standard errors on the contrasts are not what you  
wanted, then perhaps a full example would help.


--
David Winsemius



Cheers,

Jason Augustyn

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error of mean for aov

2008-12-12 Thread JS . Augustyn
Hi all,

I'm quite new to R and have a very basic question regarding how one gets  
the standard error of the mean for factor levels under aov. I was able to  
get the factor level means using:

summary(print(model.tables(rawfixtimedata.aov,"means"),digits=3)),

where rawfixtimedata.aov is my aov model. It doesn't appear that there is  
an equivalent function to get the standard errors for the factor levels.

I searched through the help archives and documentation but could not find  
anything that would help resolve my problem. I'm sure there is a trivial  
solution, but I would sincerely appreciate having someone more expert  
dispel my ignorance.

Cheers,

Jason Augustyn

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error of combination of parameter estimates

2008-05-12 Thread Søren Højsgaard
You can use the esticon function in the doBy package. 
Regards
Søren
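
For a concrete illustration (a sketch; the age of 70 below is made up, and the
contrast vector assumes the coefficient order (Intercept, age1, age2, ldl) from
the model in the quoted message): esticon takes the fitted model and a contrast
vector, and the same number can also be obtained directly from coef() and vcov().

library(doBy)
L <- c(0, 70, 70 - 67, 0)              # partial linear predictor age1 + age2 at age 70
esticon(model, L)                      # estimate, std. error, confidence interval
# or by hand, via the covariance matrix:
est <- sum(L * coef(model))
se  <- sqrt(as.numeric(t(L) %*% vcov(model) %*% L))
c(estimate = est, se = se)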



From: [EMAIL PROTECTED] on behalf of Mark Donoghoe
Sent: Mon 12-05-2008 09:11
To: r-help@r-project.org
Subject: [R] Standard error of combination of parameter estimates



Hi,



Is there a simple command for computing the standard error of a
combination of parameter estimates in a GLM?



For example:



riskdata$age1 <- riskdata$age

riskdata$age2 <- ifelse(riskdata$age<67,0,riskdata$age-67)



model <- glm(death~age1+age2+ldl,
data=riskdata,family=binomial(link=logit))



And we want to find the standard error of the partial linear predictor
for the combination of the two age parameters, age1+age2, at some age X,
without needing to separately use the covariance matrix?



Thanks,



Mark Donoghoe



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error of combination of parameter estimates

2008-05-12 Thread Mark Donoghoe
Hi,

 

Is there a simple command for computing the standard error of a
combination of parameter estimates in a GLM?

 

For example:

 

riskdata$age1 <- riskdata$age

riskdata$age2 <- ifelse(riskdata$age<67,0,riskdata$age-67)

 

model <- glm(death~age1+age2+ldl,
data=riskdata,family=binomial(link=logit))

 

And we want to find the standard error of the partial linear predictor
for the combination of the two age parameters, age1+age2, at some age X,
without needing to separately use the covariance matrix?

 

Thanks,

 

Mark Donoghoe



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Standard error values returned by lm()

2008-03-27 Thread Uwe Ligges
Type summary.lm and read the code.

Best,
Uwe Ligges
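
In short, what that code computes is sqrt(diag(sigma^2 * (X'X)^-1)) with
sigma^2 = RSS / df.residual (numerically it goes through the QR decomposition,
but the result is the same). A sketch on an arbitrary built-in data set:

fit    <- lm(dist ~ speed, data = cars)
X      <- model.matrix(fit)
sigma2 <- sum(residuals(fit)^2) / fit$df.residual
sqrt(diag(sigma2 * solve(t(X) %*% X)))   # the "Std. Error" column of summary(fit)
sqrt(diag(vcov(fit)))                    # same thing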



Nick Chorley wrote:
> Hi,
> 
> This may be a stupid question, but how are the "std. error" values returned
> by lm() calculated? For example
> 
>> summary(lm)
> 
> Call:
> lm(formula = log10(moments[2, 1:10]) ~ log10(L_vals[1:10]))
> 
> Residuals:
>        Min         1Q     Median         3Q        Max
> -0.0052534 -0.0019473  0.0006785  0.0011740  0.0059541
> 
> Coefficients:
>                      Estimate Std. Error t value Pr(>|t|)    
> (Intercept)         -0.520639   0.002488  -209.2 3.05e-16 ***
> log10(L_vals[1:10])  0.112666   0.003446    32.7 8.35e-10 ***
> 
> I can't find any documentation on this.
> 
> Regards,
> 
> Nicky Chorley
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Standard error values returned by lm()

2008-03-27 Thread Nick Chorley
Hi,

This may be a stupid question, but how are the "std. error" values returned
by lm() calculated? For example

> summary(lm)

Call:
lm(formula = log10(moments[2, 1:10]) ~ log10(L_vals[1:10]))

Residuals:
       Min         1Q     Median         3Q        Max
-0.0052534 -0.0019473  0.0006785  0.0011740  0.0059541

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)    
(Intercept)         -0.520639   0.002488  -209.2 3.05e-16 ***
log10(L_vals[1:10])  0.112666   0.003446    32.7 8.35e-10 ***

I can't find any documentation on this.

Regards,

Nicky Chorley

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.