Dear Brian and Gustaf,

I too have a bit of trouble following what Gustaf is doing, but I think that Brian's interpretation -- that Gustaf is trying to transform the standard errors via the inverse link rather than transforming the ends of the confidence intervals -- is probably correct. If this is the case, then what Gustaf has done doesn't make sense.
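To make the contrast concrete, here is a rough sketch (with caveats: it assumes the default log link of the quasipoisson family, that the fit component of the effect object is, like se, on the scale of the linear predictor, and it uses 1.96 as a stand-in for the proper quantile; the object names are only illustrative):

library(effects)

eff <- effect("X", model, se = TRUE)   # model as in Gustaf's original posting

## Transform the *endpoints*: build the interval on the linear-predictor
## scale, then apply the inverse link (exp, for the log link).
lower <- exp(eff$fit - 1.96 * eff$se)
upper <- exp(eff$fit + 1.96 * eff$se)

## exp(eff$se) by itself is not a standard error on the response scale;
## a delta-method approximation for the log link would instead be
se.response <- exp(eff$fit) * eff$se

(As Gustaf observes below, exponentiating the untransformed confidence limits reproduces the values that summary() of the effect object reports, which is the easy check that the intervals are right.)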
It is possible to get standard errors on the scale of the response (using, e.g., the delta method), but it's probably better to work on the scale of the linear predictor anyway. This is what the summary, print, and plot methods in the effects package do (as is documented in the help files for the package -- see the transformation argument under ?effect and the type argument under ?summary.eff).

Regards,
John

--------------------------------
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox


> -----Original Message-----
> From: Prof Brian Ripley [mailto:[EMAIL PROTECTED]
> Sent: February-17-08 6:42 AM
> To: Gustaf Granath
> Cc: John Fox; r-help@r-project.org
> Subject: Re: [R] Weird SEs with effect()
>
> On Sun, 17 Feb 2008, Gustaf Granath wrote:
>
> > Hi John,
> >
> > In fact I am still a little bit confused because I had read the ?effect help and the archives.
> >
> > ?effect says that the confidence intervals are on the linear predictor scale as well. Using exp() on the untransformed confidence intervals gives me the same values as summary(eff). My confidence intervals seem to be correct and reflect the results from my glm models.
> >
> > But when I use exp() to get the correct SEs on the response scale I get SEs that sometimes do not make sense at all. Interestingly I have
>
> What exactly are you doing here? I suspect you are not using the correct formula to transform the SEs (you do not just exponentiate them), but without the reproducible example asked for we cannot tell.
>
> > found a trend. For my model with adjusted means ~ 0.5-1.5 I get huge SEs (SEs > 1, but my glm model shows significant differences between level 1 = 0.55 and level 2 = 1.15). For models with means around 10-20 my SEs are fine with exp(); for models with means around 75-125 my SEs get way too small with exp().
> >
> > Something is not right here (or maybe it is and I just do not understand it), so I think my best option will be to use the confidence intervals instead of SEs in my plot.
>
> If you want confidence intervals, you are better off computing those on a reasonable scale and transforming them. Or use a profile likelihood to compute them (which will be equivariant under monotone scale transformations).
>
> > Regards,
> >
> > Gustaf
> >
> >
> >> Quoting John Fox <[EMAIL PROTECTED]>:
> >>
> >> Dear Gustaf,
> >>
> >> From ?effect, "se: a vector of standard errors for the effect, on the scale of the linear predictor." Does that help?
> >>
> >> Regards,
> >> John
> >>
> >> --------------------------------
> >> John Fox, Professor
> >> Department of Sociology
> >> McMaster University
> >> Hamilton, Ontario, Canada L8S 4M4
> >> 905-525-9140x23604
> >> http://socserv.mcmaster.ca/jfox
> >>
> >>
> >>> -----Original Message-----
> >>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> >>> project.org] On Behalf Of Gustaf Granath
> >>> Sent: February-16-08 11:43 AM
> >>> To: r-help@r-project.org
> >>> Subject: [R] Weird SEs with effect()
> >>>
> >>> Hi all,
> >>>
> >>> I'm a little bit confused concerning the effect() command from the effects package. I have done several glm models with family = quasipoisson:
> >>>
> >>> model <- glm(Y ~ X + Q + Z, family = quasipoisson)
> >>>
> >>> and then used
> >>>
> >>> results.effects <- effect("X", model, se = TRUE)
> >>>
> >>> to get the "adjusted means".
> >>> I am aware of the debate concerning adjusted means, but you guys just have to trust me - it makes sense for me. Now I want standard errors for these means.
> >>>
> >>> results.effects$se
> >>>
> >>> gives me standard errors, but this is where it starts to get confusing. The given standard errors are very, very, very small - not realistic. I thought that maybe these standard errors are not back-transformed, so I used exp(), and then the standard errors became realistic. However, for one of my glm models with quasipoisson the standard errors make kind of sense without using exp() and get way too big if I use exp(). To be honest, I get the feeling that I'm on the wrong track here.
> >>>
> >>> Basically, I want to know how the SE is calculated in effect() (all I know is that the reported standard errors are for the fitted values) and if anyone knows what is going on here.
> >>>
> >>> Regards,
> >>>
> >>> Gustaf Granath
>
> --
> Brian D. Ripley, [EMAIL PROTECTED]
> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,             Tel: +44 1865 272861 (self)
> 1 South Parks Road,                    +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax: +44 1865 272595

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.