Sorry, I thought it was relatively well known. It appears it was introduced by Clopper Almon of the University of Maryland. One reviewer suggested I use it... so I am trying to do what I was told.

"A convenient way to express the answer is to ask by what percent the standard error of estimate goes up when the variable is eliminated and all others adjust to compensate as best they can for the elimination. We may call this measure the marginal explanatory value, or mexval, of the variable." (from http://www.lib.umd.edu/drum/handle/1903/1914)

According to the above paper,

    mexval = 100 * (sqrt(1 + t^2 / (T - n)) - 1)

where:
    t is the t statistic,
    T is the number of observations,
    n is the number of parameters estimated.

My code so far is trivial (a rough sketch of a mexval() helper built on the formula above is appended after the quoted thread):

##################################################
logreg <- glm(Goods ~ Pop97 + Emp96P + EmpGov96 + White + Crimes95 + Pov93 +
                Pi94 + Transfer94 + Unemp96 + Grants96,
              family = binomial(link = "logit"),
              data = dat.pall)
summary(logreg)
##################################################

Thanks,

mike

________________________________
From: spencerg <spencer.gra...@prodsyse.com>
Cc: r-help@r-project.org
Sent: Monday, May 18, 2009 8:28:56 AM
Subject: Re: [R] MEXVAL

I do not understand the term "mexval statistics". I think you want to look at "anova.glm": fit several models, leaving each term out one at a time in succession, and then use "anova.glm" to compare your general model with each submodel.

If that does NOT give you what you want, please ask again, AFTER first reading the posting guide "http://www.R-project.org/posting-guide.html". And please provide commented, minimal, self-contained, reproducible code with your post, explaining in particular why "anova.glm" does not seem to solve your problem.

There is a problem with SEE in non-normal situations, if by SEE you mean the standard error of the estimate. Least squares with normal errors is also maximum likelihood. The consensus among professional statisticians has long been that when the errors are not additive, not normal, not independent, or do not have constant variance, the proper generalization is to use maximum likelihood, provided one can select an appropriate likelihood. In particular, "glm" assumes independent binomial observations. If that is NOT reasonable, you should not be using "glm".

Hope this helps.
Spencer Graves

Mihai Nica wrote:
> Greetings:
>
> I would like to kindly ask for help with obtaining the mexval statistic (marginal
> explanatory value - the percentage increase in SEE if the variable were left out
> of the regression model) for a logit (glm) model with several continuous
> independent variables. I believe I can do it manually for each variable, but I
> really hope there might be somebody who has a function already written. Writing
> one is still a little over my skills (I am working on it though).
>
> Thanks,
>
> mike
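A rough sketch of such a helper, built on the formula above. It assumes that the Wald z statistic reported by summary() for a binomial glm can stand in for Almon's t, that nobs() gives T, and that length(coef()) gives n; the name mexval() is made up here, and the output has not been checked against Almon's own software.

##################################################
## Sketch only: mexval = 100 * (sqrt(1 + t^2 / (T - n)) - 1)
## for every coefficient of a fitted lm/glm object.
mexval <- function(fit) {
  ct   <- coef(summary(fit))   # Estimate, Std. Error, z value, Pr(>|z|)
  tval <- ct[, 3]              # third column: the t (or z) statistic
  Tobs <- nobs(fit)            # T, the number of observations used
  npar <- length(coef(fit))    # n, the number of parameters estimated
  100 * (sqrt(1 + tval^2 / (Tobs - npar)) - 1)
}

## e.g. round(mexval(logreg), 2)
##################################################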
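For Spencer's suggestion, one way to do it is with drop1(), which refits the model with each term removed in turn and compares each submodel to the full model; the explicit update()/anova() pair shows the same comparison for a single variable, using Unemp96 from the model above purely as an illustration.

##################################################
## Single-term deletions, each compared to the full model by LR test
drop1(logreg, test = "Chisq")

## The same comparison written out for one variable
logreg.noUnemp <- update(logreg, . ~ . - Unemp96)
anova(logreg.noUnemp, logreg, test = "Chisq")
##################################################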