summary(Base)
would show if one of columns of Base was read as character data instead of
the expected numeric. That could cause an explosion in the number of dummy
variables, hence a huge design matrix.
-Bill
On Fri, Nov 11, 2022 at 11:30 PM George Brida
wrote:
> Dear R users,
>
> I have a
You might have plenty of
physical memory, but also lots of (open files, cookies, applications, other
stuff) that eat memory.
Regards,
Tim
-----Original Message-----
From: R-help On Behalf Of George Brida
Sent: Friday, November 11, 2022 4:17 PM
To: r-help@r-project.org
Subject: [R] Logistic regress
That’s not a large data set. Something else besides memory limits is going on.
You should post output of summary(Base).
—
David
Sent from my iPhone
> On Nov 11, 2022, at 11:29 PM, George Brida wrote:
>
> Dear R users,
>
> I have a database called Base.csv (attached to this email) which
Dear R users,
I have a database called Base.csv (attached to this email) which
contains 13 columns and 8257 rows, whose first 8 columns are dummy
variables which take 1 or 0. The problem is that when I wrote the following
instructions to do a logistic regression, R runs for hours and hours
> On Apr 23, 2019, at 8:26 AM, Paul Bernal wrote:
>
> Dear friends, hope you are all doing great,
>
> I would like to know if there is any R package that allows fitting of
> logistic regression to panel data.
>
> I installed and loaded package plm, but from what I have read so far, plm
>
Dear friends, hope you are all doing great,
I would like to know if there is any R package that allows fitting of
logistic regression to panel data.
I installed and loaded package plm, but from what I have read so far, plm
only allows fitting of linear regression to panel data, not logistic.
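Not from the thread, but one common route to a panel logit in R is a random-intercept model via lme4::glmer; a minimal sketch on simulated data (all variable names here are made up, not from the poster's data):

```r
# Sketch: random-intercept logistic regression for panel data.
# Assumes lme4 is installed; the data frame is simulated for illustration.
library(lme4)
set.seed(1)
d <- data.frame(id = rep(1:50, each = 4),   # panel (subject) identifier
                x  = rnorm(200))            # a time-varying covariate
d$y <- rbinom(200, 1, plogis(d$x + rnorm(50)[d$id]))
fit <- glmer(y ~ x + (1 | id), data = d, family = binomial)
summary(fit)
```

The pglm package is another option if fixed- or random-effects panel GLMs in the plm style are wanted.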
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear Achim Zeileis,
Many thanks for your quick and informative answer.
I'm sure that the vcovCL should work, however, I experience some problems.
> coeftest(model, vcov=vcovCL(model, cluster=mydata$ID))
First I got this error:
Error in
Dear Achim Zeileis,
Many thanks for your quick and informative answer.
I’m sure that the vcovCL should work, however, I experience some problems.
> coeftest(model, vcov=vcovCL(model, cluster=mydata$ID))
First I got this error:
Error in vcovCL(model, cluster = mydata$ID) :
length of
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear all,
I use the 'polr' command (library: MASS) to estimate an ordered logistic regression.
My model: summary(model <- polr(y ~ x1 + x2 + x3 + x4 + x1*x2, data = mydata,
Hess = TRUE))
But how do I get robust clustered standard errors?
I've tried
Dear all,
I use the "polr" command (library: MASS) to estimate an ordered logistic regression.
My model: summary(model <- polr(y ~ x1 + x2 + x3 + x4 + x1*x2, data = mydata,
Hess = TRUE))
But how do I get robust clustered standard errors?
I've tried coeftest(resA, vcov=vcovHC(resA,
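For completeness, a sketch of the approach these threads converge on: cluster-robust standard errors for a polr fit via the sandwich and lmtest packages (simulated data, not the poster's; assumes both packages are installed):

```r
# Sketch: cluster-robust SEs for an ordered logit (MASS::polr).
library(MASS); library(sandwich); library(lmtest)
set.seed(1)
d <- data.frame(id = rep(1:40, each = 5), x = rnorm(200))
d$y <- cut(d$x + rnorm(200), breaks = c(-Inf, -1, 0, 1, Inf),
           labels = c("a", "b", "c", "d"), ordered_result = TRUE)
fit <- polr(y ~ x, data = d, Hess = TRUE)
# The cluster vector must have one entry per fitted observation,
# or vcovCL stops with a length error.
coeftest(fit, vcov = vcovCL(fit, cluster = d$id))
```

The truncated "length of" error above is typically exactly that mismatch, e.g. when rows with NAs were dropped from the model frame but not from the cluster vector.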
Dear all, I request your help to solve a problem I've encountered in using
'mice' for multiple imputation.
I want to apply a logistic regression model.
I need to extract information on the fit of the model.
Is there any way to calculate a likelihood ratio or McFadden's pseudo-R2
from the results
> On Mar 25, 2016, at 10:19 PM, Michael Artz wrote:
>
> Hi,
> I have now read an introductory text on regression and I think I do
> understand what the intercept is doing. However, my original question is
> still unanswered. I understand that the intercept term is the
Hi,
I have now read an introductory text on regression and I think I do
understand what the intercept is doing. However, my original question is
still unanswered. I understand that the intercept term is the constant
that each other term is measured against. I think baseline is a good word
for
> On Mar 15, 2016, at 1:27 PM, Michael Artz wrote:
>
> Hi,
> I am trying to use the summary from the glm function as a data source. I
> am using the call sink() then
> summary(logisticRegModel)$coefficients then sink().
Since it's a matrix you may need to locate a
The reference category is aliased with the constant term in the
default contr.treatment contrasts.
See ?contr.treatment , ?C, ?contrasts
If you don't know what this means, you should probably consult a local
statistical resource or ask about linear model contrasts at a
statistical help website
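A quick way to see the aliasing being described:

```r
# The first level is the reference: it gets no column of its own, so its
# effect is absorbed into the intercept (constant) term.
contr.treatment(3)
#   2 3
# 1 0 0
# 2 1 0
# 3 0 1
```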
Hi,
I am trying to use the summary from the glm function as a data source. I
am using the call sink() then
summary(logisticRegModel)$coefficients then sink(). The independent
variables are categorical and thus there is always a baseline value for
every category that is omitted from the glm
Do you have the sample sizes that the sample proportions were computed
from (e.g. 0.5 could be 1 out of 2 or 100 out of 200)?
If you do then you can specify the model with the proportions as the y
variable and the corresponding sample sizes as the weights argument to
glm.
If you only have
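Greg's suggestion, sketched with made-up numbers (proportions as the response, sample sizes as prior weights):

```r
# Each p[i] was computed from n[i] trials; glm weights them accordingly.
p <- c(0.50, 0.25, 0.80, 0.60)   # observed proportions (made up)
n <- c(2, 100, 40, 25)           # sample sizes behind them
x <- c(0.1, 0.5, 1.2, 0.9)       # a predictor (made up)
fit <- glm(p ~ x, family = binomial, weights = n)
summary(fit)
```

Note how the n = 2 observation carries far less weight than the n = 100 one; this is the 0.5-could-be-1-of-2-or-100-of-200 point above.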
But beta can only be used to model the open interval between zero and one
On Monday, January 25, 2016, Greg Snow <538...@gmail.com> wrote:
> Do you have the sample sizes that the sample proportions were computed
> from (e.g. 0.5 could be 1 out of 2 or 100 out of 200)?
>
> If you do then you can
Alternatively you might use log(p/(1-p)) as your dependent variable and use
OLS with robust standard errors. Much of your inference would be analogous
to a logistic regression
John C Frain
3 Aranleigh Park
Rathfarnham
Dublin 14
Ireland
www.tcd.ie/Economics/staff/frainj/home.html
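John's transform, sketched on fake grouped data; the 0.5 adjustments (an "empirical logit") guard against proportions of exactly 0 or 1 and are my addition, not his:

```r
# Log-odds of grouped proportions as an OLS response.
p <- c(0.10, 0.35, 0.62, 0.90)   # group proportions (made up)
n <- rep(20, 4)                  # group sizes (made up)
x <- 1:4
elogit <- log((p * n + 0.5) / ((1 - p) * n + 0.5))
fit <- lm(elogit ~ x)
coef(fit)
```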
with glm(), you might try the quasi binomial family
On Saturday, January 23, 2016, pari hesabi wrote:
> Hello everybody,
>
> I am trying to fit a logistic regression model by using glm() function in
> R. My response variable is a sample proportion NOT binary
> On Jan 23, 2016, at 12:41 PM, pari hesabi wrote:
>
> Hello everybody,
>
> I am trying to fit a logistic regression model by using glm() function in R.
My response variable is a sample proportion, NOT binary numbers (0, 1).
So multiply the sample proportions (and
You were not completely clear, but it appears that you have data where each subject has
results from 8 trials, as a pair of variables is changed. If that is correct, then you
want to have a variance that corrects for the repeated measures. In R the glm command
handles the simple case but not
Hello,
I mostly use Stata 13 for my regression analysis. I want to conduct a logistic
regression on a proportion/number of successes. Because I receive errors in Stata
that I did not expect nor understand (if there are Stata experts who want to know
more about the problems I face and can potentially
Hi everyone!
I conducted a study for which I ran logistic regressions (and it
works), but now I'd like to have the results per condition, and I failed to
discover how to get them. Let me explain:
I conducted a study in which participants can perform one behavior
(coded 1 if
Hi,
It seems that I'm quite lost in this wide and powerful R universe, so I
allow myself to ask for your help with the issues I'm struggling with.
Thank you,
I would like to know if the answer's accuracy (correct = 1; incorrect = 0)
varies depending on 2 categorical variables which are the
http://stats.stackexchange.com/questions/62225/conditional-logistic-regression-vs-glmm-in-r
might be a good start
Ersatzistician and Chutzpahthologist
I can answer any question. I don't know is an answer. I don't know
yet is a better answer.
On Tue, Jul 1, 2014 at
Dear all, I have to use Zelig package for doing logistic regression.
How can I use Zelig package for logistic regression?
I did this code by glm function:
glm1 = glm(kod~Curv+Elev+Out.c+Slope+Aspect,data=data,
family=binomial)
summary(glm1)
But the results were not appropriate for my
You might want to read this vignette:
http://cran.r-project.org/web/packages/HSAUR/vignettes/Ch_logistic_regression_glm.pdf
On 14 June 2014 19:53, javad bayat j.bayat...@gmail.com wrote:
Dear all, I have to use Zelig package for doing logistic regression.
How can I use Zelig package for
Hi guys, I am having trouble computing the specificity and sensitivity
for a logistic regression model. I really need your help, would you
please help me out? :) Thank you!!
This is the model I constructed:
model=glm(Status ~ Gender.compl+ X2.4.times.per.month+
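The model above is cut off, so here is a generic sketch of sensitivity and specificity at a 0.5 cutoff on simulated data (none of these names come from the poster):

```r
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(x))
fit  <- glm(y ~ x, family = binomial)
pred <- factor(as.integer(fitted(fit) > 0.5), levels = c(0, 1))
tab  <- table(pred, y)
sens <- tab["1", "1"] / sum(tab[, "1"])   # TP / (TP + FN)
spec <- tab["0", "0"] / sum(tab[, "0"])   # TN / (TN + FP)
c(sensitivity = sens, specificity = spec)
```

The 0.5 cutoff is itself a choice; packages like pROC sweep all cutoffs at once.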
Hello there,
Is it possible to do a logistic regression with R while 1) using the
full-information maximum likelihood (FIML) for treating missing values, and
2) having a complex sample design (Cluster, Weight, Stratum).
I looked at Lavaan, but, if I'm correct, it doesn't do FIML when using
Dear List,
I'm quite new to R and want to do logistic regression with a 200K
feature data set (around 150 training examples).
I'm aware that I should use Naive Bayes but I have a more general
question about the capability of R handling very high dimensional data.
Please consider the
it is simply because you can't do a regression with more predictors than
observations.
Cheers.
On 12.12.2013 09:00, Romeo Kienzler wrote:
Dear List,
I'm quite new to R and want to do logistic regression with a 200K
feature data set (around 150 training examples).
I'm aware that I
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is due to the way arrays are
indexed. You can't create an object with more than 2^31-1 = 2147483647
elements.
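The limit quoted is R's maximum integer index:

```r
# 2^31 - 1 elements: the classic limit, equal to .Machine$integer.max.
# (Since R 3.0.0 "long vectors" relax this for atomic vectors, but each
# matrix dimension is still capped at 2^31 - 1.)
.Machine$integer.max
# 2147483647
```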
On 13-12-12 6:51 AM, Eik Vettorazzi wrote:
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is due to the way arrays are
indexed. You can't create an object with more
thanks Duncan for this clarification.
A double precision matrix with 2e11 elements (as the op wanted) would
need about 1.5 TB memory, that's more than a standard (windows 64bit)
computer can handle.
Cheers.
On 12.12.2013 13:00, Duncan Murdoch wrote:
On 13-12-12 6:51 AM, Eik Vettorazzi wrote:
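The 1.5 TB figure checks out at 8 bytes per double:

```r
# 2e11 doubles * 8 bytes each, expressed in binary terabytes (TiB)
2e11 * 8 / 2^40   # about 1.46
```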
On 12/12/2013 7:08 AM, Eik Vettorazzi wrote:
thanks Duncan for this clarification.
A double precision matrix with 2e11 elements (as the op wanted) would
need about 1.5 TB memory, that's more than a standard (windows 64bit)
computer can handle.
According to Microsoft's Memory Limits web page
OK, so 200K predictors and 10M observations would work?
On 12/12/2013 12:12 PM, Eik Vettorazzi wrote:
it is simply because you can't do a regression with more predictors than
observations.
Cheers.
On 12.12.2013 09:00, Romeo Kienzler wrote:
Dear List,
I'm quite new to R and want to do
Dear Eik,
thank you so much for your help!
best Regards,
Romeo
On 12/12/2013 12:51 PM, Eik Vettorazzi wrote:
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is
Hello all.
I have this code:
myLOOCV <- function(myformula, data) {
  Y <- all.vars(myformula)[1]
  Scores <- numeric(length(data[, 1]))
  for (i in 1:length(data[, 1])) {
    train <- data[-i, ]
    test <- data[i, ]
    myModel <- lrm(myformula, train)
    Scores[i] <-
Dear colleagues I have a couple of problems related with binary logistic
regression.
The first problem is how to compute Pearson and likelihood chi-squared
tests for grouped data.
For the same form of data set how to compute sensitivity, specificity and
related measures.
When I speak about
Please read the attached file.
Thank you
Endy
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
On Apr 19, 2013, at 11:45 AM, Endy BlackEndy wrote:
Please read the attached file.
Please re-read : http://www.r-project.org/mail.html#instructions
Thank you
Endy
Dear colleagues I have a couple of problems related with binary logistic
regression.
The first problem is how to compute Pearson and likelihood chi-squared
tests for grouped data.
For the same form of data set how to compute sensitivity, specificity and
related measures.
When I speak about
I have a data set to be analyzed using binary logistic regression. The
data set is in grouped form. My question is: how can I compute the
Hosmer-Lemeshow test and measures like sensitivity and specificity? Any
suggestion will be greatly appreciated.
Thank you
Endy
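One possibility for the Hosmer-Lemeshow part (not mentioned in the thread): the hoslem.test function in the ResourceSelection package, sketched on simulated data and assuming that package is installed:

```r
library(ResourceSelection)
set.seed(1)
x <- rnorm(300)
y <- rbinom(300, 1, plogis(-0.5 + x))
fit <- glm(y ~ x, family = binomial)
hoslem.test(fit$y, fitted(fit), g = 10)   # g = number of bins
```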
] On Behalf Of
Endy BlackEndy [pert...@gmail.com]
Sent: 14 April 2013 19:05
To: R-Help
Subject: [R] Logistic regression
I have a data set to be analyzed using binary logistic regression. The
data set is in grouped form. My question is: how can I compute the
Hosmer-Lemeshow test and measures like
Dear R friends.
I'm trying to fit a logistic regression using glm(family = 'binomial').
Here is the model:
model <- glm(f_ocur ~ altitud + UTM_X + UTM_Y + j_sin + j_cos + temp_res + pp,
             offset = log(1/off), data = mydata, family = 'binomial')
mydata has 76820 observations.
The response variable (f_ocur) is 0-1.
Dear R friends.
I'm trying to fit a logistic regression using glm(family = 'binomial').
Here is the model:
model <- glm(f_ocur ~ altitud + UTM_X + UTM_Y + j_sin + j_cos + temp_res + pp,
             offset = log(1/off), data = mydata, family = 'binomial')
mydata has 76820 observations.
The response variable (f_ocur) is 0-1.
I am new to R and I am trying to do a Monte Carlo simulation where I
generate data and inject error, then test various cut points; however, my
output was garbage (at x equal zero, I did not get .50).
I am basically testing the performance of classifiers.
Here is the code:
n <- 1000  # Sample size
What do you mean by at x equal zero?
On Sun, Oct 21, 2012 at 8:37 AM, Adel Powell powella...@gmail.com wrote:
I am new to R and I am trying to do a Monte Carlo simulation where I
generate data and inject error, then test various cut points; however, my
output was garbage (at x equal zero, I
Does anyone know of any X^2 tests to compare the fit of logistic models
which factor out the sample size? I'm dealing with a very large sample and
I fear the significant X^2 test I get when adding a variable to the model
is simply a result of the sample size (200,000 cases).
I'd rather use
On Jul 31, 2012, at 10:35 AM, M Pomati marco.pom...@bristol.ac.uk wrote:
Does anyone know of any X^2 tests to compare the fit of logistic models
which factor out the sample size? I'm dealing with a very large sample and
I fear the significant X^2 test I get when adding a variable to the
Marc, thank you very much for your help.
I've posted it on
http://math.stackexchange.com/questions/177252/x2-tests-to-compare-the-fit-of-large-samples-logistic-models
and added details.
Many thanks
Marco
--On 31 July 2012 11:50 -0500 Marc Schwartz marc_schwa...@me.com wrote:
On Jul 31,
On Jul 31, 2012, at 10:25 AM, M Pomati wrote:
Marc, thank you very much for your help.
I've posted it on
http://math.stackexchange.com/questions/177252/x2-tests-to-compare-the-fit-of-large-samples-logistic-models
and added details.
I think you might have gotten a more statistically
Hello there!
I got some data with x and y values. There are some y == 0. This is a
problem for the self-starting regression model SSlogis.
The regression works if I use a non-self-starting model. The formula is the
same. But this needs very detailed information for the starting list. I don't
have
On Apr 3, 2012, at 9:25 PM, Melrose2012 wrote:
I am trying to plot the logistic regression of a dataset (# of
living flies
vs days the flies are alive) and then fit a best-fit line to this
data.
Here is my code:
plot(fflies$living ~ fflies$day, xlab = "Number of Days", ylab = "Number of
Fruit
I am trying to plot the logistic regression of a dataset (# of living flies
vs days the flies are alive) and then fit a best-fit line to this data.
Here is my code:
plot(fflies$living ~ fflies$day, xlab = "Number of Days", ylab = "Number of Fruit
Flies", main = "Number of Living Fruit Flies vs Day", pch = 16)
alive
On Apr 3, 2012, at 9:25 PM, Melrose2012 wrote:
I am trying to plot the logistic regression of a dataset (# of
living flies
vs days the flies are alive) and then fit a best-fit line to this
data.
Here is my code:
plot(fflies$living ~ fflies$day, xlab = "Number of Days", ylab = "Number of
Fruit
No, this is related to my own data.
Yes, 'fflies' the dataset - here I am working with two columns: # of fruit
flies alive vs # of days these flies are alive.
There is no error, it's just that the best-fit line does not plot nicely on
top of my data (see figure attached).
Get help. You do not understand glm's. What do you think the fitted
values are? -- Hint: they are *not* an estimate of the number of
living fruit flies.
-- Bert
On Tue, Apr 3, 2012 at 6:25 PM, Melrose2012
melissa.patric...@stonybrook.edu wrote:
I am trying to plot the logistic regression of a
Thank you for your reply.
I do understand that I am working in log space with the default link
function of the binomial being logit. My problem is, I thought that the
way I had written the code, when I did the lines command, it should plot
the best-fit line (found by 'glm') on top of my
Hi Melissa,
I would highly encourage you to read [1]. It would be extremely beneficial
for your understanding of the type of models you should use in a situation
like yours (count data).
Best regards,
Jorge.-
[1] cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf
On Wed, Apr 4,
Melrose2012 melissa.patrician at stonybrook.edu writes:
alive <- fflies$living
dead <- fflies$living[1] - alive
glm.fit <- glm(cbind(alive, dead) ~ fflies$day, family = binomial)
summary(glm.fit)
Your call to glm() is not doing what you think it's doing. What you want to do
is probably closer to
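The reply is truncated, so here is a generic version of the two-column binomial call it appears to be heading toward, on simulated counts rather than the fruit-fly data:

```r
# cbind(successes, failures) response: each row is a binomial count, so
# glm models the *proportion* surviving, not the raw number alive.
set.seed(1)
day   <- 1:30
alive <- rbinom(30, size = 100, prob = plogis(3 - 0.2 * day))
dead  <- 100 - alive
fit <- glm(cbind(alive, dead) ~ day, family = binomial)
coef(fit)[["day"]]   # negative slope: survival declines over time
```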
Hi
does anyone know how to do a logistic regression in R?
any help appreciated
Carl
--
View this message in context:
http://r.789695.n4.nabble.com/Logistic-regression-tp4512658p4512658.html
Sent from the R help mailing list archive at Nabble.com.
On Wed, Mar 28, 2012 at 11:49 AM, carlb1 carl19...@hotmail.com wrote:
Hi
does anyone know how to do a logistic regression in R?
I do. And doubtless many other folks on R-help do too.
any help appreciated
You should probably start here:
Hi!
I constructed a model where most of the variables are linearly related to
each other. However, the dependent variable is dichotomous, which means I
should not try to fit a linear relationship between the other variables,
which are continuous, and this variable. I'm wondering how I would have
Hello.
I am beginning to analyze my work and have realized that a simple chi-square
analysis will not suffice for my research, one notable reason being that the data
are not discrete. Since my data fit the assumptions of a logistic regression,
I am moving forward with this analysis. With
Hi,
does anyone know of an implementation of generalized linear models with random
effects, where the random effects are non-gaussian?
Actually, what I need is to do a logistic regression (or binomial regression)
where the linear predictor in addition to fixed effects and gaussian random
I'm surprised not to see the simple answer: glm models return the MLE
estimate.
fit <- glm(y ~ x1 + x2 + ..., family = 'binomial')
There is no need for special packages, this is a standard part of R.
Terry Therneau
---- begin included message ----
On 02/06/2012
I am looking for R packages that can make a Logistic Regression model
with parameter estimation by Maximum Likelihood Estimation.
Many thanks for helping out.
Hi,
On Mon, Feb 6, 2012 at 10:08 AM, Ana rrast...@gmail.com wrote:
I am looking for R packages that can make a Logistic Regression model
with parameter estimation by Maximum Likelihood Estimation.
How are you looking? Did you perhaps try Google
On 02/06/2012 03:08 PM, Ana wrote:
I am looking for R packages that can make a Logistic Regression model
with parameter estimation by Maximum Likelihood Estimation.
Many thanks for helping out.
Danielle Duncan dlduncan2 at alaska.edu writes:
Greetings, I have a question that I'd like to get input on. I have a
classic toxicology study where I artificially fertilized and exposed
embryos to a chemical and counted defects. In addition, I kept track of
male-female pairs that I used to
Greetings, I have a question that I'd like to get input on. I have a
classic toxicology study where I artificially fertilized and exposed
embryos to a chemical and counted defects. In addition, I kept track of
male-female pairs that I used to artificially fertilize and generate
embryos with. I
On 12/01/2011 08:00 PM, Ben quant wrote:
The data I am using is the last file called l_yx.RData at this link (the
second file contains the plots from earlier):
http://scientia.crescat.net/static/ben/
The logistic regression model you are fitting assumes a linear
relationship between x and the
Sorry if this is a duplicate: This is a re-post because the pdf's mentioned
below did not go through.
Hello,
I'm new'ish to R, and very new to glm. I've read a lot about my issue:
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
...including:
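A minimal way to reproduce that warning, for anyone following along: perfectly separated data push the fitted probabilities to 0 and 1.

```r
x <- 1:6
y <- c(0, 0, 0, 1, 1, 1)   # x separates y perfectly at x = 3.5
fit <- glm(y ~ x, family = binomial)
# Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
range(fitted(fit))
```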
On Dec 1, 2011, at 18:54 , Ben quant wrote:
Sorry if this is a duplicate: This is a re-post because the pdf's mentioned
below did not go through.
Still not there. Sometimes it's because your mailer doesn't label them with the
appropriate mime-type (e.g. as application/octet-stream, which is
Thank you for the feedback, but my data looks fine to me. Please tell me if
I'm not understanding.
I followed your instructions and here is a sample of the first 500 values :
(info on 'd' is below that)
d <- as.data.frame(l_yx)
x = with(d, y[order(x)])
x[1:500] # I have 1's and 0's dispersed
On Dec 1, 2011, at 21:32 , Ben quant wrote:
Thank you for the feedback, but my data looks fine to me. Please tell me if
I'm not understanding.
Hum, then maybe it really is a case of a transition region being short relative
to the range of your data. Notice that the warning is just that: a
Here you go:
attach(as.data.frame(l_yx))
range(x[y==1])
[1] -22500.746.
range(x[y==0])
[1] -10076.5303653.0228
How do I know what is acceptable?
Also, here are the screen shots of my data that I tried to send earlier
(two screen shots, two pages):
Oops! Please ignore my last post. I mistakenly gave you different data I
was testing with. This is the correct data:
Here you go:
attach(as.data.frame(l_yx))
range(x[y==0])
[1] 0.0 14.66518
range(x[y==1])
[1] 0.0 13.49791
How do I know what is acceptable?
Also, here are the
I'm not proposing this as a permanent solution, just investigating the
warning. I zeroed out the three outliers and received no warning. Can
someone tell me why I am getting no warning now?
I did this 3 times to get rid of the 3 outliers:
mx_dims = arrayInd(which.max(l_yx), dim(l_yx))
On Dec 1, 2011, at 23:43 , Ben quant wrote:
I'm not proposing this as a permanent solution, just investigating the
warning. I zeroed out the three outliers and received no warning. Can someone
tell me why I am getting no warning now?
It's easier to explain why you got the warning before.
Thank you so much for your help.
The data I am using is the last file called l_yx.RData at this link (the
second file contains the plots from earlier):
http://scientia.crescat.net/static/ben/
Seems like the warning went away with pmin(x, 1) but now the OR is over
15k. If I multiply my x's by
Hi
I use glm in R to do logistic regression, and treat both response and
predictor as factors.
In my first try:
***
Call:
glm(formula = as.factor(diagnostic) ~ as.factor(7161521) +
as.factor(2281517), family =
On 20.11.2011 12:46, tujchl wrote:
Hi
I use glm in R to do logistic regression, and treat both response and
predictor as factors.
In my first try:
***
Call:
glm(formula = as.factor(diagnostic) ~ as.factor(7161521) +
On 20.11.2011 16:58, 屠鞠传礼 wrote:
Thank you Ligges :)
one more question:
my response variable diagnostic has 4 levels (0, 1, 2 and 3), so I use it like
this:
as.factor(diagnostic) ~ as.factor(7161521) +as.factor(2281517)
Is it all right?
Uhh. 4 levels? Then I doubt logistic regression is the
On 20.11.2011 17:27, 屠鞠传礼 wrote:
I worried about it too. Do you have an idea of what tools I can use?
Depends on your aims - what you want to do with the fitted model.
A multinomial model, some kind of discriminant analysis (lda, qda), tree-based
methods, svm and so on come to mind. You
I worried about it too. Do you have an idea of what tools I can use?
On 2011-11-21 00:13:26, Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
On 20.11.2011 16:58, 屠鞠传礼 wrote:
Thank you Ligges :)
one more question:
my response variable diagnostic has 4 levels (0, 1, 2 and 3), so I use it
Thank you Ligges :)
one more question:
my response variable diagnostic has 4 levels (0, 1, 2 and 3), so I use it like
this:
as.factor(diagnostic) ~ as.factor(7161521) +as.factor(2281517)
Is it all right?
On 2011-11-20 23:45:23, Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
On 20.11.2011
Thank you very much :)
I searched on the net and found that the response value in a logistic model can
sometimes have more than 2 values; this kind of regression is called Ordinal
Logistic Regression, and we can even calculate it the same way, I mean with glm in
R.
here are some references:
1.
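If the 4-level response really is ordered, the usual tool is MASS::polr rather than glm; a sketch on fake data (not the poster's SNP variables):

```r
library(MASS)
set.seed(1)
x <- rnorm(200)
y <- cut(x + rnorm(200), breaks = c(-Inf, -1, 0, 1, Inf),
         labels = 0:3, ordered_result = TRUE)
fit <- polr(y ~ x, Hess = TRUE)
summary(fit)   # 3 intercepts (cutpoints) for 4 ordered levels
```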
On Nov 20, 2011, at 7:26 PM, 屠鞠传礼 wrote:
Thank you very much :)
I searched on the net and found that the response value in a logistic model
can sometimes have more than 2 values; this kind of regression
is called Ordinal Logistic Regression, and we can even calculate it
the same way, I
Hi
Can we perform logistic regression using the lmer function? Please help me with
this.
arunkumar akpbond007 at gmail.com writes:
Hi
Can we perform logistic regression using the lmer function? Please help me with
this.
Yes.
library(lme4)
glmer([response] ~ [fixed effects (covariates)] + (1 | [grouping variable]),
      data = [data frame], family = binomial)
Further questions
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction? I know I can
use regsubsets but I am not sure if it has any prediction functions to
go with it.
Thanks
On Oct 25, 6:54 pm, RAJ dheerajathr...@gmail.com wrote:
Hello,
I am pretty
Try the glm package
Steve Friedman Ph. D.
Ecologist / Spatial Statistical Analyst
Everglades and Dry Tortugas National Park
950 N Krome Ave (3rd Floor)
Homestead, Florida 33034
steve_fried...@nps.gov
Office (305) 224 - 4282
Fax (305) 224 - 4147
Hi,
On Wed, Oct 26, 2011 at 12:35 PM, RAJ dheerajathr...@gmail.com wrote:
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction? I know I can
use regsubsets but I am not sure if it has any prediction functions to
go with it.
Maybe
Check the glmulti package for all-subset selection.
Weidong Gu
On Wed, Oct 26, 2011 at 12:35 PM, RAJ dheerajathr...@gmail.com wrote:
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction? I know I can
use regsubsets but I am not sure if
You mean the glm() _function_ in the stats package.
?glm
(just to avoid confusion)
-- Bert
On Wed, Oct 26, 2011 at 10:31 AM, steve_fried...@nps.gov wrote:
Try the glm package
The reason that you are not likely getting replies is that what you propose to
do is considered a poor way of building models.
You need to get out of the SAS Mindset.
I would suggest you obtain a copy of Frank Harrell's book:
http://www.amazon.com/exec/obidos/ASIN/0387952322/
and then
Hello,
I am pretty new to R, I have always used SAS and SAS products. My
target variable is binary ('Y' and 'N') and i have about 14 predictor
variables. My goal is to compare different variable selection methods
like Forward, Backward, All possible subsets. I am using
misclassification rate to
1 - 100 of 280 matches