summary(Base)
would show if one of the columns of Base was read as character data instead of
the expected numeric. That could cause an explosion in the number of dummy
variables, hence a huge design matrix.
-Bill
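Bill's point is easy to check directly. A minimal sketch (made-up data frame, not George's actual Base.csv) of how a character column inflates the design matrix:

```r
# A numeric column accidentally read as character becomes a factor,
# contributing one dummy column per distinct value to the design matrix.
Base <- data.frame(y = c(0, 1, 0, 1),
                   x = c("1.2", "3.4", "5.6", "7.8"))  # numbers stored as text
ncol(model.matrix(~ x, data = Base))  # intercept + 3 dummies = 4 columns
Base$x <- as.numeric(Base$x)
ncol(model.matrix(~ x, data = Base))  # intercept + 1 numeric column = 2
```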
On Fri, Nov 11, 2022 at 11:30 PM George Brida
wrote:
> Dear R users,
>
> I have a
Hi George,
I did not get an attachment.
My first step would be to try simplifying things. Do all of these work?
fit_1=glm(Base[,2]~Base[,1],family=binomial(link="logit"))
fit_1=glm(Base[,2]~Base[,10],family=binomial(link="logit"))
fit_1=glm(Base[,2]~Base[,11],family=binomial(link="logit"))
That’s not a large data set. Something else besides memory limits is going on.
You should post output of summary(Base).
—
David
Sent from my iPhone
> On Nov 11, 2022, at 11:29 PM, George Brida wrote:
>
> Dear R users,
>
> I have a database called Base.csv (attached to this email) which
> On Apr 23, 2019, at 8:26 AM, Paul Bernal wrote:
>
> Dear friends, hope you are all doing great,
>
> I would like to know if there is any R package that allows fitting of
> logistic regression to panel data.
>
> I installed and loaded package plm, but from what I have read so far, plm
>
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear Achim Zeileis,
Many thanks for your quick and informative answer.
I'm sure that the vcovCL should work, however, I experience some problems.
> coeftest(model, vcov=vcovCL(model, cluster=mydata$ID))
First I got this error:
Error in vcovCL(model, cluster = mydata$ID) :
length of
On Fri, 1 Jul 2016, Faradj Koliev wrote:
Dear all,
I use the 'polr' command (library: MASS) to estimate an ordered logistic regression.
My model: summary( model<- polr(y ~ x1+x2+x3+x4+x1*x2 ,data=mydata, Hess =
TRUE))
But how do I get robust clustered standard errors?
I've tried
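For what it's worth, a sketch of the clustered-SE recipe under discussion (simulated data; assumes the sandwich and lmtest packages, which provide vcovCL and coeftest, are installed). Note that the cluster vector must have exactly one entry per row actually used in the fit; a mismatch (e.g. rows silently dropped for NA) is the usual cause of a "length of ..." error like the one quoted below.

```r
library(MASS)      # polr
library(sandwich)  # vcovCL
library(lmtest)    # coeftest
set.seed(1)
mydata <- data.frame(ID = rep(1:50, each = 4),
                     x1 = rnorm(200), x2 = rnorm(200))
# an ordered three-level response, as polr expects
mydata$y <- cut(mydata$x1 + rnorm(200), breaks = 3,
                labels = c("low", "mid", "high"), ordered_result = TRUE)
model <- polr(y ~ x1 + x2, data = mydata, Hess = TRUE)
# cluster-robust standard errors, clustered on ID
coeftest(model, vcov = vcovCL(model, cluster = mydata$ID))
```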
> On Mar 25, 2016, at 10:19 PM, Michael Artz wrote:
>
> Hi,
> I have now read an introductory text on regression and I think I do
> understand what the intercept is doing. However, my original question is
> still unanswered. I understand that the intercept term is the
Hi,
I have now read an introductory text on regression and I think I do
understand what the intercept is doing. However, my original question is
still unanswered. I understand that the intercept term is the constant
that each other term is measured against. I think baseline is a good word
for
> On Mar 15, 2016, at 1:27 PM, Michael Artz wrote:
>
> Hi,
> I am trying to use the summary from the glm function as a data source. I
> am using the call sink() then
> summary(logisticRegModel)$coefficients then sink().
Since it's a matrix you may need to locate a
The reference category is aliased with the constant term in the
default contr.treatment contrasts.
See ?contr.treatment , ?C, ?contrasts
If you don't know what this means, you should probably consult a local
statistical resource or ask about linear model contrasts at a
statistical help website
Do you have the sample sizes that the sample proportions were computed
from (e.g. 0.5 could be 1 out of 2 or 100 out of 200)?
If you do then you can specify the model with the proportions as the y
variable and the corresponding sample sizes as the weights argument to
glm.
If you only have
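A minimal sketch of that weights approach (invented numbers): the response is the observed proportion and the weights argument carries the number of trials behind each one.

```r
dat <- data.frame(p = c(0.2, 0.5, 0.8, 0.9),   # sample proportions
                  n = c(10, 20, 10, 20),       # trials behind each proportion
                  x = c(1, 2, 3, 4))
# binomial glm on proportions, weighted by the sample sizes
fit <- glm(p ~ x, family = binomial, weights = n, data = dat)
coef(summary(fit))
```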
But beta can only be used to model the open interval between zero and one
On Monday, January 25, 2016, Greg Snow <538...@gmail.com> wrote:
> Do you have the sample sizes that the sample proportions were computed
> from (e.g. 0.5 could be 1 out of 2 or 100 out of 200)?
>
> If you do then you can
Alternatively you might use log(p/(1-p)) as your dependent variable and use
OLS with robust standard errors. Much of your inference would be analogous
to a logistic regression
John C Frain
3 Aranleigh Park
Rathfarnham
Dublin 14
Ireland
www.tcd.ie/Economics/staff/frainj/home.html
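A sketch of that empirical-logit route (made-up proportions strictly inside (0,1)); note the parentheses — the transform is log(p/(1-p)), which base R provides as qlogis():

```r
p <- c(0.10, 0.30, 0.60, 0.85)   # proportions strictly between 0 and 1
x <- 1:4
fit <- lm(qlogis(p) ~ x)         # qlogis(p) is exactly log(p / (1 - p))
coef(fit)
```

Robust standard errors could then come from, e.g., sandwich::vcovHC on the lm fit.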
with glm(), you might try the quasibinomial family
On Saturday, January 23, 2016, pari hesabi wrote:
> Hello everybody,
>
> I am trying to fit a logistic regression model by using glm() function in
> R. My response variable is a sample proportion NOT binary
> On Jan 23, 2016, at 12:41 PM, pari hesabi wrote:
>
> Hello everybody,
>
> I am trying to fit a logistic regression model by using glm() function in R.
> My response variable is a sample proportion NOT binary numbers(0,1).
So multiply the sample proportions (and
You were not completely clear, but it appears that you have data where each subject has
results from 8 trials, as a pair of variables is changed. If that is correct, then you
want to have a variance that corrects for the repeated measures. In R the glm command
handles the simple case but not
http://stats.stackexchange.com/questions/62225/conditional-logistic-regression-vs-glmm-in-r
might be a good start
Ersatzistician and Chutzpahthologist
I can answer any question. I don't know is an answer. I don't know
yet is a better answer.
On Tue, Jul 1, 2014 at
You might want to read this vignette:
http://cran.r-project.org/web/packages/HSAUR/vignettes/Ch_logistic_regression_glm.pdf
On 14 June 2014 19:53, javad bayat j.bayat...@gmail.com wrote:
Dear all, I have to use Zelig package for doing logistic regression.
How can I use Zelig package for
it is simply because you can't do a regression with more predictors than
observations.
Cheers.
On 12.12.2013 09:00, Romeo Kienzler wrote:
Dear List,
I'm quite new to R and want to do logistic regression with a 200K
feature data set (around 150 training examples).
I'm aware that I
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is due to the way arrays are
indexed. You can't create an object with more than 2^31-1 = 2147483647
elements.
On 13-12-12 6:51 AM, Eik Vettorazzi wrote:
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is due to the way arrays are
indexed. You can't create an object with more
thanks Duncan for this clarification.
A double precision matrix with 2e11 elements (as the op wanted) would
need about 1.5 TB memory, that's more than a standard (windows 64bit)
computer can handle.
Cheers.
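The arithmetic behind both limits, for the record (and note that since R 3.0.0 atomic vectors may exceed 2^31 - 1 elements as "long vectors", though each single dimension of a matrix is still capped there):

```r
2e11 * 8 / 2^40        # bytes of 2e11 doubles, in TiB: about 1.46
.Machine$integer.max   # 2^31 - 1 = 2147483647, the per-dimension cap
```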
On 12.12.2013 13:00, Duncan Murdoch wrote:
On 13-12-12 6:51 AM, Eik Vettorazzi wrote:
On 12/12/2013 7:08 AM, Eik Vettorazzi wrote:
thanks Duncan for this clarification.
A double precision matrix with 2e11 elements (as the op wanted) would
need about 1.5 TB memory, that's more than a standard (windows 64bit)
computer can handle.
According to Microsoft's Memory Limits web page
ok, so 200K predictors and 10M observations would work?
On 12/12/2013 12:12 PM, Eik Vettorazzi wrote:
it is simply because you can't do a regression with more predictors than
observations.
Cheers.
On 12.12.2013 09:00, Romeo Kienzler wrote:
Dear List,
I'm quite new to R and want to do
Dear Eik,
thank you so much for your help!
best Regards,
Romeo
On 12/12/2013 12:51 PM, Eik Vettorazzi wrote:
I thought so (with all the limitations due to collinearity and so on),
but actually there is a limit for the maximum size of an array which is
independent of your memory size and is
On Apr 19, 2013, at 11:45 AM, Endy Black wrote:
Please read the attach file.
Please re-read : http://www.r-project.org/mail.html#instructions
Thank you
Endy
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
Endy,
See the package ResourceSelection for the HL test and the package caret for the
sensitivity and specificity measures.
Regards,
Jose Iparraguirre
Chief Economist
Age UK, London
From: r-help-boun...@r-project.org [r-help-boun...@r-project.org] On
What do you mean by at x equal zero?
On Sun, Oct 21, 2012 at 8:37 AM, Adel Powell powella...@gmail.com wrote:
I am new to R and I am trying to do a monte carlo simulation where I
generate data and interject error then test various cut points; however, my
output was garbage (at x equal zero, I
On Jul 31, 2012, at 10:35 AM, M Pomati marco.pom...@bristol.ac.uk wrote:
Does anyone know of any X^2 tests to compare the fit of logistic models
which factor out the sample size? I'm dealing with a very large sample and
I fear the significant X^2 test I get when adding a variable to the
Marc, thank you very much for your help.
I've posted it on
http://math.stackexchange.com/questions/177252/x2-tests-to-compare-the-fit-of-large-samples-logistic-models
and added details.
Many thanks
Marco
--On 31 July 2012 11:50 -0500 Marc Schwartz marc_schwa...@me.com wrote:
On Jul 31,
On Jul 31, 2012, at 10:25 AM, M Pomati wrote:
Marc, thank you very much for your help.
I've posted it on
http://math.stackexchange.com/questions/177252/x2-tests-to-compare-the-fit-of-large-samples-logistic-models
and added details.
I think you might have gotten a more statistically
On Apr 3, 2012, at 9:25 PM, Melrose2012 wrote:
I am trying to plot the logistic regression of a dataset (# of
living flies
vs days the flies are alive) and then fit a best-fit line to this
data.
Here is my code:
plot(fflies$living ~ fflies$day, xlab = "Number of Days", ylab = "Number of
Fruit
No, this is related to my own data.
Yes, 'fflies' the dataset - here I am working with two columns: # of fruit
flies alive vs # of days these flies are alive.
There is no error, it's just that the best-fit line does not plot nicely on
top of my data (see figure attached).
Get help. You do not understand glm's. What do you think the fitted
values are? -- Hint: they are *not* an estimate of the number of
living fruit flies.
-- Bert
On Tue, Apr 3, 2012 at 6:25 PM, Melrose2012
melissa.patric...@stonybrook.edu wrote:
I am trying to plot the logistic regression of a
Thank you for your reply.
I do understand that I am working in log space with the default link
function of the binomial being logit. My problem is, I thought that they
way I had written the code, when I did the lines command, it should plot
the best-fit line (found by 'glm') on top of my
Hi Melissa,
I would highly encourage you to read [1]. It would be extremely beneficial
for your understanding of the type of models you should use in a situation
like yours (count data).
Best regards,
Jorge.-
[1] cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf
On Wed, Apr 4,
Melrose2012 melissa.patrician at stonybrook.edu writes:
alive <- fflies$living
dead <- fflies$living[1] - alive
glm.fit <- glm(cbind(alive, dead) ~ fflies$day, family = binomial)
summary(glm.fit)
Your call to glm() is not doing what you think it's doing. What you want to do
is probably closer to
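Ben's full answer is cut off above; purely for illustration, a generic sketch (synthetic data, not the poster's fflies) of the two-column binomial response, which models the proportion alive at each day rather than the raw count:

```r
day   <- 0:10
total <- 100
alive <- round(total * plogis(3 - 0.6 * day))   # simulated survivor counts
# cbind(successes, failures) as the binomial response
fit <- glm(cbind(alive, total - alive) ~ day, family = binomial)
coef(fit)   # slope recovered near the -0.6 used to simulate
```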
On Wed, Mar 28, 2012 at 11:49 AM, carlb1 carl19...@hotmail.com wrote:
Hi
does anyone know how to do a logistic regression in R?
I do. And doubtless many other folks on R-help do too.
any help appreciated
You should probably start here:
I'm surprised not to see the simple answer: glm models return the MLE
estimate.
fit <- glm(y ~ x1 + x2 + ..., family = 'binomial')
There is no need for special packages, this is a standard part of R.
Terry Therneau
begin included message --
On 02/06/2012
Hi,
On Mon, Feb 6, 2012 at 10:08 AM, Ana rrast...@gmail.com wrote:
I am looking for R packages that can make a Logistic Regression model
with parameter estimation by Maximum Likelihood Estimation.
How are you looking? Did you perhaps try Google
On 02/06/2012 03:08 PM, Ana wrote:
I am looking for R packages that can make a Logistic Regression model
with parameter estimation by Maximum Likelihood Estimation.
Many thanks for helping out.
Danielle Duncan dlduncan2 at alaska.edu writes:
Greetings, I have a question that I'd like to get input on. I have a
classic toxicology study where I artificially fertilized and exposed
embryos to a chemical and counted defects. In addition, I kept track of
male-female pairs that I used to
On 12/01/2011 08:00 PM, Ben quant wrote:
The data I am using is the last file called l_yx.RData at this link (the
second file contains the plots from earlier):
http://scientia.crescat.net/static/ben/
The logistic regression model you are fitting assumes a linear
relationship between x and the
On Dec 1, 2011, at 18:54 , Ben quant wrote:
Sorry if this is a duplicate: This is a re-post because the pdf's mentioned
below did not go through.
Still not there. Sometimes it's because your mailer doesn't label them with the
appropriate mime-type (e.g. as application/octet-stream, which is
Thank you for the feedback, but my data looks fine to me. Please tell me if
I'm not understanding.
I followed your instructions and here is a sample of the first 500 values :
(info on 'd' is below that)
d <- as.data.frame(l_yx)
x = with(d, y[order(x)])
x[1:500] # I have 1's and 0's dispersed
On Dec 1, 2011, at 21:32 , Ben quant wrote:
Thank you for the feedback, but my data looks fine to me. Please tell me if
I'm not understanding.
Hum, then maybe it really is a case of a transition region being short relative
to the range of your data. Notice that the warning is just that: a
Here you go:
attach(as.data.frame(l_yx))
range(x[y==1])
[1] -22500.746.
range(x[y==0])
[1] -10076.5303653.0228
How do I know what is acceptable?
Also, here are the screen shots of my data that I tried to send earlier
(two screen shots, two pages):
Oops! Please ignore my last post. I mistakenly gave you different data I
was testing with. This is the correct data:
Here you go:
attach(as.data.frame(l_yx))
range(x[y==0])
[1] 0.0 14.66518
range(x[y==1])
[1] 0.0 13.49791
How do I know what is acceptable?
Also, here are the
I'm not proposing this as a permanent solution, just investigating the
warning. I zeroed out the three outliers and received no warning. Can
someone tell me why I am getting no warning now?
I did this 3 times to get rid of the 3 outliers:
mx_dims = arrayInd(which.max(l_yx), dim(l_yx))
On Dec 1, 2011, at 23:43 , Ben quant wrote:
I'm not proposing this as a permanent solution, just investigating the
warning. I zeroed out the three outliers and received no warning. Can someone
tell me why I am getting no warning now?
It's easier to explain why you got the warning before.
Thank you so much for your help.
The data I am using is the last file called l_yx.RData at this link (the
second file contains the plots from earlier):
http://scientia.crescat.net/static/ben/
Seems like the warning went away with pmin(x,1) but now the OR is over
15k. If I multiply my x's by
On 20.11.2011 12:46, tujchl wrote:
HI
I use glm in R to do logistic regression. and treat both response and
predictor as factor
In my first try:
***
Call:
glm(formula = as.factor(diagnostic) ~ as.factor(7161521) +
On 20.11.2011 16:58, 屠鞠传礼 wrote:
Thank you Ligges :)
one more question:
my response value diagnostic has 4 levels (0, 1, 2 and 3), so I use it like
this:
as.factor(diagnostic) ~ as.factor(7161521) +as.factor(2281517)
Is it all right?
Uhh. 4 levels? Then I doubt logistic regression is the
On 20.11.2011 17:27, 屠鞠传礼 wrote:
I worried about it too. Do you have an idea what tools I can use?
Depends on your aims - what you want to do with the fitted model.
A multinomial model, some kind of discriminant analysis (lda, qda), tree
based methods, svm and so son come to mind. You
I worried about it too. Do you have an idea what tools I can use?
On 2011-11-21 00:13:26, Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
On 20.11.2011 16:58, 屠鞠传礼 wrote:
Thank you Ligges :)
one more question:
my response value diagnostic has 4 levels (0, 1, 2 and 3), so I use it
Thank you Ligges :)
one more question:
my response value diagnostic has 4 levels (0, 1, 2 and 3), so I use it like
this:
as.factor(diagnostic) ~ as.factor(7161521) +as.factor(2281517)
Is it all right?
On 2011-11-20 23:45:23, Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
On 20.11.2011
Thank you very much :)
I searched on the net and found that the response value in a logistic model can
sometimes have more than 2 values; this kind of regression is called Ordinal
Logistic Regression, and we can even calculate it the same way, I mean with glm
in R.
here are some references:
1.
On Nov 20, 2011, at 7:26 PM, 屠鞠传礼 wrote:
Thank you very much :)
I searched on the net and found that the response value in a logistic model
can sometimes have more than 2 values; this kind of regression
is called Ordinal Logistic Regression, and we can even calculate it
the same way, I
arunkumar akpbond007 at gmail.com writes:
Hi
can we perform logistic regression using the lmer function? Please help me with
this
Yes.
library(lme4)
glmer([response] ~ [fixed effects (covariates)] + (1 | [grouping variable]),
data=[data frame], family=binomial)
Further questions
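Filling in the placeholders with simulated data (lme4 assumed installed), the template above might look like:

```r
library(lme4)
set.seed(42)
d <- data.frame(g = factor(rep(1:30, each = 10)), x = rnorm(300))
# binary response with a random intercept per group g
d$y <- rbinom(300, 1, plogis(-1 + 0.8 * d$x + rnorm(30)[d$g]))
m <- glmer(y ~ x + (1 | g), data = d, family = binomial)
fixef(m)   # fixed-effect estimates
```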
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction. I know I can
use regsubsets but I am not sure if it has any prediction functions to
go with it.
Thanks
On Oct 25, 6:54 pm, RAJ dheerajathr...@gmail.com wrote:
Hello,
I am pretty
Try the glm package
Steve Friedman Ph. D.
Ecologist / Spatial Statistical Analyst
Everglades and Dry Tortugas National Park
950 N Krome Ave (3rd Floor)
Homestead, Florida 33034
steve_fried...@nps.gov
Office (305) 224 - 4282
Fax (305) 224 - 4147
Hi,
On Wed, Oct 26, 2011 at 12:35 PM, RAJ dheerajathr...@gmail.com wrote:
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction. I know I can
use regsubsets but I am not sure if it has any prediction functions to
go with it.
Maybe
Check glmulti package for all subset selection.
Weidong Gu
On Wed, Oct 26, 2011 at 12:35 PM, RAJ dheerajathr...@gmail.com wrote:
Can I at least get help with what package to use for logistic
regression with all possible models and do prediction. I know I can
use regsubsets but I am not sure if
You mean the glm() _function_ in the stats package.
?glm
(just to avoid confusion)
-- Bert
On Wed, Oct 26, 2011 at 10:31 AM, steve_fried...@nps.gov wrote:
Try the glm package
Steve Friedman Ph. D.
Ecologist / Spatial Statistical Analyst
Everglades and Dry Tortugas National Park
950 N
The reason that you are not likely getting replies is that what you propose to
do is considered a poor way of building models.
You need to get out of the SAS Mindset.
I would suggest you obtain a copy of Frank Harrell's book:
http://www.amazon.com/exec/obidos/ASIN/0387952322/
and then
On Sep 21, 2011, at 10:25 AM, n wrote:
Hello all,
Suppose in a logistic regression model, the binary outcome is coded as
0 or 1.
In SAS, the default probability computed is for Y = 0 (smaller of the
two values) . However, in SPSS the probability computed is for Y = 1
(greater of the two
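On the R side, glm() with a 0/1 response models P(Y = 1), and with a factor response the second factor level is treated as the "success"; a minimal sketch (made-up data):

```r
y <- factor(c("no", "yes", "no", "yes", "yes"))
levels(y)    # "no" "yes": glm models P(y == "yes"), the second level
x <- c(1, 2, 3, 4, 5)
fit <- glm(y ~ x, family = binomial)
# relevel(y, ref = "yes") would flip which outcome is modelled
```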
On 13.06.2011 19:37, Alal wrote:
Hello
I'm trying to write a model for my data, but Im not sure it's statistically
correct.
I have one variable (2 levels: A or B). To explain it, I've got 2 factors
and 3 continuous variables. I need to do a logistic regression, but...
First: can I actually
Which statistical principles are you invoking on which to base such analyses?
Frank
Sergio Della Franca wrote:
Dear R-Helpers,
I want to perform a logistic regression on my dataset (y).
I used the following code:
logistic <- glm(formula = interest_variable ~ ., family = binomial(link =
First a word of caution: Forward, backward, and stepwise regression analyses
are not well received among statisticians. There are many reasons for this.
Some of the reasons include:
(1) The p value computed at each step is computed ignoring all the previous
steps. This can lead to incorrect
Date: Tue, 7 Jun 2011 01:38:32 -0700
From: farah.farid@student.aku.edu
To: r-help@r-project.org
Subject: [R] Logistic Regression
I am working on my thesis in which i have couple of independent variables
that are categorical in nature and the
The 10% change idea was never a good one and has not been backed up by
simulations. It is quite arbitrary and results in optimistic standard
errors of remaining variables. In fact a paper presented at the Joint
Statistical Meetings about 3 years ago (I'm sorry I've forgotten the names
of the
IMHO, you evidence considerable confusion and misunderstanding of
statistical methods. I would say that most of what you describe is
nonsense. Of course, maybe I'm just the one who's confused, but I
would strongly suggest you consult with a local statistician. This
list is unlikely to be able to
Why is a one unit change in x an interesting range for the purpose of
estimating an odds ratio?
The default in summary() is the inter-quartile-range odds ratio as clearly
stated in the rms documentation.
Frank
array chip wrote:
Hi, I am trying to run a simple logistic regression using lrm()
On 29.04.2011 18:29, Biedermann, Jürgen wrote:
Hi there,
I have the problem, that I'm not able to reproduce the SPSS residual
statistics (dfbeta and cook's distance) with a simple binary logistic
regression model obtained in R via the glm-function.
I tried the following:
fit <- glm(y ~ x1 +
On Apr 27, 2011, at 00:22 , Andre Guimaraes wrote:
Greetings from Rio de Janeiro, Brazil.
I am looking for advice / references on binary logistic regression
with weighted least squares (using lrm weights), on the following
context:
1) unbalanced sample (n0=1, n1=700);
2) sampling
On Wed, 27 Apr 2011, peter dalgaard wrote:
On Apr 27, 2011, at 00:22 , Andre Guimaraes wrote:
Greetings from Rio de Janeiro, Brazil.
I am looking for advice / references on binary logistic regression
with weighted least squares (using lrm weights), on the following
context:
1) unbalanced
Many thanks for your messages.
I will take a look at the survey package.
I was concerned with the issues raised by Cramer (1999) in Predictive
performance of the binary logit model in unbalanced samples.
In this particular case, misclassification costs are much higher for
the smaller group
Dear Ted,
sorry for being unclear. Let me try again.
I indeed have no knowledge about the value of the response variable for
any object.
Instead, I have a data frames of explanatory variables for
each object. For example,
x1 x2 x3
1 4.409974 2.348745 1.9845313
2 3.809249
In view of your further explanation, Robin, the best I can offer
is the following.
[1] Theoretical frame.
*IF* variables (X1,X2,X3) are distributed according to a
mixture of two multivariate normal distributions, i.e. as
two groups, each with a multivariate normal distribution,
*AND* the members
On 03-Jan-11 14:02:21, Robin Aly wrote:
Hi all,
is there any package which can do an EM algorithm fitting of
logistic regression coefficients given only the explanatory
variables? I tried to realize this using the Design package,
but I didn't find a way.
Thanks a lot Kind regards
Robin
Hi:
I think you created a problem for yourself in the way you generated your
data.
y <- rbinom(2000, 1, .7)
euro <- rnorm(2000, m = 300 * y + 50 * (1 - y), s = 20 * y + 12 * (1 - y))
# Create a 2000 x 2 matrix of probabilities
prmat <- cbind(0.8 * y + 0.2 * (1 - y), 0.2 * y + 0.8 * (1 - y))
# sample
array chip arrayprofile at yahoo.com writes:
[snip]
I can think of analyzing this data using glm() with the attached dataset:
test <- read.table('test.txt', sep = '\t')
fit <- glm(cbind(positive, total - positive) ~ treatment, test, family = binomial)
summary(fit)
anova(fit, test='Chisq')
First, is this
A possible caveat here.
Traditionally, logistic regression was performed on the
logit-transformed proportions, with the standard errors based on the
residuals for the resulting linear fit. This accommodates overdispersion
naturally, but without telling you that you have any.
glm with a binomial
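To make the caveat concrete, a sketch (simulated, deliberately overdispersed proportions) of how family = quasibinomial surfaces the overdispersion that a plain binomial fit silently ignores:

```r
set.seed(1)
n <- rep(50, 40)
p <- plogis(rnorm(40, 0, 1.5))   # extra group-to-group variation
y <- rbinom(40, n, p)
x <- rnorm(40)
qfit <- glm(cbind(y, n - y) ~ x, family = quasibinomial)
summary(qfit)$dispersion         # far above 1, flagging overdispersion
```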
On Dec 21, 2010, at 14:22 , S Ellison wrote:
A possible caveat here.
Traditionally, logistic regression was performed on the
logit-transformed proportions, with the standard errors based on the
residuals for the resulting linear fit. This accommodates overdispersion
naturally, but without
...and before you believe in overdispersion, make sure you have a
credible explanation for it. All too often, what you really have
is a model that doesn't fit your data properly.
Well put.
A possible fortune?
S Ellison
***
glm(log(percentage/(1-percentage))~treatment,data=test)
Thanks
John
From: Ben Bolker bbol...@gmail.com
To: r-h...@stat.math.ethz.ch
Sent: Tue, December 21, 2010 5:08:34 AM
Subject: Re: [R] logistic regression or not?
array chip arrayprofile at yahoo.com
From: Ben Bolker bbol...@gmail.com
To: r-h...@stat.math.ethz.ch
Sent: Tue, December 21, 2010 5:08:34 AM
Subject: Re: [R] logistic regression or not?
array chip arrayprofile at yahoo.com writes:
[snip]
I can think of analyzing
Ben, thanks again.
John
From: Ben Bolker bbol...@gmail.com
Cc: r-h...@stat.math.ethz.ch; S Ellison s.elli...@lgc.co.uk; peter dalgaard
pda...@gmail.com
Sent: Tue, December 21, 2010 9:26:29 AM
Subject: Re: [R] logistic regression or not?
On 10-12-21 12:20 PM
You would be better off posting to R-sig-mixed-models or R-sig-ecology
-- Bert
On Thu, Nov 18, 2010 at 9:32 AM, Billy.Requena billy.requ...@gmail.com wrote:
Hello,
I’d like to evaluate the temporal effect on the relationship between a
continuous variable (e.g. size) and the probability of
dear all,
thank you everyone for the profound answers and the needful references!
achim, thank you for the very kind offer!! sadly i'm not around vienna in
the near future, otherwise i'd be glad to come back to your invitation.
yours,
kay
-
Kay Cichini
On 08/22/2010 01:51 PM, Kay Cichini wrote:
achim, thank you for the very kind offer!! sadly i'm not around vienna in
the near future, otherwise i'd be glad to come back to your invitation.
Not that it's any of my business, but I don't think you need to go THAT
far to visit Achim these
hello gavin achim,
thanks for responding.
by logistic regression tree i meant a regression tree for a binary response
variable.
but as you say i could also use a classification tree - in my case with only
two outcomes.
i'm not aware if there are substantial differences to expect for the two
On Fri, 20 Aug 2010, Kay Cichini wrote:
hello gavin achim,
thanks for responding.
by logistic regression tree i meant a regression tree for a binary
response variable. but as you say i could also use a classification tree
- in my case with only two outcomes.
i'm not aware if there are
It would be good to tell us of the frequency of observations in each
category of Y, and the number of continuous X's. Recursive
partitioning will require perhaps 50,000 observations in the less
frequent Y category for its structure and predicted values to
validate, depending on X and the
hello,
my data collection is not yet finished, but i have already started
investigating possible analysis methods.
below i give a very close simulation of my future data-set; however, there
might be more nominal explanatory variables - there will be no continuous ones at
all (maybe some ordered
On Fri, 2010-08-20 at 14:46 -0700, Kay Cichini wrote:
hello,
my data collection is not yet finished, but i have already started
investigating possible analysis methods.
below i give a very close simulation of my future data-set; however, there
might be more nominal explanatory variables -
On Fri, 20 Aug 2010, Kay Cichini wrote:
hello,
my data collection is not yet finished, but i have already started
investigating possible analysis methods.
below i give a very close simulation of my future data-set; however, there
might be more nominal explanatory variables - there will be no
On Thu, 2010-08-19 at 13:42 -0700, Kay Cichini wrote:
hello everyone,
i sampled 100 stands at 20 restoration sites and recorded presence of 3 different
invasive plant species.
i came across logistic regression trees and wonder if this is suited for my
purpose - predicting presence of these