On Thu, 16 Aug 2007, Juergen Kuehn wrote:
Dear everybody,
I'm a new user of R 2.4.1 and I'm searching for information on improving
the output of regression tree graphs.
So far I am able to show the number of values (n) and the mean of all
values in each terminal node with the command
> text(tree, use.n=T,
na.exclude should give the same results as na.omit, which is the
default na.action. Is the number of complete cases the same in these
two regressions?
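A minimal sketch of that point, with hypothetical data: both settings drop the same incomplete rows, so the estimates agree; na.exclude only pads residuals and fitted values back to the original length.

sam <- data.frame(y  = c(1.2, 2.1, NA, 3.9, 5.0, 6.2, 7.1, 8.3),
                  X1 = c(1, 2, 3, NA, 5, 6, 7, 8),
                  X2 = c(2, 1, 4, 3, 6, 5, 8, 7))
coef(lm(y ~ X1 + X2, sam, na.action = na.omit))      # same coefficients...
coef(lm(y ~ X1 + X2, sam, na.action = na.exclude))   # ...as here
sum(complete.cases(sam))                             # cases actually used: 6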
On 26/07/07, Vaibhav Gathibandhe <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Can you please tell me what the problem is here.
>
> My regression eq is
Hi all,
Can you please tell me what the problem is here?
My regression equation is y = B0 + B1*X1 + B2*X2 + e
and I am interested in coefficient B1.
I am doing the regression in two cases:
1) reg <- lm(y ~ X1 + X2, sam), where sam is the data
2) reg <- lm(y ~ X1 + X2, sam, na.action = na.exclude). I have missing values in my data.
Hi
I have two questions:
1)
I would like to know if there is a package in R that constructs a
regression tree using the 'goodness-of-split' algorithm for survival
analysis proposed by LeBlanc and Crowley (1993) (rather than the usual
CART algorithm that uses within-node difference and impurity functions).
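Not the 1993 goodness-of-split method itself, but possibly a useful baseline for comparison: rpart's built-in survival splitting, which is based on the related full-likelihood approach of LeBlanc and Crowley (1992). A minimal sketch:

library(rpart)
library(survival)
fit <- rpart(Surv(time, status) ~ age + ph.ecog, data = lung, method = "exp")
printcp(fit)   # cross-validated error by tree size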
Ron,
you're right. It's not legitimate at all. I suggest you take a look at
the HUGE bibliography on cointegration as a starting point.
Rogerio
> Dear all R user,
>
> Please forgive me if my question is too simple.
>
> My question is related to Statistics rather directly to R. Suppose I ha
Dear all R user,
Please forgive me if my question is too simple.
My question is related to statistics rather than directly to R. Suppose I have two
time series of spot prices of two commodities, X and Y, for two years. Now I want
to see what percentage of the spot price of X is explained by Y. Yes I
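A hedged sketch of the cointegration point in Rogerio's reply, with simulated series (adf.test() is from the tseries package): the R squared from regressing one nonstationary price series on another is only meaningful if the residuals are stationary.

library(tseries)
set.seed(1)
x <- cumsum(rnorm(200))       # a random walk, like many price series
y <- 0.8 * x + rnorm(200)     # cointegrated with x by construction
fit <- lm(y ~ x)
summary(fit)$r.squared        # the "percentage explained"
adf.test(residuals(fit))      # small p-value: residuals look stationary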
On Fri, 2 Feb 2007, Henric Nilsson (Public) wrote:
> Torsten, consider the following:
>
>> ### ordinal regression
>> mammoct <- ctree(ME ~ ., data = mammoexp)
> Warning message:
> no admissible split found
>> ### estimated class probabilities
>> treeresponse(mammoct, newdata = mammoexp[1:5, ])
>
On Fri, 2 Feb 2007, Henric Nilsson (Public) wrote:
> On Fri, 2007-02-02, at 06:03, Stacey Buckelew wrote:
>> Hi,
>>
>> I am working on a regression tree in Rpart that uses a continuous response
>> variable that is ordered. I read a previous response by Prof. Ripley to an
>> inquiry regarding the abili
On Fri, 2007-02-02, at 06:03, Stacey Buckelew wrote:
> Hi,
>
> I am working on a regression tree in Rpart that uses a continuous response
> variable that is ordered. I read a previous response by Prof. Ripley to an
> inquiry regarding the ability of rpart to handle ordinal responses in
> 2003. At that
Hi,
I am working on a regression tree in Rpart that uses a continuous response
variable that is ordered. I read a previous response by Prof. Ripley to an
inquiry regarding the ability of rpart to handle ordinal responses in
2003. At that time rpart was unable to implement an algorithm to handle
ordinal responses.
On Wed, 2007-01-31 at 09:25 -0800, amna khan wrote:
> Sir, I am not finding the function to plot a least-squares regression line on a
> type="o" plot of two variables.
> Guide me in this regard.
Did you want something like this:
x <- 1:50
y <- rnorm(50)
plot(x, y, type = "o")
abline(lm(y ~ x))
See
Sir, I am not finding the function to plot a least-squares regression line on a
type="o" plot of two variables.
Guide me in this regard.
--
AMINA SHAHZADI
Department of Statistics
GC University Lahore, Pakistan.
Email:
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Fortune?
On 1/12/07, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
> Prof Brian Ripley wrote:
> >
> > Where did you tell it 'x' was the abscissa and 'y' the ordinate?
> > (Nowhere: R is lacking a mind_read() function!)
> Please stop complaining about missing features. Patches will be considered.
>
> O
On 1/12/2007 5:56 AM, Barry Rowlingson wrote:
> ken knoblauch wrote:
>> This should do the trick:
>>
>> mind_reader <- function() {
>> ll <- letters[round(runif(6, 1, 26))]
>
> I see my paraNormal distribution package hasn't found its way to CRAN yet:
>
> http://tolstoy.newcastle.edu.au/R/help/05/04/1701.html
Barry Rowlingson wrote:
> ken knoblauch wrote:
>> This should do the trick:
>>
>> mind_reader <- function() {
>> ll <- letters[round(runif(6, 1, 26))]
>
> I see my paraNormal distribution package hasn't found its way to CRAN yet:
>
> http://tolstoy.newcastle.edu.au/R/help/05/04/1701.html
ken knoblauch wrote:
> This should do the trick:
>
> mind_reader <- function() {
> ll <- letters[round(runif(6, 1, 26))]
I see my paraNormal distribution package hasn't found its way to CRAN yet:
http://tolstoy.newcastle.edu.au/R/help/05/04/1701.html
Barry
This should do the trick:
mind_reader <- function() {
  ll <- letters[round(runif(6, 1, 26))]
  ff <- ll[1]
  for (ix in 2:length(ll)) {
    ff <- paste(ff, ll[ix], sep = "")   # glue the letters into one name
  }
  if (exists(ff)) {
    cat("The function that you were thinking of is", ff, "\n")
  }
}
Prof Brian Ripley wrote:
>
> Where did you tell it 'x' was the abscissa and 'y' the ordinate?
> (Nowhere: R is lacking a mind_read() function!)
Please stop complaining about missing features. Patches will be considered.
Oh, it's you, Brian. Never mind then. You'll get to it, I'm sure.
;-)
--
Tom Backer Johnsen wrote:
> My simpleminded understanding of simple regression is that when
> plotting regression lines for x on y and y on x in the same plot, the
> lines should cross each other at the respective means. But, given the
> R function below, abline (lm(y~x)) works fine, but abli
On Fri, 12 Jan 2007, Tom Backer Johnsen wrote:
> My simpleminded understanding of simple regression is that when
> plotting regression lines for x on y and y on x in the same plot, the
> lines should cross each other at the respective means. But, given the
> R function below, abline (lm(y~x)) wor
On Fri, 12 Jan 2007, Tom Backer Johnsen wrote:
> My simpleminded understanding of simple regression is that when
> plotting regression lines for x on y and y on x in the same plot, the
> lines should cross each other at the respective means. But, given the
> R function below, abline (lm(y~x))
Try this version of your function and then think about it:
tst <- function () {
  attach(attitude)
  x <- rating
  y <- learning
  detach(attitude)
  plot(x, y)
  abline(v = mean(x))             # vertical line at mean(x)
  abline(h = mean(y))             # horizontal line at mean(y)
  abline(lm(y ~ x))               # y-on-x line, drawn directly
  cc <- coef(lm(x ~ y))           # x-on-y fit: x = cc[1] + cc[2]*y
  abline(-cc[1]/cc[2], 1/cc[2])   # same line rewritten as y = -cc[1]/cc[2] + x/cc[2]
}
> My simpleminded understanding of
My simpleminded understanding of simple regression is that when
plotting regression lines for x on y and y on x in the same plot, the
lines should cross each other at the respective means. But, given the
R function below, abline (lm(y~x)) works fine, but abline (lm(x~y))
does not. Why?
funct
Dear Helpers,
I have a simple question. In my statistics studies I have learned to make inference
from samples. I want to know what the strategy should be when I have the whole
population. If I suppose that the data are collected without error, is any inference
still useful?
Sincerely!
Justin BEM
Elè
Alvaro wrote:
> I need to run a regression analysis with a large number of samples. Each
> sample (identified in the first file column) has its own x and y values. I
> will use the same model in all samples. How can I run the model for each
> sample? In SAS code I would use the "BY SAMPLE" statemen
I need to run a regression analysis with a large number of samples. Each
sample (identified in the first file column) has its own x and y values. I
will use the same model in all samples. How can I run the model for each
sample? In SAS code I would use the "BY SAMPLE" statement.
Alvaro
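A minimal sketch of the BY-style analysis (hypothetical column names): split the data frame on the sample identifier and fit the same model within each piece.

dat <- data.frame(sample = rep(1:3, each = 10),
                  x = rnorm(30), y = rnorm(30))
fits <- lapply(split(dat, dat$sample), function(d) lm(y ~ x, data = d))
t(sapply(fits, coef))   # one row of coefficients per sample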
There was a missing line:
On 10/14/06, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> Here is another approach using the same data as in
> John Fox's reply. His is probably superior but this
> does have the advantage that it's very simple. Note
> that it gives the same coefficients and R square
Here is another approach using the same data as in
John Fox's reply. His is probably superior but this
does have the advantage that it's very simple. Note
that it gives the same coefficients and R squared
to several decimal places. We just simulate a
data set with the given means and variance-covariance matrix.
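A small sketch of that simulation idea, with hypothetical means and covariance: with empirical = TRUE, MASS::mvrnorm() produces data whose sample moments match exactly, so lm() reproduces the coefficients implied by the summary statistics.

library(MASS)
mu <- c(y = 2, x1 = 1, x2 = 3)
S  <- matrix(c(1.0, 0.5, 0.3,
               0.5, 1.0, 0.2,
               0.3, 0.2, 1.0), 3, 3,
             dimnames = list(names(mu), names(mu)))
dat <- as.data.frame(mvrnorm(100, mu = mu, Sigma = S, empirical = TRUE))
coef(lm(y ~ x1 + x2, data = dat))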
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of John Sorkin
> Sent: Saturday, October 14, 2006 3:27 PM
> To: r-help@
R 2.2.0
Windows XP
How can I perform a regression analysis using a vector of means and a
variance-covariance matrix? I looked at the help screen for lm and did
not see any option for using the aforementioned structures as input to
lm.
Thanks,
John
John Sorkin M.D., Ph.D.
Chief, Biostatistics and In
Dear UseRs,
you can find on the CRAN web site my latest contribution on
R and regression techniques:
http://cran.r-project.org/doc/contrib/Ricci-regression-it.pdf
It is in Italian.
Regards.
Vito Ricci
If not now, when?
If not here, where?
If no
Andreas Svensson wrote:
> So, how can I constrain the abline to the relevant region, i.e. stop
> abline from extrapolating beyond the actual range of data?
> Or should I use a function like 'lines' to do this?
One elegant way of doing this is to use 'xyplot' from 'lattice' and add a
loess line with a panel function.
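A minimal sketch of that suggestion, shown with a built-in data set (panel.loess is assumed to be the intended panel function):

library(lattice)
xyplot(dist ~ speed, data = cars,
       panel = function(x, y, ...) {
           panel.xyplot(x, y, ...)
           panel.loess(x, y, ...)   # the smooth stays inside the data range
       })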
[EMAIL PROTECTED]
(801) 408-8111
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas Svensson
Sent: Wednesday, May 24, 2006 10:52 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Regression line limited by the range of values
Hi
In R, using plot(x,y) follo
On Wed, 2006-05-24 at 21:53 +0200, Andreas Svensson wrote:
> Thank you very much, Marc, for that nifty little script.
>
> When I use it on my real dataset though, the lines are fat in the middle
> and thinner towards the ends. I guess it's because "lines" draw one
> fitted line for each x, and if y
Thank you very much, Marc, for that nifty little script.
When I use it on my real dataset, though, the lines are fat in the middle
and thinner towards the ends. I guess it's because lines() draws one
fitted segment for each x, and if you have hundreds of x values, this turns into a
line that is thicker than i
On Wed, 2006-05-24 at 18:51 +0200, Andreas Svensson wrote:
> Hi
>
> In R, using plot(x,y) followed by abline(lm(y~x)) produces a graph
> with a regression line spanning the whole plot. This means that the
> line extends beyond the swarm of data points to the defined or default
> plot regio
Hi
In R, using plot(x,y) followed by abline(lm(y~x)) produces a graph
with a regression line spanning the whole plot. This means that the
line extends beyond the swarm of data points to the defined or default
plot region. With par(xpd=T) it will span the entire figure region. But
how can
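One common approach, sketched with simulated data rather than taken from the thread: predict at the endpoints of the observed x range and draw that segment with lines() instead of abline().

x <- runif(50, 2, 8)
y <- 1 + 0.5 * x + rnorm(50)
plot(x, y)
fit <- lm(y ~ x)
xr <- range(x)
lines(xr, predict(fit, newdata = data.frame(x = xr)))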
Trujillo L. wrote:
> Sorry for the naivety of my question, but I have been searching the
> R-help archives and the CRAN website without any success. I am trying to perform
> a regression through the origin (without intercept) and my main concern
> is about its evaluative statistics. It is clear to me th
anova(lm.D9 <- lm(weight ~ group))
summary(lm.D90 <- lm(weight ~ group - 1))   # omitting intercept
I hope this helps a bit.
Best,
Roland
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Trujillo L.
> Sent: Tuesday, May 2
Dear R-users:
Sorry for the naivety of my question, but I have been searching the
R-help archives and the CRAN website without any success. I am trying to perform
a regression through the origin (without intercept) and my main concern
is about its evaluative statistics. It is clear to me that R squared
d
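A minimal sketch (simulated data) of the R squared caveat raised here: when the intercept is omitted, summary.lm() computes R squared around zero rather than around mean(y), so the two values are not comparable.

set.seed(1)
x <- 1:20
y <- 2 * x + rnorm(20, sd = 3)
summary(lm(y ~ x))$r.squared       # usual R squared, relative to mean(y)
summary(lm(y ~ x - 1))$r.squared   # relative to zero, typically larger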
<[EMAIL PROTECTED]>
> Cc: 'r-help'
> Subject: Re: [R] regression modeling
>
>
> May I offer a perhaps contrary perspective on this.
>
> Statistical **theory** tells us that the precision of estimates
> improves as
> sample size increases. However, in prac
Berton Gunter wrote:
> May I offer a perhaps contrary perspective on this.
>
> Statistical **theory** tells us that the precision of estimates improves as
> sample size increases. However, in practice, this is not always the case.
> The reason is that it can take time to collect that extra data, a
process." - George E. P. Box
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Tuesday, April 25, 2006 12:10 PM
> To: bogdan romocea
> Cc: r-help
> Subject: Re: [R] regression modeling
>
> i bel
> ...how does the explanatory/predictive potential of a dataset vary as the dataset gets
> larger and larger?
>
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> > Sent: Monday, April 24, 2006 12:45 PM
> > To: r-help
>
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Monday, April 24, 2006 12:45 PM
> To: r-help
> Subject: [R] regression modeling
>
> Hi, there:
> I am looking for a regression modeling (like regression
> trees) approach for
> a large-scale industry dataset. Any sugge
Hi, there:
I am looking for a regression modeling (like regression trees) approach for
a large-scale industry dataset. Any suggestion on a package from R or from
other sources that has decent accuracy and scalability? Any
recommendation from experience is highly appreciated.
Thanks,
Weiwei
--
Hi list,
I'm looking for a way to easily extract regression p-values and export
them to one file for further evaluation.
Here is the problem.
I use lm() and step() to get my regression parameters/coefficients.
After that I can extract them with summary(lm.results)$coefficients[,4].
So far so good.
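A sketch continuing that approach, with hypothetical models: gather the p-values from several fits and write them to a single CSV file.

fits <- list(m1 = lm(dist ~ speed, data = cars),
             m2 = lm(dist ~ poly(speed, 2), data = cars))
rows <- lapply(names(fits), function(n) {
    p <- summary(fits[[n]])$coefficients[, 4]
    data.frame(model = n, term = names(p), p.value = p)
})
write.csv(do.call(rbind, rows), "pvalues.csv", row.names = FALSE)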
Have you considered "lme" in library(nlme)? The companion book
Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus
(Springer) is my favorite reference for this kind of thing. From what I
understand of your question, you should be able to find excellent
answers in this book.
Dear R-users,
I set up an experiment where I put up bluebird boxes across an
urbanization gradient. I monitored these boxes and at some point I
pulled a feather from a chick and a friend used spectral properties
(rtot, a continuous var) to index chick health. There is an effect of
sex that I wou
Dear R users,
There is a method called "style analysis" where you run a regression with
Y = fund yield and X = benchmark yields, subject to these restrictions on
the linear regression:
1. The regression must not have an intercept term.
2. The coefficients must sum to one.
3. All
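A hedged sketch of the constrained fit with simulated data, assuming the truncated third restriction is nonnegativity of the coefficients, as is usual in style analysis; solve.QP() is from the quadprog package.

library(quadprog)
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)                  # benchmark yields
y <- X %*% c(0.5, 0.3, 0.2) + rnorm(100, sd = 0.05)  # fund yield
Dmat <- crossprod(X)
dvec <- drop(crossprod(X, y))
Amat <- cbind(rep(1, 3), diag(3))   # first column: sum-to-one; rest: w_i >= 0
bvec <- c(1, rep(0, 3))
solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution   # estimated style weights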
Hi,
I have the following problem which I would appreciate some help on.
A variable y is to be modelled as a function of a set of variables
Vector(x).
The twist is that there is another variable z in the problem with the
property that y(i) <= z(i).
So the data set is divided into three categ
On Sat, 15 Oct 2005, giacomo moro wrote:
> Hi,
> I would like to regress y (dependent variable) on x (independent variable)
> and y(-1).
> I have created the y(-1) variable in this way: ly <- lag(y, -1)
> Now if I do the following regression lm (y ~ x + ly) the results I obtain
> are not correct.
Create time series from your data and then use lm with
the dyn or dynlm package (as lm does not support time
series directly). With the dyn package you just preface
lm with dyn$ and then use lm as usual:
library(dyn)
yt <- ts(y)
xt <- ts(x)
dyn$lm(yt ~ xt + lag(yt, -1))
After loading dyn try thi
Hi,
I would like to regress y (dependent variable) on x (independent variable) and
y(-1).
I have created the y(-1) variable in this way: ly <- lag(y, -1)
Now if I do the following regression lm (y ~ x + ly) the results I obtain are
not correct.
Can someone tell me the code to use in R in order to
On Thu, 29 Sep 2005, Christian Hennig wrote:
>> ?confint
>>
>
> Thank you to all of you.
>
> As far as I see this is not mentioned on the lm help page (though I
> presumably don't have the most recent version), which I would
> suggest...
and I would suggest that you study a good book on the subject.
Sorry, I forgot confint and I made a mistake in my suggestion which
should be:
cbind(estimate = coef(lm.D9),
      lower = coef(lm.D9) - 1.96 * sqrt(diag(vcov(lm.D9))),
      upper = coef(lm.D9) + 1.96 * sqrt(diag(vcov(lm.D9))))
Best,
Renaud
Christian Hennig wrote:
> Hi list,
>
> is ther
Why not use vcov() and the normal approximation?
> ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
> trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
> group <- gl(2,10,20, labels=c("Ctl","Trt"))
> weight <- c(ctl, trt)
> lm.D9 <- lm(weight ~ group)
>
> cbind(estimat
> ?confint
>
Thank you to all of you.
As far as I see this is not mentioned on the lm help page (though I
presumably don't have the most recent version), which I would
suggest...
Best,
Christian
On Thu, 29 Sep 2005, Chuck Cleland wrote:
> ?confint
>
> For example:
>
> > ctl <- c(4.17,5.58,5.18,6.1
On Thu, 29 Sep 2005, Christian Hennig wrote:
> Hi list,
>
> is there any direct way to obtain confidence intervals for the regression
> slope from lm, predict.lm or the like?
There is a confint method: e.g.,
R> fm <- lm(dist ~ speed, data = cars)
R> confint(fm, parm = "speed")
2.5 %    97.5 %
?confint
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf of Christian Hennig
> Sent: 29 September 2005 13:19
> To: r-help-request Mailing List
> Subject: [R] Regression slope confidence interval
>
> Hi list,
>
>
?confint
For example:
> ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
> trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
> group <- gl(2,10,20, labels=c("Ctl","Trt"))
> weight <- c(ctl, trt)
> lm(weight ~ group)
Call:
lm(formula = weight ~ group)
Coe
Hi list,
is there any direct way to obtain confidence intervals for the regression
slope from lm, predict.lm or the like?
(If not, is there any reason? This is also missing in some other statistics
packages, and I thought this would be quite a standard application.)
I know that it's easy to imple
I retract the suggestion I proposed last night -- it was based
on a bad hunch! Sorry for wasting time.
Best wishes,
Ted.
E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 27-Sep-05
On 26-Sep-05 Witold Eryk Wolski wrote:
> Ted,
>
> I agree with you that if you unwrap the data you can use lm.
> And you can separate the data in the way you describe. However, if I
> have thousands of such datasets I do not want to do it by "looking at
> the graph".
>
> Yes the scatter may b
Ted,
I agree with you that if you unwrap the data you can use lm.
And you can separate the data in the way you describe. However, if I
have thousands of such datasets I do not want to do it by "looking at
the graph".
Yes, the scatter may be larger than in the example and range(y) may be
large
On 26-Sep-05 Witold Eryk Wolski wrote:
> Hi,
>
> I do not know the intercept and slope.
> And you have to know them in order to do something like:
> ix <- (y < 0.9*(x-50)/200)
>
> I am right?
>
> cheers
Although I really knew them from the way you generated the data,
I "pretended" I did not know t
Hi,
I do not know the intercept and slope.
And you have to know them in order to do something like:
ix <- (y < 0.9*(x-50)/200)
Am I right?
cheers
(Ted Harding) wrote:
On 26-Sep-05 nwew wrote:
Dear R-users,
I have the following data
x <- runif(300,min=1,max=230)
y <- x*0.005 + 0.2
y <- y+rn
On 26-Sep-05 nwew wrote:
> Dear R-users,
>
> I have the following data
>
> x <- runif(300,min=1,max=230)
>
> y <- x*0.005 + 0.2
> y <- y + rnorm(300, mean = 0, sd = 0.1)
> y <- y%%1 # <--- modulo operation
> plot(x,y)
>
> and would like to recapture the slope (0.005) and intercept(0.2).
> I wonder
Dear R-users,
I have the following data
x <- runif(300,min=1,max=230)
y <- x*0.005 + 0.2
y <- y + rnorm(300, mean = 0, sd = 0.1)   # 300 to match length(x)
y <- y%%1 # <--- modulo operation
plot(x,y)
and would like to recapture the slope (0.005) and intercept(0.2). I wonder if
there are any clever algorithms to do this. I
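A compact sketch of the "unwrap, then fit" idea pursued in the replies above, reusing Ted's separating line (which presumes rough knowledge of where the true line lies): add 1 back to the wrapped branch and run lm.

set.seed(1)
x <- runif(300, min = 1, max = 230)
y <- (x * 0.005 + 0.2 + rnorm(300, mean = 0, sd = 0.1)) %% 1
ix <- y < 0.9 * (x - 50) / 200       # points on the wrapped branch
coef(lm(ifelse(ix, y + 1, y) ~ x))   # roughly intercept 0.2, slope 0.005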
I have not seen any replies, so I will offer a comment:
1. You speak of x1, x2, ..., x10, but your example includes only
x1+x2+x3+x4. I'm confused. If you could still use help with this,
could you please simplify your example further so there was only x1+x2,
say? Can you
Dear WizaRds!
I am sorry to ask for some help, but I have come to a complete stop in
my efforts. I hope, though, that some of you might find the problem
quite interesting to look at.
I have been trying to estimate parameters for lotteries, the so-called
utility of chance, i.e. the "felt" proba
Sounds like you're looking for something like pure.error.anova in the `alr3'
package on CRAN...
Andy
> From: Christoph Scherber
>
> Dear R users,
>
> How can I do a regression analysis in R where there is more than one
> observation per x value? I tried the example in Sokal&Rohlf
> (3rd edn.,
Dear R users,
How can I do a regression analysis in R where there is more than one
observation per x value? I tried the example in Sokal & Rohlf (3rd edn.,
1995), page 476 ff., but I somehow couldn't find a way to partition the
sums of squares into "linear regression", "deviations from regression
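A minimal sketch (simulated data) of the classical partition being asked about: comparing the straight-line fit with a one-mean-per-x-value fit lets anova() split the variation into lack of fit ("deviations from regression") and pure error.

set.seed(1)
x <- rep(1:5, each = 4)   # four observations per x value
y <- 2 + 0.5 * x + rnorm(20)
anova(lm(y ~ x), lm(y ~ factor(x)))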
Hi,
I suggest having a look at:
Practical Regression and Anova using R by Julian
Faraway
http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf
http://www.stat.lsa.umich.edu/~faraway/book/
see also package faraway for datasets:
http://cbio.uct.ac.za/CRAN/src/contrib/Descriptions/faraway.html
Clark Allan wrote:
> I would like to give the class a practical assignment as well. Could you
> suggest a good problem and the location of the data set/s?
>
> It would be good if the data set has been analysed by a number of other
> people so that students can see the different ways of tackling a
Hi all,
I am busy teaching a regression analysis course to second-year science
students. The course is fairly theoretical, with all of the standard
theorems and proofs...
I would like to give the class a practical assignment as well. Could you
suggest a good problem and the location of the data set
Sundar Dorai-Raj writes:
> Hi, Laura,
>
> Would ?predict.glm be better?
>
> plot(logarea, hempresence,
> xlab = "Surface area of log (m2)",
> ylab="Probability of hemlock seedling presence",
> type="n", font.lab=2, cex.lab=1.5, axes=TRUE)
> lines(logarea, predict(hemhem, logreg
Laura M Marx wrote:
> Hi there,
> I've looked through the very helpful advice about adding fitted lines to
> plots in the r-help archive, and can't find a post where someone has offered
> a solution for my specific problem. I need to plot logistic regression fits
> from three differently-si
Hi there,
I've looked through the very helpful advice about adding fitted lines to
plots in the r-help archive, and can't find a post where someone has offered
a solution for my specific problem. I need to plot logistic regression fits
from three differently-sized data subsets on a plot of th
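A generic sketch with simulated data and hypothetical names, along the lines of the predict.glm advice above: fit the logistic regression per subset and overlay the fitted curve with lines().

set.seed(1)
logarea  <- sort(runif(100, 0, 5))
presence <- rbinom(100, 1, plogis(-2 + logarea))
fit <- glm(presence ~ logarea, family = binomial)
plot(logarea, presence, xlab = "Surface area of log (m2)",
     ylab = "Probability of hemlock seedling presence")
lines(logarea, predict(fit, type = "response"))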
On Apr 11, 2005 8:18 PM, Fernando Saldanha <[EMAIL PROTECTED]> wrote:
> Can someone shed some light on this obscure portion of the help for lm?
>
>"Considerable care is needed when using 'lm' with time series.
>
> Unless 'na.action = NULL', the time series attributes are stripped
> fr
Can someone shed some light on this obscure portion of the help for lm?
"Considerable care is needed when using 'lm' with time series.
Unless 'na.action = NULL', the time series attributes are stripped
from the variables before the regression is done. (This is
necessary as omi
Dr. Frank E. Harrell, Jr., Professor and Chair of the Department of
Biostatistics at Vanderbilt University is giving a one-day workshop on
Regression Modeling Strategies on Friday, April 29, 2005. Analyses of the
example datasets use R/S-Plus and make extensive use of the Hmisc library
written
Sherri Miller wrote:
I am running some models (for the first time) using rpart and am getting
results I don't know how to interpret. I'm using cross-validation to prune
the tree and the results look like:
Root node error: 172.71/292 = 0.59148
n= 292
CP nsplit rel error xerror xstd
1 0
I am running some models (for the first time) using rpart and am getting
results I don't know how to interpret. I'm using cross-validation to prune
the tree and the results look like:
Root node error: 172.71/292 = 0.59148
n= 292
CP nsplit rel error xerror xstd
1 0.124662 0 1
> From: Martin Maechler
>
> > "ReidH" == Huntsinger, Reid <[EMAIL PROTECTED]>
> > on Thu, 3 Mar 2005 17:24:22 -0500 writes:
>
> ReidH> You might use lsfit instead and just do the whole Y
> ReidH> matrix at once. That saves all the recalculation of
> ReidH> things involving
> "ReidH" == Huntsinger, Reid <[EMAIL PROTECTED]>
> on Thu, 3 Mar 2005 17:24:22 -0500 writes:
ReidH> You might use lsfit instead and just do the whole Y
ReidH> matrix at once. That saves all the recalculation of
ReidH> things involving only X.
yes, but in these cases, we
To: r-help@stat.math.ethz.ch
Subject: [R] regression on a matrix
Hi -
I am doing a Monte Carlo experiment that requires doing a linear
regression of a matrix of vectors of dependent variables on a fixed
set of covariates (one regression per vector). I am wondering if
anyone has any idea of how to
Hi -
I am doing a Monte Carlo experiment that requires doing a linear
regression of a matrix of vectors of dependent variables on a fixed
set of covariates (one regression per vector). I am wondering if
anyone has any idea of how to speed up the computations in R. The code
follows:
#regression
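A minimal sketch of the multi-response shortcut suggested in the replies above: lm() (like lsfit()) accepts a matrix on the left-hand side, so the decomposition of the fixed design is reused for every column of Y.

set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)     # fixed covariates
Y <- matrix(rnorm(100 * 50), 100, 50)   # 50 dependent vectors
fit <- lm(Y ~ X)
dim(coef(fit))                          # 4 x 50: one fit per column of Y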
I will be giving a one-day short course related to my book Regression Modeling
Strategies in Toronto as part of the Joint Statistical Meetings on August 8.
For more information visit the American Statistical Association web site
amstat.org and biostat.mc.vanderbilt.edu/rms. The course applies
On Tue, 2004-07-20 at 17:02, Avril Coghlan wrote:
Hello,
I'm a newcomer to R so please
forgive me if this is a silly question.
It's that I have a linear regression:
fm <- lm (x ~ y)
and I want to test whether the
slope of the regression is significantly
less than 1. How can I do this in R?
Another
See also the contributed document by John Verzani, Simple R, page 87f.
> Adaikalavan Ramasamy wrote:
>
> > I would try to construct the confidence intervals and
> compare them to
> > the value that you want
> >
> >>x <- rnorm(20)
> >>y <- 2*x + rnorm(20)
> >>summary( m1 <- lm(y~x) )
> >
> >
>
Adaikalavan Ramasamy wrote:
I would try to construct the confidence intervals and compare them to
the value that you want
x <- rnorm(20)
y <- 2*x + rnorm(20)
summary( m1 <- lm(y~x) )
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)     0.1418     0.1294   1.095    0.288
x
At 06:44 PM 7/20/2004 +0100, Adaikalavan Ramasamy wrote:
>I would try to construct the confidence intervals and compare them to
>the value that you want
>> x <- rnorm(20)
>> y <- 2*x + rnorm(20)
>> summary( m1 <- lm(y~x) )
>
>
>Coefficients:
>Estimate Std. Error t value Pr(>|t|)
>(Inter
I would try to construct the confidence intervals and compare them to
the value that you want
> x <- rnorm(20)
> y <- 2*x + rnorm(20)
> summary( m1 <- lm(y~x) )
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)     0.1418     0.1294   1.095    0.288
x 2.2058
Hello,
I'm a newcomer to R so please
forgive me if this is a silly question.
It's that I have a linear regression:
fm <- lm (x ~ y)
and I want to test whether the
slope of the regression is significantly
less than 1. How can I do this in R?
I'm also interested in comparing the
slopes of two re
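A small sketch (simulated data) of testing H0: slope = 1 against the one-sided alternative slope < 1, complementing the confidence-interval replies above:

set.seed(1)
x <- rnorm(50)
y <- 0.8 * x + rnorm(50)
fit  <- lm(y ~ x)
est  <- coef(summary(fit))["x", ]
tval <- (est["Estimate"] - 1) / est["Std. Error"]
pt(tval, df = fit$df.residual)   # one-sided p-value for slope < 1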
> ...coding my own R functions and possess a sound
> familiarity with
> Numerical Linear Algebra and matrix theory. Secondly, I don't
> want to dig
> deep into books to find little of use. Any references that
> you provide will
> be pretty helpful
>
> Please advise
helpful
Please advise
-Dev
From: Berton Gunter <[EMAIL PROTECTED]>
To: devshruti pahuja <[EMAIL PROTECTED]>
Subject: Re: [R] Regression Modeling query
Date: Tue, 22 Jun 2004 13:44:13 -0700
...season, whereas in men's tennis it's fluctuating. Also, I
would like to know which age group reflects the prime of a tennis player and
hence I've changed continuous variables to categorical.
Please advise
Thanks
-Dev
From: "Peter Flom" <[EMAIL PROTECTED]>
To: <[EMAIL P