Yeah, this is absolutely what I want!
Thanks to everyone above for your helpful suggestions.
2007/6/6, Rob Creecy [EMAIL PROTECTED]:
You could try the gmp multi precision arithmetic package.
library(gmp)
urand.bigz(10,64)
[1] 11691875040763095143 15618480061048441861 13311871202921807091
Dear R-lister,
One of my friends wanted to produce 64-bit random numbers. He did it with
Fortran. I think R can do it too, but I don't know how to display a very big
integer in complete form rather than scientific form. And what is the biggest
integer R can display in complete form?
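For the display question: base R can be told to avoid scientific notation, but plain doubles only represent integers exactly up to 2^53, which is why gmp is suggested above. A small sketch (the variable name is my own):

x <- 2^60
format(x, scientific = FALSE)  # "1152921504606846976", no scientific form
options(scipen = 100)          # penalise scientific notation globally
x                              # now prints in full as well
2^53                           # 9007199254740992: last integer a double holds exactly
2^53 + 1 == 2^53               # TRUE -- precision is already lost past 2^53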
2007/5/24, Lucke, Joseph F [EMAIL PROTECTED]:
--
*From:* 李俊杰 [mailto:[EMAIL PROTECTED]]
*Sent:* Monday, May 21, 2007 8:12 PM
*To:* Lucke, Joseph F
*Subject:* Re: [R] How to compare linear models with intercept and those
without intercept using minimizing adjusted R^2
Hi, Oksanen,
Thanks for your reply.
I agree with you on the point that if we misjudge a non-zero intercept to be
zero, there will still be a loss, or even a great loss, as you and Venables
emphasized from practical research work. If there won't be any loss when we
misjudge a
zero intercept to be
2007/5/21, Lucke, Joseph F [EMAIL PROTECTED]:
One issue is whether you want your estimators to be based on central
moments (covariances) or on non-central moments. Removing the intercept
changes the statistics from central to non-central moments. The
adjusted R2, by which I think you mean
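A small illustration of the distinction (my own example, not from the thread): with an intercept the slope is a ratio of central moments, without one it is a ratio of non-central moments:

set.seed(1)
x <- rnorm(30, mean = 5)
y <- 3 + 2 * x + rnorm(30)
cov(x, y) / var(x)       # central moments: slope of lm(y ~ x)
coef(lm(y ~ x))[2]
sum(x * y) / sum(x^2)    # non-central moments: slope of lm(y ~ x - 1)
coef(lm(y ~ x - 1))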
I have a question about what you've written in your pdf file. Why must we view
my problem from the viewpoint of hypothesis testing? Is testing the original
philosophy behind maximizing Fisher's A-statistic to choose an optimum model?
Thanks.
2007/5/21, Lucke, Joseph F [EMAIL PROTECTED]:
I've taken the
So when I am using the adjusted R2 as a penalized optimality criterion, and
I have to compare models with intercept and those without intercept to
decide the final selected model, does my criterion in my first email make
sense?
Because we know that in leaps(leaps), if we want to select a model
... (y = 10,000,000*x) so that SSR > SST if one is
not deriving the fit from the regular linear regression process.
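For concreteness, leaps() itself exposes this choice through its int argument and can rank subsets by adjusted R^2; a hedged sketch (assuming the classic leaps interface with x, y, int, and method arguments):

library(leaps)
set.seed(1)
x <- matrix(rnorm(50 * 3), ncol = 3)
y <- drop(x %*% c(1, 2, 0)) + rnorm(50)
leaps(x, y, int = TRUE,  method = "adjr2")$adjr2   # subsets with intercept
leaps(x, y, int = FALSE, method = "adjr2")$adjr2   # subsets without intercept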
--Paul
On 5/19/07, 李俊杰 [EMAIL PROTECTED] wrote:
I know that -1 indicates removing the intercept term. But my question
is
why the intercept term CANNOT be treated as a variable term as we
Hi, Mark
What I want to do, exactly, is to compare a model with intercept and one
without intercept on the adjusted R2 criterion, since we know that maximizing
adjusted R-square is a variable selection strategy. I know there are other
alternatives for conducting a variable
Dear R-list,
I apologize for my many emails, but I think I now know how to describe my
problem differently and more clearly.
My question is how to compare linear models with intercept and those without
intercept under the maximize-adjusted-R^2 strategy.
Now I do it like the following:
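The code that followed was truncated in the archive; a minimal sketch of one way such a comparison could look (simulated data, all names my own assumptions):

set.seed(1)
x <- rnorm(30)
y <- 0.5 + 2 * x + rnorm(30)
fit1 <- lm(y ~ x)        # with intercept
fit0 <- lm(y ~ x - 1)    # without intercept
summary(fit1)$adj.r.squared
summary(fit0)$adj.r.squared  # computed against an uncentred total SS
# caveat: the two values are on different baselines, which is exactly
# the comparability problem discussed in this thread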
Dear R-list,
I'm not sure whether what I've found about a function in the DAAG package is a bug.
When I was using cv.lm(DAAG), I found there might be something wrong with
it. The problem is that we can't use it to deal with a linear model with
more than one predictor variable, but the usage documentation
Dear R-list,
I am sorry for my shortage of statistical knowledge. I want to know how to
conduct the hypothesis test H0: |E(X)| = |E(Y)| vs. H1: otherwise.
Actually, in my study, X and Y are two sets of observations of bias,
where bias = u^hat - u and u is a parameter I am concerned with. Given
X = (u^hat_xi - u) and Y = (u^hat_yi - u), I
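One rough approach (my own sketch, not from the thread) is a bootstrap of the statistic |mean(X)| - |mean(Y)|, recentred to approximate its null distribution:

set.seed(1)
x <- rnorm(30, mean = 0.2)    # simulated biases (placeholder data)
y <- rnorm(30, mean = -0.3)
stat <- abs(mean(x)) - abs(mean(y))
boot <- replicate(10000, {
  abs(mean(sample(x, replace = TRUE))) -
  abs(mean(sample(y, replace = TRUE)))
})
mean(abs(boot - stat) >= abs(stat))   # two-sided bootstrap p-value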
I know that -1 indicates removing the intercept term. But my question is
why the intercept term CANNOT be treated as a variable term when we place a
column consisting of 1s in the predictor matrix.
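In fact the coefficient estimates agree either way; a quick check (my own example):

set.seed(1)
x <- rnorm(20)
y <- 1 + 2 * x + rnorm(20)
ones <- rep(1, 20)
coef(lm(y ~ x))             # implicit intercept
coef(lm(y ~ ones + x - 1))  # intercept as an explicit column of 1s
# the fits are identical; what changes is how R computes summary
# statistics such as R^2 once no formal intercept term is present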
If I stick to making a comparison between a model with intercept and one
without intercept on adjusted R2
Hi, everybody,
3 questions about R-square:
(1) Does R2 always increase as variables are added?
(2) Is R2 always no greater than 1?
(3) How is R2 in summary(lm(y~x-1))$r.squared
calculated? It is different from (r.square=sum((y.hat-mean
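On question (3): summary.lm switches to an uncentred total sum of squares when the model has no intercept. A short check on simulated data (my own example):

set.seed(1)
x <- rnorm(30)
y <- 2 * x + 5 + rnorm(30)
fit1 <- lm(y ~ x)
fit0 <- lm(y ~ x - 1)
1 - sum(resid(fit1)^2) / sum((y - mean(y))^2)  # = summary(fit1)$r.squared
1 - sum(resid(fit0)^2) / sum(y^2)              # = summary(fit0)$r.squared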
Hi, all
When I am using mle.cv(wle), I find an interesting problem: I can't do
leave-one-out cross-validation with mle.cv(wle). I will illustrate the
problem as follows:
xx <- matrix(rnorm(20 * 3), ncol = 3)  # 20 observations, 3 predictors
bb <- c(1, 2, 0)                       # true coefficients (third is inactive)
yy <- xx %*% bb + rnorm(20, 0, 0.001)  # response with tiny noise, zero intercept
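The excerpt stops before the mle.cv call itself; a hedged continuation (assuming wle is installed and that mle.cv's split argument sets the construction-sample size, so leave-one-out would need split = 19 of the 20 rows):

library(wle)
dat <- data.frame(yy = yy, xx)
mle.cv(yy ~ ., data = dat, split = 19, monte.carlo = 1000)  # attempted LOO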
Hi, everyone
When I was using cv.lm(DAAG), I found there might be something wrong with
it. The problem is that we can't use it to deal with a linear model with
more than one predictor variable, but the usage documentation
hasn't informed us about this.
You can find it by executing the following
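The code itself was cut from the excerpt; a minimal sketch of the kind of call that triggers the report (assuming the cv.lm interface of that era, with df and form.lm arguments):

library(DAAG)
set.seed(1)
dat <- data.frame(x1 = rnorm(20), x2 = rnorm(20))
dat$y <- dat$x1 + 2 * dat$x2 + rnorm(20, 0, 0.1)
cv.lm(df = dat, form.lm = formula(y ~ x1), m = 2)       # one predictor: fine
cv.lm(df = dat, form.lm = formula(y ~ x1 + x2), m = 2)  # two predictors: the reported failure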
Dear all,
I found most R packages do stepwise model selection with the AIC criterion. I
am doing a study on the comparison of several popular model selection methods,
including stepwise selection using a p-value criterion. We know that in SAS the
stepwise procedure uses a p-value criterion, so this method could be a
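For reference, base R covers both flavours (my own simulated example): step() drives stepwise selection by AIC, while drop1() with test = "F" yields the p-values a SAS-style procedure would use:

set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
d$y <- d$x1 + 0.5 * d$x2 + rnorm(50)
full <- lm(y ~ x1 + x2 + x3, data = d)
step(full, direction = "backward")  # AIC-based stepwise
drop1(full, test = "F")             # F-test p-values for manual selection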
Hi, all
I met a problem querying GSE3524, which cannot be opened on my computer. I
hope some of you will be kind enough to give me some advice. Thanks!
The code is as follows:
##
library(GEOquery)
gsename <- "GSE3524"   # the accession must be quoted as a character string
gse <- getGEO(gsename)
##
The error information follows:
If it matters to you (why?), use pmax(0, kde$y).
Actually, it bothers me because I need sufficient precision in the numerical
calculation in this case, where the density estimate is around zero. To
illustrate why I am concerned about it, I'd like to introduce the problem I am
working with.
The problem is
Why? And how do I solve it? The code and result follow:
data <- rnorm(50)
kde <- density(data, n = 20, from = -1, to = 10)
kde$x; kde$y
[1] -1.0000000 -0.4210526  0.1578947  0.7368421  1.3157895  1.8947368
[7]  2.4736842  3.0526316  3.6315789  4.2105263  4.7894737  5.3684211
[13]  5.9473684
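As the reply above notes, the tiny negative kde$y values come from round-off in the FFT-based smoothing that density() uses; if exact non-negativity matters, clamp them:

any(kde$y < 0)           # TRUE when round-off pushes values below zero
kde$y <- pmax(0, kde$y)  # clamp at zero, as suggested above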