While using the rmvnorm function, I get the error:
Error in eigen(sigma, sym = TRUE) : error code 5 from Lapack routine
'dsyevr'
The same thing happens when I try the eigen() function on my covariance
matrix. The matrix is a symmetric 111x111 matrix. Well, it is almost
symmetric; there are slight asymmetries.
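If the asymmetry is just floating-point round-off, forcing exact symmetry
(and, if needed, a tiny diagonal ridge) usually clears the Lapack error. A
minimal sketch, assuming mvtnorm's rmvnorm and a toy matrix standing in for
the real 111x111 one:

library(mvtnorm)
set.seed(1)
A <- crossprod(matrix(rnorm(111 * 111), 111))            # toy covariance
sigma <- A + matrix(rnorm(111 * 111, sd = 1e-12), 111)   # slight asymmetry
sigma <- (sigma + t(sigma)) / 2                          # force exact symmetry
if (min(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) < 0)
    sigma <- sigma + diag(1e-8, nrow(sigma))             # nudge onto the PSD cone
x <- rmvnorm(10, sigma = sigma)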
Hi all:
I would like to create a line of plot margin text (using mtext() ) that
features both a superscript and a subset of an object. However, I
cannot seem to do both things at once, nor have I found a way to paste
the two results together.
(I pull the object subset because this is part of
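One route that usually works (a hedged sketch; the object 'ci' and the
lambda superscript are hypothetical stand-ins) is to let bquote() splice the
computed value into a plotmath expression alongside the superscript:

ci <- c(lower = 1.2, upper = 3.4)
plot(1:10)
mtext(bquote(lambda^2 == .(round(ci[["upper"]], 2))), side = 3, line = 0.5)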
Andrew Robinson wrote:
> Hi Gad,
>
> try:
>
>
>> class(a)
> [1] "Arima"
>> getAnywhere(print.Arima)
Thanks Andrew.
For the record, the standard error is the square root of the diagonal of
the covariance matrix a$var.coef (itself obtained through some magic):
ses[x$mask] <- round(sqrt(diag(x$
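Spelled out, the recipe above amounts to (a sketch of what print.Arima
computes, not its verbatim code):

fit <- arima(rnorm(1000), order = c(4, 0, 0))
ses <- sqrt(diag(fit$var.coef))   # standard errors of the coefficients
ses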
Hi Gad,
try:
> class(a)
[1] "Arima"
> getAnywhere(print.Arima)
...
Cheers,
Andrew
On Fri, Mar 16, 2007 at 01:47:25PM +1100, Gad Abraham wrote:
> Hi,
>
> Can anyone explain how the standard error in arima() is calculated?
>
> Also, how can I extract it from the Arima object? I don't see it
Hi,
Can anyone explain how the standard error in arima() is calculated?
Also, how can I extract it from the Arima object? I don't see it in there.
> x <- rnorm(1000)
> a <- arima(x, order = c(4, 0, 0))
> a
Call:
arima(x = x, order = c(4, 0, 0))
Coefficients:
ar1 ar2 ar3
Hello Experts
I have the following codes and data for 2 interpolation plots.
http://www.nabble.com/file/7206/3d_plot_data.txt 3d_plot_data.txt
data<-read.table("3d_plot_data.txt", header=T)
attach(data)
par(mfrow=c(1,2))
library(akima)
interpolation <- interp(rr, veg_r, predict)
persp(interpolation)
Hi,
I've got a dataset with 7 variables for 8 different species. I'd like
to test the null hypothesis of no difference among species for these
variables. MANOVA seems like the appropriate test, but since I'm
unsure of how well the data fit the assumptions of equal
variance/covariance and multivariate normality
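For reference, the basic call looks like this (a minimal sketch on simulated
data; Pillai's trace is often cited as the test statistic most robust to
mild violations of the equal-covariance assumption):

set.seed(1)
dat <- data.frame(species = factor(rep(letters[1:8], each = 10)),
                  matrix(rnorm(80 * 7), 80, 7,
                         dimnames = list(NULL, paste0("v", 1:7))))
fit <- manova(as.matrix(dat[, -1]) ~ species, data = dat)
summary(fit, test = "Pillai")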
Does this do what you want?
> x <- " vara varb S PC
+ 1 None 250 1 80
+ 2 None 250 1 70
+ 3 Some 250 1 60
+ 4 Some 250 1 70
+ 5 None 1000 1 90
+ 6 None 1000 1 90
+ 7 Some 1000 1 80
+ 8 Some 1000 1 70
+ 9 None 250 2 100
+ 10 None 250 2 80
+ 11 Some 250 2 70
+ 12 Some 25
Hi,
recently I obtained a bunch of DNA sequences (with a chromosome coordinate
for each sequence), and I would like to know the relative distance between
these sequences and all the genes (genomic sequences) on the human genome,
i.e., are these sequences located upstream of the genes (5' end), downstream
Hi,
I have a data set that looks like this:
> data
vara varb S PC
1 None 250 1 80
2 None 250 1 70
3 Some 250 1 60
4 Some 250 1 70
5 None 1000 1 90
6 None 1000 1 90
7 Some 1000 1 80
8 Some 1000 1 70
9 None 250 2 100
10 None 250 2 80
11 Some 250 2 70
12 Some 250 2
Thanks, one and all... I knew it had to be simple.
On 3/14/07, Jason Barnhart <[EMAIL PROTECTED]> wrote:
>
> This should work.
>
> > test.df <- data.frame(x1=c(NA,2,3,NA), x2=c(1,2,3,4),
> > x3=c(1,NA,NA,4))
> > test.df
> x1 x2 x3
> 1 NA 1 1
> 2 2 2 NA
> 3 3 3 NA
> 4 NA 4 4
>
> > test.df
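For the record, the one-liner the thread converged on replaces every NA in
the data frame in a single logical-matrix indexing step:

test.df <- data.frame(x1 = c(NA, 2, 3, NA), x2 = c(1, 2, 3, 4),
                      x3 = c(1, NA, NA, 4))
test.df[is.na(test.df)] <- 0   # is.na() indexes all NA cells at once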
Laura Hill wrote:
> Hi
>
> Could anybody give me a bit of advice on some code I'm having trouble with?
>
> I've been trying to calculate the log-likelihood of a function iterated over
> a set of time values, and I seem to be experiencing difficulty when I use
> the function expm(). Here's an examp
I have been comparing the output of an R package to S+Finmetrics, and I
notice that the covariance matrices output by the two procedures are
different. The R package computes the covariance matrix using Method 1, and
I think (but I'm not sure) that S+Finmetrics computes it using Method 2.
I put
Hi, I know how to use LASSO for model selection based on the Cp criterion.
I heard that we can also use cross validation as a criterion too. I used
cv.lars to give me the lowest predicted error & fraction. But I'm short of
a step to arrive at the number of variables to be included in the final model.
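The missing step is roughly this (a hedged sketch; the component names
$index and $cv are from recent versions of the lars package and may differ
in older ones):

library(lars)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
cvres <- cv.lars(x, y, K = 10)               # CV error along the lasso path
best  <- cvres$index[which.min(cvres$cv)]    # fraction with lowest CV error
fit   <- lars(x, y, type = "lasso")
coefs <- predict(fit, s = best, type = "coefficients",
                 mode = "fraction")$coefficients
sum(coefs != 0)                              # number of variables retained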
Hi,
Does anyone know of an existing package/program that implements the
unbiased (or bias-reduced) sandwich variance estimator (Wu (1986) and
Carroll (2001)) for GEE estimates?
Thanks
Qiong
If Ted is right, then one work-around is to use Firth's method for penalized
log-likelihood. The technique is originally intended to reduce small sample
bias. However, it's now being extended to deal with complete and quasi
separation problems.
I believe the library is called logistf but I haven't
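A minimal sketch, assuming the package is indeed logistf and using a toy
data set with complete separation:

library(logistf)
d <- data.frame(y = rep(0:1, each = 5), x = 1:10)   # perfectly separated
fit <- logistf(y ~ x, data = d)   # Firth penalty keeps estimates finite
summary(fit)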
Hi
I suspect you will not get a useful response to such a poorly specified
question.
For automating procedures on data frames you can either loop or use
lapply; maybe do.call can also provide some functionality.
If you elaborate on what you did and in what respect it was
unsatisfactory, maybe someone can help further.
On 15-Mar-07 17:03:50, Milton Cezar Ribeiro wrote:
> Dear All,
>
> I would like to fit the following presence/absence data and obtain
> its "R2":
>
> x<-1:10
> y<-c(0,0,0,0,0,0,0,1,1,1)
>
> I tried to use clogit (survival package) but it didn't work.
>
> Any idea?
>
> miltinho
You are trying to fit
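The reply is cut off, but the standard tool here is a binomial GLM. Note
that this particular y is completely separated by x (all zeros up to x = 7,
all ones after), so glm() will warn and the slope estimate diverges, which
is exactly the situation where the Firth-penalized approach mentioned
elsewhere in this digest helps. A sketch:

x <- 1:10
y <- c(0, 0, 0, 0, 0, 0, 0, 1, 1, 1)
fit <- glm(y ~ x, family = binomial)   # warns: fitted probs of 0 or 1
1 - fit$deviance / fit$null.deviance   # McFadden pseudo-R2, one "R2" option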
Oops. Yep, I totally forgot my specs and such. I'm currently running
R-2.4.1 on a 64-bit Linux box (Fedora Core 6) with 4GB of RAM. The files
are 10-50Kb on average, but this error came about when only working with
~16,000 of them. The final size of the corpus is ~1.7M files. So,
obviously, this m
Joe,
here is a piece of code copied from my blog, showing how to tune mtry. HTH.
library(MASS);
library(randomForest);
data(Boston);
set.seed(2007);
# SEARCH FOR BEST VALUE OF MTRY FOR RANDOM FORESTS
mtry <- tuneRF(Boston[, -14], Boston[, 14], mtryStart = 1,
    stepFactor = 2, ntreeTry = 50)  # tail truncated in digest; 50 is tuneRF's default ntreeTry
I have a multivariate time series and I would like to build a forecasting
model with both AR and MA terms; I believe this is possible in R. I have
looked at the vars package, and it looks like it is possible to estimate MA
terms using the Phi and Psi functions, but I am not sure how to incorporate
Dear All,
I would like to fit the following presence/absence data and obtain its "R2":
x<-1:10
y<-c(0,0,0,0,0,0,0,1,1,1)
I tried to use clogit (survival package) but it didn't work.
Any idea?
miltinho
Hi,
This is a bit unrelated to R; I want, however, to use my results from
this in an R function.
I am trying to figure out a formula for the deviance given a
likelihood function. In my derivation I end up with having ln(y-1)
twice in my expression for the deviance (see attached pdf). Which
makes it
I am working on a project where we start with 2 long, equal-length
vectors, massage them in various ways, and end up with a function mapping
one interval to another. I'll call that function "f1." The last step in R
is to generate f1 as the value of the approxfun function. I would
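The question is cut off, but a minimal sketch of the pattern: approxfun()
returns an ordinary R closure, so f1 can be called, plotted, or have its
knots extracted for use elsewhere (the x and y below are toy stand-ins for
the two massaged vectors):

x  <- seq(0, 1, length.out = 101)
y  <- x^2
f1 <- approxfun(x, y)        # piecewise-linear map between the intervals
f1(c(0.25, 0.5))
ls(environment(f1))          # the knot data live in f1's environment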
Joe,
I'm guessing you are doing a 2-category problem. The three lines are the
OOB error curves: overall error plus one for each of the two categories.
There is only one default value of mtry. You can specify a different
mtry when the forest is built (in your call to randomForest()), but it
applies to the entire
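To make this concrete (a hedged sketch on a 2-class subset of iris;
plot.randomForest draws rf$err.rate, whose columns are the overall OOB rate
plus one per class):

library(randomForest)
d <- iris[iris$Species != "setosa", ]
d$Species <- factor(d$Species)           # drop the unused level
set.seed(1)
rf <- randomForest(Species ~ ., data = d, mtry = 2, ntree = 200)
plot(rf)                                 # three curves, as in the question
colnames(rf$err.rate)                    # "OOB" plus the two class labels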
On Mar 15, 2007, at 12:37 PM, Mark Wardle wrote:
> Dear all,
>
> I'm struggling with a plot and would value any help!
>
> I'm attempting to highlight a histogram and density plot to show a
> proportion of cases above a threshold value. I wanted to cross-
> hatch the
> area below the density curve
When using the plot.randomForest method, 3 error series (by number of trees)
are plotted. I suspect they are associated with the 3 default values of mtry
that are used, for example, in the tuneRF method but I'm not sure. Could
someone confirm?
Also, is it possible to force different values of m
Hi all,
I'm using SVM to classify data (2 classes) and I get strange results:
> model = svm(x, y, probability = TRUE)
> pred = predict(model, x, decision.values = TRUE, probability = FALSE)
> table(pred,y)
y
pred ctl nuc
ctl 82 3
nuc 5 84
> pred
1 2 3 4 5 6 7 8
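The post is cut off, but for the record, a hedged sketch of pulling class
probabilities out of e1071's svm (the data are toy values; the probabilities
ride along as an attribute of the prediction, and it is a known quirk that
the Platt-scaled probabilities can disagree with the hard class labels):

library(e1071)
set.seed(1)
x <- rbind(matrix(rnorm(50, 0), 25), matrix(rnorm(50, 2), 25))
y <- factor(rep(c("ctl", "nuc"), each = 25))
model <- svm(x, y, probability = TRUE)
pred <- predict(model, x, probability = TRUE)
table(pred, y)
head(attr(pred, "probabilities"))   # per-class probabilities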
Yes, your error is due to running out of memory. This is probably one
of the most frequent questions asked here, so if you search again you
can find a lot of advice on how to get around it.
As you learn more about R programming you will learn how to store data
more efficiently, rm() to remove v
Dear all,
I'm struggling with a plot and would value any help!
I'm attempting to highlight a histogram and density plot to show a
proportion of cases above a threshold value. I wanted to cross-hatch the
area below the density curve. The breaks and bandwidth are deliberate
integer values because o
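For the archives, the usual near-one-liner is polygon() with density= and
angle=, which produces the cross-hatching directly (a sketch with toy data
and a hypothetical threshold):

set.seed(1)
v   <- rnorm(200, mean = 10, sd = 3)
thr <- 12
hist(v, breaks = seq(floor(min(v)), ceiling(max(v)), by = 1), freq = FALSE)
d <- density(v, bw = 1)
lines(d)
keep <- d$x >= thr
polygon(c(thr, d$x[keep], max(d$x)), c(0, d$y[keep], 0),
        density = 10, angle = 45)        # hatched area above the threshold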
Mark Wardle wrote:
> Dear all,
>
> I'm struggling with a plot and would value any help!
> ...
>
> Is there a better way? As always, I'm sure there's a one-liner rather
> than my crude technique!
>
As always, I've spent ages trying to sort this, and then the minute
after sending an email, I find
Hello all,
I've been working with R & Fridolin Wild's lsa package a bit over the
past few months, but I'm still pretty much a novice. I have a lot of
files that I want to use to create a semantic space. When I begin to run
the initial textmatrix( ), it runs for about 3-4 hours and eventually
g
>>> [EMAIL PROTECTED] 15/03/2007 13:26:52 >>>
On 15-Mar-07 12:09:42, Eli Gurarie wrote:
> Hello all,
>
> I am fishing for some suggestions on efficient ways to make qdist and
> pdist type functions from an arbitrary distribution whose probability
> density function I've defined myself.
Ted Ha
Does anyone know where I can find a proof of the fact that when each X
matrix in a SUR is the same, SUR estimation is equivalent to OLS
estimation? The proof is supposedly in William Greene's book, but that book
costs $157.00 and has mixed reviews, so I am reluctant to purchase it.
Thanks.
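For what it's worth, the standard argument is short (a sketch, not Greene's
exact proof). Stack the m equations as y = (I_m ⊗ X)β + u with
Var(u) = Σ ⊗ I_n. Using (A ⊗ B)(C ⊗ D) = AC ⊗ BD throughout,

β_GLS = [(I_m ⊗ X)'(Σ^{-1} ⊗ I_n)(I_m ⊗ X)]^{-1} (I_m ⊗ X)'(Σ^{-1} ⊗ I_n) y
      = [Σ^{-1} ⊗ X'X]^{-1} (Σ^{-1} ⊗ X') y
      = (I_m ⊗ (X'X)^{-1}X') y,

i.e. the Σ^{-1} factors cancel and GLS reduces to equation-by-equation OLS.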
---
I have to perform ANOVA's on many different data organized in a dataframe. I
can run an ANOVA for each sample, but I've got hundreds of data and I would
like to avoid manually carrying out each test. in addition, I would like to
have the results organized in a simple way, for example in a table,
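A hedged sketch of the lapply/sapply pattern (the data frame, its grouping
column, and the response names here are all hypothetical):

set.seed(1)
dat <- data.frame(group = gl(3, 10),
                  matrix(rnorm(30 * 5), 30, 5,
                         dimnames = list(NULL, paste0("y", 1:5))))
pvals <- sapply(paste0("y", 1:5), function(v) {
    fit <- aov(reformulate("group", response = v), data = dat)
    summary(fit)[[1]][["Pr(>F)"]][1]      # p-value of the group effect
})
pvals                                     # one entry per response variable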
This is much easier in R-devel: just use message() and scan("stdin").
gannet% cat Test.R
message("Enter file name: ", appendLF=FALSE)
fn <- scan("stdin", what="", n=1)
works for me in R-devel via R --vanilla -f Test.R > Rout.txt
I believe it also works under Windows.
On Wed, 14 Mar 2007, John S
Giovanni Parrinello said the following on 3/15/2007 6:43 AM:
> Dear All,
> update.packages(ask='graphics')
> --- Please select a CRAN mirror for use in this session ---
> Error in .readRDS(pfile) : unknown input format.
> ???
> TIA
> Giovanni
>
I cannot replicate this in R-2.4.1. What version o
I had similar problems; it is actually very difficult to customize persp
graphics. You should try wireframe in the lattice package instead.
This reference might help with customizing wireframe plots:
http://www.polisci.ohio-state.edu/faculty/lkeele/3dinR.pdf
JR
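A minimal sketch of the lattice alternative:

library(lattice)
g <- expand.grid(x = seq(-3, 3, by = 0.25), y = seq(-3, 3, by = 0.25))
g$z <- dnorm(g$x) * dnorm(g$y)
wireframe(z ~ x * y, data = g, drape = TRUE,
          scales = list(arrows = FALSE, cex = 0.7))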
On Wed, 2007-03-14 at 20:40 -0700, Joseph
I can speak to some of these issues. I don't know about how much benefit
you can get from SMP for *single* instances of R, though.
1.) Multicore will be helpful, at least, if you are running several
instances of R at once. So, for example, if you have people running two
different models at the
Dear All,
update.packages(ask='graphics')
--- Please select a CRAN mirror for use in this session ---
Error in .readRDS(pfile) : unknown input format.
???
TIA
Giovanni
--
dr. Giovanni Parrinello
Department of Biotecnologies
Medical Statistics Unit
University of Brescia
Viale Europa, 11 25123 Bres
Hi,
we are looking for a new workstation for big datasets/applications (easily
up to 100'000 records and up to 300 variables) using R. As an example:
variable selection for a multivariate regression using stepAIC.
What is the best configuration for a workstation to reach a high performance
level
On 15-Mar-07 12:09:42, Eli Gurarie wrote:
> Hello all,
>
> I am fishing for some suggestions on efficient ways to make qdist and
> pdist type functions from an arbitrary distribution whose probability
> density function I've defined myself.
>
> For example, let's say I have a distribution whose
Hello,
they correspond to data from different treatments (EU) with different
dates. I would like to be able to run a comparison of k means over these
two variables, date and EU. An n-factor ANOVA is not the right solution.
I would like to know which tests make it possible to
Please do not mess up the thread by posting as a reply to some other topic.
Thanks,
Ranjan
On Thu, 15 Mar 2007 16:51:20 +0530 [EMAIL PROTECTED] wrote:
>
> Hi All
>
> Thanks for supporting people like me.
> What is cointegration and what is its connection with the Granger
> causality test? What is its
I use a windows Server 2003 machine to run R code in batch mode every
night using the following command:
"F:\Program Files\R\R-2.4.1pat\bin\R.exe" CMD BATCH --vanilla --slave
"batch_master_dr.R"
This produces an output file, of which the first three lines look like
this:
Loading required package
Hi
Could anybody give me a bit of advice on some code I'm having trouble with?
I've been trying to calculate the log-likelihood of a function iterated over
a set of time values, and I seem to be experiencing difficulty when I use
the function expm(). Here's an example of what I am trying to do
y
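The example was cut off in the digest; a hedged reconstruction of the
general pattern, assuming a continuous-time Markov model whose transition
probabilities come from expm() (here taken from the Matrix package) applied
to a rate matrix Q scaled by each waiting time; all values below are toys:

library(Matrix)
Q <- matrix(c(-1,  1,
               2, -2), 2, 2, byrow = TRUE)       # toy rate matrix
times <- c(0.5, 1.2, 0.3)                        # observed waiting times
from  <- c(1, 2, 1); to <- c(2, 1, 1)            # observed transitions
loglik <- sum(mapply(function(t, i, j) {
    P <- as.matrix(expm(Matrix(Q) * t))          # P(t) = exp(Qt)
    log(P[i, j])
}, times, from, to))
loglik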
Hello all,
I am fishing for some suggestions on efficient ways to make qdist and
pdist type functions from an arbitrary distribution whose probability
density function I've defined myself.
For example, let's say I have a distribution whose pdf is:
dRN <- function(x,d,v,s)
# d, v, and s are par
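One generic route (a minimal sketch; it assumes the pdf is proper and cheap
to evaluate) is to build the CDF with integrate() and invert it with
uniroot():

pnum <- function(q, dfun, lower = -Inf, ...)
    sapply(q, function(qq) integrate(dfun, lower, qq, ...)$value)
qnum <- function(p, dfun, interval = c(-50, 50), ...)
    sapply(p, function(pp)
        uniroot(function(x) pnum(x, dfun, ...) - pp, interval)$root)
# check against the normal as a stand-in for dRN:
pnum(1.96, dnorm)    # ~0.975
qnum(0.975, dnorm)   # ~1.96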
Hi List,
sorry if this is a FAQ: I could not find my way to it :-(
Once upon a time there was a package named "rpm" for "Robust
Point Matching".
I can find a few traces of it in the CRAN archives: works by Saussen
et al. (Aligning spectra with R), but I can't find the package anymore.
N
Hi All
Thanks for supporting people like me.
What is cointegration and what is its connection with the Granger causality
test? What is its use and the mathematical methodology behind it? Secondly,
is a cointegration test like the "Phillips-Ouliaris Cointegration Test" of
the tseries package or of the urca package the
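The question trails off, but on the R side tseries does ship the
Phillips-Ouliaris test as po.test(); a hedged sketch on two artificially
cointegrated series:

library(tseries)
set.seed(1)
w <- cumsum(rnorm(250))      # shared stochastic trend
x <- w + rnorm(250)
y <- 2 * w + rnorm(250)
po.test(cbind(x, y))         # null hypothesis: no cointegration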
FYI, to save data as bitmap images, see the EBImage package on Bioconductor.
/H
On 3/15/07, Ranjan Maitra <[EMAIL PROTECTED]> wrote:
> On Wed, 14 Mar 2007 18:45:53 -0700 (PDT) Milton Cezar Ribeiro <[EMAIL
> PROTECTED]> wrote:
>
> > Dear Friends,
> >
> > I saved a matrix - which contans values 0
On Thu, 2007-03-15 at 10:21 +0100, Peter Dalgaard wrote:
> Gavin Simpson wrote:
> > On Wed, 2007-03-14 at 20:16 -0700, Steven McKinney wrote:
> >
> >> Since you can index a matrix or dataframe with
> >> a matrix of logicals, you can use is.na()
> >> to index all the NA locations and replace them
On Thu, Mar 15, 2007 at 10:21:22AM +0100, Peter Dalgaard wrote:
> Gavin Simpson wrote:
> > On Wed, 2007-03-14 at 20:16 -0700, Steven McKinney wrote:
> >
> >> Since you can index a matrix or dataframe with
> >> a matrix of logicals, you can use is.na()
> >> to index all the NA locations and repla
Internally, the labels in persp are drawn as with the text() function,
so you cannot change their sizes with cex.lab, but you can with cex:
persp(x, y, z, cex = 1.5)
gives larger labels in the persp 3D plot.
Of course there may be side effects, because cex may change the size of
things other than the labels.
Dear Lukas,
although I'm intrigued by the purpose of what you are trying to do,
as mentioned by some of the other people on this list, I liked the
challenge of writing such a function.
I came up with the following during some train travel this morning:
tum <- function(x)
{
On Wed, Mar 14, 2007 at 06:30:53PM -0500, Ranjan Maitra wrote:
> I agree with Bert on this one! Any commercial entity's future policies will
> not be decided by some group's past understanding. Everything can be
> explained in terms of shareholder value.
>
Personally, I would go even further. I
Gavin Simpson wrote:
> On Wed, 2007-03-14 at 20:16 -0700, Steven McKinney wrote:
>
>> Since you can index a matrix or dataframe with
>> a matrix of logicals, you can use is.na()
>> to index all the NA locations and replace them
>> all with 0 in one command.
>>
>>
>
> A quicker solution, tha
On Wed, 2007-03-14 at 20:16 -0700, Steven McKinney wrote:
> Since you can index a matrix or dataframe with
> a matrix of logicals, you can use is.na()
> to index all the NA locations and replace them
> all with 0 in one command.
>
A quicker solution, that, IIRC, was posted to the list by Peter
D