More generally, anything that has a dim attribute is an array; that covers
1d, 2d, and 3d structures alike.
Matrices have a dim attribute, so matrices are arrays, and
is.array(m) will be TRUE if m is a matrix.
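For example:
m <- matrix(1:6, nrow = 2)   # dim(m) is c(2, 3)
is.array(m)                  # TRUE
is.matrix(m)                 # TRUE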
Can we do a Cholesky decomposition in R for any matrix?
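Base R's chol() handles only the symmetric positive-definite case; a minimal sketch:
A <- matrix(c(4, 2, 2, 3), nrow = 2)   # symmetric, positive definite
U <- chol(A)                           # upper-triangular factor
all.equal(t(U) %*% U, A)               # TRUE: A = U'U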
I think the more intuitive way to think of it is that dim works only
for matrices (an array being a 1-column matrix), and vectors are not
matrices.
> x <- 1:5
> class(x) # integer
> dim(x) <- 5
> class(x) # array
> dim(x) <- c(5,1)
> class(x) # matrix
> dim(x) <- c(1,5)
> class(x) # matrix
Hi all,
I'm not sure if this is a feature or a bug (and I did read the
FAQ and the posting guide, but am still not sure). Some of my
students have been complaining and I thought I just might ask:
Let K be a vector of length k. If one types dim(K), you get
NULL rather than [1] k. Is this logi
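What is happening:
K <- 1:7
dim(K)              # NULL: a plain vector carries no dim attribute
length(K)           # 7
dim(as.matrix(K))   # 7 1, once K is promoted to a one-column matrix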
I hesitate to add this comment since it either completely confuses people or
they take to it very quickly.
The data that you are using is mostly categorical. I expect that tables will
have been used in the past and that to a certain extent the graphics are
supposed to help with getting a quick
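For instance, a quick two-way table and its mosaic display (hypothetical data):
tab <- table(grp = sample(c("A", "B", "C"), 100, replace = TRUE),
             resp = sample(c("yes", "no"), 100, replace = TRUE))
tab
mosaicplot(tab, main = "two-way table at a glance")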
Judie,
You may want to see if the MedlineR library, which has a
program for constructing co-occurrence matrices, will work for
you. The program can be found at:
http://dbsr.duke.edu/pub/MedlineR/
Have fun with it,
Tim Liao
Professor of Sociology & Statistics
University of Illinois
Urbana, IL 6
Thanks for the responses to this question, I fully realise it is a rather open
question and the "open" pointers are the kind of thing I am looking for.
I will look into the lattice package and layout.
Regarding the HTML output, the current "tool chain" assets that I have have
been refactored ov
On Fri, 2005-01-21 at 01:48 +0100, Robin Gruna wrote:
> Hi,
> I want to plot two graphics on top of each other with layout(), a
> scatterplot and a barplot. The problems are the different x-axes
> ratios of the plots. How can I align the two x-axes? Thank you very
> much,
> Robin
Robin,
Here i
?points has this example
plot(-4:4, -4:4, type = "n")  # setting up the coordinate system
points(rnorm(200), rnorm(200), col = "red")
points(rnorm(100)/2, rnorm(100)/2, col = "blue", cex = 1.5)
In general you might want to check out the keyword section of the help, in
particular the Graphics section which
Dear R experts,
I have the data in the following format (from some kind of card sorting process)
ID  Category  Card numbers
 1      1     1,2,5
 1      2     3,4
 2      1     1,2
 2      2     3
 2      3     4,5
I want to transform this data into two co-occurrence
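One possible sketch, assuming "co-occurrence" counts how often two cards land in the same category for the same respondent (the data frame below is a reconstruction of the listing above):
d <- data.frame(ID = c(1, 1, 2, 2, 2),
                Category = c(1, 2, 1, 2, 3),
                Cards = c("1,2,5", "3,4", "1,2", "3", "4,5"))
cooc <- matrix(0, 5, 5)   # 5 cards in all
for (i in 1:nrow(d)) {
  cards <- as.integer(strsplit(as.character(d$Cards[i]), ",")[[1]])
  cooc[cards, cards] <- cooc[cards, cards] + 1
}
diag(cooc) <- 0   # ignore a card pairing with itself
cooc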
Hello,
I have three vectors defined as follows:
> x<-c(10,20,30,40,50)
> y1<-c(154,143,147,140,148)
> y2<-c(178,178,171,188,180)
I would like to plot y1 vs x and y2 vs x on the same graph. How might I do
this? I have looked through a help file on plots but could not find the
answer to plotting
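With the vectors above, points() adds the second series to an existing plot, and matplot() does both in one call:
plot(x, y1, ylim = range(y1, y2), pch = 16, col = "red")
points(x, y2, pch = 16, col = "blue")
matplot(x, cbind(y1, y2), type = "b", pch = 16)   # one-call alternative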
Hi,
I want to plot two graphics on top of each other with layout(), a scatterplot
and a barplot. The problem is that the two plots have different x-axis scales.
How can I align the two x-axes? Thank you very much,
Robin
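One way to make the axes line up: give both panels the same xlim and identical left/right margins. A sketch with made-up data, using type = "h" spikes in place of barplot(), whose bars sit on their own coordinate scale:
x <- 1:10; y <- rnorm(10); h <- runif(10)
layout(matrix(1:2, nrow = 2))
par(mar = c(0, 4, 2, 2))
plot(x, y, xlim = range(x), xaxt = "n")            # top: scatterplot, x labels suppressed
par(mar = c(4, 4, 0, 2))
plot(x, h, type = "h", lwd = 10, xlim = range(x))  # bottom: aligned "bars"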
Hi,
I want to draw a barplot at the axes of another plot. I saw that done, with
two histograms and a scatterplot, in an R graphics tutorial somewhere on the
net; it seemed to be a 2d histogram. Can someone figure out what I mean and
give me a hint to create such a graphic? Thank you very much,
Robin
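The usual recipe is layout(); a sketch with made-up data (matching the bar positions exactly to the x scale takes further fiddling):
x <- rnorm(200); y <- rnorm(200)
layout(matrix(2:1, nrow = 2), heights = c(1, 3))
par(mar = c(4, 4, 0.5, 2))
plot(x, y)                     # figure 1: the scatterplot (bottom panel)
par(mar = c(0.5, 4, 2, 2))
xh <- hist(x, plot = FALSE)
barplot(xh$counts, space = 0)  # figure 2: bars of x's distribution (top panel)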
"Dieter Menne" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> To call R from Delphi, you may try
> http://www.menne-biomed.de/download/RDComDelphi.zip.
I downloaded this file and tried to compile the RDCom project using Delphi 5
and Delphi 7, but I get this message from both compile
Hi,
We managed to compile R 2.0.1 on 64-bit SUSE Linux 9.1 on an HP
ProLiant setup fairly uneventfully by following the instructions in the R
installation guide. We did encounter a minor hiccup in setting up X11,
a problem which we note has been raised 4 or 5 times previously, but
this was overcome tha
Definitely check out the lattice package.
One other option is to use Sweave/LaTeX mixed with RODBC. This can be
used to produce PDFs for easy distribution as well. I would also
consider operating this in batch mode; the R/Sweave/LaTeX combination
works very well this way.
Shawn Way, PE
Engineering Mana
Hi All,
I want to fit a straight line to a group of two-dimensional data points with
errors in both x and y coordinates. I found there is an algorithm provided in
"NUMERICAL RECIPES IN C" http://www.library.cornell.edu/nr/bookcpdf/c15-3.pdf
I'm wondering if there is a similar function for this
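If equal error variances in x and y are acceptable, the orthogonal (total least squares) line is easy to get from the first principal component; a sketch with simulated data (this is not the error-weighted fitexy routine of Numerical Recipes):
x <- rnorm(50)
y <- 2 + 0.5 * x + rnorm(50, sd = 0.1)
v <- prcomp(cbind(x, y))$rotation[, 1]   # direction of the first principal axis
slope <- v[2] / v[1]
intercept <- mean(y) - slope * mean(x)   # the line passes through the centroid
c(intercept = intercept, slope = slope)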
The 99.7% accuracy you quoted, I take it, is the accuracy on the training
set. If so, that number hardly means anything (other than, perhaps,
self-fulfilling prophecy). Usually what one would want is for the model to
be able to predict data that weren't used to train the model with high
accuracy.
Hi all -
I am trying to tune an SVM model by optimizing the cross-validation
accuracy. Maximizing this value doesn't necessarily seem to minimize the
number of misclassifications. Can anyone tell me how the
cross-validation accuracy is defined? In the output below, for example,
cross-validation ac
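If this is e1071's svm() (an assumption), the cross = k argument reports the average accuracy over the k held-out folds, not the training accuracy, which is why it need not move in step with the training misclassification count. A sketch:
library(e1071)
data(iris)
m <- svm(Species ~ ., data = iris, cross = 10)
summary(m)   # per-fold and total 10-fold cross-validation accuracies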
Dear List:
First, many thanks to those who offered assistance while I constructed
code for the simulation. I think I now have code that resolves most of
the issues I encountered with memory.
While the code works perfectly for smallish datasets with small sample
sizes, it raises a Windows-based e
Dear Customer!
Our system has received the message you sent to the [EMAIL PROTECTED] address,
and a colleague of ours will reply to it shortly. If you wrote to us about an
ordering, delivery, or other administrative problem, please resend the message
to the [EMAIL PROTECTED] address so that the orders
Does the following do what you want (or at least get you closer)?
> tmp <- matrix(0, 16, 16)
> tmp[col(tmp) %% 4 == row(tmp) %% 4] <- 64
> tmp
...
Greg Snow, Ph.D.
Statistical Data Center
[EMAIL PROTECTED]
(801) 408-8111
>>> "Doran, Harold" <[EMAIL PROTECTED]> 01/20/05 07:17AM >>>
I should probably
>>> "Paul Sorenson" <[EMAIL PROTECTED]> 01/19/05 03:18PM >>>
>> I know enough about R to be dangerous and our marketing people have
>> asked me to "automate" some reporting. Data comes from an SQL
source
>> and graphs and various summaries are currently created manually in
>> Excel. The raw infor
I am running R 2.0.0 on a SunOS 5.9 machine and using Oracle 8i.1.7.0.0
(enterprise edition)
and when I try to load ROracle I receive the following error:
"require(ROracle)
Loading required package: ROracle
Loading required package: DBI
Error in dyn.load(x, as.logical(local), as.logical(now)
Dear all,
I am interested in correctly testing effects of continuous environmental
variables and ordered factors on bacterial abundance. Bacterial
abundance is derived from counts and expressed as a percentage. My problem
is that the abundance data contain many zero values:
Bacteria <-
c(2.23,0,0
I don't know about the 'in R' bit, but ISTR that Monte-Carlo (or pseudo
Monte-Carlo) Integration is a way of doing this 'numerically'. I know that
Mathematica implements the (pseudo Monte-Carlo)
Halton-Hammersley-Wozniakowski algorithm as NIntegrate. Perhaps something
equivalent has been coded by
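In R the plain (pseudo) Monte-Carlo version is only a few lines; a sketch for a two-dimensional integral over the unit square:
f <- function(x, y) exp(-(x^2 + y^2))
n <- 100000
u <- runif(n); v <- runif(n)
mean(f(u, v))            # estimate of the integral
sd(f(u, v)) / sqrt(n)    # its Monte-Carlo standard error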
I'm using GCC 3.4.2 to build R-2.0.1 from sources downloaded from the
University of Bristol mirror.
bash-2.05$ ./configure
R is now configured for sparc-sun-solaris2.9
Source directory: .
Installation directory: /usr/local
C compiler: gcc -g -O2
C++ compiler:
Greetings, Carla:
While it is possible to map any proper density into a normal through their
CDFs, that may not be useful in your case.
I suggest that you first plot your data.
?qqnorm
(Type ?qqnorm on the R command line and hit Enter.)
Are your data continuous, or do they occur in groups? Do
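That is, with a stand-in for the actual data:
carla.data <- rexp(100)   # stand-in: skewed, non-normal values
qqnorm(carla.data)
qqline(carla.data)        # systematic curvature away from the line = non-normal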
Hello,
I'm Carla, an Italian student, and I'm looking for a package to transform
non-normal data to normality. I tried to use Box-Cox, but it's not OK. Is
there a package for the Johnson families' transformation? Can you give me any
suggestions for finding free software such as R that uses this transform?
Thank
michael watson (IAH-C) wrote:
I think that title makes sense... I hope it does...
I have a data frame, one of the columns of which is a factor. I want
the rows of data that correspond to the level in that factor which
occurs the most times.
So first you want to determine the mode (in the sense o
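In code, something like this (made-up data frame):
mydata <- data.frame(myfact = factor(c("a", "b", "b", "c", "b")))
lev <- names(which.max(table(mydata$myfact)))   # the modal level: "b"
mydata[mydata$myfact == lev, , drop = FALSE]    # the rows at that level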
Hello
I would like to compare the results obtained with a classical
non-parametric proportional hazards model with a parametric proportional
hazards model using a Weibull.
How can we obtain the equivalence of the parameters using coxph (the
non-parametric model) and survreg (the parametric model)?
Than
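A sketch of the standard relationship, using the survival package's lung data: under a Weibull model the proportional-hazards log hazard ratio is minus the survreg (accelerated-failure-time) coefficient divided by the scale, and the Weibull shape is 1/scale.
library(survival)
aft <- survreg(Surv(time, status) ~ age, data = lung, dist = "weibull")
cph <- coxph(Surv(time, status) ~ age, data = lung)
-coef(aft)["age"] / aft$scale   # PH-scale coefficient implied by the Weibull fit
coef(cph)["age"]                # close, but not identical: coxph is semiparametric
1 / aft$scale                   # the Weibull shape parameter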
In complex analysis, Cauchy's integral theorem states (loosely speaking) that
the path integral of any entire differentiable function, around any closed
curve, is zero. I would like to see this numerically, using R (and indeed I
would like to use the residue theorem as well). Has anyone coded u
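Complex arithmetic in base R makes a crude numerical check straightforward; a sketch, parameterizing the unit circle as z = exp(it):
f <- function(z) exp(z)             # entire, so the contour integral should be ~0
t <- seq(0, 2 * pi, length = 1001)
z <- exp(1i * t)
sum(f(z[-1]) * diff(z))             # Riemann sum of f(z) dz: approximately 0+0i
# with f <- function(z) 1/z the same sum gives ~2*pi*1i (residue theorem)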
I should probably have explained my data and model a little better.
Assume I have student achievement scores across four time points. I
estimate the model using gls() as follows
fm1 <- gls(score ~ time, long, correlation=corAR1(form=~1|stuid),
method='ML')
I can now extract the variance-covarianc
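For the extraction itself, nlme's getVarCov() is one route (a sketch, assuming a reasonably recent nlme):
library(nlme)
getVarCov(fm1)   # fitted AR(1) variance-covariance matrix for one student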
On Jan 20, 2005, at 8:57 AM, michael watson ((IAH-C)) wrote:
I think that title makes sense... I hope it does...
I have a data frame, one of the columns of which is a factor. I want
the rows of data that correspond to the level in that factor which
occurs the most times.
I can get a list by doing:
newdata <- subset(mydata, mydata$myfact ==
                  names(which.max(table(mydata$myfact))))
Is there any particular site where I can get some examples and references on
Multiple Imputation using Bootstrapping?
I think that title makes sense... I hope it does...
I have a data frame, one of the columns of which is a factor. I want
the rows of data that correspond to the level in that factor which
occurs the most times.
I can get a list by doing:
by(data,data$pattern,subset)
And go through each eleme
Where can I get literature on Multiple Imputation using Additive
Regression, Bootstrapping, and Predictive Mean Matching?
I'm still not clear on exactly what your question is. If you can plug in
the numbers you want in, say, the lower triangular portion, you can copy
those to the upper triangular part easily; something like:
m[upper.tri(m)] <- t(m)[upper.tri(m)]  # mirror the lower triangle onto the upper
Is that what you're looking for?
Andy
> From: Doran, H
On Thu, 20 Jan 2005, Marco Sandri wrote:
Hi.
I have R (Ver 2.0) correctly running on a Suse 9.0
Linux machine.
32- or 64-bit?
I correctly installed the "Logic Regression" LogicReg library
(by the command: R CMD INSTALL LogicReg)
developed by Ingo Ruczinski and Charles Kooperberg:
http://bear.fhcrc
Dear List:
I am working to construct a matrix of a particular form. For the most
part, developing the matrix is simple and is built as follows:
vl.mat <- matrix(c(0,0,0,0, 0,64,0,0, 0,0,64,0, 0,0,0,64), nc = 4)
Now to expand this matrix to be block-diagonal, I do the following:
sample.size <- 100 # num
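kronecker() gives the block-diagonal expansion in one line; a sketch reusing vl.mat from above:
vl.big <- kronecker(diag(sample.size), vl.mat)  # sample.size diagonal copies
dim(vl.big)                                     # 400 x 400 for 100 blocks of 4x4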
Hi Paul,
I find your question intriguing, but might I ask that you elaborate on your
terminology and context of "lock out" and "hidden" in your question?
Otherwise I am afraid that my current ideas on an answer will surely be
based on the wrong diagnosis of what you are really looking for.
Thanks,
Hi.
I have R (Ver 2.0) correctly running on a Suse 9.0
Linux machine.
I correctly installed the "Logic Regression" LogicReg library
(by the command: R CMD INSTALL LogicReg)
developed by Ingo Ruczinski and Charles Kooperberg:
http://bear.fhcrc.org/~ingor/logic/html/program.html
When I try to load
Dear Prof Ripley,
thanks for your suggestions; it's very nice that one can create custom
connections directly in R, and I think it is what I need just now.
However, what is wrong with reading a file at a time and combining the
results in R using rbind?
Well, the problem is performance. If I concatena
Anyone
I'm wondering how to make confidence intervals (bonferroni or simultaneous)
when using Manova and Mancova in S-PLUS. I'm doing manova with four variables
on length and four variables on weight (of salmon). The measuring is done at
different time points. I'm working on my master's in the fi
On Thu, 20 Jan 2005, Tomas Kalibera wrote:
> is it possible to create my own connection which I could use with
Yes. In a sense, all the connections are custom connections written by
someone.
> read.table or scan? I would like to create a connection that would read
> from multiple files in sequence (l
Hello,
is it possible to create my own connection which I could use with
read.table or scan? I would like to create a connection that would read
from multiple files in sequence (like if they were concatenated),
possibly with an option to skip first n lines of each file. I would like
to avoid using
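Short of a real custom connection, one workaround is to read each file separately and bind once with do.call, which avoids the cost of growing an object with repeated rbind() calls; a sketch with hypothetical file names:
files <- c("part1.dat", "part2.dat")           # hypothetical file names
pieces <- lapply(files, read.table, skip = 5)  # skip first 5 lines of each file
combined <- do.call("rbind", pieces)           # one rbind over all pieces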
Hello!
I have run Rprof on a function of mine and the results look very strange,
to say the least. At the end of this email is the output of summaryRprof. Can
someone help me interpret this output? I have read the appropriate section in
the manual "Writing R Extensions" and the help pages.
If
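For reference, the usual profiling cycle looks like this (the file name is arbitrary and my.function is a stand-in for the real code):
my.function <- function() for (i in 1:200) qr(matrix(rnorm(10000), 100))
Rprof("prof.out")          # start collecting samples
res <- my.function()       # run the code to be profiled
Rprof(NULL)                # stop profiling
summaryRprof("prof.out")   # by.self and by.total breakdowns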
Hi,
see these links:
http://www.liacc.up.pt/~ltorgo/DataMiningWithR/
http://sawww.epfl.ch/SIC/SA/publications/FI01/fi-sp-1/sp-1-page45.html
Brian D. Ripley, Datamining: Large Databases and
Methods, in Proceedings of useR! 2004 - The R User
Conference, May 2004
http://www.ci.tuwien.ac.at/Confer