[R] cluster analysis and null hypothesis testing

2004-09-14 Thread Patrick Giraudoux
Hi,

I am wondering whether a Monte Carlo method (or equivalent) exists for testing the
randomness of a cluster analysis (e.g. one obtained by hclust()). I went through the
package "fpc" (maybe too superficially) but did not find such a method.

Thanks for any hint,

Patrick Giraudoux
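
A minimal sketch of one Monte Carlo approach (not an answer from the list; the
data and the clustering statistic below are only placeholders): compare a
statistic computed from hclust() on the observed data with its distribution
when each column is permuted independently, which destroys any joint cluster
structure while keeping the marginal distributions.

## Monte Carlo reference distribution for an hclust() statistic under
## column-wise permutation (placeholder data and statistic)
set.seed(1)
dat <- matrix(rnorm(50 * 4), 50, 4)
stat <- function(x) {
  hc <- hclust(dist(x))
  max(diff(hc$height))          # largest jump in the merge heights
}
obs  <- stat(dat)
null <- replicate(999, stat(apply(dat, 2, sample)))
(1 + sum(null >= obs)) / (1 + length(null))   # Monte Carlo p-value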



FW: [R] glmmPQL and random factors

2004-09-14 Thread Lorenz . Gygax

I have just realised that I sent this to Per only. For those interested on
the list:

-Original Message-
From: Gygax Lorenz FAT 
Sent: Tuesday, September 14, 2004 4:35 PM
To: 'Per Toräng'
Subject: RE: [R] glmmPQL and random factors

Hi Per,

> glmmPQL(Fruit.set~Treat1*Treat2+offset(log10(No.flowers)), 
> random=~1|Plot, family=poisson, data=...)
> 
> Plot is supposed to be nested in (Treat1*Treat2).
> Is this analysis suitable?

As far as I understand the methods, and in my experience with such
analyses, I would say that the model is ok the way you specified it.

glmmPQL (and the underlying lme) is intelligent enough (a thousand thanks to
the developers!) to recognise whether the treatments are fixed per plot, i.e.
whether only one level of the two treatments appears in each plot. The
denominator degrees of freedom in the anova table are adjusted automatically,
i.e. your denominator df should be the number of plots minus five, the number
of df you need for the fixed effects (Treat1, Treat2, the interaction, the
covariate, and the one df you always lose from the total number of
observations).

> Moreover, what is the meaning of typing 
> random=~1|Plot compared to random=~Treat1*Treat2|Plot?

The first version means that the intercept / overall mean can vary from
plot to plot, i.e. each plot may have a different mean simply because it
grows somewhere else, in addition to the differing treatments.

The second version tries to model a different reaction to treatments 1
and 2 for each of the plots (which does not make sense in your case, as each
plot is only subjected to one treatment combination).

In a crossed design, i.e. if you could have treated your plants individually
and had all treatment combinations in each of the plots, the first version
implies that all the plots react in the same consistent way to the
treatments: the general level of each plot may differ, but the differences
due to treatment are the same in each plot, so the reactions of the plots are
shifted but have the same shape (this is the same as saying that you only
consider main effects of treatment and plot).

The second version allows the reaction to the treatments to be estimated for
each plot, i.e. in addition to a general shift, the treatments may have
(slightly) different effects in each plot. This is the same as saying that
you consider interactions between your fixed and random effects. See also the
terrific book by Pinheiro & Bates (Mixed-Effects Models in S and S-PLUS,
Springer, 2000).
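
A minimal sketch of the two specifications discussed above ('dat' is a
placeholder for the data frame from the original post; the formula is taken
from the question):

library(MASS)   # glmmPQL
library(nlme)

## random intercept per plot (the specification in the original post)
m1 <- glmmPQL(Fruit.set ~ Treat1 * Treat2 + offset(log10(No.flowers)),
              random = ~ 1 | Plot, family = poisson, data = dat)
summary(m1)     # the t-table shows the automatically adjusted denominator df

## random treatment effects per plot -- only sensible when the treatment
## combinations vary within a plot (a crossed design)
m2 <- glmmPQL(Fruit.set ~ Treat1 * Treat2 + offset(log10(No.flowers)),
              random = ~ Treat1 * Treat2 | Plot, family = poisson, data = dat)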

Cheers, Lorenz
- 
Lorenz Gygax
Tel: +41 (0)52 368 33 84 / [EMAIL PROTECTED]  
Centre for proper housing of ruminants and pigs
Swiss Federal Veterinary Office



Re: [R] getting started on Bayesian analysis

2004-09-14 Thread HALL, MARK E
 I've found 
 
Bayesian Methods: A Social and Behavioral Sciences Approach
by Jeff Gill 

useful as an introduction.  The examples are written in R and S, with generalized scripts for doing
a variety of problems.  (Though I never got the change-point analysis to run successfully in R.)

Best, Mark Hall
Mark Hall
Archaeological Research Facility
UC Berkeley


[R] getting started on Bayesian analysis

2004-09-14 Thread Tamas K Papp
I am an economist who decided it's high time that I learned some
Bayesian statistics.  I am following An Introduction to Modern
Bayesian Econometrics by T. Lancaster.

The book recommends using BUGS, but I wonder whether there are any
alternatives that are free software and fully integrated with R (which
I have been using for more than two years for numerical computations).
I would like to learn what R packages (or other software)
statisticians use for Bayesian analysis in R, and whether there are viable
alternatives to BUGS, etc.

A couple of references to relevant packages, books or online tutorials
would help me a lot.

Thanks

Tamas



Re: [R] Spare some CPU cycles for testing lme?

2004-09-14 Thread John Christie
On Sep 13, 2004, at 8:47 PM, Tamas K Papp wrote:
It ran smoothly on my installation.
version
 _
platform powerpc-apple-darwin6.8
arch powerpc
os   darwin6.8
system   powerpc, darwin6.8
status
major1
minor9.1
year 2004
month06
day  21
language R
Typical lines from top (if it helps anything; around 45-50k
iterations):
1225 R.bin   90.7%  2:55:12   162  1164  71.7M+ 13.7M  66.6M+  228M+
1225 R.bin   78.6%  3:18:27   162  1606  81.3M+ 13.7M  75.5M+  234M+
I can confirm this; same installation here.


[R] pairs correlations colors

2004-09-14 Thread Valeria Edefonti
I have the following problem.
I want to use the pairs function to get a matrix of scatterplots with the
correlations in the upper panel and the ordinary scatterplots in the
lower panel.
Moreover, I want the points in the lower panel colored in five different
ways, because I have five subgroups.
To do that I tried to combine examples from the pairs help page.
I got a colored matrix using the hints for the iris dataset.
I got a black-and-white matrix with correlations using the function
panel.cor, exactly as in the example.
Unfortunately, the following code:

jpeg(filename = "/home/valeria/Thesis/lung/fig/scatterplotcolnames.jpg")
pairs(aggiunta[, 1:6],
      labels = c("ALCAM", "ITGB5", "MSN", "CSTB", "DHCR24", "TRIM29"),
      main = "Scatterplots selected genes", pch = 21,
      bg = c("red", "green3", "blue", "brown", "orange")[aggiunta[, 7]],
      upper.panel = panel.cor)
dev.off()

does not give me the desired matrix with both colors and
correlations.
I also tried to create a function panel.col for the lower.panel:

## put colors on the lower panels
panel.col <- function(datiepheno)
{
  usr <- par("usr"); on.exit(par(usr))
  par(bg = c("red", "green3", "blue", "brown", "orange")[datiepheno[, 7]],
      pch = 21, usr = c(0, 1, 0, 1))
}

but that doesn't work either.
Any idea?
I hope I have been precise, but not overly so!
Thank you very much
Valeria
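
A minimal sketch of one way to combine the two (not a reply from the list):
panel.cor below is adapted from the example in ?pairs, with an extra '...'
argument added so that the pch and bg settings handed to pairs() do not break
it; the iris data stand in for the poster's aggiunta.

## correlation panel adapted from the ?pairs example; the added '...'
## absorbs graphical arguments (pch, bg) that pairs() passes along
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...)
{
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- abs(cor(x, y))
  txt <- paste(prefix, format(c(r, 0.123456789), digits = digits)[1], sep = "")
  cex <- if (missing(cex.cor)) 0.8 / strwidth(txt) else cex.cor
  text(0.5, 0.5, txt, cex = cex * r)
}

data(iris)   # stand-in data; the Species column plays the role of the groups
pairs(iris[, 1:4], pch = 21,
      bg = c("red", "green3", "blue")[unclass(iris$Species)],
      upper.panel = panel.cor)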


[R] rolling n-step predictions for ARIMA models

2004-09-14 Thread Michael Roberts
Hello:

I would like to generate rolling, multiperiod forecasts from an 
estimated ARIMA model, but the function predict.Arima seems 
only to generate forecasts from the last observation in the data 
set.  To implement this, I was looking for an argument like 
'newdata=' in predict.lm.  

I can write some code that does this for my particular problem,
but is there perhaps an existing package/function for this that I
have not found?

Thanks,
-Michael




Michael J. Roberts

Resource Economics Division
Production, Management, and Technology
USDA-ERS
(202) 694-5557 (phone)
(202) 694-5775 (fax)
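
A minimal sketch of doing it by hand (not a packaged solution; the simulated
series, the order and the horizon are placeholders): re-fit the model on an
expanding window and forecast h steps ahead from each origin.

## rolling h-step-ahead forecasts from an expanding window
set.seed(1)
y <- arima.sim(list(ar = 0.6), n = 60)        # placeholder series
h <- 3                                        # forecast horizon
origins <- 20:(length(y) - h)                 # forecast origins
fc <- t(sapply(origins, function(k) {
  fit <- arima(y[1:k], order = c(1, 0, 0))    # re-fit on data up to the origin
  predict(fit, n.ahead = h)$pred
}))
dimnames(fc) <- list(paste("origin", origins), paste("step", 1:h))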



Re: [R] memory problem under windows

2004-09-14 Thread Prof Brian Ripley
Did you read the *rest* of what the rw-FAQ says?

  Be aware though that Windows has (in most versions) a maximum amount of
  user virtual memory of 2Gb, and parts of this can be reserved by 
  processes but not used. The version of the memory manager used from R
  1.9.0 allocates large objects in their own memory areas and so is better
  able to make use of fragmented virtual memory than that used previously.

  R can be compiled to use a different memory manager which might be
  better at using large amounts of memory, but is substantially slower
  (making R several times slower on some tasks).

So, it tells you about memory fragmentation, and it tells you about making 
R aware of large-memory versions of Windows and that an alternative memory 
manager can be used.  If you actually tried those, the posting guide asks 
you to indicate it, so I presume you did not.

Also, take seriously the idea of using a more capable operating system 
that is better able to manage 2Gb of RAM.
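
For reference, a quick back-of-the-envelope check (not part of the reply) of
what the allocation in the question asks for; the point of the FAQ excerpt is
that this must be a single contiguous block of address space, which
fragmentation can make unavailable even well below the 2Gb limit.

## size of the requested array of doubles (8 bytes per element)
x.dim <- 46; y.dim <- 58; slices <- 40; volumes <- 1040
n <- x.dim * y.dim * slices * volumes    # 110,988,800 elements
n * 8 / 2^20                             # roughly 847 Mb in one block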


On Tue, 14 Sep 2004, Christoph Lehmann wrote:

> I have (still) some memory problems, when trying to allocate a huge array:
> 
> WinXP pro, with 2G RAM
> 
> I start R by calling: 
> 
> Rgui.exe --max-mem-size=2Gb (as pointed out in R for windows FAQ)
> 
> R.Version(): i386-pc-mingw32, 9.1, 21.6.2004
> 
> ## and here the problem
> x.dim <- 46
> y.dim <- 58
> slices <- 40
> volumes <- 1040
> a <- rep(0, x.dim * y.dim * slices * volumes)
> dim(a) <- c(x.dim, y.dim, slices, volumes)
> 
> gives me: "Error: cannot allocate vector of size 850425 Kb"
> 
> even though
> 
> memory.limit(size = NA)
> yields 2147483648
> 
> and
> 
> memory.size()
> gives 905838768
> 
> so why is that and what can I do against it?
> 
> Many thanks for your kind help
> 
> Cheers
> 
> Christoph
> 
> 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] memory problem under windows

2004-09-14 Thread James W. MacDonald
Christoph Lehmann wrote:
I have (still) some memory problems, when trying to allocate a huge array:
WinXP pro, with 2G RAM
I start R by calling:   

Rgui.exe --max-mem-size=2Gb (as pointed out in R for windows FAQ)
Not sure that it actually says to use 2Gb there. You might try 
--max-mem-size=2000M, which seems to work better for me.

HTH,
Jim

--
James W. MacDonald
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109


[R] memory problem under windows

2004-09-14 Thread Christoph Lehmann
I have (still) some memory problems, when trying to allocate a huge array:
WinXP pro, with 2G RAM
I start R by calling:   
Rgui.exe --max-mem-size=2Gb (as pointed out in R for windows FAQ)
R.Version(): i386-pc-mingw32, 9.1, 21.6.2004
## and here the problem
x.dim <- 46
y.dim <- 58
slices <- 40
volumes <- 1040
a <- rep(0, x.dim * y.dim * slices * volumes)
dim(a) <- c(x.dim, y.dim, slices, volumes)
gives me: "Error: cannot allocate vector of size 850425 Kb"
even though
memory.limit(size = NA)
yields  2147483648
and
memory.size()
gives 905838768
so why is that and what can I do against it?
Many thanks for your kind help
Cheers
Christoph
--
Christoph LehmannPhone:  ++41 31 930 93 83
Department of Psychiatric NeurophysiologyMobile: ++41 76 570 28 00
University Hospital of Clinical Psychiatry   Fax:++41 31 930 99 61
Waldau[EMAIL PROTECTED]
CH-3000 Bern 60 http://www.puk.unibe.ch/cl/pn_ni_cv_cl_03.html


Re: [R] reshaping some data

2004-09-14 Thread Gabor Grothendieck

Here is another variation.  It uses LOCF -- last observation
carried forward -- a function which takes a logical vector and,
for each element, gives the index of the last TRUE value at or
before it.  The version of LOCF here assumes that the first
element of the argument is TRUE, which happens to be the case here.

LOCF <- function(L) which(L)[cumsum(L)]

is.x <- substr(colnames(x),1,1) == "x"
x. <- unlist(x[,LOCF(is.x)[!is.x]])
names(x.) <- NULL
data.frame(x = x., y = unlist(x[,!is.x]), row.names = NULL) 
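
As a quick check with the example data frame from the original post (using
LOCF as defined above), the index it produces maps every y column back to the
x column that precedes it:

x <- data.frame(x1 =  1: 5, y11 =  1: 5,
                x2 =  6:10, y21 =  6:10, y22 = 11:15,
                x3 = 11:15, y31 = 16:20,
                x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
is.x <- substr(colnames(x), 1, 1) == "x"
LOCF(is.x)          # 1 1 3 3 3 6 6 8 8 8 8
LOCF(is.x)[!is.x]   # 1 3 3 6 8 8 8 -- the x column paired with each y column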


I found Peter's solution particularly clever.  Note that it
depends on the y colnames having the same first digit as the
corresponding x colnames; however they need not be in any
specific order, whereas the solution above and my previous 
one below depend on the y names being immediately after the 
x names but do not depend on the detailed content of the names.  
In the present case both these assumptions appear to hold
but in different situations one or the other of these assumptions
might be preferable.



Gabor Grothendieck  myway.com> writes:

: 
: Try this:
: 
: is.x <- substr(colnames(x),1,1) == "x"   # TRUE if col name starts with x
: x. <- unlist(rep(x[,is.x], diff(which(c(is.x,TRUE)))-1))   # repeat x cols
: names(x.) <- NULL
: y. <- unlist(x[,!is.x])
: DF <- data.frame(x = x., y = y., row.names = NULL)
: 
: Sundar Dorai-Raj  PDF.COM> writes:
: 
: : 
: : Hi all,
: :I have a data.frame with the following colnames pattern:
: : 
: : x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
: : 
: : I.e. I have an x followed by a few y's. What I would like to do is turn 
: : this wide format into a tall format with two columns: "x", "y". The 
: : structure is that xi needs to be associated with yij (e.g. x1 should be
: : next to y11 and y12, x2 should be next to y21, y22, and y23, etc.).
: : 
: :   x   y
: : x1 y11
: : x2 y21
: : x2 y22
: : x2 y23
: : x3 y31
: : x3 y32
: : ...
: : 
: : I have looked at ?reshape but I didn't see how it could work with this 
: : structure. I have a solution using nested for loops (see below), but 
: : it's slow and not very efficient. I would like to find a vectorised 
: : solution that would achieve the same thing.
: : 
: : Now, for an example:
: : 
: : x <- data.frame(x1 =  1: 5, y11 =  1: 5,
: :  x2 =  6:10, y21 =  6:10, y22 = 11:15,
: :  x3 = 11:15, y31 = 16:20,
: :  x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
: : # which are the x columns
: : nmx <- grep("^x", names(x))
: : # which are the y columns
: : nmy <- grep("^y", names(x))
: : # grab y values
: : y <- unlist(x[nmy])
: : # reserve some space for the x's
: : z <- vector("numeric", length(y))
: : # a loop counter
: : k <- 0
: : n <- nrow(x)
: : seq.n <- seq(n)
: : # determine how many times to repeat the x's
: : repy <- diff(c(nmx, length(names(x)) + 1)) - 1
: : for(i in seq(along = nmx)) {
: :for(j in seq(repy[i])) {
: :  # store the x values in the appropriate z indices
: :  z[seq.n + k * n] <- x[, nmx[i]]
: :  # move to next block in z
: :  k <- k + 1
: :}
: : }
: : data.frame(x = z, y = y, row.names = NULL)
: 



Re: [R] A question please

2004-09-14 Thread Jeff Laake
Lawrence-
Are you familiar with the computer software MARK 
(http://www.cnr.colostate.edu/~gwhite/mark/mark.htm)?
It is a very complete package for analysis of capture-recapture data.  I 
have written an interface to MARK in R but it isn't ready for 
distribution. I'm presently in the process of creating the R help files 
and turning it into an R package. No promises on when I'll be done 
though. Currently my R interface does not support all of the models nor 
capabilities in MARK.  It is an evolving package. I developed it for my 
own use to be able to use the formula and design matrix capabilities in 
R to more easily develop models in MARK. I'm a little hesitant to send
this message because I'm not looking to provide package support. You
may also want to be aware of the package WISP for R, written by Walter
Zucchini and David Borchers as a companion to their book on abundance
estimation. See http://www.ruwpa.st-and.ac.uk/estimating.abundance/WiSP/

I'd be interested in hearing of other responses that you receive.
--jeff
Lawrence Lessner wrote:
Hi, does R have a procedure/software for capture-recapture?  Thank you.
Lawrence Lessner
Please respond to [EMAIL PROTECTED]


Re: [R] A question please

2004-09-14 Thread Spencer Graves
 At "www.r-project.org" -> search -> "R site search", I just got 11 
hits for "capture-recapture".  Did you try this?  They may not all be 
relevant, but I suspect that some of them might be. 

 hope this helps.  spencer graves

Lawrence Lessner wrote:
Hi, does R have a procedure/software for capture-recapture?  Thank you.
Lawrence Lessner
Please respond to [EMAIL PROTECTED]
 

--
Spencer Graves, PhD, Senior Development Engineer
O:  (408)938-4420;  mobile:  (408)655-4567


[R] A question please

2004-09-14 Thread Lawrence Lessner
Hi, does R have a procedure/software for capture-recapture?  Thank you.
Lawrence Lessner
Please respond to [EMAIL PROTECTED]



Re: [R] Separation between tick marks and tick labels in lattice

2004-09-14 Thread Deepayan Sarkar
On Tuesday 14 September 2004 11:44, Waichler, Scott R wrote:
> I can't find a way to control the distance between tick marks and tick
> labels
> in lattice.  In a stripplot I'm making these components are too close.
> I don't
> see anything like base graphics mgp in the scales list.

There's none in the released version. There is a way in the r-devel
version, not via scales, but via the settings, e.g. something like

trellis.par.set(axis.components  = list(left = list(pad1 = 3)))

Deepayan
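
A minimal usage sketch of that settings approach (r-devel lattice only, as
noted; the singer data merely give something to plot, and depending on the
lattice version a trellis device may need to be open before the setting is
applied):

library(lattice)
data(singer)
## widen the gap between the left-axis tick marks and their labels
trellis.par.set(axis.components = list(left = list(pad1 = 3)))
stripplot(voice.part ~ height, data = singer)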



Re: [R] reshaping some data

2004-09-14 Thread Gabor Grothendieck

Try this:

is.x <- substr(colnames(x),1,1) == "x"   # TRUE if col name starts with x
x. <- unlist(rep(x[,is.x], diff(which(c(is.x,TRUE)))-1))   # repeat x cols
names(x.) <- NULL
y. <- unlist(x[,!is.x])
DF <- data.frame(x = x., y = y., row.names = NULL)



Sundar Dorai-Raj  PDF.COM> writes:

: 
: Hi all,
:I have a data.frame with the following colnames pattern:
: 
: x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
: 
: I.e. I have an x followed by a few y's. What I would like to do is turn 
: this wide format into a tall format with two columns: "x", "y". The 
: : structure is that xi needs to be associated with yij (e.g. x1 should be
: next to y11 and y12, x2 should be next to y21, y22, and y23, etc.).
: 
:   x   y
: x1 y11
: x2 y21
: x2 y22
: x2 y23
: x3 y31
: x3 y32
: ...
: 
: I have looked at ?reshape but I didn't see how it could work with this 
: structure. I have a solution using nested for loops (see below), but 
: it's slow and not very efficient. I would like to find a vectorised 
: solution that would achieve the same thing.
: 
: Now, for an example:
: 
: x <- data.frame(x1 =  1: 5, y11 =  1: 5,
:  x2 =  6:10, y21 =  6:10, y22 = 11:15,
:  x3 = 11:15, y31 = 16:20,
:  x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
: # which are the x columns
: nmx <- grep("^x", names(x))
: # which are the y columns
: nmy <- grep("^y", names(x))
: # grab y values
: y <- unlist(x[nmy])
: # reserve some space for the x's
: z <- vector("numeric", length(y))
: # a loop counter
: k <- 0
: n <- nrow(x)
: seq.n <- seq(n)
: # determine how many times to repeat the x's
: repy <- diff(c(nmx, length(names(x)) + 1)) - 1
: for(i in seq(along = nmx)) {
:for(j in seq(repy[i])) {
:  # store the x values in the appropriate z indices
:  z[seq.n + k * n] <- x[, nmx[i]]
:  # move to next block in z
:  k <- k + 1
:}
: }
: data.frame(x = z, y = y, row.names = NULL)



Re: [R] reshaping some data

2004-09-14 Thread Sundar Dorai-Raj
Bert,
  Coming up with "nvec" was what I was missing. Modifying your solution 
slightly, here's what I ended up with:

z <- x[, nmx]
nvec <- seq(length(nmx))
z <- unlist(z[, rep(nvec, repy)])
z2 <- data.frame(x = z, y = y, row.names = NULL)
Thanks again,
--sundar
Berton Gunter wrote:
Sundar:
As I understand it, you can easily create an index variable (a pointer,
actually) that will pick out the y columns in order:
z<-yourdataframe
y<-as.vector(z[,indexvar])
So if you could cbind() the x's, you'd be all set.
Again, assuming I understand correctly, the x column you want is:
x<-z[,-indexvar] ## still a frame/matrix
nvec<-seq(length=ncol(x))
x<-as.vector(x[,rep(nvec,times=nvec)])
HTH -- and even if I got it wrong, it was fun, so thanks.
-- Bert
-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 


-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Sundar 
Dorai-Raj
Sent: Tuesday, September 14, 2004 9:16 AM
To: R-help
Subject: [R] reshaping some data

Hi all,
  I have a data.frame with the following colnames pattern:
x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
I.e. I have an x followed by a few y's. What I would like to 
do is turn 
this wide format into a tall format with two columns: "x", "y". The 
structure is that xi needs to be associated with yij (e.g. x1 should be
next to y11 and y12, x2 should be next to y21, y22, and y23, etc.).

 x   y
x1 y11
x2 y21
x2 y22
x2 y23
x3 y31
x3 y32
...
I have looked at ?reshape but I didn't see how it could work 
with this 
structure. I have a solution using nested for loops (see below), but 
it's slow and not very efficient. I would like to find a vectorised 
solution that would achieve the same thing.

Now, for an example:
x <- data.frame(x1 =  1: 5, y11 =  1: 5,
x2 =  6:10, y21 =  6:10, y22 = 11:15,
x3 = 11:15, y31 = 16:20,
x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
# which are the x columns
nmx <- grep("^x", names(x))
# which are the y columns
nmy <- grep("^y", names(x))
# grab y values
y <- unlist(x[nmy])
# reserve some space for the x's
z <- vector("numeric", length(y))
# a loop counter
k <- 0
n <- nrow(x)
seq.n <- seq(n)
# determine how many times to repeat the x's
repy <- diff(c(nmx, length(names(x)) + 1)) - 1
for(i in seq(along = nmx)) {
  for(j in seq(repy[i])) {
# store the x values in the appropriate z indices
z[seq.n + k * n] <- x[, nmx[i]]
# move to next block in z
k <- k + 1
  }
}
data.frame(x = z, y = y, row.names = NULL)





Re: [R] reshaping some data

2004-09-14 Thread Peter Wolf
Try:
x <- data.frame(x1 =  1: 5, y11 =  1: 5,
                x2 =  6:10, y21 =  6:10, y22 = 11:15,
                x3 = 11:15, y31 = 16:20,
                x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
df.names<-names(x)
ynames<-df.names[grep("y",df.names)]
xnames<-substring(sub("y","x",ynames),1,2)
cbind(unlist(x[,xnames]),unlist(x[,ynames]))
Peter
Sundar Dorai-Raj wrote:
Hi all,
  I have a data.frame with the following colnames pattern:
x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
I.e. I have an x followed by a few y's. What I would like to do is 
turn this wide format into a tall format with two columns: "x", "y". 
The structure is that xi needs to be associated with yij (e.g. x1 
should be next to y11 and y12, x2 should be next to y21, y22, and y23,
etc.).

 x   y
x1 y11
x2 y21
x2 y22
x2 y23
x3 y31
x3 y32
...
I have looked at ?reshape but I didn't see how it could work with this 
structure. I have a solution using nested for loops (see below), but 
it's slow and not very efficient. I would like to find a vectorised 
solution that would achieve the same thing.

Now, for an example:
x <- data.frame(x1 =  1: 5, y11 =  1: 5,
x2 =  6:10, y21 =  6:10, y22 = 11:15,
x3 = 11:15, y31 = 16:20,
x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
# which are the x columns
nmx <- grep("^x", names(x))
# which are the y columns
nmy <- grep("^y", names(x))
# grab y values
y <- unlist(x[nmy])
# reserve some space for the x's
z <- vector("numeric", length(y))
# a loop counter
k <- 0
n <- nrow(x)
seq.n <- seq(n)
# determine how many times to repeat the x's
repy <- diff(c(nmx, length(names(x)) + 1)) - 1
for(i in seq(along = nmx)) {
  for(j in seq(repy[i])) {
# store the x values in the appropriate z indices
z[seq.n + k * n] <- x[, nmx[i]]
# move to next block in z
k <- k + 1
  }
}
data.frame(x = z, y = y, row.names = NULL)


Re: [R] Signs of loadings from princomp on Windows

2004-09-14 Thread Tony Plate
FWIW, I see the same behavior as Francisco on my Windows machine (also an 
installation of the windows binary without trying to install any special 
BLAS libraries):

> library(MASS)
> data(painters)
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)
Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition  0.484 -0.376  0.784 -0.101
Drawing  0.424  0.187 -0.280 -0.841
Colour  -0.381 -0.845 -0.211 -0.310
Expression   0.664 -0.330 -0.513  0.432
               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)
Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition -0.484 -0.376  0.784 -0.101
Drawing -0.424  0.187 -0.280 -0.841
Colour   0.381 -0.845 -0.211 -0.310
Expression  -0.664 -0.330 -0.513  0.432
               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
> R.version
 _
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major1
minor9.1
year 2004
month06
day  21
language R
>
My machine is a dual-processor hp xw8000.
I also get the same results with R 2.0.0 dev as in
> R.version
 _
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status   Under development (unstable)
major2
minor0.0
year 2004
month09
day  13
language R
>
-- Tony Plate
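
One way to make repeated runs directly comparable (an aside, not something
proposed in the thread) is to impose a sign convention on the loadings, e.g.
flip each column so that its largest-magnitude loading is positive, and flip
the scores to match:

library(MASS)
data(painters)
pca <- princomp(painters[, 1:4])
L <- unclass(loadings(pca))
## sign of the largest-magnitude loading in each column
flip <- sign(L[cbind(apply(abs(L), 2, which.max), seq(ncol(L)))])
L.fixed      <- sweep(L, 2, flip, "*")
scores.fixed <- sweep(pca$scores, 2, flip, "*")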
At Tuesday 10:25 AM 9/14/2004, Prof Brian Ripley wrote:
On Tue, 14 Sep 2004, Francisco Chamu wrote:
> I have run this on both Windows 2000 and XP.  All I did was install
> the binaries from CRAN so I think I am using the standard Rblas.dll.
>
> To reproduce what I see you must run the code at the beginning of the
> R session.
We did, as you said `start a clean session'.
I think to reproduce what you see we have to be using your account on your
computer.
> After the second run, all subsequent runs give the same
> result as the second set.
>
> Thanks,
> Francisco
>
>
> On Tue, 14 Sep 2004 08:29:25 +0200, Uwe Ligges
> <[EMAIL PROTECTED]> wrote:
> > Prof Brian Ripley wrote:
> > > I get the second set each time, on Windows, using the build from CRAN.
> > > Which BLAS are you using?
> >
> >
> > Works also well for me with a self compiled R-1.9.1 (both with standard
> > Rblas as well as with the Rblas.dll for Athlon CPU from CRAN).
> > Is this a NT-based version of Windows (NT, 2k, XP)?
> >
> > Uwe
> >
> >
> >
> >
> > > On Tue, 14 Sep 2004, Francisco Chamu wrote:
> > >
> > >
> > >>I start a clean session of R 1.9.1 on Windows and I run the 
following code:
> > >>
> > >>
> > >>>library(MASS)
> > >>>data(painters)
> > >>>pca.painters <- princomp(painters[ ,1:4])
> > >>>loadings(pca.painters)
> > >>
> > >>Loadings:
> > >>Comp.1 Comp.2 Comp.3 Comp.4
> > >>Composition  0.484 -0.376  0.784 -0.101
> > >>Drawing  0.424  0.187 -0.280 -0.841
> > >>Colour  -0.381 -0.845 -0.211 -0.310
> > >>Expression   0.664 -0.330 -0.513  0.432
> > >>
> > >>   Comp.1 Comp.2 Comp.3 Comp.4
> > >>SS loadings  1.00   1.00   1.00   1.00
> > >>Proportion Var   0.25   0.25   0.25   0.25
> > >>Cumulative Var   0.25   0.50   0.75   1.00
> > >>
> > >>However, if I rerun the same analysis, the loadings of the first
> > >>component have the opposite sign (see below), why is that?  I have
> > >>read the note
> > >>in the princomp help that says
> > >>
> > >>"The signs of the columns of the loadings and scores are arbitrary,
> > >> and so may differ between different programs for PCA, and even
> > >> between different builds of R."
> > >>
> > >>However, I still would expect the same signs for two runs in the 
same session.
> > >>
> > >>
> > >>>pca.painters <- princomp(painters[ ,1:4])
> > >>>loadings(pca.painters)
> > >>
> > >>Loadings:
> > >>Comp.1 Comp.2 Comp.3 Comp.4
> > >>Composition -0.484 -0.376  0.784 -0.101
> > >>Drawing -0.424  0.187 -0.280 -0.841
> > >>Colour   0.381 -0.845 -0.211 -0.310
> > >>Expression  -0.664 -0.330 -0.513  0.432
> > >>
> > >>   Comp.1 Comp.2 Comp.3 Comp.4
> > >>SS loadings  1.00   1.00   1.00   1.00
> > >>Proportion Var   0.25   0.25   0.25   0.25
> > >>Cumulative Var   0.25   0.50   0.75   1.00
> > >>
> > >>>R.version
> > >>
> > >> _
> > >>platform i386-pc-mingw32
> > >>arch i386
> > >>os   mingw32
> > >>system   i386, mingw32
> > >>status
> > >>major1
> > >>minor9.1
> > >>year 2004
> > >>month06
> > >>day  21
> > >>language R
> > >>
> > >>BTW, I have tried the same in R 1.9.1 on Debian and I can't reproduce
> > >>what I see
> > >>on Windows.  In fact all the runs give the same as the second run 
on Windows.
> > >>
> > >>-Francisco
> > >>

RE: [R] Signs of loadings from princomp on Windows

2004-09-14 Thread Liaw, Andy
Ditto here, although not from a fresh session.  Also 1.9.1 binary from CRAN,
on WinXPPro:

> library(MASS)
> data(painters)
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)

Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition  0.484 -0.376  0.784 -0.101
Drawing  0.424  0.187 -0.280 -0.841
Colour  -0.381 -0.845 -0.211 -0.310
Expression   0.664 -0.330 -0.513  0.432

               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)

Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition -0.484 -0.376  0.784 -0.101
Drawing -0.424  0.187 -0.280 -0.841
Colour   0.381 -0.845 -0.211 -0.310
Expression  -0.664 -0.330 -0.513  0.432

               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00

Andy

> From: Sundar Dorai-Raj
> 
> Hi all,
> I was able to replicate Francisco's observation. I'm using R-1.9.1 
> installed from binaries on Windows 2000 Pro.
> 
> [Previously saved workspace restored]
> 
>  > library(MASS)
>  > data(painters)
>  > pca.painters <- princomp(painters[ ,1:4])
>  > loadings(pca.painters)
> 
> Loadings:
>  Comp.1 Comp.2 Comp.3 Comp.4
> Composition  0.484 -0.376  0.784 -0.101
> Drawing  0.424  0.187 -0.280 -0.841
> Colour  -0.381 -0.845 -0.211 -0.310
> Expression   0.664 -0.330 -0.513  0.432
> 
> Comp.1 Comp.2 Comp.3 Comp.4
> SS loadings  1.00   1.00   1.00   1.00
> Proportion Var   0.25   0.25   0.25   0.25
> Cumulative Var   0.25   0.50   0.75   1.00
>  > pca.painters <- princomp(painters[ ,1:4])
>  > loadings(pca.painters)
> 
> Loadings:
>  Comp.1 Comp.2 Comp.3 Comp.4
> Composition -0.484 -0.376  0.784 -0.101
> Drawing -0.424  0.187 -0.280 -0.841
> Colour   0.381 -0.845 -0.211 -0.310
> Expression  -0.664 -0.330 -0.513  0.432
> 
> Comp.1 Comp.2 Comp.3 Comp.4
> SS loadings  1.00   1.00   1.00   1.00
> Proportion Var   0.25   0.25   0.25   0.25
> Cumulative Var   0.25   0.50   0.75   1.00
>  > R.version
>   _
> platform i386-pc-mingw32
> arch i386
> os   mingw32
> system   i386, mingw32
> status
> major1
> minor9.1
> year 2004
> month06
> day  21
> language R
> 
> Francisco Chamu wrote:
> 
> > I have run this on both Windows 2000 and XP.  All I did was install
> > the binaries from CRAN so I think I am using the standard Rblas.dll.
> > 
> > To reproduce what I see you must run the code at the 
> beginning of the
> > R session.  After the second run, all subsequent runs give the same
> > result as the second set.
> > 
> > Thanks,
> > Francisco
> > 
> > 
> > On Tue, 14 Sep 2004 08:29:25 +0200, Uwe Ligges
> > <[EMAIL PROTECTED]> wrote:
> > 
> >>Prof Brian Ripley wrote:
> >>
> >>>I get the second set each time, on Windows, using the 
> build from CRAN.
> >>>Which BLAS are you using?
> >>
> >>
> >>Works also well for me with a self compiled R-1.9.1 (both 
> with standard
> >>Rblas as well as with the Rblas.dll for Athlon CPU from CRAN).
> >>Is this a NT-based version of Windows (NT, 2k, XP)?
> >>
> >>Uwe
> >>
> >>
> >>
> >>
> >>
> >>>On Tue, 14 Sep 2004, Francisco Chamu wrote:
> >>>
> >>>
> >>>
> I start a clean session of R 1.9.1 on Windows and I run 
> the following code:
> 
> 
> 
> >library(MASS)
> >data(painters)
> >pca.painters <- princomp(painters[ ,1:4])
> >loadings(pca.painters)
> 
> Loadings:
>    Comp.1 Comp.2 Comp.3 Comp.4
> Composition  0.484 -0.376  0.784 -0.101
> Drawing  0.424  0.187 -0.280 -0.841
> Colour  -0.381 -0.845 -0.211 -0.310
> Expression   0.664 -0.330 -0.513  0.432
> 
>   Comp.1 Comp.2 Comp.3 Comp.4
> SS loadings  1.00   1.00   1.00   1.00
> Proportion Var   0.25   0.25   0.25   0.25
> Cumulative Var   0.25   0.50   0.75   1.00
> 
> However, if I rerun the same analysis, the loadings of the first
> component have the opposite sign (see below), why is that?  I have
> read the note
> in the princomp help that says
> 
>    "The signs of the columns of the loadings and scores 
> are arbitrary,
> and so may differ between different programs for PCA, and even
> between different builds of R."
> 
> However, I still would expect the same signs for two runs 
> in the same session.
> 
> 
> 
> >pca.painters <- princomp(painters[ ,1:4])
> >loadings(pca.painters)
> 
> Loadings:
>    Comp.1 Comp.2 Comp.3 Comp.4
> Composition -0.484 -0.376  0.784 -0.101
> Drawing -0.424  0.187 -0.280 -0.841
> Colour   0.381 -0.845 -0.211 -0.310
> Expression  -0.664 -0.330 -0.513  0.432
> 
>   Comp.1 Comp.2 Comp.3 Comp.4

[R] Separation between tick marks and tick labels in lattice

2004-09-14 Thread Waichler, Scott R

I can't find a way to control the distance between tick marks and tick labels
in lattice.  In a stripplot I'm making these components are too close.
I don't see anything like base graphics mgp in the scales list.

Thanks for your help,
Scott Waichler
Pacific Northwest National Laboratory
Richland, Washington, USA
[EMAIL PROTECTED]



RE: [R] reshaping some data

2004-09-14 Thread Berton Gunter
Sundar:

As I understand it, you can easily create an index variable (a pointer,
actually) that will pick out the y columns in order:

z<-yourdataframe
y<-as.vector(z[,indexvar])

So if you could cbind() the x's, you'd be all set.

Again, assuming I understand correctly, the x column you want is:

x<-z[,-indexvar] ## still a frame/matrix
nvec<-seq(length=ncol(x))
x<-as.vector(x[,rep(nvec,times=nvec)])

HTH -- and even if I got it wrong, it was fun, so thanks.

-- Bert

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
"The business of the statistician is to catalyze the scientific learning
process."  - George E. P. Box
 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Sundar 
> Dorai-Raj
> Sent: Tuesday, September 14, 2004 9:16 AM
> To: R-help
> Subject: [R] reshaping some data
> 
> Hi all,
>I have a data.frame with the following colnames pattern:
> 
> x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
> 
> I.e. I have an x followed by a few y's. What I would like to 
> do is turn 
> this wide format into a tall format with two columns: "x", "y". The 
> structure is that xi needs to be associated with yij (e.g. x1 should be
> next to y11 and y12, x2 should be next to y21, y22, and y23, etc.).
> 
>   x   y
> x1 y11
> x2 y21
> x2 y22
> x2 y23
> x3 y31
> x3 y32
> ...
> 
> I have looked at ?reshape but I didn't see how it could work 
> with this 
> structure. I have a solution using nested for loops (see below), but 
> it's slow and not very efficient. I would like to find a vectorised 
> solution that would achieve the same thing.
> 
> Now, for an example:
> 
> x <- data.frame(x1 =  1: 5, y11 =  1: 5,
>  x2 =  6:10, y21 =  6:10, y22 = 11:15,
>  x3 = 11:15, y31 = 16:20,
>  x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
> # which are the x columns
> nmx <- grep("^x", names(x))
> # which are the y columns
> nmy <- grep("^y", names(x))
> # grab y values
> y <- unlist(x[nmy])
> # reserve some space for the x's
> z <- vector("numeric", length(y))
> # a loop counter
> k <- 0
> n <- nrow(x)
> seq.n <- seq(n)
> # determine how many times to repeat the x's
> repy <- diff(c(nmx, length(names(x)) + 1)) - 1
> for(i in seq(along = nmx)) {
>for(j in seq(repy[i])) {
>  # store the x values in the appropriate z indices
>  z[seq.n + k * n] <- x[, nmx[i]]
>  # move to next block in z
>  k <- k + 1
>}
> }
> data.frame(x = z, y = y, row.names = NULL)
> 

Re: [R] Signs of loadings from princomp on Windows

2004-09-14 Thread Sundar Dorai-Raj
Hi all,
I was able to replicate Francisco's observation. I'm using R-1.9.1 
installed from binaries on Windows 2000 Pro.

[Previously saved workspace restored]
> library(MASS)
> data(painters)
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)
Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition  0.484 -0.376  0.784 -0.101
Drawing  0.424  0.187 -0.280 -0.841
Colour  -0.381 -0.845 -0.211 -0.310
Expression   0.664 -0.330 -0.513  0.432
               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
> pca.painters <- princomp(painters[ ,1:4])
> loadings(pca.painters)
Loadings:
Comp.1 Comp.2 Comp.3 Comp.4
Composition -0.484 -0.376  0.784 -0.101
Drawing -0.424  0.187 -0.280 -0.841
Colour   0.381 -0.845 -0.211 -0.310
Expression  -0.664 -0.330 -0.513  0.432
               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
> R.version
 _
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major1
minor9.1
year 2004
month06
day  21
language R
Francisco Chamu wrote:
I have run this on both Windows 2000 and XP.  All I did was install
the binaries from CRAN so I think I am using the standard Rblas.dll.
To reproduce what I see you must run the code at the beginning of the
R session.  After the second run, all subsequent runs give the same
result as the second set.
Thanks,
Francisco
On Tue, 14 Sep 2004 08:29:25 +0200, Uwe Ligges
<[EMAIL PROTECTED]> wrote:
Prof Brian Ripley wrote:
I get the second set each time, on Windows, using the build from CRAN.
Which BLAS are you using?

Works also well for me with a self compiled R-1.9.1 (both with standard
Rblas as well as with the Rblas.dll for Athlon CPU from CRAN).
Is this a NT-based version of Windows (NT, 2k, XP)?
Uwe


On Tue, 14 Sep 2004, Francisco Chamu wrote:

I start a clean session of R 1.9.1 on Windows and I run the following code:

library(MASS)
data(painters)
pca.painters <- princomp(painters[ ,1:4])
loadings(pca.painters)
Loadings:
  Comp.1 Comp.2 Comp.3 Comp.4
Composition  0.484 -0.376  0.784 -0.101
Drawing  0.424  0.187 -0.280 -0.841
Colour  -0.381 -0.845 -0.211 -0.310
Expression   0.664 -0.330 -0.513  0.432
 Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
However, if I rerun the same analysis, the loadings of the first
component have the opposite sign (see below), why is that?  I have
read the note
in the princomp help that says
  "The signs of the columns of the loadings and scores are arbitrary,
   and so may differ between different programs for PCA, and even
   between different builds of R."
However, I still would expect the same signs for two runs in the same session.

pca.painters <- princomp(painters[ ,1:4])
loadings(pca.painters)
Loadings:
  Comp.1 Comp.2 Comp.3 Comp.4
Composition -0.484 -0.376  0.784 -0.101
Drawing -0.424  0.187 -0.280 -0.841
Colour   0.381 -0.845 -0.211 -0.310
Expression  -0.664 -0.330 -0.513  0.432
 Comp.1 Comp.2 Comp.3 Comp.4
SS loadings  1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00

R.version
   _
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major1
minor9.1
year 2004
month06
day  21
language R
BTW, I have tried the same in R 1.9.1 on Debian and I can't reproduce
what I see
on Windows.  In fact all the runs give the same as the second run on Windows.
-Francisco





Re: [R] repeated measures and covariance structures

2004-09-14 Thread Christian Hennig
Hello Chris,

as far as I know from the Pinheiro and Bates book "Mixed-Effects Models in S
and S-PLUS", the autoregressive parameter has to be specified only as an
initial value for the estimation. That is, the parameter will be estimated,
but the result may depend on the prespecified value. You do not need to
specify it at all; in that case 0 is used as the initial value, and this may
work well (as in the example in Pinheiro and Bates, p. 241 f.).

Christian
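
A minimal sketch ('dat' is a placeholder data frame with response y, covariate
time and grouping factor ID): the value handed to corAR1() is only a starting
value, and lme() estimates rho itself.

library(nlme)
fm <- lme(y ~ time, data = dat, random = ~ 1 | ID,
          correlation = corAR1(0.3, form = ~ time | ID))  # 0.3 is just a start value
fm$modelStruct$corStruct          # estimated Phi (the AR(1) parameter)
intervals(fm, which = "var-cov")  # approximate confidence interval for it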

PS: Have you been at a workshop on model selection near Munich in Nov. 2002?
I suspect that I know who you are but I am not sure.

On Tue, 14 Sep 2004, Chris Solomon wrote:

> Hello-
> 
> I'm trying to do some repeated measures ANOVAs. In the past, using SAS,
> I have used the framework outlined in Littell et al.'s "SAS System for
> Mixed Models", using the REPEATED statement in PROC MIXED to model
> variation across time within an experimental unit. SAS allows you to
> specify different within-unit covariance structures (e.g., compound
> symmetric, AR(1), etc.) to determine the best model.
> 
> I'm having trouble figuring out how to do a similar analysis in R. While
> 'lme' will let you choose the class of correlation structure to use, it
> seems to require that you specify this structure rather than using the
> data to estimate the covariance matrix. For example, it seems that to
> specify 'corAR1' as the correlation structure, you have to pick a value
> for rho, the autoregressive parameter.
> 
> So, my question: is there a way to tell 'lme' what sort of covariance
> structure you'd like to model, and then let the function estimate the
> covariances? Or, alternatively, is there a better way to go about this
> sort of repeated measures analysis in R? I've exhausted my available R
> resources and done a pretty good search of the help archives without
> finding a clear answer.
> 
> Thanks much!
> Chris
> 
> 
> 
> ***
> Chris Solomon
> Center for Limnology
> Univ. of Wisconsin
> Phone: (608) 263-2465
> 
> 

***
Christian Hennig
Fachbereich Mathematik-SPST/ZMS, Universitaet Hamburg
[EMAIL PROTECTED], http://www.math.uni-hamburg.de/home/hennig/
###
ich empfehle www.boag-online.de



Re: [R] repeated measures and covariance structures

2004-09-14 Thread Prof Brian Ripley
On Tue, 14 Sep 2004, Chris Solomon wrote:

> Hello-
> 
> I'm trying to do some repeated measures ANOVAs. In the past, using SAS,
> I have used the framework outlined in Littell et al.'s "SAS System for
> Mixed Models", using the REPEATED statement in PROC MIXED to model
> variation across time within an experimental unit. SAS allows you to
> specify different within-unit covariance structures (e.g., compound
> symmetric, AR(1), etc.) to determine the best model.
> 
> I'm having trouble figuring out how to do a similar analysis in R. While
> 'lme' will let you choose the class of correlation structure to use, it
> seems to require that you specify this structure rather than using the
> data to estimate the covariance matrix. For example, it seems that to
> specify 'corAR1' as the correlation structure, you have to pick a value
> for rho, the autoregressive parameter.

Why do you say `it seems'?  Your information is incorrect.

> So, my question: is there a way to tell 'lme' what sort of covariance
> structure you'd like to model, and then let the function estimate the
> covariances? 

That is the default.  Take a look at the examples in Venables & Ripley or 
Pinheiro & Bates (as recommended in the posting guide and the FAQ).

> Or, alternatively, is there a better way to go about this
> sort of repeated measures analysis in R? I've exhausted my available R
> resources and done a pretty good search of the help archives without
> finding a clear answer.

Did you look at the references in the FAQ?

> Chris Solomon
> Center for Limnology
> Univ. of Wisconsin

You do know where the maintainer of the nlme package works, don't you?
I am sure your University library has a copy or two of Pinheiro & Bates!

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] Signs of loadings from princomp on Windows

2004-09-14 Thread Prof Brian Ripley
On Tue, 14 Sep 2004, Francisco Chamu wrote:

> I have run this on both Windows 2000 and XP.  All I did was install
> the binaries from CRAN so I think I am using the standard Rblas.dll.
> 
> To reproduce what I see you must run the code at the beginning of the
> R session.  

We did, as you said `start a clean session'.

I think to reproduce what you see we have to be using your account on your 
computer.

> After the second run, all subsequent runs give the same
> result as the second set.
> 
> Thanks,
> Francisco
> 
> 
> On Tue, 14 Sep 2004 08:29:25 +0200, Uwe Ligges
> <[EMAIL PROTECTED]> wrote:
> > Prof Brian Ripley wrote:
> > > I get the second set each time, on Windows, using the build from CRAN.
> > > Which BLAS are you using?
> > 
> > 
> > Works also well for me with a self compiled R-1.9.1 (both with standard
> > Rblas as well as with the Rblas.dll for Athlon CPU from CRAN).
> > Is this a NT-based version of Windows (NT, 2k, XP)?
> > 
> > Uwe
> > 
> > 
> > 
> > 
> > > On Tue, 14 Sep 2004, Francisco Chamu wrote:
> > >
> > >
> > >>I start a clean session of R 1.9.1 on Windows and I run the following code:
> > >>
> > >>
> > >>>library(MASS)
> > >>>data(painters)
> > >>>pca.painters <- princomp(painters[ ,1:4])
> > >>>loadings(pca.painters)
> > >>
> > >>Loadings:
> > >>Comp.1 Comp.2 Comp.3 Comp.4
> > >>Composition  0.484 -0.376  0.784 -0.101
> > >>Drawing  0.424  0.187 -0.280 -0.841
> > >>Colour  -0.381 -0.845 -0.211 -0.310
> > >>Expression   0.664 -0.330 -0.513  0.432
> > >>
> > >>   Comp.1 Comp.2 Comp.3 Comp.4
> > >>SS loadings  1.00   1.00   1.00   1.00
> > >>Proportion Var   0.25   0.25   0.25   0.25
> > >>Cumulative Var   0.25   0.50   0.75   1.00
> > >>
> > >>However, if I rerun the same analysis, the loadings of the first
> > >>component have the opposite sign (see below), why is that?  I have
> > >>read the note
> > >>in the princomp help that says
> > >>
> > >>"The signs of the columns of the loadings and scores are arbitrary,
> > >> and so may differ between different programs for PCA, and even
> > >> between different builds of R."
> > >>
> > >>However, I still would expect the same signs for two runs in the same session.
> > >>
> > >>
> > >>>pca.painters <- princomp(painters[ ,1:4])
> > >>>loadings(pca.painters)
> > >>
> > >>Loadings:
> > >>Comp.1 Comp.2 Comp.3 Comp.4
> > >>Composition -0.484 -0.376  0.784 -0.101
> > >>Drawing -0.424  0.187 -0.280 -0.841
> > >>Colour   0.381 -0.845 -0.211 -0.310
> > >>Expression  -0.664 -0.330 -0.513  0.432
> > >>
> > >>   Comp.1 Comp.2 Comp.3 Comp.4
> > >>SS loadings  1.00   1.00   1.00   1.00
> > >>Proportion Var   0.25   0.25   0.25   0.25
> > >>Cumulative Var   0.25   0.50   0.75   1.00
> > >>
> > >>>R.version
> > >>
> > >> _
> > >>platform i386-pc-mingw32
> > >>arch i386
> > >>os   mingw32
> > >>system   i386, mingw32
> > >>status
> > >>major1
> > >>minor9.1
> > >>year 2004
> > >>month06
> > >>day  21
> > >>language R
> > >>
> > >>BTW, I have tried the same in R 1.9.1 on Debian and I can't reproduce
> > >>what I see
> > >>on Windows.  In fact all the runs give the same as the second run on Windows.
> > >>
> > >>-Francisco
> > >>
> > >>
> > >>
> > >
> > >
> > 
> >
> 
> 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] repeated measures and covariance structures

2004-09-14 Thread Doran, Harold
Yes. Try something akin to

> fm1 <- lme(y ~ time, data, random = ~ time | ID)

> fm2 <- update(fm1, correlation = corAR1(form = ~ time | ID))

You can then use anova(fm1,fm2) to compare.

Harold

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Solomon
Sent: Tuesday, September 14, 2004 12:07 PM
To: [EMAIL PROTECTED]
Subject: [R] repeated measures and covariance structures

Hello-

I'm trying to do some repeated measures ANOVAs. In the past, using SAS,
I have used the framework outlined in Littell et al.'s "SAS System for
Mixed Models", using the REPEATED statement in PROC MIXED to model
variation across time within an experimental unit. SAS allows you to
specify different within-unit covariance structures (e.g., compound
symmetric, AR(1), etc.) to determine the best model.

I'm having trouble figuring out how to do a similar analysis in R. While
'lme' will let you choose the class of correlation structure to use, it
seems to require that you specify this structure rather than using the
data to estimate the covariance matrix. For example, it seems that to
specify 'corAR1' as the correlation structure, you have to pick a value
for rho, the autoregressive parameter.

So, my question: is there a way to tell 'lme' what sort of covariance
structure you'd like to model, and then let the function estimate the
covariances? Or, alternatively, is there a better way to go about this
sort of repeated measures analysis in R? I've exhausted my available R
resources and done a pretty good search of the help archives without
finding a clear answer.

Thanks much!
Chris



***
Chris Solomon
Center for Limnology
Univ. of Wisconsin
Phone: (608) 263-2465




Re: [R] Signs of loadings from princomp on Windows

2004-09-14 Thread Francisco Chamu
I have run this on both Windows 2000 and XP.  All I did was install
the binaries from CRAN so I think I am using the standard Rblas.dll.

To reproduce what I see you must run the code at the beginning of the
R session.  After the second run, all subsequent runs give the same
result as the second set.

Thanks,
Francisco


On Tue, 14 Sep 2004 08:29:25 +0200, Uwe Ligges
<[EMAIL PROTECTED]> wrote:
> Prof Brian Ripley wrote:
> > I get the second set each time, on Windows, using the build from CRAN.
> > Which BLAS are you using?
> 
> 
> Works also well for me with a self compiled R-1.9.1 (both with standard
> Rblas as well as with the Rblas.dll for Athlon CPU from CRAN).
> Is this a NT-based version of Windows (NT, 2k, XP)?
> 
> Uwe
> 
> 
> 
> 
> > On Tue, 14 Sep 2004, Francisco Chamu wrote:
> >
> >
> >>I start a clean session of R 1.9.1 on Windows and I run the following code:
> >>
> >>
> >>>library(MASS)
> >>>data(painters)
> >>>pca.painters <- princomp(painters[ ,1:4])
> >>>loadings(pca.painters)
> >>
> >>Loadings:
> >>Comp.1 Comp.2 Comp.3 Comp.4
> >>Composition  0.484 -0.376  0.784 -0.101
> >>Drawing  0.424  0.187 -0.280 -0.841
> >>Colour  -0.381 -0.845 -0.211 -0.310
> >>Expression   0.664 -0.330 -0.513  0.432
> >>
> >>   Comp.1 Comp.2 Comp.3 Comp.4
> >>SS loadings  1.00   1.00   1.00   1.00
> >>Proportion Var   0.25   0.25   0.25   0.25
> >>Cumulative Var   0.25   0.50   0.75   1.00
> >>
> >>However, if I rerun the same analysis, the loadings of the first
> >>component have the opposite sign (see below), why is that?  I have
> >>read the note
> >>in the princomp help that says
> >>
> >>"The signs of the columns of the loadings and scores are arbitrary,
> >> and so may differ between different programs for PCA, and even
> >> between different builds of R."
> >>
> >>However, I still would expect the same signs for two runs in the same session.
> >>
> >>
> >>>pca.painters <- princomp(painters[ ,1:4])
> >>>loadings(pca.painters)
> >>
> >>Loadings:
> >>Comp.1 Comp.2 Comp.3 Comp.4
> >>Composition -0.484 -0.376  0.784 -0.101
> >>Drawing -0.424  0.187 -0.280 -0.841
> >>Colour   0.381 -0.845 -0.211 -0.310
> >>Expression  -0.664 -0.330 -0.513  0.432
> >>
> >>   Comp.1 Comp.2 Comp.3 Comp.4
> >>SS loadings  1.00   1.00   1.00   1.00
> >>Proportion Var   0.25   0.25   0.25   0.25
> >>Cumulative Var   0.25   0.50   0.75   1.00
> >>
> >>>R.version
> >>
> >> _
> >>platform i386-pc-mingw32
> >>arch i386
> >>os   mingw32
> >>system   i386, mingw32
> >>status
> >>major1
> >>minor9.1
> >>year 2004
> >>month06
> >>day  21
> >>language R
> >>
> >>BTW, I have tried the same in R 1.9.1 on Debian and I can't reproduce
> >>what I see
> >>on Windows.  In fact all the runs give the same as the second run on Windows.
> >>
> >>-Francisco
> >>
> >>__
> >>[EMAIL PROTECTED] mailing list
> >>https://stat.ethz.ch/mailman/listinfo/r-help
> >>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
> >>
> >>
> >
> >
> 
>

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] reshaping some data

2004-09-14 Thread Sundar Dorai-Raj
Hi all,
  I have a data.frame with the following colnames pattern:
x1 y11 x2 y21 y22 y23 x3 y31 y32 ...
I.e. I have an x followed by a few y's. What I would like to do is turn 
this wide format into a tall format with two columns: "x", "y". The 
structure is that xi needs to be associated with yij (e.g. x1 should be 
next to y11, x2 should be next to y21, y22, and y23, etc.).

 x   y
x1 y11
x2 y21
x2 y22
x2 y23
x3 y31
x3 y32
...
I have looked at ?reshape but I didn't see how it could work with this 
structure. I have a solution using nested for loops (see below), but 
it's slow and not very efficient. I would like to find a vectorised 
solution that would achieve the same thing.

Now, for an example:
x <- data.frame(x1 =  1: 5, y11 =  1: 5,
x2 =  6:10, y21 =  6:10, y22 = 11:15,
x3 = 11:15, y31 = 16:20,
x4 = 16:20, y41 = 21:25, y42 = 26:30, y43 = 31:35)
# which are the x columns
nmx <- grep("^x", names(x))
# which are the y columns
nmy <- grep("^y", names(x))
# grab y values
y <- unlist(x[nmy])
# reserve some space for the x's
z <- vector("numeric", length(y))
# a loop counter
k <- 0
n <- nrow(x)
seq.n <- seq(n)
# determine how many times to repeat the x's
repy <- diff(c(nmx, length(names(x)) + 1)) - 1
for(i in seq(along = nmx)) {
  for(j in seq(repy[i])) {
# store the x values in the appropriate z indices
z[seq.n + k * n] <- x[, nmx[i]]
# move to next block in z
k <- k + 1
  }
}
data.frame(x = z, y = y, row.names = NULL)
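
For comparison, a vectorised sketch (not part of the original post) that
reuses the nmx, nmy and repy objects computed above and yields the same two
columns as the loop:

   # repeat each x column once for every y column that follows it, then stack
   z <- unlist(x[rep(nmx, repy)], use.names = FALSE)
   data.frame(x = z, y = unlist(x[nmy], use.names = FALSE))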
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] repeated measures and covariance structures

2004-09-14 Thread Chris Solomon
Hello-

I'm trying to do some repeated measures ANOVAs. In the past, using SAS,
I have used the framework outlined in Littell et al.'s "SAS System for
Mixed Models", using the REPEATED statement in PROC MIXED to model
variation across time within an experimental unit. SAS allows you to
specify different within-unit covariance structures (e.g., compound
symmetric, AR(1), etc.) to determine the best model.

I'm having trouble figuring out how to do a similar analysis in R. While
'lme' will let you choose the class of correlation structure to use, it
seems to require that you specify this structure rather than using the
data to estimate the covariance matrix. For example, it seems that to
specify 'corAR1' as the correlation structure, you have to pick a value
for rho, the autoregressive parameter.

So, my question: is there a way to tell 'lme' what sort of covariance
structure you'd like to model, and then let the function estimate the
covariances? Or, alternatively, is there a better way to go about this
sort of repeated measures analysis in R? I've exhausted my available R
resources and done a pretty good search of the help archives without
finding a clear answer.
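
For what it is worth, in nlme the 'value' argument of corAR1() is only a
starting value; if it is left at its default, lme estimates rho along with the
other parameters. A minimal sketch, using a data set shipped with nlme purely
for illustration:

   library(nlme)
   data(Ovary)   # repeated follicle counts per mare, from Pinheiro & Bates
   fm <- lme(follicles ~ sin(2*pi*Time) + cos(2*pi*Time), data = Ovary,
             random = ~ 1 | Mare,
             correlation = corAR1())   # no value supplied: rho is estimated
   summary(fm)                         # the printed Phi is the estimated rho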

Thanks much!
Chris



***
Chris Solomon
Center for Limnology
Univ. of Wisconsin
Phone: (608) 263-2465

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] date library and boxplots

2004-09-14 Thread [EMAIL PROTECTED]

Hello,

I have been trying to use the date library (mdy.date) to create notched boxplots, but 
have not been successful.  Can anybody help with the code for this command?
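
Without the data it is hard to say more, but one guess at what was intended (all
names and numbers below are made up) is a notched boxplot of some value grouped
by a component of an mdy.date() date:

   library(date)
   d     <- mdy.date(month = rep(1:3, each = 20), day = 1, year = 2004)
   value <- rnorm(60)
   boxplot(value ~ date.mdy(d)$month, notch = TRUE,
           xlab = "month", ylab = "value")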

Thank you,
Stephanie

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Re: datalist

2004-09-14 Thread Martin Maechler
> "John" == John Zhang <[EMAIL PROTECTED]>
> on Tue, 14 Sep 2004 10:11:15 -0400 (EDT) writes:

John> Hi,
John> The following is a cut/paste from 
http://developer.r-project.org/200update.txt:

John> ...

John> 3) When a package is installed, all the data sets are loaded to see
John> what they produce.  If this is undesirable (because they are
John> enormous, or depend on other packages that need to be installed
John> later, ...), add a `datalist' file to the data subdirectory as
John> described in `Writing R Extensions'.

John> ...



John> I only saw a mentioning of 00Index in the description
John> about the data subdirectory in Writing R
John> Extensions/Package Subdirectories. Could someone point
John> me to the right place or tell me what a 'datalist'
John> file is supposed to be?

You need "Writing R Extensions" from 'R-devel' aka  "2.0.0 unstable".
The manuals of the UN-released versions of R, are typically
available from www.R-project.org, [Documentation] -> "Help pages"
which points to http://stat.ethz.ch/R-manual/

But for the sake of it, here is the entry for 'datalist' 
(found very quickly in Emacs Info):

   If your data files are enormous you can speed up installation
   by providing a file `datalist' in the `data' subdirectory.
   This should have one line per topic that `data()' will find,
   in the format `foo' if `data(foo)' provides `foo', or `foo:
   bar bah' if `data(foo)' provides `bar' and `bah'.
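
So if, for example, data/toys.R created x.toy, y.toy and z.toy (names chosen
purely for illustration), the one-line file data/datalist would read:

   toys: x.toy y.toy z.toy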

Regards,
Martin

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Thomas Schönhoff
Hello,
Thomas Schönhoff wrote:
Well, thanks, I'll have a look at your advices.
regards
Thomas
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to show the symbol of Angstrom ?

2004-09-14 Thread xiang li
Paul, thank you very much! They all work!
Sean

On Tue, 14 Sep 2004, Paul Murrell wrote:

> Hi
> 
> 
> xiang li wrote:
> > Also, I am wondering if there is any source where the expressions of 
> > many symbols are collected.
> > Thanks you very much!!!
> 
> 
> (Assuming you mean draw the angstrom symbol on a plot ...)
> 
> There are several ways:
> 
> (i) Specify the character code in octal.  Assuming ISO Latin 1 encoding, 
> something like ...
>plot(1, type="n")
>text(1, 1, "\305")
> ... or ...
>grid.newpage()
>grid.text("\305")
> ... should work.  That should be ok on default setups for Windows and 
> Unix.  On the Mac it might have to be "\201" (untested)  See, e.g., 
> http://czyborra.com/charsets/iso8859.html#ISO-8859-1 (Unix)
> http://www.microsoft.com/typography/unicode/1252.gif (Windows)
> http://kodeks.uni-bamberg.de/Computer/CodePages/MacStd.htm (Mac)
> for other standard "symbols".
> 
> (ii) Use a mathematical expression.  This won't look as good because the 
> ring and the A are not a single coherent glyph, but it should work 
> "everywhere" ...
>plot(1, type="n")
>text(1, 1, expression(ring(A)))
> ... or ...
>grid.newpage()
>grid.text(expression(ring(A)))
> ... demo(plotmath) shows the range of things you can do with this approach.
> 
> (iii) Use a hershey font (again, should work on all platforms and 
> encodings) ...
>plot(1, type="n")
>text(1, 1, "\\oA", vfont=c("sans serif", "plain"))
> ... or ...
>grid.newpage()
>grid.text("\\oA", gp=gpar(fontfamily="HersheySans"))
> ... demo(Hershey) shows the symbols available with this approach.
> 
> Hope that helps.
> 
> Paul
> 

-- 


Li, Xiang (Sean)  | [EMAIL PROTECTED]
Dept of Bioengineering, SEO, MC 063   | ph:   (312) 355-2520
University of Illinois at Chicago | fax:  (312) 996-5921 
Chicago, IL  60607-7052, USA  |

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] erase columns

2004-09-14 Thread Gabor Grothendieck
michele lux  yahoo.it> writes:

> Can somebody remind me of the command to erase
> columns from a data frame?

To delete a single column assign NULL to it.  That works
because a data frame is a list and that works for lists.
Here are three examples of deleting column 5, the Species 
column, from the iris data set:

   data(iris)
   iris[,5] <- NULL  

   data(iris)
   iris$Species <- NULL

   data(iris)
   iris[,"Species"] <- NULL

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Getting the argument list within a function

2004-09-14 Thread Peter Dalgaard
"Roger D. Peng" <[EMAIL PROTECTED]> writes:

> Is there a way of getting the argument list of a function from within
> that function?  For example, something like
> 
> f <- function(x, y = 3) {
>   fargs <- getFunctionArgList()
>   print(fargs)  ## Should be `alist(x, y = 3)'
> }


> f
function(x, y = 3) {
  fargs <- formals(sys.function())
  print(fargs)  ## Should be equivalent to `alist(x=, y = 3)'
}

(Notice a couple of fine points...)
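
From outside the function, plain formals() on the function object gives the
same information, for example:

   names(formals(f))   # "x" "y"
   formals(f)$y        # 3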

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Getting the argument list within a function

2004-09-14 Thread Liaw, Andy
Is formals() what you're looking for?

Andy

> From: Roger D. Peng
> 
> Is there a way of getting the argument list of a function from within 
> that function?  For example, something like
> 
> f <- function(x, y = 3) {
>   fargs <- getFunctionArgList()
>   print(fargs)  ## Should be `alist(x, y = 3)'
> }
> 
> Thanks,
> 
> -roger
> 
> __
> [EMAIL PROTECTED] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
>

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Getting the argument list within a function

2004-09-14 Thread Roger D. Peng
Ergh, yes, that's exactly it.  I didn't realize you could use it in 
that way.

Thanks,
-roger
Jeff Gentry wrote:
Is there a way of getting the argument list of a function from within 
that function?  For example, something like

Would formals() work for you here?

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Getting the argument list within a function

2004-09-14 Thread Jeff Gentry
> Is there a way of getting the argument list of a function from within 
> that function?  For example, something like

Would formals() work for you here?

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Getting the argument list within a function

2004-09-14 Thread Roger D. Peng
Is there a way of getting the argument list of a function from within 
that function?  For example, something like

f <- function(x, y = 3) {
fargs <- getFunctionArgList()
print(fargs)  ## Should be `alist(x, y = 3)'
}
Thanks,
-roger
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re: datalist

2004-09-14 Thread John Zhang
Hi,

The following is a cut/paste from http://developer.r-project.org/200update.txt:

...

3) When a package is installed, all the data sets are loaded to see
   what they produce.  If this is undesirable (because they are
   enormous, or depend on other packages that need to be installed
   later, ...), add a `datalist' file to the data subdirectory as
   described in `Writing R Extensions'.

...



I only saw a mentioning of 00Index in the description about the data 
subdirectory in Writing R Extensions/Package Subdirectories. Could someone point 
me to the right place or tell me what a 'datalist' file is supposed to be? 


Thanks.


JZ

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] calculating memory usage

2004-09-14 Thread Roger D. Peng
If you only have simple objects in your function, you might be able to 
use a function like

totalMem <- function() {
  ## sum the sizes of all objects in the environment totalMem() is called from
  env <- parent.frame()
  sum(sapply(ls(envir = env, all.names = TRUE),
             function(x) object.size(get(x, envir = env)))) / 2^20
}
which should give you a rough idea of the memory usage (in MB) in the 
current environment.
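
For instance (an illustration with a made-up function f):

   f <- function(n) {
       x <- rnorm(n)
       totalMem()
   }
   f(1e6)   # roughly 7.6 MB for one million doubles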

-roger
Adaikalavan Ramasamy wrote:
I am comparing two different algorithms in terms of speed and memory
usage. I can calculate the processing time with proc.time() as follows
but am not sure how to calculate the memory usage.
   ptm <- proc.time()
   x <- rnorm(100)
   proc.time() - ptm
I would like to be within R itself since I will test the algorithm
several hundred times and in batch mode. So manually looking up 'top'
may not be feasible. help.search("memory") suggests memory.profile and gc
but I am not sure how to use these.
Sorry if this is a basic question. Thank you.
Regards, Adai
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] calculating memory usage

2004-09-14 Thread Prof Brian Ripley
On Tue, 14 Sep 2004, Adaikalavan Ramasamy wrote:

> Many thanks to Prof. Ripley. The problem is that memory.profile does not
> exist in *nix environment and there is probably a very good reason why.

memory.size?

> 
> I was reading help(Memory) and in the Details section :
>  You can find out the current memory consumption (the heap and cons
>  cells used as numbers and megabytes) by typing 'gc()' at the R
>  prompt.
> 
> AFAICS, Ncells is the fixed memory used by the underlying R and Vcells
> is the variable part and depends on the calculations. 
> 
> Would I be able to say that the generating 10 million random numbers
> requires approximately 73.4 Mb (= 26.3 + 80.5 - 26.3 - 7.1) of memory ?
> I double checked this against memory.size() in Windows and they seem to
> agree. Thank you.

No, only that storing 10 million numbers requires 77.3 - 1.0Mb, and

> object.size(x)/1024^2
[1] 76.29397


> > gc()
>  used (Mb) gc trigger (Mb)
> Ncells 456262 12.2 984024 26.3
> Vcells 122697  1.0 929195  7.1
> > 
> > x <- rnorm(1000)
> > gc()
>used (Mb) gc trigger (Mb)
> Ncells   456274 12.2 984024 26.3
> Vcells 10123014 77.3   10538396 80.5
> 
> 
> 
> 
> On Mon, 2004-09-13 at 18:47, Prof Brian Ripley wrote:
> > On Mon, 13 Sep 2004, Adaikalavan Ramasamy wrote:
> > 
> > > I am comparing two different algorithms in terms of speed and memory
> > > usage. I can calculate the processing time with proc.time() as follows
> > > but am not sure how to calculate the memory usage.
> > > 
> > >ptm <- proc.time()
> > >x <- rnorm(100)
> > >proc.time() - ptm
> > 
> > Hmm ... see ?system.time!
> > 
> > > I would like to be within R itself since I will test the algorithm
> > > several hundred times and in batch mode. So manually looking up 'top'
> > > may not be feasible. help.seach("memory") suggests memory.profile and gc
> > > but I am not sure how to use these.
> > 
> > I don't think you can.  You can find out how much memory R is using NOW, 
> > but not the peak memory usage during a calculation.  Nor is that 
> > particularly relevant, as it depends on what was gone on before, the word 
> > length of the platform and the garbage collection settings.
> > 
> > On Windows, starting in a clean session, calling gc() and memory.size(), 
> > then calling your code and memory.size(max=TRUE) will give you a fair 
> > idea, but `top' indicates some Unix-alike.
> 
> 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] calculating memory usage

2004-09-14 Thread Adaikalavan Ramasamy
Many thanks to Prof. Ripley. The problem is that memory.profile does not
exist in *nix environment and there is probably a very good reason why.

I was reading help(Memory) and in the Details section :
 You can find out the current memory consumption (the heap and cons
 cells used as numbers and megabytes) by typing 'gc()' at the R
 prompt.

AFAICS, Ncells is the fixed memory used by the underlying R and Vcells
is the variable part and depends on the calculations. 

Would I be able to say that generating 10 million random numbers
requires approximately 73.4 Mb (= 26.3 + 80.5 - 26.3 - 7.1) of memory ?
I double checked this against memory.size() in Windows and they seem to
agree. Thank you.

> gc()
 used (Mb) gc trigger (Mb)
Ncells 456262 12.2 984024 26.3
Vcells 122697  1.0 929195  7.1
> 
> x <- rnorm(1000)
> gc()
   used (Mb) gc trigger (Mb)
Ncells   456274 12.2 984024 26.3
Vcells 10123014 77.3   10538396 80.5




On Mon, 2004-09-13 at 18:47, Prof Brian Ripley wrote:
> On Mon, 13 Sep 2004, Adaikalavan Ramasamy wrote:
> 
> > I am comparing two different algorithms in terms of speed and memory
> > usage. I can calculate the processing time with proc.time() as follows
> > but am not sure how to calculate the memory usage.
> > 
> >ptm <- proc.time()
> >x <- rnorm(100)
> >proc.time() - ptm
> 
> Hmm ... see ?system.time!
> 
> > I would like to be within R itself since I will test the algorithm
> > several hundred times and in batch mode. So manually looking up 'top'
> > may not be feasible. help.seach("memory") suggests memory.profile and gc
> > but I am not sure how to use these.
> 
> I don't think you can.  You can find out how much memory R is using NOW, 
> but not the peak memory usage during a calculation.  Nor is that 
> particularly relevant, as it depends on what was gone on before, the word 
> length of the platform and the garbage collection settings.
> 
> On Windows, starting in a clean session, calling gc() and memory.size(), 
> then calling your code and memory.size(max=TRUE) will give you a fair 
> idea, but `top' indicates some Unix-alike.

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Spare some CPU cycles for testing lme?

2004-09-14 Thread Douglas Bates
Prof Brian D Ripley wrote:
As others have said, this needs tools not CPU cycles: gctorture or valgrind.
Valgrind found (after a few seconds and on the first pass)
==23057== Invalid read of size 4
==23057==at 0x3CF4E645: ssc_symbolic_permute (Mutils.c:373)
==23057==by 0x3CF5BF75: ssclme_create (ssclme.c:168)
==23057==by 0x80AF5E8: do_dotcall
(/users/ripley/R/svn/R-devel/src/main/dotcode.c:640)
==23057==by 0x80CFA84: Rf_eval
(/users/ripley/R/svn/R-devel/src/main/eval.c:399)
==23057==  Address 0x3C7F3BD0 is 4 bytes before a block of size 524 alloc'd
==23057==at 0x3C01CB56: calloc (in
/opt/local/lib/valgrind/vgpreload_memcheck.so)
==23057==by 0x80F9EBE: R_chk_calloc
(/users/ripley/R/svn/R-devel/src/main/memory.c:2151)
==23057==by 0x3CF4E515: ssc_symbolic_permute (Mutils.c:352)
==23057==by 0x3CF5BF75: ssclme_create (ssclme.c:168)
==23057==
==23057== Use of uninitialised value of size 8
==23057==at 0x80C0137: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:160)
==23057==by 0x80BFA85: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:101)
==23057==by 0x80BFEB3: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:154)
==23057==by 0x816ED96: do_subset2_dflt
(/users/ripley/R/svn/R-devel/src/main/subset.c:816)
==23057==
==23057== Conditional jump or move depends on uninitialised value(s)
==23057==at 0x3CF62229: ssclme_fitted (ssclme.c:1587)
==23057==by 0x80AF646: do_dotcall
(/users/ripley/R/svn/R-devel/src/main/dotcode.c:646)
==23057==by 0x80CFA84: Rf_eval
(/users/ripley/R/svn/R-devel/src/main/eval.c:399)
==23057==by 0x80D1D80: do_set
(/users/ripley/R/svn/R-devel/src/main/eval.c:1280)
which is pretty definitive evidence of a problem (possibly 2) in the code.
I strongly recommend valgrind (http://valgrind.kde.org/) if you are using
x86 Linux.  It has found quite a few errors in R and in certain packages
recently.  The only thing to watch is that optimized BLASes will probably
crash it.
You're right.  I had (at least) two thinko's in that code.  The problems 
show up in the lme4 package but the errors in the C code are in the 
Matrix package.  I will upload a repaired version of the Matrix package.

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] overwriting a line in existing .csv file with new data

2004-09-14 Thread Marco Bianchi
Dear R-users,

I have a data matrix with 20 rows and 10 columns which is stored on the hard drive as a 
.csv file called c:\DataFile.csv, and a 10-element vector called xVec.

I would like to be able to copy and paste the information contained in xVec into (say) 
the 2nd row of DataFile.csv

Obviously, one way of doing this would be to read the matrix using read.csv() command, 
implement the copy and paste manipulation and save the new matrix again using 
write.table(). However, has anyone used an alternative method which does not require 
use of the read.csv() command?
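
One alternative sketch (not from this thread) is to treat the file as plain text
with readLines()/writeLines(); this still reads the whole file into memory, it
just avoids the read.csv() parsing step. Assuming row 2 of the data is line 2 of
the file (adjust the index if there is a header line):

   lines <- readLines("c:/DataFile.csv")
   lines[2] <- paste(xVec, collapse = ",")
   writeLines(lines, "c:/DataFile.csv")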

Regards
Marco Bianchi






__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Peter Dalgaard
Prof Brian Ripley <[EMAIL PROTECTED]> writes:

> The X11 device has an argument `pointsize' (which may well mean pixel size
> in a particular implementation of X11, as my laptop for example has width,
> height and pointsize all much smaller than specified):  just increase it.
> 
> The advice to look at ?par is incorrect if you want to scale everything, 
> as I think you do.
> 
> If you want auto-launched windows to have a larger font, you need to write 
> a wrapper to X11 and set options(device=wrapper_name).

Also, check out the dpi settings for your display (xdpyinfo). If this
is set lower than the physical reality, then pointsizes are
overestimated (so fonts are smaller than they should be). How to
change it is left as an exercise

BTW, it's 1.9.1, 1.91 is not in the plans (and wouldn't be for the
next forty years or so). 
 
> On Tue, 14 Sep 2004, [ISO-8859-15] Thomas Schönhoff wrote:
> 
> > I am fairly new to GNU R !
> > At the moment I am doing an intensive learning on the basics of GNU 
> > R-1.91, especially graphics like plots and alike,  by reading the 
> > introductory docs!
> > Well, except some occasional glitches (X11 output errors) everything 
> > seems to be fine, thanks to developers for this fine program!
> > But there is a slight problem with the size of fonts in graphics, i.e. 
> > its very hard form me to read labels of variables in plots or other 
> > graphical representations! (yes, I am little short sighted)
> > How may I mainpulate/enlarge the fonts size in graphics from within GNU R ?
> 
> -- 
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
> 
> __
> [EMAIL PROTECTED] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
> 

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R post-hoc and GUI

2004-09-14 Thread Paolo Ariano
On Tue, 2004-09-14 at 12:26, Prof Brian Ripley wrote:
> BTW, I guess you are working on Windows, but have not told us what OS, 
> even. Please read the posting guide.

sorry, I use Debian GNU/Linux and my colleagues use Windows

thanks
paolo
-- 
Paolo Ariano
Neuroscience PhD Student @ UniTo

One does not exclude the other, and neither excludes
the money - Paolo A.  



__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Spare some CPU cycles for testing lme?

2004-09-14 Thread Prof Brian D Ripley
As others have said, this needs tools not CPU cycles: gctorture or valgrind.

Valgrind found (after a few seconds and on the first pass)

==23057== Invalid read of size 4
==23057==at 0x3CF4E645: ssc_symbolic_permute (Mutils.c:373)
==23057==by 0x3CF5BF75: ssclme_create (ssclme.c:168)
==23057==by 0x80AF5E8: do_dotcall
(/users/ripley/R/svn/R-devel/src/main/dotcode.c:640)
==23057==by 0x80CFA84: Rf_eval
(/users/ripley/R/svn/R-devel/src/main/eval.c:399)
==23057==  Address 0x3C7F3BD0 is 4 bytes before a block of size 524 alloc'd
==23057==at 0x3C01CB56: calloc (in
/opt/local/lib/valgrind/vgpreload_memcheck.so)
==23057==by 0x80F9EBE: R_chk_calloc
(/users/ripley/R/svn/R-devel/src/main/memory.c:2151)
==23057==by 0x3CF4E515: ssc_symbolic_permute (Mutils.c:352)
==23057==by 0x3CF5BF75: ssclme_create (ssclme.c:168)
==23057==
==23057== Use of uninitialised value of size 8
==23057==at 0x80C0137: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:160)
==23057==by 0x80BFA85: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:101)
==23057==by 0x80BFEB3: Rf_duplicate
(/users/ripley/R/svn/R-devel/src/main/duplicate.c:154)
==23057==by 0x816ED96: do_subset2_dflt
(/users/ripley/R/svn/R-devel/src/main/subset.c:816)
==23057==
==23057== Conditional jump or move depends on uninitialised value(s)
==23057==at 0x3CF62229: ssclme_fitted (ssclme.c:1587)
==23057==by 0x80AF646: do_dotcall
(/users/ripley/R/svn/R-devel/src/main/dotcode.c:646)
==23057==by 0x80CFA84: Rf_eval
(/users/ripley/R/svn/R-devel/src/main/eval.c:399)
==23057==by 0x80D1D80: do_set
(/users/ripley/R/svn/R-devel/src/main/eval.c:1280)

which is pretty definitive evidence of a problem (possibly 2) in the code.

I strongly recommend valgrind (http://valgrind.kde.org/) if you are using
x86 Linux.  It has found quite a few errors in R and in certain packages
recently.  The only thing to watch is that optimized BLASes will probably
crash it.
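
For reference, one way to produce such a report is to run R under valgrind from
the shell, e.g. (test-lme.R being a hypothetical script holding the code in
question):

   R -d valgrind --vanilla < test-lme.R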


On Mon, 13 Sep 2004, Frank Samuelson wrote:

> If anyone has a few extra CPU cycles to spare,
> I'd appreciate it if you could verify a problem that I
> have encountered.  Run the code
> below and tell me if it crashes your R before
> completion.
>
> library(lme4)
> data(bdf)
> dump<-sapply( 1:5, function(i) {
>  fm <- lme(langPOST ~ IQ.ver.cen + avg.IQ.ver.cen, data = bdf,
>random = ~ IQ.ver.cen | schoolNR);
>  cat("  ",i,"\r")
>  0
> })
>
> The above code simply reruns the example from the
> lme help page a large number of times and returns a bunch
> of 0's, so you'll need to have the lme4 and Matrix
> packages installed.  It might take a while to complete,
> but you can always nice it and let it run.
>
> I'm attempting to bootstrap lme() from the lme4 package,
> but it causes a
> segfault after a couple hundred iterations.  This happens on
> my Linux x86 RedHat 7.3, 8.0, 9.0, FC1 systems w/ 1.9.1
> and devel 2.0.0 (not all possible combinations actually
> tested.)
> I've communicated w/ Douglas Bates about this and he
> doesn't appear to have the problem.


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Excel TDIST and TINV

2004-09-14 Thread Rolf Turner

Brian Ripley wrote:

> We are hardly likely to know what those are in Excel.  Possibly pt
> and qt, but see help.search("Student t distribution") for where to
> look for what R provides.
> 
> I also do not know what Chauvenet's criterion has to do with
> Student's t, and
> 
> http://www.me.umn.edu/education/courses/me8337/chauvenet.txt
> 
> states that the latter would be incorrect.

See also

http://www.stat.uiowa.edu/~jcryer/JSMTalk2001.pdf

for insight into the ``wisdom'' of using Excel in the first place.

cheers,

Rolf Turner
[EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Prof Brian Ripley
The X11 device has an argument `pointsize' (which may well mean pixel size
in a particular implementation of X11, as my laptop for example has width,
height and pointsize all much smaller than specified):  just increase it.

The advice to look at ?par is incorrect if you want to scale everything, 
as I think you do.

If you want auto-launched windows to have a larger font, you need to write 
a wrapper to X11 and set options(device=wrapper_name).
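
A sketch of such a wrapper (the function name bigX11 is made up):

   bigX11 <- function(...) X11(..., pointsize = 14)
   options(device = "bigX11")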

On Tue, 14 Sep 2004, [ISO-8859-15] Thomas Schönhoff wrote:

> I am fairly new to GNU R !
> At the moment I am doing an intensive learning on the basics of GNU 
> R-1.91, especially graphics like plots and alike,  by reading the 
> introductory docs!
> Well, except some occasional glitches (X11 output errors) everything 
> seems to be fine, thanks to developers for this fine program!
> But there is a slight problem with the size of fonts in graphics, i.e. 
> its very hard form me to read labels of variables in plots or other 
> graphical representations! (yes, I am little short sighted)
> How may I mainpulate/enlarge the fonts size in graphics from within GNU R ?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Uwe Ligges
Thomas Schönhoff wrote:
Hi,
I am fairly new to GNU R !
At the moment I am doing an intensive learning on the basics of GNU 
R-1.91, especially graphics like plots and alike,  by reading the 
introductory docs!
Well, except some occasional glitches (X11 output errors) everything 
seems to be fine, thanks to developers for this fine program!
But there is a slight problem with the size of fonts in graphics, i.e. 
its very hard form me to read labels of variables in plots or other 
graphical representations! (yes, I am little short sighted)
How may I mainpulate/enlarge the fonts size in graphics from within GNU R ?
See ?plot, ?plot.default and ?par, in particular look out for all 
arguments containing the letters "cex" 
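
For instance (an illustration, not from the thread):

   plot(rnorm(50), xlab = "index", ylab = "value", main = "bigger text",
        cex = 1.5, cex.lab = 1.5, cex.axis = 1.5, cex.main = 1.5)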

Uwe Ligges
BTW: It is called R-1.9.1


Thanks for your time
Thomas

System: GNU/Linux (Debian Sid)
GNU R: 1.91
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Daniel Hoppe
Hi Thomas!

Try ?par at the R prompt, there you will get all the necessary
information to change the appearance of graphics.

Daniel

--
Daniel Hoppe
Department of Marketing
University of Vienna
Bruenner Strasse 72
1210 Vienna
Austria 


> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Thomas 
> Schönhoff
> Sent: Tuesday, September 14, 2004 10:47 AM
> To: R User-Liste
> Subject: [R] Howto enlarge fonts size in R- Graphics?
> 
> 
> Hi,
> 
> I am fairly new to GNU R !
> At the moment I am doing an intensive learning on the basics of GNU 
> R-1.91, especially graphics like plots and alike,  by reading the 
> introductory docs!
> Well, except some occasional glitches (X11 output errors) everything 
> seems to be fine, thanks to developers for this fine program! 
> But there is a slight problem with the size of fonts in 
> graphics, i.e. 
> its very hard form me to read labels of variables in plots or other 
> graphical representations! (yes, I am little short sighted)
> How may I mainpulate/enlarge the fonts size in graphics from 
> within GNU R ?
> 
> Thanks for your time
> 
> Thomas
> 
> 
> 
> System: GNU/Linux (Debian Sid)
> GNU R: 1.91
> 
> __
> [EMAIL PROTECTED] mailing list 
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read 
> the posting guide! http://www.R-project.org/posting-guide.html
>

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Howto enlarge fonts size in R- Graphics?

2004-09-14 Thread Thomas Schönhoff
Hi,
I am fairly new to GNU R !
At the moment I am doing an intensive learning on the basics of GNU 
R-1.91, especially graphics like plots and alike,  by reading the 
introductory docs!
Well, except some occasional glitches (X11 output errors) everything 
seems to be fine, thanks to developers for this fine program!
But there is a slight problem with the size of fonts in graphics, i.e. 
it's very hard for me to read labels of variables in plots or other 
graphical representations! (yes, I am a little short-sighted)
How may I manipulate/enlarge the font size in graphics from within GNU R ?

Thanks for your time
Thomas

System: GNU/Linux (Debian Sid)
GNU R: 1.91
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Spare some CPU cycles for testing lme?

2004-09-14 Thread Thomas Schönhoff
Hello,
Frank Samuelson schrieb:
If anyone has a few extra CPU cycles to spare,
I'd appreciate it if you could verify a problem that I
have encountered.  Run the code
below and tell me if it crashes your R before
completion.
library(lme4)
data(bdf)
dump<-sapply( 1:5, function(i) {
fm <- lme(langPOST ~ IQ.ver.cen + avg.IQ.ver.cen, data = bdf,
  random = ~ IQ.ver.cen | schoolNR);
cat("  ",i,"\r")
0
})

I also tested your code by using R-1.91 under Debian Sid. After hundreds 
of iterations it ended up with the already noticed "segmentation fault"

HTH
Thomas
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R post-hoc and GUI

2004-09-14 Thread A.J. Rossini


More importantly -- if you are looking for something "not beta", you
need to be very careful.  "beta" quality is in the eyes of the
claimer, and it isn't uniform.  

In fact, while you could wrap up a usable GUI for your specific needs,
it might be less of a waste of time to write simple functions and
spend the time with training.

Why, I can imagine that the arguments about GUIs would waste more time
than would be spent creating a cheatsheet and the functions/scripts to
simplify everything.

Of course, your situation might differ.

best,
-tony



Christian Schulz <[EMAIL PROTECTED]> writes:

> Perhaps the package Rcmdr is a compromise for the 
> people don't like command-line  software.
>
> christian
>
>
> Am Dienstag, 14. September 2004 12:02 schrieb Paolo Ariano:
>> Hi *
>> i've done my anova anlysis but now i need post-hoc test, are these
>> included in R ?
>>
>> I've a Big problem, working with people that don't like to use
>> command-line software (but prefer something like openoffice) does
>> someone is trying to do a usable GUI for R ? i'm reading something on R
>> commander SciView and others but all seem to be beta. I'd like to make
>> possible to make anova and post-hoc more simple to my collegues ;)
>>
>> thanks
>> paolo
>
> __
> [EMAIL PROTECTED] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Anthony Rossini Research Associate Professor
[EMAIL PROTECTED]http://www.analytics.washington.edu/ 
Biomedical and Health Informatics   University of Washington
Biostatistics, SCHARP/HVTN  Fred Hutchinson Cancer Research Center
UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable
FHCRC  (M/W): 206-667-7025 FAX=206-667-4812 | use Email


__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R post-hoc and GUI

2004-09-14 Thread Prof Brian Ripley
On Tue, 14 Sep 2004, Paolo Ariano wrote:

> i've done my anova anlysis but now i need post-hoc test, are these
> included in R ?

Yes, in function TukeyHSD and in package multcomp for example.  There are
worked examples in the MASS scripts.
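
A minimal illustration with a built-in data set (not taken from those scripts):

   data(warpbreaks)
   fit <- aov(breaks ~ tension, data = warpbreaks)
   TukeyHSD(fit, "tension")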

> I've a Big problem, working with people that don't like to use
> command-line software (but prefer something like openoffice) does
> someone is trying to do a usable GUI for R ? i'm reading something on R
> commander SciView and others but all seem to be beta. I'd like to make
> possible to make anova and post-hoc more simple to my collegues ;)

That's your choice, but have you looked at package Rcmdr?  You can extend 
its menus if you want to.

BTW, I guess you are working on Windows, but have not told us what OS, 
even. Please read the posting guide.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R post-hoc and GUI

2004-09-14 Thread Christian Schulz
Perhaps the package Rcmdr is a compromise for the 
people don't like command-line  software.

christian


Am Dienstag, 14. September 2004 12:02 schrieb Paolo Ariano:
> Hi *
> i've done my anova anlysis but now i need post-hoc test, are these
> included in R ?
>
> I've a Big problem, working with people that don't like to use
> command-line software (but prefer something like openoffice) does
> someone is trying to do a usable GUI for R ? i'm reading something on R
> commander SciView and others but all seem to be beta. I'd like to make
> possible to make anova and post-hoc more simple to my collegues ;)
>
> thanks
> paolo

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R-2.0.0 CMD check . and datasets

2004-09-14 Thread Robin Hankin
Hello everyone
I'm having a little difficulty with R-2.0.0 CMD check.  My field is 
Bayesian calibration of computer models.

The problem is  that I have a large collection of toy datasets, that 
in R-1.9.1 were specified with lines
like this:

x.toy <- 1:6
y.toy <- computer.model(x.toy)
z.toy <- reality(x.toy)
in file ./data/toys.R ; functions computer.model() and reality() are 
defined in ./R/calibrator.R.

[In this application,  the (toy) functions computer.model() and 
reality() are the objects of inference, as
per the standard Bayesian approach.  The functions are nonrandom in 
that they are deterministic but
random in the Bayesian sense.  Thus y.toy and z.toy are observations 
of (random) functions].

In the Real World, one would have access to x.toy, y.toy, and z.toy 
but not (of course) computer.model()
or reality().  These functions should never be seen or referred to 
because they are Unknown.

So, in many of the code examples, I use  things like 
"some.function(y.toy, z.toy)" . . . and  I need
y.toy and z.toy to be consistent between different functions.

I think R-2.0.0 sources ./data/toys.R *before* the files in ./R/   ; 
and this throws an error in
R2 CMD check, because the functions are not found.

What is best practice to generate this kind of toy dataset?
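
One common workaround (just a sketch, not an official answer) is to generate the
toy data once, outside the package build, and ship the result as data/toys.rda
rather than as an R script that gets sourced at install time:

   ## run once, with the package functions loaded
   x.toy <- 1:6
   y.toy <- computer.model(x.toy)   # from R/calibrator.R
   z.toy <- reality(x.toy)
   save(x.toy, y.toy, z.toy, file = "data/toys.rda")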

--
Robin Hankin
Uncertainty Analyst
Southampton Oceanography Centre
SO14 3ZH
tel +44(0)23-8059-7743
[EMAIL PROTECTED] (edit in obvious way; spam precaution)
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R post-hoc and GUI

2004-09-14 Thread Paolo Ariano
Hi *
I've done my ANOVA analysis, but now I need post-hoc tests; are these
included in R ?

I have a big problem: I work with people who don't like to use
command-line software (they prefer something like OpenOffice). Is
anyone working on a usable GUI for R ? I'm reading about R
Commander, SciView and others, but all seem to be beta. I'd like to
make ANOVA and post-hoc tests simpler for my colleagues ;)

thanks
paolo
-- 
Paolo Ariano
Neuroscience PhD Student @ UniTo

One does not exclude the other, and neither excludes
the money - Paolo A.  



__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] glmmPQL and random factors

2004-09-14 Thread Per Toräng
Hello!

I have tested the effect of two treatments on fruit set, i.e. fruits per plant. 
The treatments had two levels each giving four different treatment 
combinations. Forty separated plots were subjected to one of the treatment 
combinations so each combination was replicated ten times. I intend to analyze 
my data using the glmmPQL procedure in the following way.

glmmPQL(Fruit.set~Treat1*Treat2+offset(log10(No.flowers)), random=~1|Plot, 
family=poisson, data=…)

Plot is supposed to be nested in (Treat1*Treat2).
Is this analysis suitable? Moreover, what is the meaning of typing 
random=~1|Plot compared to random=~Treat1*Treat2|Plot?

Cheers
Per Toräng
[EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] erase columns

2004-09-14 Thread Wolski
?subset

/E
*** REPLY SEPARATOR  ***

On 9/14/2004 at 10:44 AM michele lux wrote:

>>>Can somebody remind me of the command to erase
>>>columns from a data frame?
>>>Thanks Michele
>>>
>>>
>>> 
>>>___
>>>
>>>http://it.seriea.fantasysports.yahoo.com/
>>>
>>>__
>>>[EMAIL PROTECTED] mailing list
>>>https://stat.ethz.ch/mailman/listinfo/r-help
>>>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Dipl. bio-chem. Witold Eryk Wolski @ MPI-Moleculare Genetic   
Ihnestrasse 63-73 14195 Berlin'v'
tel: 0049-30-83875219/   \   
mail: [EMAIL PROTECTED]---W-Whttp://www.molgen.mpg.de/~wolski 
  [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] maximization subject to constaint

2004-09-14 Thread Thomas Lumley
On Mon, 13 Sep 2004, Shuangge Ma wrote:
constrOptim() will do this, but it isn't a particularly efficient 
algorithm when the number of constraints is large.

-thomas
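
A sketch of how the constraint maps onto constrOptim()'s ui/ci convention (the
objective and data below are made up, not Shuangge's actual f or X):

   n <- 50; d <- 3
   X <- matrix(rnorm(n * d), n, d)
   obj <- function(theta) {                 # theta = c(alpha, beta)
       alpha <- theta[1]; beta <- theta[-1]
       -sum(plogis(alpha + X %*% beta))     # negated: constrOptim() minimises
   }
   ## constraints alpha + X %*% beta <= 1, i.e. ui %*% theta - ci >= 0
   ui <- -cbind(1, X)
   ci <- rep(-1, n)
   constrOptim(rep(0, d + 1), obj, grad = NULL, ui = ui, ci = ci)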

Hello:
I have been trying to program the following maximization problem and would
definitely welcome some help.
the target function: sum_{i} f(alpha, beta'X_{i}),
where alpha and beta are unknown d-dim parameters,
f is a known function and X_{i} are i.i.d. r.v.s.
I need to maximize the above sum, under the constraint that:
beta'X_{i}+alpha<=1, for i=1,...,n.
For one dimension, it is kind of trivial. What should I do with high
dimensional alpha and beta?  Thanks for your time,
Shuangge Ma, Ph.D.
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] erase columns

2004-09-14 Thread Rau, Roland
Hi, 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of michele lux
Sent: Tuesday, 14 September 2004 10:44
To: [EMAIL PROTECTED]
Subject: [R] erase columns

Can somebody remind me of the command to erase
columns from a data frame?
Thanks Michele

I hope the following code-piece helps what you are looking for:

mydf <- as.data.frame(matrix(runif(100),ncol=5))

### if you want to erase the third column, do:
mydf <- mydf[,-3]


mydf2 <- as.data.frame(matrix(runif(100),ncol=20))

### if you want to erase the first, third and twentieth column, do:
mydf2 <- mydf2[,-c(1,3,20)]

Ciao,
Roland



__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] erase columns

2004-09-14 Thread michele lux
Can somebody remind me of the command to erase
columns from a data frame?
Thanks Michele



___

http://it.seriea.fantasysports.yahoo.com/

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Can I find the datetime an object was last assigned to/saved?

2004-09-14 Thread Gabor Grothendieck
  rhotrading.com> writes:

> I'm using v 1.9.1 under Windoz XP.
> 
> Can I do the equivalent of "ls -l" on my R objects? R's "ls()" lists
> only the names.

Check out this thread where it was previously discussed:

http://tolstoy.newcastle.edu.au/R/help/04/05/0207.html

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Can I find the datetime an object was last assigned to/saved?

2004-09-14 Thread Uwe Ligges
Prof Brian Ripley wrote:
On Tue, 14 Sep 2004, Uwe Ligges wrote:

[EMAIL PROTECTED] wrote:

I'm using v 1.9.1 under Windoz XP.
Can I do the equivalent of "ls -l" on my R objects? R's "ls()" lists
only the names.
For example ll() in package gregmisc.

But that does not give datetimes, since they are not recorded.
Ups, sorry. Forgot the subject line while reading the body ...
Uwe
It also needs a warning on object sizes, which had it credited its use of 
object.size you would have been able to find.  Would the maintainer please

1) Give credit where credit is due and add a \seealso, and
2) Include an appropriate warning.
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Can I find the datetime an object was last assigned to/saved?

2004-09-14 Thread Prof Brian Ripley
On Tue, 14 Sep 2004, Uwe Ligges wrote:

> [EMAIL PROTECTED] wrote:
> 
> > I'm using v 1.9.1 under Windoz XP.
> > 
> > Can I do the equivalent of "ls -l" on my R objects? R's "ls()" lists
> > only the names.
> 
> For example ll() in package gregmisc.

But that does not give datetimes, since they are not recorded.

It also needs a warning on object sizes, which had it credited its use of 
object.size you would have been able to find.  Would the maintainer please

1) Give credit where credit is due and add a \seealso, and
2) Include an appropriate warning.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] memory allocation error message

2004-09-14 Thread Prof Brian Ripley
On 14 Sep 2004, Peter Dalgaard wrote:

> "Prodromos Zanis" <[EMAIL PROTECTED]> writes:
> 
> > Dear all
> > 
> > I use the library(netCDF) to read in NCEP data. The file I want to
> > read has size 113 Mb. When i try to read it I get the following
> > message:
> > 
> > Error: cannot allocate vector of size 221080 Kb
> > In addition: Warning message: 
> > Reached total allocation of 255Mb: see help(memory.size) 
> > 
> > I get a similar message when I try to read a file with 256 Mb in a
> > PC with 2 GigaByte RAM.
> > 
> > Is there something that I can do to handle this problem of reading
> > big netCDF files with R-project.
> 
> Did you read help(memory.size)? and follow instructions therein?

Also, netCDF has been withdrawn from CRAN, and you might want to use ncdf 
or RNetCDF instead.  (Windows ports of both are available now: see the 
ReadMe on the CRAN windows contrib area.)

If the message really was similar you are using R < 1.6.0 and need to 
upgrade.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Can I find the datetime an object was last assigned to/saved?

2004-09-14 Thread Uwe Ligges
[EMAIL PROTECTED] wrote:
I'm using v 1.9.1 under Windoz XP.
Can I do the equivalent of "ls -l" on my R objects? R's "ls()" lists
only the names.
For example ll() in package gregmisc.
Uwe Ligges
Thanks!
 

David L. Reiner
 

Rho Trading
440 S. LaSalle St -- Suite 620
Chicago  IL  60605
 

312-362-4963 (voice)
312-362-4941 (fax)
 

 

[[alternative HTML version deleted]]
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] memory allocation error message

2004-09-14 Thread Uwe Ligges
Prodromos Zanis wrote:
Dear all
I use the library(netCDF) to read in NCEP data. The file I want to read has size 113 
Mb.
When i try to read it I get the following message:
Error: cannot allocate vector of size 221080 Kb
In addition: Warning message: 
Reached total allocation of 255Mb: see help(memory.size) 
So this is an R version < 1.9.0 !
1. Please upgrade.
2. Please read ?Memory and learn how to increase the maximum amount of 
memory consumed by R under Windows.
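
For example (a sketch of the Windows-only calls; the numbers are arbitrary):

   memory.limit(size = 1024)   # raise the cap to 1024 Mb for this session
   ## or start R with the command-line flag  --max-mem-size=1024M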

Uwe Ligges

I get a similar message when I try to read a file with 256 Mb in a PC with 2 GigaByte 
RAM.
Is there something that I can do to handle this problem of reading big netCDF files 
with R-project.
I look forward for your help.
Prodromos Zanis

Dr. Prodromos Zanis
Research Centre for Atmospheric Physics and Climatology 
Academy of Athens
3rd September 131, Athens 11251, Greece
Tel. +30 210 8832048
Fax: +30 210 8832048
e-mail: [EMAIL PROTECTED]
Web address: http://users.auth.gr/~zanis/
*

[[alternative HTML version deleted]]
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html