[R] lme output

2007-12-05 Thread Marc Bernard
Dear all,
   
  I noticed the following in the call of lme using msVerbose.
   
  fm1 <- lme(distance ~ age, data = Orthodont, control = 
lmeControl(msVerbose = TRUE))
   
9  318.073: -0.567886 0.152479  1.98021
 10  318.073: -0.567191 0.152472  1.98009
 11  318.073: -0.567208 0.152473  1.98010

   
  fm2 <- lme(distance ~ age, random = ~ age, data = Orthodont, control = 
lmeControl(msVerbose = TRUE))
   
7  318.073: -0.342484  1.75530  4.44650
  8  318.073: -0.342507  1.75539  4.44614
  9  318.073: -0.342497  1.75539  4.44614

   
  The two models are equivalent and give the same estimates. However, the 
optimal parameters in the profiled log-likelihood are not the same. Why?
   
  As I understood it, the parameters optimised in the profiled likelihood 
are the log of the Cholesky factorization of the precision matrix. The 
latter can be derived as a Cholesky factorization of the product of the 
residual variance and the inverse of the random-effects covariance. When I 
check this, it does not hold for model fm1, even though fm1 is equivalent 
to model fm2.
   
   log(chol(((summary(fm1)$sigma)^2) * solve(matrix(getVarCov(fm1), nrow = 2))))

           [,1]     [,2]
[1,] -0.3424971 1.492037
[2,]       -Inf 1.755388

  log(chol(((summary(fm2)$sigma)^2) * solve(matrix(getVarCov(fm2), nrow = 2))))

           [,1]     [,2]
[1,] -0.3424971 1.492037
[2,]       -Inf 1.755388

   
  In the two models, these terms are equal to the optimized parameters of 
fm2, not of fm1. I suppose I am missing something.
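  For the archives, the relationship described above can be checked 
directly. A minimal sketch, assuming the nlme package and its built-in 
Orthodont data (object names here are illustrative):

```r
library(nlme)

## random intercept and slope in age (the default random structure
## for this fixed-effects formula)
fm <- lme(distance ~ age, data = Orthodont)

## sigma^2 * D^{-1}, where D is the estimated random-effects covariance
D    <- matrix(getVarCov(fm), nrow = 2)
prec <- fm$sigma^2 * solve(D)

## log of the Cholesky factor -- compare with the parameters printed
## by msVerbose during the optimisation
log(chol(prec))
```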
   
  Best,
   
  Bernard

 
-

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multiple stacked barplots on the same graph?

2007-12-05 Thread hadley wickham
And qplot(x = Categorie, y = Total, data = mydata, geom = "bar", fill = Part) + coord_flip()

makes it a bit easier to read the labels.

Hadley

On Dec 5, 2007 8:33 AM, Domenico Vistocco [EMAIL PROTECTED] wrote:
 This command works:

 qplot(x = Categorie, y = Total, data = mydata, geom = "bar", fill = Part)

 for your data.

 domenico vistocco

 Stéphane CRUVEILLER wrote:

  Hi,
 
  the same error message is displayed with geom = "bar" as parameter.
  here is the output of dput:
 
   > dput(mydata)
  structure(list(Categorie = structure(c(1L, 12L, 8L, 2L, 5L, 7L,
  16L, 6L, 15L, 11L, 10L, 13L, 14L, 3L, 4L, 9L, 17L, 1L, 12L, 8L,
  2L, 5L, 7L, 16L, 6L, 15L, 11L, 10L, 13L, 14L, 3L, 4L, 9L, 17L
  ), .Label = c("Amino acid biosynthesis",
  "Biosynthesis of cofactors, prosthetic groups, and carriers",
  "Cell envelope", "Cellular processes", "Central intermediary metabolism",
  "DNA metabolism", "Energy metabolism",
  "Fatty acid and phospholipid metabolism",
  "Mobile and extrachromosomal element functions", "Protein fate",
  "Protein synthesis",
  "Purines, pyrimidines, nucleosides, and nucleotides",
  "Regulatory functions", "Signal transduction", "Transcription",
  "Transport and binding proteins", "Unknown function"), class = "factor"),
      Part = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
      1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
      2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("common",
      "specific"), class = "factor"), Total = c(3.03, 1.65, 1.52,
      2.85, 3.4, 11.81, 10.51, 1.95, 2.08, 2.51, 2.23, 7.63, 1.88,
      2.76, 7.21, 1.08, 20.75, 0.35, 0.17, 0.08, 0.18, 0.42, 2.05,
      1.98, 0.63, 0.17, 0.2, 0.3, 1.58, 0.27, 0.83, 1.38, 3.56,
      11.63), chr1 = c(4.55, 2.37, 1.77, 4.68, 3.19, 12.49, 13.56,
      2.81, 3.13, 4.58, 3.26, 7.3, 2.06, 3.41, 7.9, 0.22, 22.45,
      0.16, 0.06, 0.09, 0.19, 0.09, 0.7, 0.85, 0.22, 0.06, 0.03,
      0.32, 0.66, 0.06, 0.63, 0.38, 1.14, 6.17), chr2 = c(1.68,
      1.06, 1.55, 1.02, 4.57, 13.87, 7.85, 0.98, 1.06, 0.27, 1.2,
      9.88, 2.13, 2.53, 7.71, 0.4, 22.38, 0.71, 0.35, 0.09, 0.22,
      0.98, 3.9, 3.24, 0.22, 0.22, 0.49, 0.31, 2.79, 0.62, 1.33,
      1.95, 0.44, 16), pl = c(0, 0, 0, 0, 0, 0.17, 4.27, 1.03,
      0.34, 0, 0.68, 0.68, 0, 0.17, 1.54, 8.38, 5.3, 0, 0, 0, 0,
      0, 2.22, 3.25, 4.44, 0.51, 0, 0.17, 1.88, 0, 0, 4.62, 28.72,
      24.27)), .Names = c("Categorie", "Part", "Total", "chr1",
  "chr2", "pl"), class = "data.frame", row.names = c(NA, -34L))
 
 
  thx,
 
  Stéphane.
 
  hadley wickham wrote:
  On Dec 4, 2007 10:34 AM, Stéphane CRUVEILLER
  [EMAIL PROTECTED] wrote:
 
  Hi,
 
  I tried this method but it seems that there is something wrong with my
  data frame:
 
 
  when I type in:
 
qplot(x=as.factor(Categorie),y=Total,data=mydata)
 
  It displays a graph with 2 points in each category...
  but if I add the parameter geom = "histogram"
 
    qplot(x=as.factor(Categorie),y=Total,data=mydata,geom="histogram")
 
 
  Error in storage.mode(test) <- "logical" :
  object "y" not found
 
  any hint about this...
 
 
  Could you copy and paste the output of dput(mydata) ?
 
  (And I'd probably write the plot call as: qplot(Categorie, Total,
  data = mydata, geom = "bar"), since it is a bar plot, not a histogram)
 
 
 





-- 
http://had.co.nz/



Re: [R] Plotting error bars in xy-direction

2007-12-05 Thread Ben Bolker



Hans W Borchers wrote:
 
 Dear R-help,
 
 I am looking for a function that will plot error bars in x- or y-direction
 (or 
 both), the same as the Gnuplot function 'plot' can achieve with:
 
 plot "file.dat" with xyerrorbars, ...
 
 Rsite-searching led me to the functions 'errbar' and 'plotCI' in the
 Hmisc, 
 gregmisc, and plotrix packages. As I understand the descriptions and
 examples, 
 none of these functions provides horizontal error bars.
 
 Looking into 'errbar' and using segments, I wrote a small function for
 myself 
 adding these kinds of error bars to existing plots. I would still be
 interested 
 to know what the standard R solution is.
 
 Regards,  Hans Werner
 
 

plotCI from plotrix will do horizontal error bars --
from ?plotCI:

  err: The direction of error bars: "x" for horizontal, "y" for
  vertical ("xy" would be nice but is not implemented yet;
  don't know quite how everything would be specified.  See
  examples for composing a plot with simultaneous horizontal
  and vertical error bars)
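Until that is implemented, simultaneous horizontal and vertical error bars 
can be composed from base graphics with arrows(); a minimal sketch with 
invented data:

```r
x  <- 1:5
y  <- c(2.1, 3.9, 3.2, 5.0, 4.4)
xe <- rep(0.2, 5)   # half-widths of the horizontal bars
ye <- rep(0.5, 5)   # half-heights of the vertical bars

plot(x, y, xlim = range(x) + c(-1, 1), ylim = range(y) + c(-1, 1))
## flat-capped arrows drawn at both ends give error-bar whiskers
arrows(x - xe, y, x + xe, y, angle = 90, code = 3, length = 0.05)  # horizontal
arrows(x, y - ye, x, y + ye, angle = 90, code = 3, length = 0.05)  # vertical
```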

  Ben Bolker

  
-- 
View this message in context: 
http://www.nabble.com/Plotting-error-bars-in-xy-direction-tf4948535.html#a14174151
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Thibaut Jombart
8rino-Luca Pantani wrote:

Dear R-users.
I eventually bought myself a new computer with the following 
characteristics:

Processor AMD ATHLON 64 DUAL CORE 4000+ (socket AM2)
Mother board ASR SK-AM2 2
Ram Corsair Value 1 GB DDR2 800 Mhz
Hard Disk WESTERN DIGITAL 160 GB SATA2 8MB

I'm a newcomer to the Linux world.
I started using it (Ubuntu 7.10 at work and FC4 on laptop) on a regular 
basis on May.
I must say I'm quite comfortable with it, even if I have to re-learn a 
lot of things.  But this is not a problem, I will improve my knowledge 
with time.

My main problem now, is that I installed Ubuntu 7.10 Gutsy Gibbon on the 
new one amd64.

To install R on it i followed the directions found here
http://help.nceas.ucsb.edu/index.php/Installing_R_on_Ubuntu

but unfortunately it did not work.

After reading some posts on the R-SIG-debian list, such as

https://stat.ethz.ch/pipermail/r-sig-debian/2007-October/000253.html

I immediately realize that an amd64 is not the right processor to make 
life easy.

Therefore I would like to know from you, how can I solve this problem:
Should I install the i386 version of R ?
Should I install another flavour of Linux ?
Which one ?
Fedora Core 7 ?
Debian ?

Thanks a lot, for any suggestion

  

Hi,

I've got an Athlon 64-bit 3000+ processor with Ubuntu LTS (Dapper, 64-bit 
version) installed on my laptop. I have no problem installing R from 
source, as long as the correct libraries/compilers/etc. are installed. It 
is painless if you just follow what the configure script tells you (and 
use apt-get to install missing packages). I guess a common mistake is 
forgetting to install the -dev versions of packages, which sometimes 
contain required headers. In any case, you should not have trouble 
installing R on the different Linux distributions, 64-bit or not.

Hope this helps.

Thibaut, 64bit-linux-Ruser and still alive.

-- 
##
Thibaut JOMBART
CNRS UMR 5558 - Laboratoire de Biométrie et Biologie Evolutive
Universite Lyon 1
43 bd du 11 novembre 1918
69622 Villeurbanne Cedex
Tél. : 04.72.43.29.35
Fax : 04.72.43.13.88
[EMAIL PROTECTED]
http://lbbe.univ-lyon1.fr/-Jombart-Thibaut-.html?lang=en
http://pbil.univ-lyon1.fr/software/adegenet/



[R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread 8rino-Luca Pantani
Dear R-users.
I eventually bought myself a new computer with the following 
characteristics:

Processor AMD ATHLON 64 DUAL CORE 4000+ (socket AM2)
Mother board ASR SK-AM2 2
Ram Corsair Value 1 GB DDR2 800 Mhz
Hard Disk WESTERN DIGITAL 160 GB SATA2 8MB

I'm a newcomer to the Linux world.
I started using it (Ubuntu 7.10 at work and FC4 on my laptop) on a regular 
basis in May.
I must say I'm quite comfortable with it, even if I have to re-learn a 
lot of things.  But this is not a problem, I will improve my knowledge 
with time.

My main problem now is that I installed Ubuntu 7.10 Gutsy Gibbon on the 
new amd64 machine.

To install R on it I followed the directions found here:
http://help.nceas.ucsb.edu/index.php/Installing_R_on_Ubuntu

but unfortunately it did not work.

After reading some posts on the R-SIG-debian list, such as

https://stat.ethz.ch/pipermail/r-sig-debian/2007-October/000253.html

I immediately realized that an amd64 is not the right processor for an 
easy life.

Therefore I would like to know from you how I can solve this problem:
Should I install the i386 version of R ?
Should I install another flavour of Linux ?
Which one ?
Fedora Core 7 ?
Debian ?

Thanks a lot for any suggestions.

-- 
Ottorino-Luca Pantani, Università di Firenze
Dip. Scienza del Suolo e Nutrizione della Pianta
P.zle Cascine 28 50144 Firenze Italia
Tel 39 055 3288 202 (348 lab) Fax 39 055 333 273 
[EMAIL PROTECTED]  http://www4.unifi.it/dssnp/



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Prof Brian Ripley
Note that Ottorino has only 1GB of RAM installed, which makes a 64-bit 
version of R somewhat moot.  See chapter 8 of


http://cran.r-project.org/doc/manuals/R-admin.html

I would install an i386 version of R on x86_64 Linux unless I had 2GB or 
more of RAM.  I don't know how easily that works on Ubuntu these days, but 
I would try it.



On Wed, 5 Dec 2007, Ljubomir J. Buturovic wrote:



Hi Ottorino,

I have been using R on 64-bit Ubuntu for about a year without
problems, both Intel and AMD CPUs. Installing and using several
packages (e1071, svmpath, survival) also works. However, I had to
install R from source:

$ gunzip -c R-2.6.1.tar.gz | tar xvf -
$ cd R-2.6.1
$ ./configure --enable-R-shlib; make; make pdf
# make install; make install-pdf

Notice that `make install' has to be run as root.

I am using Feisty Fawn (Ubuntu 7.04), although I doubt that makes a
difference.

Hope this helps,

Ljubomir

8rino-Luca Pantani writes:
 Dear R-users.
 I eventually bought myself a new computer with the following
 characteristics:

 Processor AMD ATHLON 64 DUAL CORE 4000+ (socket AM2)
 Mother board ASR SK-AM2 2
 Ram Corsair Value 1 GB DDR2 800 Mhz
 Hard Disk WESTERN DIGITAL 160 GB SATA2 8MB

 I'm a newcomer to the Linux world.
 I started using it (Ubuntu 7.10 at work and FC4 on laptop) on a regular
 basis on May.
 I must say I'm quite comfortable with it, even if I have to re-learn a
 lot of things.  But this is not a problem, I will improve my knowledge
 with time.

 My main problem now, is that I installed Ubuntu 7.10 Gutsy Gibbon on the
 new one amd64.

 To install R on it i followed the directions found here
 http://help.nceas.ucsb.edu/index.php/Installing_R_on_Ubuntu

 but unfortunately it did not work.

 After reading some posts on the R-SIG-debian list, such as

 https://stat.ethz.ch/pipermail/r-sig-debian/2007-October/000253.html

 I immediately realize that an amd64 is not the right processor to make
 life easy.

 Therefore I would like to know from you, how can I solve this problem:
 Should I install the i386 version of R ?
 Should I install another flavour of Linux ?
 Which one ?
 Fedora Core 7 ?
 Debian ?

 Thanks a lot, for any suggestion

 --
 Ottorino-Luca Pantani, Università di Firenze
 Dip. Scienza del Suolo e Nutrizione della Pianta
 P.zle Cascine 28 50144 Firenze Italia
 Tel 39 055 3288 202 (348 lab) Fax 39 055 333 273
 [EMAIL PROTECTED]  http://www4.unifi.it/dssnp/


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595


Re: [R] R function for percentrank

2007-12-05 Thread Martin Maechler
I'm coming late to this, but this *does* need a correction
just for the archives !

 MS == Marc Schwartz [EMAIL PROTECTED]
 on Sat, 01 Dec 2007 13:33:21 -0600 writes:

MS On Sat, 2007-12-01 at 18:40 +, David Winsemius wrote:
 David Winsemius [EMAIL PROTECTED] wrote in 
 news:[EMAIL PROTECTED]:
 
  tom soyer [EMAIL PROTECTED] wrote in
  news:[EMAIL PROTECTED]: 
  
  John,
  
  The Excel's percentrank function works like this: if one has a number,
  x for example, and one wants to know the percentile of this number in
  a given data set, dataset, one would type =percentrank(dataset,x) in
  Excel to calculate the percentile. So for example, if the data set is
  c(1:10), and one wants to know the percentile of 2.5 in the data set,
  then using the percentrank function one would get 0.166, i.e., 2.5 is
  in the 16.6th percentile. 
  
  I am not sure how to program this function in R. I couldn't find it as
  a built-in function in R either. It seems to be an obvious choice for
  a built-in function. I am very surprised, but maybe we both missed it.
   
  My nomination for a function with a similar result would be ecdf(), the 
  empirical cumulative distribution function. It is of class "function", 
  so efforts to index ecdf(.)[.] failed for me.

I think you did not understand ecdf() !!!
It *returns* a function,
that you can then apply to old (or new) data; see below

MS You can use ls.str() to look into the function environment:

 ls.str(environment(ecdf(x)))
MS f :  num 0
MS method :  int 2
MS n :  int 25
MS x :  num [1:25] -2.215 -1.989 -0.836 -0.820 -0.626 ...
MS y :  num [1:25] 0.04 0.08 0.12 0.16 0.2 0.24 0.28 0.32 0.36 0.4 ...
MS yleft :  num 0
MS yright :  num 1



MS You can then use get() or mget() within the function environment to
MS return the requisite values. Something along the lines of the following
MS within the function percentrank():

MS percentrank <- function(x, val)
MS {
MS     env.x <- environment(ecdf(x))
MS     res <- mget(c("x", "y"), env.x)
MS     Ind <- which(sapply(seq(length(res$x)),
MS                         function(i) isTRUE(all.equal(res$x[i], val))))
MS     res$y[Ind]
MS }

sorry Marc, but Yuck !!

- this  percentrank() only works when you apply it to original x[i] values
- only works for 'val' of length 1
- is a complicated hack

and absolutely unneeded  (see below)

MS Thus:

MS set.seed(1)
MS x - rnorm(25)

 x
MS [1] -0.62645381  0.18364332 -0.83562861  1.59528080  0.32950777
MS [6] -0.82046838  0.48742905  0.73832471  0.57578135 -0.30538839
MS [11]  1.51178117  0.38984324 -0.62124058 -2.21469989  1.12493092
MS [16] -0.04493361 -0.01619026  0.94383621  0.82122120  0.59390132
MS [21]  0.91897737  0.78213630  0.07456498 -1.98935170  0.61982575


 percentrank(x, 0.48742905)
MS [1] 0.56

[gives 0.52 in my version of R ]

Well, that is *THE SAME*  as using  ecdf() the way you 
should have used it :

  ecdf(x)(0.48742905)

{in two lines, that is

  mypercR - ecdf(x)
  mypercR(0.48742905)

 which may be easier to understand if you have never used the
 nice concept that underlies all of

 approxfun(), splinefun() or ecdf()
}

You can also use

  ecdf(x)(x)

and indeed check that it is identical to the convoluted
percentrank() function above :

 ecdf(x)(0.48742905)
[1] 0.52
 ecdf(x)(x)
 [1] 0.20 0.44 0.12 1.00 0.48 0.16 0.56 0.72 0.60 0.28 0.96 0.52 0.24 0.04 0.92
[16] 0.32 0.36 0.88 0.80 0.64 0.84 0.76 0.40 0.08 0.68
 all(ecdf(x)(x) == sapply(x, function(v) percentrank(x,v)))
[1] TRUE
 


Regards (and apologies for my apparent indignation ;-)
by the author of ecdf() ,

Martin Maechler, ETH Zurich  


MS One other approach, which returns the values and their respective rank
MS percentiles is:

  cumsum(prop.table(table(x)))

[.. snip ]



[R] Java parser for R data file?

2007-12-05 Thread David Coppit
Hi everyone,

Has anyone written a parser in Java for either the ASCII or binary format
produced by save()? I need to parse a single large 2D array that is
structured like this:

list(
  "32609_1" = c(-9549.39231289146, -9574.07159324482, ... ),
  "32610_2" = c(-6369.12526971635, -6403.99620977124, ... ),
  "32618_2" = c(-2138.29095689061, -2057.9229403233, ... ),
...
)

Or, given that I'm dealing with just a single array, would it be better to
roll my own I/O using write.table or write.matrix from the MASS package?

Thanks,
David



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Peter Dalgaard
Prof Brian Ripley wrote:
 Note that Ottorino has only 1GB of RAM installed, which makes a 64-bit
 version of R somewhat moot.  See chapter 8 of

 http://cran.r-project.org/doc/manuals/R-admin.html

Only somewhat. The Opteron actually has 1GB too (Hey, it was bought in
2004!  And the main point was to see whether 64 bit builds would work at
all) but 16GB of swap. So large data sets will fit but be processed slowly.
 I would install a i386 version of R on x86_64 Linux unless I had 2Gb
 or more of RAM.  I don't know how easily that works on Ubuntu these
 days, but I would try it.
It's not like the 64 bit build feels slow for basic usage, though. I
don't think you need to bother with mixing architectures unless you have
applications where it really matters (CPU intensive, but below 32-bit
addressing limitations). Buying more RAM is much to be preferred.

Now what kind of RAM does my Opteron board take... ?



-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Dirk Eddelbuettel
On Wed, Dec 05, 2007 at 06:11:40PM +0100, Peter Dalgaard wrote:
 One oddity about Ubuntu is that there are no CRAN builds for 64bit.

Volunteers would be welcomed with open arms.

Dirk,

-- 
Three out of two people have difficulties with fractions.



Re: [R] confidence intervals for y predicted in non linear regression

2007-12-05 Thread Peter Dalgaard
Prof Brian Ripley wrote:
 You mean the package nls2 at

 http://w3.jouy.inra.fr/unites/miaj/public/AB/nls2/install.html

 and not the unfortunately named nls2 that has just appeared on CRAN?

 The first is not really a 'package' in the R sense.

Actually, it is one (sort of), but it is broken. The instructions say
that you can use R CMD INSTALL to install in R >= 2.0.0 (!). With a
current R, you can try, but it dies:

No man pages found in package 'nls2'
** building package indices ...
Warning in file(file, "r", encoding = encoding) :
  cannot open file '../R/init.R', reason 'No such file or directory'
Error in file(file, "r", encoding = encoding) : unable to open connection
Calls: <Anonymous> ... switch -> sys.source -> eval -> eval -> source ->
file
Execution halted
ERROR: installing package indices failed
** Removing '/home/bs/pd/Rlibrary/nls2'

...and there were some odd goings-on at the start as well.

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



[R] Working with ts objects

2007-12-05 Thread Richard Saba
I am relatively new to R and object oriented programming. I have relied on
SAS for most of my data analysis.  I teach an introductory undergraduate
forecasting course using the Diebold text and I am considering using R in
addition to SAS and Eviews in the course. I work primarily with univariate
or multivariate time series data. I am having a great deal of difficulty
understanding and working with ts objects particularly when it comes to
referencing variables in plot commands or in formulas. The confusion is
amplified when certain procedures (lm for example) coerce the ts object
into a data.frame before application with the results that the output is
stored in a data.frame object. 
For example, the two sets of code below replicate examples from chapters 2
and 6 of the text. In the first set of code, if I were to replace
anscombe <- read.table(fname, header=TRUE) with
anscombe <- ts(read.table(fname, header=TRUE)), the plot() commands would
generate errors: the objects x1, y1, ... would not be recognized. In that
case I would have to reference the specific columns of the anscombe data
set. If I had constructed the data set from several different data sets
using the ts.intersect() function (see the second code block below), the
problem becomes even more involved, and keeping track of which columns are
associated with which variables can be rather daunting. All I wanted was to
plot actual vs. predicted values of hstarts and the residuals from the model. 

Given the difficulties I have encountered I know my students will have
similar problems. Is there a source other than the basic R manuals that I
can consult and recommend to my students that will help get a handle on
working with time series objects? I found the Shumway Time series analysis
and its applications with R Examples website very helpful but many
practical questions involving manipulation of time series data still remain.
Any help will be appreciated.
Thanks,

Richard Saba
Department of Economics
Auburn University
Email:  [EMAIL PROTECTED]
Phone:  334 844-2922




anscombe <- read.table(fname, header=TRUE)
names(anscombe) <- c("x1","y1","x2","y2","x3","y3","x4","y4")
reg1 <- lm(y1 ~ 1 + x1, data=anscombe)
reg2 <- lm(y2 ~ 1 + x2, data=anscombe)
reg3 <- lm(y3 ~ 1 + x3, data=anscombe)
reg4 <- lm(y4 ~ 1 + x4, data=anscombe)
summary(reg1)
summary(reg2)
summary(reg3)
summary(reg4)
par(mfrow=c(2,2))
plot(x1,y1)
abline(reg1)
plot(x2,y2)
abline(reg2)
plot(x3,y3)
abline(reg3)
plot(x4,y4)
abline(reg4)

..
fname <- file.choose()
tab6.1 <- ts(read.table(fname, header=TRUE), frequency=12, start=c(1946,1))
month <- cycle(tab6.1)
year <- floor(time(tab6.1))
dat1 <- ts.intersect(year, month, tab6.1)
dat2 <- window(dat1, start=c(1946,1), end=c(1993,12))
reg1 <- lm(tab6.1 ~ 1 + factor(month), data=dat2, na.action=NULL)
summary(reg1)
hstarts <- dat2[,3]
plot1 <- ts.intersect(hstarts, reg1$fitted.values, reg1$resid)
plot.ts(plot1[,1])
lines(plot1[,2], col="red")
plot.ts(plot1[,3], ylab="Residuals")
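One way to tame the column-bookkeeping problem described above:
ts.intersect() keeps the names of its arguments, so columns of the
combined series can be referenced by name instead of position. A small
sketch with made-up series:

```r
a <- ts(rnorm(24), frequency = 12, start = c(1946, 1))
b <- ts(rnorm(24), frequency = 12, start = c(1946, 1))

## named arguments become column names of the multivariate ts
z <- ts.intersect(hstarts = a, fitted = b)
colnames(z)              # "hstarts" "fitted"
plot.ts(z[, "hstarts"])
lines(z[, "fitted"], col = "red")
```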



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Patrick Connolly
On Wed, 05-Dec-2007 at 06:11PM +0100, Peter Dalgaard wrote:

[]

| One oddity about Ubuntu is that there are no CRAN builds for 64bit.
| Presumably, the Debian packages work, or you can get the build
| script and make your own build. This is not really within my
| domain, though.

I've always used rpms (or debs for Debian-type OSes), but I install R
from source, which is very easy to do.  Adding R packages with
install.packages() is also extremely easy.  If you have the 64-bit OS,
it will compile R as 64-bit (unless you make some modifications to the
standard configuration).

One downside of that is that you'd be unable to use packages that come
only in 32-bit versions.  One such is ASReml-R, but if you never intend
to use such things, the only other consideration I can think of is the
relatively small amount of memory.  There are no great benefits to
64-bit without lots of memory, but a few downsides.

HTH

-- 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.   
   ___Patrick Connolly   
 {~._.~} Great minds discuss ideas
 _( Y )_Middle minds discuss events 
(:_~*~_:)Small minds discuss people  
 (_)-(_)   . Anon
  
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.



Re: [R] Java parser for R data file?

2007-12-05 Thread Prof Brian Ripley
On Wed, 5 Dec 2007, David Coppit wrote:

 Hi everyone,

 Has anyone written a parser in Java for either the ASCII or binary format
 produced by save()? I need to parse a single large 2D array that is
 structured like this:

 list(
  "32609_1" = c(-9549.39231289146, -9574.07159324482, ... ),
  "32610_2" = c(-6369.12526971635, -6403.99620977124, ... ),
  "32618_2" = c(-2138.29095689061, -2057.9229403233, ... ),
...
 )

 Or, given that I'm dealing with just a single array, would it be better to
 roll my own I/O using write.table or write.matrix from the MASS package?

It would be much easier.  The save() format is far more complex than you 
need.  However, I would use writeBin() to write a binary file and read 
that in from Java, avoiding the binary -> ASCII -> binary conversion.
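A sketch of that approach (file name and layout are illustrative, not
from the original post): write the dimensions first, then the data, and
read the same layout back in Java with a DataInputStream. Note that
writeBin() uses the platform's native byte order unless endian= is given:

```r
m <- matrix(rnorm(6), nrow = 2)

con <- file("matrix.bin", "wb")
writeBin(dim(m), con)                  # two 4-byte integers: nrow, ncol
writeBin(as.vector(m), con, size = 8)  # 8-byte doubles, column-major order
close(con)
```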

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Peter Dalgaard
8rino-Luca Pantani wrote:
 Dear R-users.
 I eventually bought myself a new computer with the following 
 characteristics:

 Processor AMD ATHLON 64 DUAL CORE 4000+ (socket AM2)
 Mother board ASR SK-AM2 2
 Ram Corsair Value 1 GB DDR2 800 Mhz
 Hard Disk WESTERN DIGITAL 160 GB SATA2 8MB

 I'm a newcomer to the Linux world.
 I started using it (Ubuntu 7.10 at work and FC4 on laptop) on a regular 
 basis on May.
 I must say I'm quite comfortable with it, even if I have to re-learn a 
 lot of things.  But this is not a problem, I will improve my knowledge 
 with time.

 My main problem now, is that I installed Ubuntu 7.10 Gutsy Gibbon on the 
 new one amd64.

 To install R on it i followed the directions found here
 http://help.nceas.ucsb.edu/index.php/Installing_R_on_Ubuntu

 but unfortunately it did not work.

 After reading some posts on the R-SIG-debian list, such as

 https://stat.ethz.ch/pipermail/r-sig-debian/2007-October/000253.html

 I immediately realize that an amd64 is not the right processor to make 
 life easy.

 Therefore I would like to know from you, how can I solve this problem:
 Should I install the i386 version of R ?
 Should I install another flavour of Linux ?
 Which one ?
 Fedora Core 7 ?
 Debian ?

 Thanks a lot, for any suggestion

   
Amd64 architecture should not be a major issue for R on any of the major
platforms, as far as I know. I have Fedora 7 (soon-ish F8) on the big
machine back home (dual Opteron) and that one never had any major issues
with either of source builds or the official RPMs. The main (only?)
thing that still tends to bite people on 64 bit is browser plugins,
notably Java.

In general, I've been happy with Fedora, although its desire to update
itself constantly does require a good 'Net connection.

My SUSE desktop at work is also 64 bit and happy to deal with R (in fact
the official release builds are made on it). The KDE desktop has a few
annoying misfeatures (to me), though, and you need a little special
setup to include Detlef's repository as an install source.

One oddity about Ubuntu is that there are no CRAN builds for 64bit.
Presumably, the Debian packages work, or you can get the build script
and make your own build. This is not really within my domain, though.


-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



Re: [R] newbie lapply question

2007-12-05 Thread Ranjan Bagchi


On Wed, 5 Dec 2007, Prof Brian Ripley wrote:
[...]

Thanks, I'll read it more carefully.

 Perhaps if you told us what you are trying to achieve we might be able to 
 help you achieve it.


I have a function which takes a date as an argument.  I've tested it, and 
I'd like to run it over a range of dates, so I'm looking at apply- or 
map-type functions.
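A small sketch of the apply-over-dates pattern (process_day is a
hypothetical stand-in for the real function). Iterating over indices
keeps each element a proper Date object, which lapply()/sapply() over the
vector itself do not always preserve:

```r
process_day <- function(d) format(d, "%Y-%m-%d")  # stand-in for the real function

dates   <- seq(as.Date("2007-12-01"), as.Date("2007-12-05"), by = "day")
results <- lapply(seq_along(dates), function(i) process_day(dates[i]))
```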

 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UK                Fax:  +44 1865 272595






[R] Dealing with NA's in a data matrix

2007-12-05 Thread Amit Patel
Hi I have a matrix with NA value that I would like to convert these to a value 
of 0.
any suggestions
 
Kind Regards
Amit Patel







Re: [R] Displaying numerics to full double precision

2007-12-05 Thread Ben Bolker



Jeff Delmerico wrote:
 
 I'm working on a shared library of C functions for use with R, and I want
 to create a matrix in R and pass it to the C routines.  I know R computes
 and supposedly stores numerics in double precision, but when I create a
 matrix of random numerics using rnorm(), the values are displayed in
 single precision, and also exported in single precision when I pass them
 out to my C routines.  An example is below:
 
 a - matrix(rnorm(16, mean=10, sd=4), nrow=4)
 a
   [,1]  [,2]  [,3]  [,4]
 [1,] 14.907606 17.572872 19.708977  9.809943
 [2,]  9.322041 13.624452  7.745254  7.596176
 [3,] 10.642408  6.151546  9.937434  6.913875
 [4,] 14.617647  5.577073  8.217559 12.115465
 storage.mode(a)
 [1] double
 
 Does anyone know if there is a way to change the display or storage
 settings so that the values will be displayed to their full precision?  
 Or does rnorm only produce values to single precision? 
 
 Any assistance would be greatly appreciated.
 
 Thanks,
 Jeff Delmerico
 

options("digits")  # defaults to 7
options(digits=x)

  I may be mistaken, but I think the values are indeed exported
as double precision -- the issue here is just a display setting.
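One quick check (my own sketch, not from the original exchange): sprintf() formats a value independently of options(digits), so you can confirm that the full double is stored even when fewer digits are displayed:

```r
a <- matrix(rnorm(16, mean = 10, sd = 4), nrow = 4)
# 17 significant digits are enough to round-trip any IEEE double
sprintf("%.17g", a[1, 1])
```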

  Ben Bolker


-- 
View this message in context: 
http://www.nabble.com/Displaying-numerics-to-full-double-precision-tf4950807.html#a14178707
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Is R portable?

2007-12-05 Thread Barry Rowlingson
Peter Dalgaard wrote:

 The toolchain availability tends to get in the way. Linux-based gadgets
 could prove easier. I do wonder from time to time whether there really
 is a market for R on cellphones...

  As soon as someone writes library(ringtone) there might be :)

  And I think you'd have to turn off predictive text. Can someone with a 
mobile/cellphone tell me what 'hist(runif(100))' comes up as? [1]

Barry

[1] No, I haven't got one.



Re: [R] Is R portable?

2007-12-05 Thread Ted Harding
On 05-Dec-07 18:57:58, Barry Rowlingson wrote:
 Peter Dalgaard wrote:
 
 The toolchain availability tends to get in the way. Linux-based
 gadgets could prove easier. I do wonder from time to time whether
 there really is a market for R on cellphones...
 
 As soon as someone writes library(ringtone) there might be :)
 
 And I think you'd have to turn off predictive text. Can someone
 with a mobile/cellphone tell me what 'hist(runif(100))' comes up as?
 [1]
 
 Barry
 
 [1] No, I haven't got one.

I have a very old cellphone whose display, when I switch it on,
looks very much like what one would expect from 'hist(runif(100))'.

I'm not using it any more.

Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 05-Dec-07   Time: 19:32:42
-- XFMail --



Re: [R] 2/3d interpolation from a regular grid to another regular grid

2007-12-05 Thread Scionforbai
 I just read the description in ?Krig in the package fields which says:
  Fits a surface to irregularly spaced data. 

Yes, that is the most general case; regular data locations are a subset
of irregular ones. By the way, it is kriging, with just one g, after
Danie Krige, the South African statistician who first applied such
methods to mining surveys.

 My problem is simpler
...
 So it is really purely numerical.
...
 I just hoped that R had that already coded ...

Of course R has ... ;) If your grids are really as simple as the
example you posted above, and you have really little variability, all
you need is a moving average: the arithmetic mean of the two nearest
points belonging to grid1 and grid2 respectively. I assume that your
regularly shaped grids are values stored in matrix objects.

The function comes from the diff.default code (downloading the R
source code is, I assure you, worthwhile):

my.interp <- function(x, lag = 1)
{
r <- unclass(x)  # don't want class-specific subset methods
i1 <- -1:-lag
r <- (r[i1] + r[-length(r):-(length(r)-lag+1)])/2
class(r) <- oldClass(x)
return(r)
}

Finally,

g1 <- apply(grid1val, 1, my.interp)
g2 <- apply(grid2val, 2, my.interp)

give the interpolations on gridFinal, provided that all gridFinal
points are within the grid1 and grid2 ones.

If you want the mean of 4 points, you apply once more with lag=3,
cbind/rbind columns/rows of NAs to the result, and you calculate the
mean of the points of the two matrices.
This is the simplest (and quickest) moving average that you can do.
For more complicated examples, and for 3d, you have to go a little
further, but the principle holds.
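A toy run, with made-up numbers rather than the poster's grids, may make the behaviour concrete — each column is replaced by the midpoints of its consecutive entries (the function is restated here so the snippet is self-contained):

```r
my.interp <- function(x, lag = 1)
{
    r <- unclass(x)                 # drop class-specific subset methods
    i1 <- -1:-lag
    r <- (r[i1] + r[-length(r):-(length(r) - lag + 1)]) / 2
    class(r) <- oldClass(x)
    return(r)
}
grid1val <- matrix(c(1, 2, 3,
                     4, 5, 6), nrow = 2, byrow = TRUE)
apply(grid1val, 2, my.interp)   # midpoints down each column: 2.5 3.5 4.5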

ScionForbai



[R] Interpretation of 'Intercept' in a 2-way factorial lm

2007-12-05 Thread Gustaf Granath
Hi all,

I hope this question is not too trivial. I can't find an explanation
anywhere (Stats and R books, R-archives) so now I have to turn to the R-list.

Question:

If you have a factorial design with two factors (say A and B with two
levels each). What does the intercept coefficient with
treatment.contrasts represent??

Here is an example without interaction where A has two levels A1 and
A2, and B has two levels B1 and B2. So R takes as a baseline A1 and B1.

coef( summary ( lm ( fruit ~ A + B, data = test)))

             Estimate   Std. Error  t value   Pr(>|t|)
(Intercept)  2.716667   0.5484828   4.953058  7.879890e-04
A2           6.27       0.633       9.894737  3.907437e-06
B2           5.17       0.633       8.157895  1.892846e-05

I understand that the mean of A2 is +6.3 more than A1, and
that B2 is 5.2 more than B1.

So the question is: Is the intercept A1 and B1 combined as one mean
(the baseline)? or is it something else? Does this number actually
tell me anything
useful (2.716)??

What does the model (y = intercept  + ??) look like then? I can't understand
how both factors (A and B) can have the same intercept?

Thanks in advance!!

Gustaf Granath

Dept of Plant Ecology
Uppsala University, Sweden



Re: [R] newbie lapply question

2007-12-05 Thread Domenico Vistocco
I am not sure I understand your problem, but it seems to me that you
can apply the function directly to the vector of dates:

> x=as.Date(c('2007-01-01','2007-01-02'))
> fff=function(x){y=x+1;return(y)}
> fff(x)
[1] "2007-01-02" "2007-01-03"
> class(fff(x))
[1] "Date"

Perhaps your function uses a different input (not a vector of dates but a
dataframe)?

domenico vistocco

Ranjan Bagchi wrote:
 On Wed, 5 Dec 2007, Prof Brian Ripley wrote:
   
 [...]
 

 Thanks I'll read it more carefully.

   
 Perhaps if you told us what you are trying to achieve we might be able to 
 help you achieve it.

 

 I have a function which takes a date as an argument.  I've tested it, and 
 I'd like to run it over a range of dates.  So I'm looking at apply- or 
 map- type functions.

   
 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595



 






[R] how to interpolate a plot with a logistic curve

2007-12-05 Thread Simone Gabbriellini
hello,

I have this simple question. This is my dataset

size
1   57
2   97
3   105
4   123
5   136
6   153
7   173
8   180
9   193
10  202
11  213
12  219
13  224
14  224
15  248
16  367
17  496
18  568
19  618
20  670
21  719
22  774
23  810
24  814
25  823

I plot it with:

plot(generalstats[,1], type="b", xlab="Mesi", ylab="Numero di vertici",
  main="");

and try to interpolate with a linear regression with

abline(lm(generalstats[,1] ~
  c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25)),
  lty=3, col="red");

how to interpolate the data with a logistic curve? I cannot find the -  
I suppose easy - solution..

thank you,
Simone



Re: [R] how to interpolate a plot with a logistic curve

2007-12-05 Thread Dylan Beaudette
On Wednesday 05 December 2007, Simone Gabbriellini wrote:
 hello,

 I have this simple question. This is my dataset

   size
 1 57
 2 97
 3 105
 4 123
 5 136
 6 153
 7 173
 8 180
 9 193
 10   202
 11   213
 12   219
 13   224
 14   224
 15   248
 16   367
 17   496
 18   568
 19   618
 20   670
 21   719
 22   774
 23   810
 24   814
 25   823

 I plot it with:

 plot(generalstats[,1], type="b", xlab="Mesi", ylab="Numero di vertici",
   main="");

 and try to interpolate with a linear regression with

 abline(lm(generalstats[,1] ~
   c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25)),
   lty=3, col="red");

 how to interpolate the data with a logistic curve? I cannot find the -
 I suppose easy - solution..

 thank you,
 Simone


try:

glm(formula, data, family=binomial())

require(Design)
lrm()
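For a sigmoidal growth series like this one, another option — a sketch of mine, not from the thread — is a nonlinear least-squares fit with the self-starting logistic SSlogis() from the stats package, with the data re-typed from the question (the self-start can occasionally fail and need manual start values):

```r
mesi <- 1:25
size <- c(57, 97, 105, 123, 136, 153, 173, 180, 193, 202, 213, 219, 224,
          224, 248, 367, 496, 568, 618, 670, 719, 774, 810, 814, 823)
fit <- nls(size ~ SSlogis(mesi, Asym, xmid, scal))
plot(mesi, size, type = "b", xlab = "Mesi", ylab = "Numero di vertici")
lines(mesi, predict(fit), lty = 3, col = "red")
```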


Cheers,

-- 
Dylan Beaudette
Soil Resource Laboratory
http://casoilresource.lawr.ucdavis.edu/
University of California at Davis
530.754.7341



Re: [R] R function for percentrank

2007-12-05 Thread Marc Schwartz

On Wed, 2007-12-05 at 18:42 +0100, Martin Maechler wrote:
 I'm coming late to this, but this *does* need a correction
 just for the archives !
 
  MS == Marc Schwartz [EMAIL PROTECTED]
  on Sat, 01 Dec 2007 13:33:21 -0600 writes:
 
 MS On Sat, 2007-12-01 at 18:40 +, David Winsemius wrote:
  David Winsemius [EMAIL PROTECTED] wrote in 
  news:[EMAIL PROTECTED]:
  
   tom soyer [EMAIL PROTECTED] wrote in
   news:[EMAIL PROTECTED]: 
   
   John,
   
   The Excel's percentrank function works like this: if one has a 
 number,
   x for example, and one wants to know the percentile of this number 
 in
   a given data set, dataset, one would type =percentrank(dataset,x) in
   Excel to calculate the percentile. So for example, if the data set 
 is
   c(1:10), and one wants to know the percentile of 2.5 in the data 
 set,
   then using the percentrank function one would get 0.166, i.e., 2.5 
 is
   in the 16.6th percentile. 
   
   I am not sure how to program this function in R. I couldn't find it 
 as
   a built-in function in R either. It seems to be an obvious choice 
 for
   a built-in function. I am very surprised, but maybe we both missed 
 it.

   My nomination for a function with a similar result would be ecdf(), 
 the 
   empirical cumulative distribution function. It is of class 
 function 
  so 
   efforts to index ecdf(.)[.] failed for me.
 
 I think you did not understand ecdf() !!!
 It *returns* a function,
 that you can then apply to old (or new) data; see below
 
 MS You can use ls.str() to look into the function environment:
 
  ls.str(environment(ecdf(x)))
 MS f :  num 0
 MS method :  int 2
 MS n :  int 25
 MS x :  num [1:25] -2.215 -1.989 -0.836 -0.820 -0.626 ...
 MS y :  num [1:25] 0.04 0.08 0.12 0.16 0.2 0.24 0.28 0.32 0.36 0.4 ...
 MS yleft :  num 0
 MS yright :  num 1
 
 
 
 MS You can then use get() or mget() within the function environment to
 MS return the requisite values. Something along the lines of the 
 following
 MS within the function percentrank():
 
 MS percentrank <- function(x, val)
 MS {
 MS   env.x <- environment(ecdf(x))
 MS   res <- mget(c("x", "y"), env.x)
 MS   Ind <- which(sapply(seq(length(res$x)),
 MS            function(i) isTRUE(all.equal(res$x[i], val))))
 MS   res$y[Ind]
 MS }
 
 sorry Marc, but Yuck !!
 
 - this  percentrank() only works when you apply it to original x[i] values
 - only works for 'val' of length 1
 - is a complicated hack
 
 and absolutely unneeded  (see below)
 
 MS Thus:
 
 MS set.seed(1)
 MS x - rnorm(25)
 
  x
 MS [1] -0.62645381  0.18364332 -0.83562861  1.59528080  0.32950777
 MS [6] -0.82046838  0.48742905  0.73832471  0.57578135 -0.30538839
 MS [11]  1.51178117  0.38984324 -0.62124058 -2.21469989  1.12493092
 MS [16] -0.04493361 -0.01619026  0.94383621  0.82122120  0.59390132
 MS [21]  0.91897737  0.78213630  0.07456498 -1.98935170  0.61982575
 
 
  percentrank(x, 0.48742905)
 MS [1] 0.56
 
 [gives 0.52 in my version of R ]
 
 Well, that is *THE SAME*  as using  ecdf() the way you 
 should have used it :
 
   ecdf(x)(0.48742905)
 
 {in two lines, that is
 
   mypercR <- ecdf(x)
   mypercR(0.48742905)
 
  which maybe easier to understand, if you have never used the
  nice concept that underlies all of
 
  approxfun(), splinefun() or ecdf()
 }
 
 You can also use
 
   ecdf(x)(x)
 
 and indeed check that it is identical to the convoluted
 percentrank() function above :
 
  ecdf(x)(0.48742905)
 [1] 0.52
  ecdf(x)(x)
  [1] 0.20 0.44 0.12 1.00 0.48 0.16 0.56 0.72 0.60 0.28 0.96 0.52 0.24 0.04 
 0.92
 [16] 0.32 0.36 0.88 0.80 0.64 0.84 0.76 0.40 0.08 0.68
  all(ecdf(x)(x) == sapply(x, function(v) percentrank(x,v)))
 [1] TRUE
  
 
 
 Regards (and apologies for my apparent indignation ;-)
 by the author of ecdf() ,
 
 Martin Maechler, ETH Zurich  

Martin,

Thanks for the corrections. In hindsight, now seeing the intended use of
ecdf() in the fashion you describe above, it is now clear that my
approach in response to David's query was un-needed and over the top.
Yuck is quite appropriate... :-)

As I was going through this exercise, it did seem overly complicated,
given R's usual elegant philosophy about such things. I suppose if I had
looked at the source for plot.stepfun(), it would have been more evident
as to how the y values are acquired.

In reviewing the examples in ?ecdf, I think that an example using
something along the lines of the discussion here more explicitly, would
be helpful. It is not crystal clear from the examples, that one can use
ecdf() in this fashion, though the use of 12 * Fn(tt) hints at it.

Perhaps:

##-- Simple didactical  ecdf  example:
x <- rnorm(12)
Fn <- ecdf(x)
Fn
Fn(x)  # returns the percentiles for x
...


Thanks again Martin and no offense taken...  :-)

Regards,

Marc


Re: [R] kalman filter random walk

2007-12-05 Thread Giovanni Petris

You may want to look at package dlm.

Giovanni
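A minimal sketch of what that might look like with dlm (the variance values dV and dW below are placeholders of mine — in practice they would be estimated, e.g. with dlmMLE()):

```r
library(dlm)
# Local linear trend: random walk level plus a time-varying drift
mod <- dlmModPoly(order = 2, dV = 1, dW = c(0.1, 0.01))
y <- cumsum(rnorm(100, mean = 0.2))   # toy random walk standing in for the data
filt <- dlmFilter(y, mod)
drift <- dropFirst(filt$m)[, 2]       # filtered estimate of the drift over time
```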

 Date: Wed, 05 Dec 2007 12:05:00 -0600
 From: Alexander Moreno [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]
 Precedence: list
 
 Hi,
 
 I'm trying to use the kalman filter to estimate the variable drift of a
 random walk, given that I have a vector of time series data.  Anyone have
 any thoughts on how to do this in R?
 
 Thanks,
 Alex
 
 
 

-- 

Giovanni Petris  [EMAIL PROTECTED]
Department of Mathematical Sciences
University of Arkansas - Fayetteville, AR 72701
Ph: (479) 575-6324, 575-8630 (fax)
http://definetti.uark.edu/~gpetris/



[R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Ljubomir J. Buturovic

Hi Ottorino,

I have been using R on 64-bit Ubuntu for about a year without
problems, both Intel and AMD CPUs. Installing and using several
packages (e1071, svmpath, survival) also works. However, I had to
install R from source:

$ gunzip -c R-2.6.1.tar.gz | tar xvf -
$ cd R-2.6.1
$ ./configure --enable-R-shlib; make; make pdf
# make install; make install-pdf

Notice that `make install' has to be run as root.

I am using Feisty Fawn (Ubuntu 7.04), although I doubt that makes a
difference.

Hope this helps,

Ljubomir

8rino-Luca Pantani writes:
  Dear R-users.
  I eventually bought myself a new computer with the following 
  characteristics:
  
  Processor AMD ATHLON 64 DUAL CORE 4000+ (socket AM2)
  Mother board ASR SK-AM2 2
  Ram Corsair Value 1 GB DDR2 800 Mhz
  Hard Disk WESTERN DIGITAL 160 GB SATA2 8MB
  
  I'm a newcomer to the Linux world.
  I started using it (Ubuntu 7.10 at work and FC4 on a laptop) on a regular
  basis in May.
  I must say I'm quite comfortable with it, even if I have to re-learn a 
  lot of things.  But this is not a problem, I will improve my knowledge 
  with time.
  
  My main problem now, is that I installed Ubuntu 7.10 Gutsy Gibbon on the 
  new one amd64.
  
  To install R on it i followed the directions found here
  http://help.nceas.ucsb.edu/index.php/Installing_R_on_Ubuntu
  
  but unfortunately it did not work.
  
  After reading some posts on the R-SIG-debian list, such as
  
  https://stat.ethz.ch/pipermail/r-sig-debian/2007-October/000253.html
  
  I immediately realized that an amd64 is not the right processor to make
  life easy.
  
  Therefore I would like to know from you, how can I solve this problem:
  Should I install the i386 version of R ?
  Should I install another flavour of Linux ?
  Which one ?
  Fedora Core 7 ?
  Debian ?
  
  Thanks a lot, for any suggestion
  
  -- 
  Ottorino-Luca Pantani, Università di Firenze
  Dip. Scienza del Suolo e Nutrizione della Pianta
  P.zle Cascine 28 50144 Firenze Italia
  Tel 39 055 3288 202 (348 lab) Fax 39 055 333 273 
  [EMAIL PROTECTED]  http://www4.unifi.it/dssnp/
  



Re: [R] Displaying numerics to full double precision

2007-12-05 Thread Ben Bolker



Jeff Delmerico wrote:
 
 Thanks Ben, that fixed the display within R.  However, even after changing
 the display settings, the matrix elements still appear to be exported in
 single precision.  The matrix object is being passed into my C routines as
 an SEXP Numeric type, and somewhere along the way, some of the digits are
 getting lost.  
 Here's the relevant bit of my C code:
 
 SEXP
 divideMatrix(SEXP matrix_in, SEXP sub_height, SEXP sub_width, SEXP fileS)
 ...
 if ( isMatrix(matrix_in)  isNumeric(matrix_in) )
 {
   /* Use R macros to convert from SEXP to C types */
   matrix = REAL(matrix_in);
   height = INTEGER(GET_DIM(matrix_in))[0];
   width = INTEGER(GET_DIM(matrix_in))[1];
   subW = INTEGER_VALUE(sub_width);
   subH = INTEGER_VALUE(sub_height);
 ...
 }
 
 Am I using the wrong macro to convert into a double in C?  Any ideas?
 
 Thanks,
 Jeff Delmerico
 
 
 Ben Bolker wrote:
 
 
 
 Jeff Delmerico wrote:
 
 I'm working on a shared library of C functions for use with R, and I
 want to create a matrix in R and pass it to the C routines.  I know R
 computes and supposedly stores numerics in double precision, but when I
 create a matrix of random numerics using rnorm(), the values are
 displayed in single precision, and also exported in single precision
 when I pass them out to my C routines.  An example is below:
 
 a - matrix(rnorm(16, mean=10, sd=4), nrow=4)
 a
   [,1]  [,2]  [,3]  [,4]
 [1,] 14.907606 17.572872 19.708977  9.809943
 [2,]  9.322041 13.624452  7.745254  7.596176
 [3,] 10.642408  6.151546  9.937434  6.913875
 [4,] 14.617647  5.577073  8.217559 12.115465
 storage.mode(a)
 [1] double
 
 Does anyone know if there is a way to change the display or storage
 settings so that the values will be displayed to their full precision?  
 Or does rnorm only produce values to single precision? 
 
 Any assistance would be greatly appreciated.
 
 Thanks,
 Jeff Delmerico
 
 
 options("digits")  # defaults to 7
 options(digits=x)
 
   I may be mistaken, but I think the values are indeed exported
 as double precision -- the issue here is just a display setting.
 
   Ben Bolker
 
 
 
 
 

 I'm not sure.
  I do know  that Rinternals.h has

#define REAL(x) ((double *) DATAPTR(x))

  so that doesn't seem to be the problem ...

  Ben


-- 
View this message in context: 
http://www.nabble.com/Displaying-numerics-to-full-double-precision-tf4950807.html#a14179985
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Dealing with NA's in a data matrix

2007-12-05 Thread Henrique Dallazuanna
 x[is.na(x)] <- 0
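For example (a made-up matrix):

```r
m <- matrix(c(1, NA, 3, NA, 5, 6), nrow = 2)
m[is.na(m)] <- 0
m   # the two NAs are now 0
```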

On 05/12/2007, Amit Patel [EMAIL PROTECTED] wrote:
 Hi I have a matrix with NA value that I would like to convert these to a 
 value of 0.
 any suggestions

 Kind Regards
 Amit Patel








-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O



Re: [R] Interpretation of 'Intercept' in a 2-way factorial lm

2007-12-05 Thread Peter Dalgaard
Gustaf Granath wrote:
 Hi all,

 I hope this question is not too trivial. I can't find an explanation
 anywhere (Stats and R books, R-archives) so now I have to turn to the R-list.

 Question:

 If you have a factorial design with two factors (say A and B with two
 levels each). What does the intercept coefficient with
 treatment.contrasts represent??

 Here is an example without interaction where A has two levels A1 and
 A2, and B has two levels B1 and B2. So R takes as a baseline A1 and B1.

 coef( summary ( lm ( fruit ~ A + B, data = test)))

              Estimate   Std. Error  t value   Pr(>|t|)
 (Intercept)  2.716667   0.5484828   4.953058  7.879890e-04
 A2           6.27       0.633       9.894737  3.907437e-06
 B2           5.17       0.633       8.157895  1.892846e-05

 I understand that the mean of A2 is +6.3 more than A1, and
 that B2 is 5.2 more than B1.

 So the question is: Is the intercept A1 and B1 combined as one mean
 (the baseline)? or is it something else? Does this number actually
 tell me anything
 useful (2.716)??

 What does the model (y = intercept  + ??) look like then? I can't understand
 how both factors (A and B) can have the same intercept?

   
Consider an AxB crosstable of (fitted) means. The upper left corner is
the intercept; add A2, B2, or both to get the other three cells.
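With the coefficients as printed in the posted fit, that crosstable can be spelled out explicitly — a sketch (values rounded as shown in the output):

```r
b0 <- 2.716667   # (Intercept): fitted mean for the A1/B1 cell
a2 <- 6.27       # A2 coefficient, rounded as printed
b2 <- 5.17       # B2 coefficient, rounded as printed
matrix(c(b0,      b0 + b2,
         b0 + a2, b0 + a2 + b2),
       nrow = 2, byrow = TRUE,
       dimnames = list(A = c("A1", "A2"), B = c("B1", "B2")))
```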

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



[R] newbie lapply question

2007-12-05 Thread Ranjan Bagchi
Hi --

I just noticed the following (R 2.6.1 on OSX)

> lapply(c(as.Date('2007-01-01')), I)
[[1]]
[1] 13514

This is a bit surprising.. Why does lapply unclass the object?  Sorry for 
such a basic question, I don't seem able to produce the right google keywords.

Ranjan
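For what it's worth, two workarounds I believe work (behaviour may differ across R versions, since lapply() coerces its argument with as.list() and the methods for Date have changed over time): hand lapply() a list so it does not coerce the Date vector itself, or iterate over indices:

```r
d <- as.Date(c("2007-01-01", "2007-01-02"))
lapply(list(d[1], d[2]), I)                     # elements keep class "Date"
lapply(seq_along(d), function(i) format(d[i]))  # or index into the vector
```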



Re: [R] Interpretation of 'Intercept' in a 2-way factorial lm

2007-12-05 Thread Daniel Malter
You estimate a model with the Factors A or B either present (1) or not
present (0) and with an intercept. Thus you would predict:

For both A and B not present: Intercept
For A only present: Intercept+coef(A)
For B only present: Intercept+coef(B)
For both present: Intercept+coef(A)+coef(B).

Again, you would interpret the intercept as the value of fruit when A and
B are not present (or inactive). If the intercept is not meaningful in your
setting and you just want to know if both groups differ, then you want to
use function aov I guess. What is your fruit variable? I would also
suggest to visually inspect your data. That always helps :) The code is also
down below.

Look at the following example in which 4 x 10 Ys are drawn randomly from
normal distributions with equal variance but different means. The first ten
observations have both A and B not present (i.e. 0) as specified in the
vectors a and b. The mean of these observations, where A and B are zero,
is 1, as specified in y1=rnorm(10,1,1). As you will see if you run
this code, the estimated Intercept is 1.0512 which is close to 1 (the true
mean). As you see (just confirming what was said above), this is the average
of the baseline (or reference group if you will) when both A and B are
absent.

y1=rnorm(10,1,1)
y2=rnorm(10,2,1)
y3=rnorm(10,3,1)
y4=rnorm(10,4,1)

a=c(rep(0,20),rep(1,20))
b=c(rep(0,10),rep(1,10),rep(0,10),rep(1,10))

y=c(y1,y2,y3,y4)

data=data.frame(cbind(y,a,b))

Plot

interaction.plot(a,b,y)

Models

summary(lm(y~factor(a)+factor(b),data=data))

Compare this to

summary(aov(y~factor(a)+factor(b),data=data))

Cheers,
Daniel 


-
cuncta stricte discussurus
-

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Gustaf Granath
Sent: Wednesday, December 05, 2007 2:32 PM
To: r-help@r-project.org
Subject: [R] Interpretation of 'Intercept' in a 2-way factorial lm

Hi all,

I hope this question is not too trivial. I can't find an explanation
anywhere (Stats and R books, R-archives) so now I have to turn to the
R-list.

Question:

If you have a factorial design with two factors (say A and B with two levels
each). What does the intercept coefficient with treatment.contrasts
represent??

Here is an example without interaction where A has two levels A1 and A2, and
B has two levels B1 and B2. So R takes as a baseline A1 and B1.

coef( summary ( lm ( fruit ~ A + B, data = test)))

             Estimate   Std. Error  t value   Pr(>|t|)
(Intercept)  2.716667   0.5484828   4.953058  7.879890e-04
A2           6.27       0.633       9.894737  3.907437e-06
B2           5.17       0.633       8.157895  1.892846e-05

I understand that the mean of A2 is +6.3 more than A1, and that B2 is 5.2
more than B1.

So the question is: Is the intercept A1 and B1 combined as one mean (the
baseline)? or is it something else? Does this number actually tell me
anything useful (2.716)??

What does the model (y = intercept  + ??) look like then? I can't understand
how both factors (A and B) can have the same intercept?

Thanks in advance!!

Gustaf Granath

Dept of Plant Ecology
Uppsala University, Sweden



[R] File based configuration

2007-12-05 Thread Thomas Allen
I'm wanting to run R scripts non-interactively as part of a
technology independent framework.
I want control over the behaviour of these processes by specifying
various global variables in a configuration file that would be
passed as a command line argument.

I'm wondering if you know of any R support for configuration file
formats. (i.e. any functions that would read a configuration file of
some common format)

For example:
-The .properties configuration format for java seems to be quite
popular, would I have to read it in by writing some kind of java
extension to R?
-An XML configuration format could also be possible, but it's overkill
for my needs.

Any help would be greatly appreciated
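In case it helps, a minimal hand-rolled sketch (my own illustration, not a packaged solution) for Java-style key=value .properties files, assuming no line continuations or escape sequences:

```r
read.properties <- function(path) {
    lines <- readLines(path)
    lines <- lines[!grepl("^[[:space:]]*(#|$)", lines)]  # drop comments/blanks
    eq    <- regexpr("=", lines, fixed = TRUE)            # first '=' per line
    trim  <- function(s) gsub("^[[:space:]]+|[[:space:]]+$", "", s)
    keys  <- trim(substr(lines, 1, eq - 1))
    vals  <- trim(substring(lines, eq + 1))
    as.list(setNames(vals, keys))
}
# cfg <- read.properties("myapp.properties"); cfg$some.key
```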



Re: [R] Java parser for R data file?

2007-12-05 Thread Michael Hoffman
David Coppit wrote:
 Hi everyone,
 
 Has anyone written a parser in Java for either the ASCII or binary format
 produced by save()?

You might want to consider using the hdf5 package to save the array in 
HDF5 format. There are HDF5 libraries for Java as well 
http://hdf.ncsa.uiuc.edu/hdf-java-html/. I have never used them, but 
it works quite well for transferring data between R and Python.



[R] coxme frailty model standard errors?

2007-12-05 Thread Corey Sparks
Hello,
I am running R 2.6.1 on windows xp 
I am trying to fit a cox proportional hazard model with a shared
Gaussian frailty term using coxme
My model is specified as:

nofit1 <- coxme(Surv(Age,cen1new) ~ Sex+bo2+bo3, random=~1|isl, data=mydat)

With x1-x3 being dummy variables, and isl being the community level
variable with 4 levels.

Does anyone know if there is a way to get the standard error for the
random effect, like in nofit1$var?  I would like to know if my random
effect is worth writing home about.

Any help would be most appreciated
Corey Sparks

I can get the following output
nofit1 <- coxme(Surv(Age,cen1new) ~ Sex+bo2+bo3, random=~1|isl, data=no1901)
nofit1
Cox mixed-effects model fit by maximum likelihood
  Data: no1901 
  n=959 (2313 observations deleted due to missingness)
  Iterations= 3 69 
NULL Integrated Penalized
Log-likelihood -600.0795  -581.1718 -577.9682

  Penalized loglik: chisq= 44.22 on 5.61 degrees of freedom, p= 4.3e-08 
 Integrated loglik: chisq= 37.82 on 4 degrees of freedom, p= 1.2e-07 

Fixed effects: Surv(Age, cen1new) ~ Sex + bo2 + bo3 
 coef exp(coef)  se(coef)z  p
Sex 0.2269214  1.254731 0.2151837 1.05 0.2900
bo2 0.5046991  1.656487 0.2510523 2.01 0.0440
bo3 1.0606144  2.888145 0.2726000 3.89 0.0001

Random effects: ~1 | isl 
isl
Variance: 0.3876189



Corey Sparks
Assistant Professor
Department of Demography and Organization Studies
University of Texas-San Antonio
One UTSA Circle
San Antonio TX 78249
Phone: 210 458 6858
[EMAIL PROTECTED]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] 2/3d interpolation from a regular grid to another regular grid

2007-12-05 Thread jiho
On 2007-December-05  , at 16:47 , Scionforbai wrote:
 I just read the description in ?Krig in the package fields which  
 says:
  Fits a surface to irregularly spaced data. 

 Yes, that is the most general case. Regular data locations are a subset
 of irregular ones. Anyway, it is kriging, just one g, after the name of Danie
 Krige, the South African statistician who first applied such methods
 to mining surveys.

ooops. sorry about the typo.

 My problem is simpler
 ...
 So it is really purely numerical.
 ...
 I just hoped that R had that already coded ...

 Of course R has ... ;) If your grids are really as simple as the
 example you posted above, and you have a really little variability,
 all you need is a moving average, the arithmetic mean of the two
 nearest points belonging to grid1 and grid2 respectively. I assume
 that your regularly shaped grids are values stored in matrix objects.

 The function comes from the diff.default code (downloading the R
 source code is, I assure you, worth it):

I can imagine it is indeed. I use the source of packages functions  
very often.

 my.interp <- function(x, lag = 1)
 {
 r <- unclass(x)  # don't want class-specific subset methods
 i1 <- -1:-lag
 r <- (r[i1] + r[-length(r):-(length(r)-lag+1)])/2
 class(r) <- oldClass(x)
 return(r)
 }

 Finally,

 g1 <- apply(grid1val, 1, my.interp)
 g2 <- apply(grid2val, 2, my.interp)

 give the interpolations on gridFinal, provided that all gridFinal
 points are within the grid1 and grid2 ones.

 If you want the mean from 4 points, you apply once more with lag=3,
 cbind/rbind columns/rows of NAs to the result, and you calculate the
 mean of the points of the two matrices.
 This is the simplest (and quickest) moving average that you can do.
 For more complicated examples, and for 3d, you have to go a little
 further, but the principle holds.

Thanks very much. I'll test this soon (and it looks like the vector  
operation might even be directly translatable in Fortran which is  
nice since I'll need to do it in Fortran too).
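For reference, the nearest-points averaging discussed above generalizes to standard bilinear interpolation on a regular grid; a hedged base-R sketch (the function and grid names are invented, and the query point is assumed to lie strictly inside the grid):

```r
# Bilinear interpolation of z (values on grid vectors x, y) at point (x0, y0)
bilin <- function(x, y, z, x0, y0) {
  i <- findInterval(x0, x); j <- findInterval(y0, y)  # enclosing cell indices
  tx <- (x0 - x[i]) / (x[i + 1] - x[i])
  ty <- (y0 - y[j]) / (y[j + 1] - y[j])
  (1 - tx) * (1 - ty) * z[i, j]     + tx * (1 - ty) * z[i + 1, j] +
  (1 - tx) * ty       * z[i, j + 1] + tx * ty       * z[i + 1, j + 1]
}
x <- 1:5; y <- 1:4; z <- outer(x, y)  # z[i, j] = x[i] * y[j], itself bilinear
bilin(x, y, z, 2.5, 3.5)              # exact here: 2.5 * 3.5 = 8.75
```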

Thanks again.

JiHO
---
http://jo.irisson.free.fr/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Learning to do randomized block design analysis

2007-12-05 Thread T.K.
Dear R-Helpers

(1)
After a night's sleep, I realized why the other helpers think differently
from me.
I agree with others that it may be better to use multi-stratum model but I
was a bit surprised since they seem to think 'block' variable should *not*
be a fixed effect.

A. Others seemed to think,
Kevin is trying to estimate 'multi-stratum' model since he is using
Error(block)

B. I thought
Kevin is trying to estimate a simple ANOVA model (*not* random effects
model) but did not use the right R code

I thought so for the following reasons.
1) I looked up the book in Amazon and browsed the index using ' Search
Inside' function. It does not seem to cover random effects model.
2) The description of the book says it uses Minitab so I guessed Kevin is
getting R code from somewhere else.
3) In addition, Kevin's code looked very similar to the example code of
'aov'.
The example code of 'aov' has the following code segment involving 'block'
variable
## as a test, not particularly sensible statistically
npk.aovE - aov(yield ~ N*P*K + Error(block), npk)
So, I guessed Kevin might have gotten his code from here. After emailing
Kevin, I found that he is using the code from 'split-plot' section
of MASS, so my guess is not that far off.

(2)
I got these new bits of information from Kevin.

The data set is from a psychological experiment and subjects are *assigned*
to one of the blocks according to their scores on a test. Subjects with the
lowest scores are assigned to block A, and highest to block E. These blocks
were *not* randomly chosen from a larger set of blocks. Then the treatment
was randomized within each 'block'.

Given this new information, I think it is okay to solve Kevin's question
simply by using
aov(Score.changes ~ Therapy + Block, data=table)
assuming the fixed effects of 'block'.

I would appreciate your correction if I am mistaken here.
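A quick, self-contained illustration of the two formulations on simulated data (all names and numbers here are invented):

```r
set.seed(42)
d <- data.frame(Block   = gl(5, 4, labels = LETTERS[1:5]),  # 5 blocks of 4
                Therapy = gl(4, 1, 20),                     # 4 treatments per block
                Score.changes = rnorm(20))
summary(aov(Score.changes ~ Therapy + Block, data = d))        # block as fixed effect
summary(aov(Score.changes ~ Therapy + Error(Block), data = d)) # block as error stratum
```

With a complete balanced layout like this, the Therapy F-test is the same in both fits; the difference is only in how the block variation is reported.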

==
T.K. (Tae-kyun) Kim
Ph.D. student
Department of Marketing
Marshall School of Business
University of Southern California
==

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Displaying numerics to full double precision

2007-12-05 Thread Liviu Andronic
On 12/5/07, Jeff Delmerico [EMAIL PROTECTED] wrote:
 Does anyone know if there is a way to change the display or storage settings
 so that the values will be displayed to their full precision?   Or does
 rnorm only produce values to single precision?

 Any assistance would be greatly appreciated.

As far as I know (beware, I'm a novice), internally R stores and
uses full precision. The display of the data, however, is controlled
by 'digits'. You'd need to put, say, options(digits=7) in your
Rprofile.site (if it doesn't exist, create it in the R etc/ folder).
You might also be interested in 'scipen'. Check ?options.
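A quick illustration of storage versus display:

```r
x <- 1/3
print(x)                                # default: 7 significant digits
print(x, digits = 16)                   # near full double precision
formatC(x, digits = 17, format = "g")   # explicit character formatting
options(digits = 15)                    # raise the session-wide default
```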

Regards,
Liviu

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multiple stacked barplots on the same graph?

2007-12-05 Thread Domenico Vistocco
This command works:

qplot(x=Categorie, y=Total, data=mydata, geom="bar", fill=Part)

for your data.

domenico vistocco

Stéphane CRUVEILLER wrote:
 Hi,

 the same error message is displayed with geom=bar as parameter.
 here is the output of dput:

  dput(mydata)
 structure(list(Categorie = structure(c(1L, 12L, 8L, 2L, 5L, 7L,
 16L, 6L, 15L, 11L, 10L, 13L, 14L, 3L, 4L, 9L, 17L, 1L, 12L, 8L,
 2L, 5L, 7L, 16L, 6L, 15L, 11L, 10L, 13L, 14L, 3L, 4L, 9L, 17L
 ), .Label = c("Amino acid biosynthesis",
 "Biosynthesis of cofactors, prosthetic groups, and carriers",
 "Cell envelope", "Cellular processes", "Central intermediary metabolism",
 "DNA metabolism", "Energy metabolism",
 "Fatty acid and phospholipid metabolism",
 "Mobile and extrachromosomal element functions", "Protein fate",
 "Protein synthesis",
 "Purines, pyrimidines, nucleosides, and nucleotides",
 "Regulatory functions", "Signal transduction", "Transcription",
 "Transport and binding proteins", "Unknown function"), class = "factor"),
    Part = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
    1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
    2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("common",
    "specific"), class = "factor"), Total = c(3.03, 1.65, 1.52,
2.85, 3.4, 11.81, 10.51, 1.95, 2.08, 2.51, 2.23, 7.63, 1.88,
2.76, 7.21, 1.08, 20.75, 0.35, 0.17, 0.08, 0.18, 0.42, 2.05,
1.98, 0.63, 0.17, 0.2, 0.3, 1.58, 0.27, 0.83, 1.38, 3.56,
11.63), chr1 = c(4.55, 2.37, 1.77, 4.68, 3.19, 12.49, 13.56,
2.81, 3.13, 4.58, 3.26, 7.3, 2.06, 3.41, 7.9, 0.22, 22.45,
0.16, 0.06, 0.09, 0.19, 0.09, 0.7, 0.85, 0.22, 0.06, 0.03,
0.32, 0.66, 0.06, 0.63, 0.38, 1.14, 6.17), chr2 = c(1.68,
1.06, 1.55, 1.02, 4.57, 13.87, 7.85, 0.98, 1.06, 0.27, 1.2,
9.88, 2.13, 2.53, 7.71, 0.4, 22.38, 0.71, 0.35, 0.09, 0.22,
0.98, 3.9, 3.24, 0.22, 0.22, 0.49, 0.31, 2.79, 0.62, 1.33,
1.95, 0.44, 16), pl = c(0, 0, 0, 0, 0, 0.17, 4.27, 1.03,
0.34, 0, 0.68, 0.68, 0, 0.17, 1.54, 8.38, 5.3, 0, 0, 0, 0,
0, 2.22, 3.25, 4.44, 0.51, 0, 0.17, 1.88, 0, 0, 4.62, 28.72,
 24.27)), .Names = c("Categorie", "Part", "Total", "chr1",
 "chr2", "pl"), class = "data.frame", row.names = c(NA, -34L))


 thx,

 Stéphane.

 hadley wickham wrote:
 On Dec 4, 2007 10:34 AM, Stéphane CRUVEILLER 
 [EMAIL PROTECTED] wrote:
  
 Hi,

 I tried this method but it seems that there is something wrong with my
 data frame:


 when I type in:

   qplot(x=as.factor(Categorie),y=Total,data=mydata)

 It displays a graph with 2 points in each category...
 but if I add the parameter geom="histogram"

   qplot(x=as.factor(Categorie), y=Total, data=mydata, geom="histogram")


 Error in storage.mode(test) <- "logical" :
 object "y" not found

 any hint about this...
 

 Could you copy and paste the output of dput(mydata) ?

 (And I'd probably write the plot call as: qplot(Categorie, Total,
 data=mydata, geom="bar"), since it is a bar plot, not a histogram)

   


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Significance of clarkevans in spatstat

2007-12-05 Thread Jens Oldeland
Dear R-Users,

I was wondering if there is a way to test the significance of the
clarkevans statistic in the spatstat package?
I did not find any related function, or the values needed to calculate
it by hand.
Does anyone have any ideas?
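One hedged possibility, pending a built-in test: a Monte Carlo test against complete spatial randomness, simulating uniform patterns in the same window with `runifpoint()` (sketch only; `cells` is just a built-in example pattern):

```r
library(spatstat)
X   <- cells                                  # example point pattern
obs <- clarkevans(X, correction = "none")     # observed Clark-Evans index
sim <- replicate(999,
         clarkevans(runifpoint(X$n, win = X$window), correction = "none"))
# two-sided Monte Carlo p-value for departure from R = 1 (CSR)
p <- (1 + sum(abs(sim - 1) >= abs(obs - 1))) / (1 + length(sim))
```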

thank you,
Jens

-- 
+
Dipl.Biol. Jens Oldeland
University of Hamburg
Biocentre Klein Flottbek and Botanical Garden
Ohnhorststr. 18
22609 Hamburg,
Germany

Tel:0049-(0)40-42816-407
Fax:0049-(0)40-42816-543
Mail:   [EMAIL PROTECTED]
[EMAIL PROTECTED]  (for attachments > 2mb!!)
http://www.biologie.uni-hamburg.de/bzf/fbda005/fbda005.htm
+

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] weighted Cox proportional hazards regression

2007-12-05 Thread Terry Therneau
 I'm getting unexpected results from the coxph function when using
 weights from counter-matching.  For example, the following code
 produces a parameter estimate of -1.59 where I expect 0.63:

  I agree with Thomas' answer wrt using offset instead of weight.  One way to 
understand this is to look at the score equation for the Cox model, which is

    sum over the deaths of (x[i] - xbar[i])

where x[i] is the covariate vector of the ith death, and xbar[i] is the
average of all the subjects who were at risk at the time of the ith death.

  In situations where one samples selected controls, the score equation will be 
correct if one fixes up xbar so that it is an estimate of the population mean 
(all those in the population that were at risk for a death) rather than being 
the mean of just those in the sample.  Use of an offset statement allows one to 
reweight xbar without changing the rest of the score equation.  It's kind of a 
trick, see Therneau and Li, Lifetime Data Analysis, 1999, p99-112 for a simple 
example of how it works.  Langholz and Borgan give details on exactly how to 
correctly reweight using some old results from sampling theory - it is just a 
little bit more subtle than one would guess, but not too different from the 
obvious.
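Schematically, entering sampling weights through an offset rather than through `weights` looks like this (toy data only; the weights here are arbitrary placeholders, not the Langholz-Borgan weights, which must be computed from the sampling design):

```r
library(survival)
set.seed(1)
dat <- data.frame(time   = rexp(100),
                  status = rbinom(100, 1, 0.5),
                  x      = rnorm(100),
                  w      = sample(c(1, 5), 100, replace = TRUE))
# log(w) enters the linear predictor, reweighting xbar in the score equation
fit <- coxph(Surv(time, status) ~ x + offset(log(w)), data = dat)
```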
  
Terry Therneau

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rgdal for R.2.4.0?

2007-12-05 Thread Prof Brian Ripley
On Wed, 5 Dec 2007, bernardo lagos alvarez wrote:

 Hi,

 Know anyone where to find the package rgdal for R.2.4.0?

On CRAN: the current version has

Package: rgdal
Title: Bindings for the Geospatial Data Abstraction Library
Version: 0.5-20
Date: 2007-11-07
Depends: R (>= 2.3.0), methods, sp

Or were you looking for a binary version for an unstated platform?  (A 
Windows binary is there and should be available via the Rgui menus.)


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Junk or not Junk ???

2007-12-05 Thread Loren Engrav
Thank you

As per advice from several R users I have set

r-project.org, stat.math.ethz.ch,fhcrc.org, stat.ethz.ch, math.ethz.ch,
hypatia.math.ethz.ch

 all to be safe domains

But some R emails still go to Junk and have to be retrieved manually

I have explored the issue with Univ Wash computing to no avail

Is this just how it is or have I still missed the fix to keep R emails out
of junk?

Thank you

Loren Engrav
Univ Wash
Seattle



 From: Duncan Murdoch [EMAIL PROTECTED]
 Date: Mon, 03 Dec 2007 22:10:18 -0500
 To: Loren Engrav [EMAIL PROTECTED]
 Cc: RHelp r-help@r-project.org
 Subject: Re: [R] Junk or not Junk
 
 On 03/12/2007 8:56 PM, Loren Engrav wrote:
 So a message from
 
 Benilton Carvalho [EMAIL PROTECTED] (sent by
 [EMAIL PROTECTED])
 
  arrives and goes in the Junk Mail even tho I have set @r-project.org to not
 be junk
 
 Why does this go in Junk mail if @r-project.org is defined as not junk?
 
 Why are you asking us about how you have your mail filters set up?
 
 If you didn't set them up yourself, you should find out from your local
 admin who did, and ask them.
 
 Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] confidence intervals for y predicted in non linear regression

2007-12-05 Thread Ndoye Souleymane

Hi, Salut,
 
You should use the package nls2 (only for Linux distributions).
 
Regards,
 
Souleymane

 Date: Tue, 4 Dec 2007 16:07:57 +0100
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 CC: [EMAIL PROTECTED]
 Subject: Re: [R] confidence intervals for y predicted in non linear regression

 hi all,

 you can consult these links:
 http://finzi.psych.upenn.edu/R/Rhelp02a/archive/43008.html
 https://stat.ethz.ch/pipermail/r-help/2004-October/058703.html

 hope this helps

 pierre

 Selon Florencio González [EMAIL PROTECTED]:

  Hi, I'm trying to plot a nonlinear regression with the confidence bands
  for the curve obtained, similar to what the nlintool or nlpredci
  functions in Matlab do, but I cannot figure out how. In nls the option
  is there but not implemented yet.

  Is there a plan to implement this in the relatively near future?

  Thanks in advance,
  Florencio

  This email is confidential and should not be used by anyone who is not
  the original intended recipient. If you have received this e-mail in
  error please inform the sender and delete it from your mailbox or any
  other storage mechanism.
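As a base-R workaround while waiting for built-in intervals: approximate pointwise confidence bands via the delta method. This is a sketch on simulated data; the exponential model and all names are invented:

```r
set.seed(1)
x  <- seq(0, 10, length = 50)
y  <- 3 * exp(-0.4 * x) + rnorm(50, sd = 0.1)
fm <- nls(y ~ a * exp(-b * x), start = list(a = 2, b = 0.5))

xnew <- seq(0, 10, length = 200)
cf   <- coef(fm)
# gradient of the mean function wrt (a, b), evaluated at the estimates
G    <- cbind(exp(-cf["b"] * xnew),
              -cf["a"] * xnew * exp(-cf["b"] * xnew))
fit  <- predict(fm, newdata = data.frame(x = xnew))
se   <- sqrt(rowSums((G %*% vcov(fm)) * G))  # delta-method standard errors
plot(x, y)
lines(xnew, fit)
lines(xnew, fit + 2 * se, lty = 2)           # approximate upper band
lines(xnew, fit - 2 * se, lty = 2)           # approximate lower band
```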
_
Vous êtes plutôt Desperate ou LOST ? Personnalisez votre PC avec votre 
[[replacing trailing spam]]

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quadratic programming

2007-12-05 Thread Berwin A Turlach
G'day Serge,

On Wed, 5 Dec 2007 11:25:41 +0100
de Gosson de Varennes Serge (4100)
[EMAIL PROTECTED] wrote:

 I am using the quadprog package and its solve.QP routine to solve and
 quadratic programming problem with inconsistent constraints, which
 obviously doesn't work since the constraint matrix doesn't have full
 rank. 

I guess it will help to fix some terminology first.  In my book,
inconsistent constraints are constraints that cannot be fulfilled
simultaneously, e.g. something like x_1 <= 3 and x_1 >= 5 for an
obvious example.  Thus, a problem with inconsistent constraints cannot
be solved, regardless of the rank of the constraint matrix.  (Anyway,
that matrix is typically not square, so would we be talking about full
column rank or full row rank?)

Of course, it can happen that the constraints are consistent but that
there is some redundancy in the specified constraints; e.g. a simple
case would be x_1 >= 0, x_2 >= 0 and x_1 + x_2 >= 0; if the first
two constraints are fulfilled, then the last one is automatically
fulfilled too. In my experience, it can happen that solve.QP comes to
the conclusion that a constraint that ought to be fulfilled, given the
constraints that have already been enforced, is violated and
inconsistent with the constraints already enforced. In that case
solve.QP stops, rather misleadingly, with the message that the
constraints are inconsistent.

I guess the package should be worked over by someone with a better
understanding of the kind of fudges that do not come back to bite and of
finite precision arithmetic than the original author's appreciation of
such issues when the code was written. ;-))

 A way to solve this is to perturb the objective function and
 the constraints with a parameter that changes at each iteration (so
 you can dismiss it), but that's where it gets tricky! Solve.QP
 doesn't seem to admit constant terms; it needs Dmat (a matrix) and
 dvec (a vector) as defined in the package description. Now, some
 might object that a constant is a vector but the problem looks like
 this
 
 Min f(x) = (1/2)x^t Q x + D^t x + d

It is a bit unclear to me what you call the constant term.  Is it `d'?
In that case, it does not perturb the constraints and it is irrelevant
for the minimizer of f(x); also for the minimizer of f(x) under linear
constraints.  Regardless of d, the solution is always the same.  I do
not know of any quadratic programming solver that allows `d' as input,
probably because it is irrelevant for determining the solution of the
problem.
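To make the point concrete, here is a minimal solve.QP call. Note that quadprog minimizes -dvec'x + (1/2) x' Dmat x, so for f(x) = (1/2) x'Qx + D'x one passes dvec = -D, and any constant d simply never enters:

```r
library(quadprog)
Q <- diag(2)                  # Dmat
D <- c(-1, -2)
A <- cbind(c(1, 0), c(0, 1))  # columns are the constraints x1 >= 0, x2 >= 0
b <- c(0, 0)
sol <- solve.QP(Dmat = Q, dvec = -D, Amat = A, bvec = b)
sol$solution                  # c(1, 2): the unconstrained minimum, feasible here,
                              # and unchanged by any constant d in the objective
```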

 Can anyone help me, PLEASEEE?

In my experience, rescaling the problem might help, i.e. use Q* = Q/2
and D*=D/2 instead of the original Q and D; but do not forget to
rescale the constraints accordingly.  

Or you might want to try another quadratic program solver in R, e.g.
ipop() in package kernlab.

Hope this helps.

Best wishes,

Berwin

=== Full address =
Berwin A TurlachTel.: +65 6516 4416 (secr)
Dept of Statistics and Applied Probability+65 6516 6650 (self)
Faculty of Science  FAX : +65 6872 3919   
National University of Singapore 
6 Science Drive 2, Blk S16, Level 7  e-mail: [EMAIL PROTECTED]
Singapore 117546http://www.stat.nus.edu.sg/~statba

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] bootstrapping on the growth curve

2007-12-05 Thread Suyan Tian
Hi, I am trying to get 95% CIs around a quantile growth curve, but
my result looks strange to me.

Here is the function I defined myself

boot.qregress <- function(mat1, group, quantile, int, seed.1) {
  boot.fit <- NULL
  set.seed(seed.1)
  for (i in 1:int) {
    index <- sample(unique(mat1$Subject[mat1$Group == group]),
                    length(unique(mat1$Subject[mat1$Group == group])),
                    replace = TRUE)

    # make the bootstrap dataset
    mat.junk <- NULL
    for (j in 1:length(index)) {
      mat.junk <- rbind(mat.junk, mat1[mat1$Subject == index[j], ])
    }

    boot.fit <- cbind(boot.fit, cobs(mat.junk$Day, mat.junk$Weight,
                                     constraint = "none", degree = 2,
                                     tau = quantile, lambda = -1)$fitted)
  }
  boot.fit
}


The curves I made from the bootstrapping are attached. I don't
understand why, for one group, the 5% curve drops suddenly around time
130. I am thinking about missingness, since before day 130 there are 50
patients, but after day 130 there are only 40 patients in this group.

Any suggestions on the R code (especially about how to do the
bootstrapping for the growth curves) or on why the drop happens would
be appreciated.

Thanks a lot,

Suyan
   


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Scionforbai
 but unfortunately it did not work.

What did not work? Provide some more information...
By the way, isn't 'gfortran' the new GNU fortran compiler which
replaced 'g77'? Or not on Ubuntu?

 I immediately realize that an amd64 is not the right processor to make
 life easy.

??? Example of bad extrapolation ;)

 Should I install the i386 version of R ?

I assume you are talking about installing from source. You installed
the i386 version of Ubuntu? Then yes. Else no.

But rather let apt-get do it for you. Just install the binary provided
by the Ubuntu community. It is the best way to get things working and
avoid problems.

 Should I install another flavour of Linux ?

It depends. Ubuntu is good to start with, and has the widest user base;
Archlinux is my best choice (but you need to be already somewhat
advanced).

ScionForbai

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] about color palettes, colorRamp etc

2007-12-05 Thread Martin Maechler
   [if you get this twice: it seems to have not made it through, yesterday]

 Earl == Earl F Glynn [EMAIL PROTECTED]
 on Mon, 3 Dec 2007 13:26:11 -0600 writes:

Earl affy snp [EMAIL PROTECTED] wrote in message
Earl news:[EMAIL PROTECTED]
 For example, it should go from very red---red---less
 red---darkgreen---very green coinciding with the
 descending order of values, just like the very left panel
 shown in
 http://www.bme.unc.edu/research/Bioinformatics.FunctionalGenomics.html

Earl This looks like the MatLab palette that's in
Earl tim.colors:

Earl library(fields)  # tim.colors:  Matlab-like color palette

Earl N <- 100
Earl par(lend="square")
Earl plot(rep(1,N), type="h", col=tim.colors(N), lwd=6, ylim=c(0,1))

Well, the R help page  ?colorRamp

in its 'examples' section
has an example of this Matlab-like color scheme, calling it
'jet.colors', easily constructed with the nice
colorRampPalette() function [which I've just posted about to R-help as well].

Please say
    
    example(colorRamp)
in R
and slowly watch the output, and I expect you will never ever
want to use the horrible Matlab-like color palette again.
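For reference, the palette built in that example (quoting the ?colorRamp examples section) is:

```r
jet.colors <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan",
                                 "#7FFF7F", "yellow", "#FF7F00",
                                 "red", "#7F0000"))
filled.contour(volcano, color.palette = jet.colors)  # Matlab-style rendering
```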

Regards,
Martin Maechler, ETH Zurich


Earl efg

Earl Earl F. Glynn
Earl Scientific Programmer
Earl Stowers Institute for Medical Research

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] newbie lapply question

2007-12-05 Thread Prof Brian Ripley
On Wed, 5 Dec 2007, Ranjan Bagchi wrote:

 Hi --

 I just noticed the following (R 2.6.1 on OSX)

 lapply(c(as.Date('2007-01-01')), I)
 [[1]]
 [1] 13514

 This is a bit surprising. Why does lapply unclass the object?  Sorry for
 such a basic question; I don't seem able to produce the right google keywords.

Did you not read the help page?:

Arguments:

X: a vector (atomic or list) or an expressions vector.  Other
   objects (including classed objects) will be coerced by
   'as.list'.

and

 as.list(c(as.Date('2007-01-01')))
[[1]]
[1] 13514

BTW, the c() is redundant here: you are concatenating one item only.

As to why as.list() removes the class, read its help page which tells you.

Perhaps if you told us what you are trying to achieve we might be able to 
help you achieve it.
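If the goal was to iterate over dates while keeping the Date class, two common idioms (a sketch):

```r
d <- as.Date(c("2007-01-01", "2007-06-15"))
lapply(seq_along(d), function(i) d[i])  # index into d: '[' keeps the class
split(d, seq_along(d))                  # a list of one-element Date vectors
```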

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Export to LaTeX using xtable() - Control the digits to the right of the separator [solved]

2007-12-05 Thread Liviu Andronic
Hello everyone,

The thread title speaks for itself. Here's the code that worked for me:

 numSummary(finance[,"Employees"], statistics=c("mean", "sd", "quantiles"))
 mean   sd   0%  25%  50%  75%   100%  n NA
 11492.92 29373.14 1777 3040 4267 6553 179774 53  5

 str(numSummary(finance[,"Employees"], statistics=c("mean", "sd",
 "quantiles")))
List of 5
 $ type  : num 3
 $ table : num [1, 1:7] 11493 29373  1777  3040  4267 ...
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr ""
  .. ..$ : chr [1:7] "mean" "sd" "0%" "25%" ...
 $ statistics: chr [1:3] "mean" "sd" "quantiles"
 $ n : Named num 53
  ..- attr(*, "names")= chr "data"
 $ NAs   : Named num 5
  ..- attr(*, "names")= chr "data"
 - attr(*, "class")= chr "numSummary"

 xtable(numSummary(finance[,"Employees"], statistics=c("mean", "sd",
 "quantiles"))$table, digit = c(0,0,2,2,2,0,0,0))
% latex table generated in R 2.6.1 by xtable 1.5-2 package
% Wed Dec  5 14:37:51 2007
\begin{table}[ht]
\begin{center}
\begin{tabular}{rrrrrrrr}
  \hline
 & mean & sd & 0\% & 25\% & 50\% & 75\% & 100\% \\
  \hline
1 & 11493 & 29373.14 & 1777.00 & 3040.00 & 4267 & 6553 & 179774 \\
   \hline
\end{tabular}
\end{center}
\end{table}

Regards,
Liviu

-- Forwarded message --
From: Romain Francois [EMAIL PROTECTED]
Date: Dec 5, 2007 2:10 PM
Subject: RE: [R] alternatives to latex() or xtable() ?
To: Liviu Andronic [EMAIL PROTECTED]


You need to look at the digits argument of xtable that would allow you
to control this i think.

   xtable( numSummary( iris[,1:4] ) , digit = c( 0, 0, 2,2,2,2,2,2,0) )
 % latex table generated in R 2.6.0 by xtable 1.5-2 package
 % Wed Dec 05 13:07:47 2007
 \begin{table}[ht]
 \begin{center}
 \begin{tabular}{rrrrrrrrr}
   \hline
  & mean & sd & 0\% & 25\% & 50\% & 75\% & 100\% & n \\
   \hline
 Sepal.Length & 6 & 0.83 & 4.30 & 5.10 & 5.80 & 6.40 & 7.90 & 150 \\
   Sepal.Width & 3 & 0.44 & 2.00 & 2.80 & 3.00 & 3.30 & 4.40 & 150 \\
   Petal.Length & 4 & 1.77 & 1.00 & 1.60 & 4.35 & 5.10 & 6.90 & 150 \\
   Petal.Width & 1 & 0.76 & 0.10 & 0.30 & 1.30 & 1.80 & 2.50 & 150 \\
\hline
 \end{tabular}
 \end{center}
 \end{table}



 -Original Message-
 From: Liviu Andronic [mailto:[EMAIL PROTECTED]
 Sent: Wed 05/12/2007 13:07
 To: Romain Francois
 Subject: Re: [R] alternatives to latex() or xtable() ?

 I have not yet understood how to set the number of digits displayed
 after the decimal point in the exported TeX code. For example, I
 would like to make all numbers
 display as integers. Or, I would like to have 123.00 numbers display
 as integers and the rest 123.212(3) display as 123.21. Do you know how
 this is done within R? (I understand that I can perfectly do this
 manually in the TeX code).

 Thanks in advance,
 Liviu

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Asymmetrically dependent variables to 2D-map?

2007-12-05 Thread Atte Tenkanen
Hello,

I'm searching for a method which maps the variables of this kind of table (see
below) to 2-dimensional space, as in multidimensional scaling. However, this
table is asymmetric: for example, variable T1 affects T2 more than T2 affects
T1 (0.41 vs. 0.21).

 DEPTABLE
 T1T2 T3T4
T1 0.00 0.41 0.24 1.18
T2 0.21 0.00 0.46 0.12
T3 0.80 0.89 0.00 0.20
T4 0.09 1.04 0.17 0.00

Any suggestions? Something like gplot+mds+weighted arrays?
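One hedged possibility along those lines: split the table into its symmetric part (used as a dissimilarity for `cmdscale()`) and its skew-symmetric part (which carries the asymmetry and could be drawn as arrows). A sketch using the table above:

```r
M <- matrix(c(0.00, 0.41, 0.24, 1.18,
              0.21, 0.00, 0.46, 0.12,
              0.80, 0.89, 0.00, 0.20,
              0.09, 1.04, 0.17, 0.00),
            4, 4, byrow = TRUE,
            dimnames = list(paste("T", 1:4, sep = ""),
                            paste("T", 1:4, sep = "")))
S  <- (M + t(M)) / 2        # symmetric part: mutual dependence
D  <- as.dist(max(S) - S)   # stronger dependence -> smaller distance
xy <- cmdscale(D, k = 2)    # classical MDS into 2 dimensions
plot(xy, type = "n"); text(xy, labels = rownames(M))
# the skew part (M - t(M)) / 2 gives the direction and size of each
# asymmetry, which could be overlaid with arrows() between the points
```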

Atte Tenkanen
University of Turku, Finland

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] 2/3d interpolation from a regular grid to another regular grid

2007-12-05 Thread jiho
On 2007-December-04  , at 21:38 , Scionforbai wrote:
 - krigging in package fields, which also requires irregular spaced  
 data

 That kriging requires irregularly spaced data sounds new to me ;) It
 cannot be, you misread something (I feel free to say that even if I
 never used that package).

Of kriging I only know the name and general intent, so I gladly defer  
to your opinion.
I just read the description in ?Krig in the package fields, which says:
 Fits a surface to irregularly spaced data. 
But there are probably other kriging methods I overlooked.

 It can be tricky doing kriging, though, if you're not comfortable with
 a little bit of geostatistics. You have to infer a variogram model for
 each data set; you possibly run into non-stationarity or anisotropy,
 which are indeed very well treated (maybe at best) by kriging in one
 of its forms, but ... it takes more than this list to help you then;
 basically kriging requires modelling, so it is often very difficult to
 set up an automatic procedure. I can recommend kriging if the spatial
 variability of your data (compared to grid refinement) is quite
 important.

This was the impression I had too: that kriging is an art in itself  
and that it requires you to know much about your data. My problem is  
simpler: the variability is not very large between grid points (it is  
oceanic current velocity data so it is highly auto-correlated  
spatially) and I can get grids fine enough for variability to be low  
anyway. So it is really purely numerical.

 In other simple cases, a weighted mean using the (squared) inverse of
 the distance as weight and a spherical neighbourhood could be the
 simplest way to perform the interpolation.

Yes, that would be largely enough for me. I had C routines for 2D  
polynomial interpolation of similar cases, and low-order polynomials  
gave good results. I just hoped that R already had this coded  
somewhere in a handy and generic function, rather than my having to  
recode it myself in a probably highly specialized and non-reusable  
manner.

Thank you very much for your answer, and if someone knows a function  
doing what is described above, that would be terrific.

JiHO
---
http://jo.irisson.free.fr/
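For later readers, a minimal inverse-distance-weighting sketch along the lines
Scionforbai describes; the function name and the default power p = 2 are my own
choices, not an established R API.

```r
# interpolate values z known at points (x, y) onto target points (x0, y0)
# using inverse-distance weights with power p
idw <- function(x, y, z, x0, y0, p = 2) {
  sapply(seq_along(x0), function(i) {
    d2 <- (x - x0[i])^2 + (y - y0[i])^2
    if (any(d2 == 0)) return(z[which.min(d2)])  # target coincides with a data point
    w <- 1 / d2^(p / 2)
    sum(w * z) / sum(w)
  })
}

# from one regular grid onto another
src <- expand.grid(x = 0:4, y = 0:4)
src$z <- src$x + src$y                         # a simple planar field
dst <- expand.grid(x = seq(0.5, 3.5, 1), y = seq(0.5, 3.5, 1))
dst$z <- idw(src$x, src$y, src$z, dst$x, dst$y)
```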



Re: [R] predict error for survreg with natural splines

2007-12-05 Thread Gad Abraham
Charles C. Berry wrote:
 On Wed, 5 Dec 2007, Gad Abraham wrote:
 
 Hi,

 The following error looks like a bug to me but perhaps someone can shed
 light on it:

   library(splines)
   library(survival)
   s <- survreg(Surv(futime, fustat) ~ ns(age, knots=c(50, 60)),
  data=ovarian)
   n <- data.frame(age=rep(mean(ovarian$age), 10))
   predict(s, newdata=n)
 Error in qr.default(t(const)) :
   NA/NaN/Inf in foreign function call (arg 1)

 Thanks,
 Gad
 
 Gad,
 
 I think I have it now.
 
 survreg does not automatically place the boundary knots in its $terms 
 component.
 
 You can force this by hand:

Thanks Chuck and Moshe, manually setting the boundary fixes the problem.

Cheers,
Gad

-- 
Gad Abraham
Department of Mathematics and Statistics
The University of Melbourne
Parkville 3010, Victoria, Australia
email: [EMAIL PROTECTED]
web: http://www.ms.unimelb.edu.au/~gabraham
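For readers of the archive, one way to avoid the problem entirely (a sketch, not
necessarily the exact fix Chuck posted) is to spell out the boundary knots in the
formula itself, so the basis is fully determined without reference to the fitting
data:

```r
library(splines)
library(survival)

# explicit Boundary.knots mean predict() can rebuild the identical
# spline basis for new data
s <- survreg(Surv(futime, fustat) ~ ns(age, knots = c(50, 60),
                                       Boundary.knots = range(ovarian$age)),
             data = ovarian)
n <- data.frame(age = rep(mean(ovarian$age), 10))
p <- predict(s, newdata = n)
```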



[R] prediction R-squared

2007-12-05 Thread Tom Fitzhugh
Hi,
 
I need to compute a prediction R-squared for a linear regression.  I
have figured out how to compute Allen's PRESS statistic (using the PRESS
function in the MPV library), but also want to compute the R-squared
that goes along with this statistic.  I have read that this is computed
like an adjusted R-squared, but using the same regressions that are used
to compute the PRESS statistic.  I am trying to duplicate the prediction
R-squared that is computed in Minitab.
 
Thanks for any help!
 
Tom
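A sketch of the usual definition (I believe this matches what Minitab reports,
but verify against your output): predicted R-squared = 1 - PRESS/SStot, where
PRESS can be computed from the ordinary residuals and leverages without
refitting. The mtcars model below is purely illustrative.

```r
fit <- lm(mpg ~ wt + hp, data = mtcars)   # any lm fit

# PRESS via the leave-one-out identity e_i / (1 - h_i)
press <- sum((residuals(fit) / (1 - lm.influence(fit)$hat))^2)

sstot   <- sum((mtcars$mpg - mean(mtcars$mpg))^2)
pred_r2 <- 1 - press / sstot
pred_r2
```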




[R] Displaying numerics to full double precision

2007-12-05 Thread Jeff Delmerico

I'm working on a shared library of C functions for use with R, and I want to
create a matrix in R and pass it to the C routines.  I know R computes and
supposedly stores numerics in double precision, but when I create a matrix
of random numerics using rnorm(), the values are displayed in single
precision, and also exported in single precision when I pass them out to my
C routines.  An example is below:

 a <- matrix(rnorm(16, mean=10, sd=4), nrow=4)
 a
  [,1]  [,2]  [,3]  [,4]
[1,] 14.907606 17.572872 19.708977  9.809943
[2,]  9.322041 13.624452  7.745254  7.596176
[3,] 10.642408  6.151546  9.937434  6.913875
[4,] 14.617647  5.577073  8.217559 12.115465
 storage.mode(a)
[1] "double"

Does anyone know if there is a way to change the display or storage settings
so that the values will be displayed to their full precision?   Or does
rnorm only produce values to single precision? 

Any assistance would be greatly appreciated.

Thanks,
Jeff Delmerico
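The values are in fact stored (and passed to C) in full double precision; only
the default print format, options(digits = 7) significant digits, truncates the
display. A few ways to see more:

```r
a <- matrix(rnorm(16, mean = 10, sd = 4), nrow = 4)

print(a, digits = 17)       # per-call display precision
options(digits = 17)        # session-wide default
a
sprintf("%.17g", a[1, 1])   # full round-trip precision for a single value
```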
-- 
View this message in context: 
http://www.nabble.com/Displaying-numerics-to-full-double-precision-tf4950807.html#a14175334
Sent from the R help mailing list archive at Nabble.com.



Re: [R] HTML help search in R 2.6.0 v 2.6.1

2007-12-05 Thread Peter Dalgaard
Michael Bibo wrote:
 I am running R on a corporate Windows XP SP2 machine on which I do not
 have 
 administrator privileges or access to most settings in Control Panel. 
 R is 
 installed from my limited user account.  The version of the JVM I have

 installed is perhaps best described as antique:
  
   
 system(paste("java -version"), show.output.on.console=T)
 
 java version "1.4.1"
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-b21)
 Java HotSpot(TM) Client VM (build 1.4.1-b21, mixed mode)
  
 The HTML help search applet has worked for all versions of R up to and

 including 2.6.0, but in 2.6.1 the java applet is not initialised
 (Applet 
 SearchEngine notinited).  I have checked Appendix D of the admin  
 installation manual, and the test java applet referred to does not
 load, 
 but the web page says that java 1.4.2 is required.  Other java applets
 do run 
 in the browser, including R 2.6.0 HTML help search, so I presume java
 is 
 enabled. 
  
 Has something changed from 2.6.0 to 2.6.1 that may require JVM > 1.4.1?
  If 
 so, I can use that information to request an upgrade of my JVM.
  

   
Hmm, could be. They got rebuilt on my system and committed at some 
point in the 2.6.1 run-in, and that system has 1.5.0. I did check that 
things still worked, but I didn't think the version would matter. The 
actual Java code is unchanged, so you could copy the .class files over 
from 2.6.0 (let us know if it works).

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



[R] correlation coefficient from qq plot

2007-12-05 Thread Tom Fitzhugh
Hi,
 
I am trying to figure out how to get the correlation coefficient for a
QQ plot (residual plot).  So to be more precise, I am creating the plot
like this:
 
qq.plot(rstudent(regrname), main = rformula, col=1) 
 
But want to also access (or compute) the correlation coefficient for
that plot.
 
Thanks,  
 
Tom
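One way to compute it (a sketch; `regrname` stands in for Tom's fitted model and
is replaced here by a simulated residual vector): correlate the sorted residuals
with the theoretical normal quantiles that the QQ plot uses on its x-axis.

```r
set.seed(1)
r <- rnorm(50)                        # stand-in for rstudent(regrname)

q_theo <- qnorm(ppoints(length(r)))   # theoretical quantiles of the QQ plot
qq_cor <- cor(sort(r), q_theo)        # the plot's correlation coefficient
qq_cor
```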




Re: [R] alternatives to latex() or xtable() ?

2007-12-05 Thread Liviu Andronic
On 12/5/07, Romain Francois [EMAIL PROTECTED] wrote:



 Hello,

  My guess is that you are actually talking about the numSummary function in
 Rcmdr, not in abind. In that case, you can look how the structure of the
 output is like:

   str( numSummary( iris[,1:4] ) )
  List of 4
   $ type      : num 3
   $ table     : num [1:4, 1:7] 5.843 3.057 3.758 1.199 0.828 ...
    ..- attr(*, "dimnames")=List of 2
    .. ..$ : chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
    .. ..$ : chr [1:7] "mean" "sd" "0%" "25%" ...
   $ statistics: chr [1:3] "mean" "sd" "quantiles"
   $ n         : Named num [1:4] 150 150 150 150
    ..- attr(*, "names")= chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
   - attr(*, "class")= chr "numSummary"

  and then use the table element from it:

   xtable( numSummary( iris[,1:4] )$table )
  % latex table generated in R 2.6.0 by xtable 1.5-2 package
  % Wed Dec 05 12:16:44 2007
  \begin{table}[ht]
  \begin{center}
  \begin{tabular}{}
\hline
mean  sd  0\%  25\%  50\%  75\%  100\% \\
\hline
  Sepal.Length  5.84  0.83  4.30  5.10  5.80  6.40  7.90 \\
Sepal.Width  3.06  0.44  2.00  2.80  3.00  3.30  4.40 \\
Petal.Length  3.76  1.77  1.00  1.60  4.35  5.10  6.90 \\
Petal.Width  1.20  0.76  0.10  0.30  1.30  1.80  2.50 \\
 \hline
  \end{tabular}
  \end{center}
  \end{table}

  Otherwise, you can define your own xtable.numSummary function that would
 wrap this up. (This one does not do everything as it does not take into
 account the groups argument of numSummary, so you might want to do something
 else if you have used it, ...)

   xtable.numSummary <- function( x, ...){
  +  out <- cbind( x$table, n = x$n )
  +  xtable( out, ... )
  + }
xtable( numSummary( iris[,1:4] ) )
  % latex table generated in R 2.6.0 by xtable 1.5-2 package
  % Wed Dec 05 12:20:13 2007
  \begin{table}[ht]
  \begin{center}
  \begin{tabular}{r}
\hline
mean  sd  0\%  25\%  50\%  75\%  100\%  n \\
\hline
  Sepal.Length  5.84  0.83  4.30  5.10  5.80  6.40  7.90  150.00 \\
Sepal.Width  3.06  0.44  2.00  2.80  3.00  3.30  4.40  150.00 \\
Petal.Length  3.76  1.77  1.00  1.60  4.35  5.10  6.90  150.00 \\
Petal.Width  1.20  0.76  0.10  0.30  1.30  1.80  2.50  150.00 \\
 \hline
  \end{tabular}
  \end{center}
  \end{table}

  Hope this helps,
  Romain

It helped. Thanks.
Liviu



Re: [R] Dealing with NA's in a data matrix

2007-12-05 Thread Gregor Gorjanc
Henrique Dallazuanna wwwhsd at gmail.com writes:
 x[is.na(x)] <- 0
 
 On 05/12/2007, Amit Patel amitpatel_ak at yahoo.co.uk wrote:
  Hi I have a matrix with NA value that I would like to convert these to a
value of 0.
  any suggestions

also

library(gdata)
x <- matrix(rnorm(16), nrow=4, ncol=4)
x[1, 1] <- NA
NAToUnknown(x, unknown=0)

Gregor



Re: [R] confidence intervals for y predicted in non linearregression

2007-12-05 Thread Florencio González

Hi, thanks for your suggestion. I'm trying to install this package on Ubuntu
(7.10), but without success. I also tried on Mac OS X, with no success either.

 

 

  _  

From: Ndoye Souleymane [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 5 December 2007 13:38
To: [EMAIL PROTECTED]; Florencio González
CC: [EMAIL PROTECTED]
Subject: RE: [R] confidence intervals for y predicted in non linear regression

 

Hi,
 
You should use the nls2 package (Linux only).
 
Regards,
 
Souleymane

 Date: Tue, 4 Dec 2007 16:07:57 +0100
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 CC: [EMAIL PROTECTED]
 Subject: Re: [R] confidence intervals for y predicted in non linear
regression
 
 hi, hi all,
 
 you can consult these links:
 http://finzi.psych.upenn.edu/R/Rhelp02a/archive/43008.html
 https://stat.ethz.ch/pipermail/r-help/2004-October/058703.html
 
 hope this help
 
 
 pierre
 
 
 Selon Florencio González [EMAIL PROTECTED]:
 
 
  Hi, I'm trying to plot a nonlinear regression with the confidence bands
for the curve obtained, similar to what the nlintool or nlpredci functions
in Matlab do, but I cannot figure out how. In nls the option is there but
not implemented yet.
 
   Is there a plan to implement this in the relatively near future?
 
  Thanks in advance, Florencio
 
 
 
  La información contenida en este e-mail y sus ficheros adjuntos es
totalmente
  confidencial y no debería ser usado si no fuera usted alguno de los
  destinatarios. Si ha recibido este e-mail por error, por favor avise al
  remitente y bórrelo de su buzón o de cualquier otro medio de
almacenamiento.
  This email is confidential and should not be used by anyone who is not
the
  original intended recipient. If you have received this e-mail in error
  please inform the sender and delete it from your mailbox or any other
  storage mechanism.
  [[alternative HTML version deleted]]
 
 
 









Re: [R] HTML help search in R 2.6.0 v 2.6.1

2007-12-05 Thread Michael Bibo
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:


   
   Has something changed from 2.6.0 to 2.6.1 that may require JVM > 1.4.1?
   If 
  so, I can use that information to request an upgrade of my JVM.
   
 

 Hmm, could be. They got rebuilt on my system and committed at some 
 point in the 2.6.1 run-in, and that system has 1.5.0. I did check that 
 things still worked, but I didn't think the version would matter. The 
 actual Java code is unchanged, so you could copy the .class files over 
 from 2.6.0 (let us know if it works).
 
Excellent!  That worked.  Thanks, Peter.
If this is going to be a continuing issue for future versions, would the best 
advice be to upgrade JVM anyway, at least to 1.5.0?

Michael



Re: [R] File based configuration

2007-12-05 Thread Gabor Grothendieck
See the following (the email seems to have wrapped 2 lines onto one
at one point but it should be obvious):

http://tolstoy.newcastle.edu.au/R/e2/help/07/06/18853.html

On Dec 5, 2007 5:05 PM, Thomas Allen [EMAIL PROTECTED] wrote:
 I'm wanting to run R scripts non-interactively as part of a
 technology independent framework.
 I want control over the behaviour of these processes by specifying
 various global variables in a configuration file that would be
 passed as a command line argument.

 I'm wondering if you know of any R support for configuration file
 formats. (i.e. any functions that would read a configuration file of
 some common format)

 For example:
 -The .properties configuration format for java seems to be quite
 popular, would I have to read it in by writing some kind of java
 extension to R?
 -An XML configuration format could also be possible, but it's overkill
 for my needs.

 Any help would be greatly appreciated

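In case it helps future readers, a base-R sketch of a .properties-style reader
(key = value lines, '#' comments); the function name and conventions below are
invented for illustration, not an existing API:

```r
# parse "key = value" lines into a named list, ignoring '#' comments
read_properties <- function(path) {
  lines <- readLines(path)
  lines <- sub("#.*", "", lines)       # drop comments
  lines <- lines[grepl("=", lines)]    # keep only key=value lines
  keys <- trimws(sub("=.*", "", lines))
  vals <- trimws(sub("^[^=]*=", "", lines))
  setNames(as.list(vals), keys)
}

cfg <- tempfile()
writeLines(c("# settings", "threshold = 0.5", "mode=batch"), cfg)
read_properties(cfg)
```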




[R] Information criteria for kmeans

2007-12-05 Thread Serguei Kaniovski

Hello,

how is, for example, the Schwarz criterion defined for kmeans? It should
be something like:

k <- 2
vars <- 4
nobs <- 100

dat <- rbind(matrix(rnorm(nobs, sd = 0.3), ncol = vars),
   matrix(rnorm(nobs, mean = 1, sd = 0.3), ncol = vars))

colnames(dat) <- paste("var", 1:4)

(cl <- kmeans(dat, k))

schwarz <- sum(cl$withinss) + vars*k*log(nobs)

Thanks for your help,
Serguei

Austrian Institute of Economic Research (WIFO)

P.O.Box 91  Tel.: +43-1-7982601-231
1103 Vienna, AustriaFax: +43-1-7989386

Mail: [EMAIL PROTECTED]
http://www.wifo.ac.at/Serguei.Kaniovski
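One common form (an assumption on my part, treating k-means as fitting k
spherical clusters with k*vars estimated mean parameters) uses n*log(RSS/n)
as the likelihood term rather than the raw within-cluster sum of squares:

```r
set.seed(2)
vars <- 4
dat <- rbind(matrix(rnorm(100, sd = 0.3), ncol = vars),
             matrix(rnorm(100, mean = 1, sd = 0.3), ncol = vars))
n <- nrow(dat)

bic_k <- function(k) {
  cl <- kmeans(dat, k, nstart = 10)
  # n*log(RSS/n) is the Gaussian log-likelihood term;
  # k*vars cluster means are the estimated parameters
  n * log(sum(cl$withinss) / n) + k * vars * log(n)
}

sapply(1:4, bic_k)   # minimized at the true k = 2 for this data
```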



Re: [R] nearest correlation to polychoric

2007-12-05 Thread John Fox
Dear Jens,

I've submitted a new version (0.7-4) of the polycor package to CRAN. The
hetcor() function now uses your nearcor() in sfsmisc to make the returned
correlation matrix positive-definite if it is not already.

I know that quite some time has elapsed since you raised this issue, and I
apologize for taking so long to deal with it. (I've also kept track of your
suggestions for the sem package, and will respond to them when I next make
substantial modifications to the package -- though not in the near future.)

Thank you,
 John


John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

 -Original Message-
 From: Jens Oehlschlägel [mailto:[EMAIL PROTECTED] 
 Sent: Friday, July 13, 2007 2:42 PM
 To: [EMAIL PROTECTED]; 
 [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: RE: [R] nearest correlation to polychoric
 
 Dimitris,
 
 Thanks a lot for the quick response with the pointer to 
 posdefify. Using its logic as an afterburner to the algorithm 
 of Higham seems to work.
 
 Martin,
 
  Jens, could you make your code (mentioned below) available 
 to the community, or even donate to be included as a new 
 method of posdefify() ?
 
 Nice opportunity to give-back. Below is the R code for 
 nearcor and .Rd help file. A quite natural place for nearcor 
 would be John Fox' package polycor, what do you think?
 
 John?
 
 Best regards
 
 
 Jens Oehlschlägel
 
 
 # Copyright (2007) Jens Oehlschlägel
 # GPL licence, no warranty, use at your own risk
 
 #! \name{nearcor}
 #! \alias{nearcor}
 #! \title{ function to find the nearest proper correlation 
 matrix given an improper one } #! \description{
 #!   This function smooths an improper correlation matrix such as 
 can result from \code{\link{cor}} with 
 \code{use="pairwise.complete.obs"} or \code{\link[polycor]{hetcor}}.
 #! }
 #! \usage{
 #! nearcor(R, eig.tol = 1e-06, conv.tol = 1e-07, posd.tol = 
 1e-08, maxits = 100, verbose = FALSE) #! } #! \arguments{
 #!   \item{R}{ a square symmetric approximate correlation matrix }
 #!   \item{eig.tol}{ defines relative positiveness of 
 eigenvalues compared to largest, default=1.0e-6 }
 #!   \item{conv.tol}{ convergence tolerance for algorithm, 
 default=1.0e-7  }
 #!   \item{posd.tol}{ tolerance for enforcing positive 
 definiteness, default=1.0e-8 }
 #!   \item{maxits}{ maximum number of iterations allowed }
 #!   \item{verbose}{ set to TRUE to verbose convergence }
 #! }
 #! \details{
 #!   This implements the algorithm of Higham (2002), then 
 forces symmetry, then forces positive definiteness using code 
 from \code{\link[sfsmisc]{posdefify}}.
 #!   This implementation does not make use of direct LAPACK 
 access for tuning purposes as in the MATLAB code of Lucas (2001).
 #!   The algorithm of Knol DL and ten Berge (1989) (not 
 implemented here) is more general in (1) that it allows 
 constraints to fix some rows (and columns) of the matrix and 
 (2) to force the smallest eigenvalue to have a certain value.
 #! }
 #! \value{
 #!   A LIST, with components
 #!   \item{cor}{resulting correlation matrix}
 #!   \item{fnorm}{Frobenius norm of difference of input and output}
 #!   \item{iterations}{number of iterations used}
 #!   \item{converged}{logical}
 #! }
 #! \references{
 #!Knol, DL and ten Berge, JMF (1989). Least-squares 
 approximation of an improper correlation matrix by a proper 
 one.  Psychometrika, 54, 53-61.
 #!   \cr  Higham (2002). Computing the nearest correlation 
 matrix - a problem from finance, IMA Journal of Numerical 
 Analysis, 22, 329-343.
 #!   \cr  Lucas (2001). Computing nearest covariance and 
 correlation matrices. A thesis submitted to the University of 
 Manchester for the degree of Master of Science in the Faculty 
 of Science and Engineering.
 #! }
 #! \author{ Jens Oehlschlägel }
 #! \seealso{ \code{\link[polycor]{hetcor}}, 
 \code{\link{eigen}}, \code{\link[sfsmisc]{posdefify}} } #! \examples{
 #!   cat(pr is the example matrix used in Knol DL, ten Berge 
 (1989)\n)
 #!   pr - structure(c(1, 0.477, 0.644, 0.478, 0.651, 0.826, 
 0.477, 1, 0.516,
 #!   0.233, 0.682, 0.75, 0.644, 0.516, 1, 0.599, 0.581, 0.742, 0.478,
 #!   0.233, 0.599, 1, 0.741, 0.8, 0.651, 0.682, 0.581, 0.741, 
 1, 0.798,
 #!   0.826, 0.75, 0.742, 0.8, 0.798, 1), .Dim = c(6, 6))
 #!
 #!   nr - nearcor(pr)$cor
 #!   plot(pr[lower.tri(pr)],nr[lower.tri(nr)])
 #!   round(cbind(eigen(pr)$values, eigen(nr)$values), 8)
 #!
 #!   cat(The following will fail:\n)
 #!   try(factanal(cov=pr, factors=2))
 #!   cat(and this should work\n)
 #!   try(factanal(cov=nr, factors=2))
 #!
 #!   \dontrun{
 #! library(polycor)
 #!
 #! n - 400
 #! x - rnorm(n)
 #! y - rnorm(n)
 #!
 #! x1 - (x + rnorm(n))/2
 #! x2 - (x + rnorm(n))/2
 #! x3 - (x + rnorm(n))/2
 #! x4 - (x + rnorm(n))/2
 #!
 #! y1 - (y + rnorm(n))/2
 #!

[R] Plotting error bars in xy-direction

2007-12-05 Thread Hans W. Borchers
Dear R-help,

I am looking for a function that will plot error bars in x- or y-direction (or 
both), the same as the Gnuplot function 'plot' can achieve with:

plot file.dat with xyerrorbars,...

RSiteSearch led me to the functions 'errbar' and 'plotCI' in the Hmisc, 
gregmisc, and plotrix packages. As I understand the descriptions and examples, 
none of these functions provides horizontal error bars.

Looking into 'errbar' and using segments(), I wrote a small function for myself 
that adds these kinds of error bars to existing plots. I would still be 
interested to know what the standard R solution is.

Regards,  Hans Werner
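For the archive, a small sketch of the kind of function Hans Werner describes
(the name is mine): arrows() with angle = 90 and code = 3 draws capped bars in
either direction.

```r
# add horizontal and vertical error bars to an existing plot
xy.errbar <- function(x, y, xerr, yerr, cap = 0.05, ...) {
  arrows(x - xerr, y, x + xerr, y, angle = 90, code = 3, length = cap, ...)
  arrows(x, y - yerr, x, y + yerr, angle = 90, code = 3, length = cap, ...)
}

x <- 1:5
y <- c(2, 4, 3, 6, 5)
plot(x, y, pch = 16, xlim = c(0, 6), ylim = c(0, 8))
xy.errbar(x, y, xerr = 0.3, yerr = 0.6)
```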



Re: [R] confidence intervals for y predicted in non linear regression

2007-12-05 Thread Gabor Grothendieck
I don't think this is referring to the nls2 package on CRAN but
rather something else.

On Dec 5, 2007 7:37 AM, Ndoye Souleymane [EMAIL PROTECTED] wrote:

 Hi,

 You should use the nls2 package (Linux only).

 Regards,

 Souleymane Date: Tue, 4 Dec 2007 16:07:57 +0100 From: [EMAIL PROTECTED] 
 To: [EMAIL PROTECTED] CC: [EMAIL PROTECTED] Subject: Re: [R] confidence 
 intervals for y predicted in non linear regression  hi, hi all,  you can 
 consult these links: 
 http://finzi.psych.upenn.edu/R/Rhelp02a/archive/43008.html 
 https://stat.ethz.ch/pipermail/r-help/2004-October/058703.html  hope this 
 help   pierre   Selon Florencio González [EMAIL PROTECTED]:
  Hi, I'm trying to plot a nonlinear regression with the confidence bands for  
  the curve obtained, similar to what the nlintool or nlpredci functions in Matlab 
  do, but I cannot figure out how. In nls the option is there but not 
  implemented yet.   Is there a plan to implement this in the relatively near 
  future?   Thanks in advance, Florencio







[R] logistic regression using glm,which y is set to be 1

2007-12-05 Thread Bin Yue

Dear friends:
Using the glm function and setting family=binomial, I got a list of
coefficients.
The coefficients reflect the effects of the predictor variables on the
probability of the response being 1.
My response variable consists of "A" and "D". I don't know which level of
the response was set to 1.
Is the first level of the response set to 1?
   Thanks to all in advance.
   Regards,

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/logistic-regression-using-%22glm%22%2Cwhich-%22y%22-is-set-to-be-%221%22-tf4953617.html#a14185060
Sent from the R help mailing list archive at Nabble.com.



Re: [R] logistic regression using glm,which y is set to be 1

2007-12-05 Thread Marc Schwartz

On Wed, 2007-12-05 at 18:06 -0800, Bin Yue wrote:
 Dear friends :
 using the glm function and setting family=binomial, I got a list of
 coefficients.
 The coefficients reflect the effects  of predicted variables on the
 probability of the response to be 1.
 My response variable consists of  A and D . I don't know which level of
 the response was set to be 1.
 is the first element of the response set to be 1?
Thank all in advance.
Regards,
 
 -
 Best regards,
 Bin Yue


As per the Details section of ?glm:

For binomial and quasibinomial families the response can also be
specified as a factor (when the first level denotes failure and all
others success) ...


So use:

  levels(response.variable)

and that will give you the factor levels, where the first level is 0 and
the second level is 1. 

If you work in a typical English based locale with default alpha based
level ordering, it will likely be A (Alive?) is 0 and D (Dead?) is 1.

HTH,

Marc Schwartz
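A quick check of this on simulated data (variable names are illustrative):
flipping which level is the baseline simply negates the coefficients.

```r
set.seed(1)
x <- rnorm(100)
status <- factor(ifelse(runif(100) < plogis(2 * x), "D", "A"))

levels(status)                      # "A" "D": glm models Pr(status == "D")
c1 <- coef(glm(status ~ x, family = binomial))

# relevel() makes "D" the first (failure) level, so glm then
# models Pr(status == "A")
c2 <- coef(glm(relevel(status, ref = "D") ~ x, family = binomial))
```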



Re: [R] logistic regression using glm,which y is set to be 1

2007-12-05 Thread Bin Yue

Dear Marc Schwartz:
 When I ask R 2.6.0 for Windows for help, the information it gives does not
contain much about family=binomial.
 You said that there is a Details section of ?glm. I want to read it
thoroughly. Could you tell me where and how I can find the Details section
of ?glm?
   Thank you very much.
   Best regards,
 Bin Yue
  
 

Marc Schwartz wrote:
 
 
 On Wed, 2007-12-05 at 18:06 -0800, Bin Yue wrote:
 Dear friends :
 using the glm function and setting family=binomial, I got a list of
 coefficients.
 The coefficients reflect the effects  of predicted variables on the
 probability of the response to be 1.
 My response variable consists of  A and D . I don't know which level
 of
 the response was set to be 1.
 is the first element of the response set to be 1?
Thank all in advance.
Regards,
 
 -
 Best regards,
 Bin Yue
 
 
 As per the Details section of ?glm:
 
 For binomial and quasibinomial families the response can also be
 specified as a factor (when the first level denotes failure and all
 others success) ...
 
 
 So use:
 
   levels(response.variable)
 
 and that will give you the factor levels, where the first level is 0 and
 the second level is 1. 
 
 If you work in a typical English based locale with default alpha based
 level ordering, it will likely be A (Alive?) is 0 and D (Dead?) is 1.
 
 HTH,
 
 Marc Schwartz
 
 
 


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/logistic-regression-using-%22glm%22%2Cwhich-%22y%22-is-set-to-be-%221%22-tf4953617.html#a14185819
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Working with ts objects

2007-12-05 Thread Gabor Grothendieck
anscombe is built into R already so you don't need to read it in.
An intercept is the default in lm so you don't have to specify it.

opar <- par(mfrow = c(2,2))
plot(y1 ~ x1, anscombe)
reg <- lm(y1 ~ x1, anscombe)
reg
abline(reg)
...etc...
par(opar)

Note that plot(anscombe[1:2]) and lm(anscombe[2:1]) also work.

read.table returns a data frame whereas ts requires a vector
or matrix so none of your ts code will work.  as.matrix(DF)
or data.matrix(DF) will convert data frame DF to a matrix.
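A minimal illustration of that conversion (start date and column names are
invented for the example):

```r
DF <- data.frame(y = rnorm(24), x = rnorm(24))

# data.matrix() turns the data frame into the numeric matrix ts() accepts
z <- ts(data.matrix(DF), start = c(2000, 1), frequency = 12)

colnames(z)                    # "y" "x"
window(z, start = c(2001, 1))  # the 12 observations from 2001 on
plot(z)                        # one panel per series
```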


On Dec 5, 2007 2:30 PM, Richard Saba [EMAIL PROTECTED] wrote:
 I am relatively new to R and object oriented programming. I have relied on
 SAS for most of my data analysis.  I teach an introductory undergraduate
 forecasting course using the Diebold text and I am considering using R in
 addition to SAS and Eviews in the course. I work primarily with univariate
 or multivariate time series data. I am having a great deal of difficulty
 understanding and working with ts objects, particularly when it comes to
 referencing variables in plot commands or in formulas. The confusion is
 amplified when certain procedures (lm, for example) coerce the ts object
 into a data.frame before fitting, with the result that the output is
 stored in a data.frame object.
 For example the two sets of code below replicate examples from chapter 2 and
 6 in the text. In the first set of code, if I were to replace
 anscombe <- read.table(fname, header=TRUE) with
 anscombe <- ts(read.table(fname, header=TRUE)), the plot() commands would
 generate errors. The objects x1, y1 ... would not be recognized. In
 this case I would have to reference the specific column in the anscombe data
 set. If I had constructed the data set from several different data
 sets using the ts.intersect() function (see the second code below), the problem
 becomes even more involved, and keeping track of which columns are associated
 with which variables can be rather daunting. All I wanted was to plot the actual
 vs. predicted values of hstarts and the residuals from the model.

 Given the difficulties I have encountered I know my students will have
 similar problems. Is there a source other than the basic R manuals that I
 can consult and recommend to my students that will help get a handle on
 working with time series objects? I found the Shumway Time series analysis
 and its applications with R Examples website very helpful but many
 practical questions involving manipulation of time series data still remain.
 Any help will be appreciated.
 Thanks,

 Richard Saba
 Department of Economics
 Auburn University
 Email:  [EMAIL PROTECTED]
 Phone:  334 844-2922




 anscombe <- read.table(fname, header=TRUE)
 names(anscombe) <- c("x1","y1","x2","y2","x3","y3","x4","y4")
 reg1 <- lm(y1 ~ 1 + x1, data=anscombe)
 reg2 <- lm(y2 ~ 1 + x2, data=anscombe)
 reg3 <- lm(y3 ~ 1 + x3, data=anscombe)
 reg4 <- lm(y4 ~ 1 + x4, data=anscombe)
 summary(reg1)
 summary(reg2)
 summary(reg3)
 summary(reg4)
 par(mfrow=c(2,2))
 plot(x1, y1)
 abline(reg1)
 plot(x2, y2)
 abline(reg2)
 plot(x3, y3)
 abline(reg3)
 plot(x4, y4)
 abline(reg4)

 ..
 fname <- file.choose()
 tab6.1 <- ts(read.table(fname, header=TRUE), frequency=12, start=c(1946,1))
 month <- cycle(tab6.1)
 year <- floor(time(tab6.1))
 dat1 <- ts.intersect(year, month, tab6.1)
 dat2 <- window(dat1, start=c(1946,1), end=c(1993,12))
 reg1 <- lm(tab6.1 ~ 1 + factor(month), data=dat2, na.action=NULL)
 summary(reg1)
 hstarts <- dat2[,3]
 plot1 <- ts.intersect(hstarts, reg1$fitted.value, reg1$resid)
 plot.ts(plot1[,1])
 lines(plot1[,2], col="red")
 plot.ts(plot1[,3], ylab="Residuals")





[R] snow package on multi core unix box

2007-12-05 Thread Saeed Abu Nimeh
Is the Rmpi package (or rpvm) needed to exploit multiple cores on a
single Unix box using the snow package? The documentation of the package
does not provide info about setting up a single machine with multiple
cores. Also, how effective is it to run a Bayesian simulation in
parallel (or distributed) across processors using the snow package?
Thanks,
Saeed
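For what it's worth, a minimal sketch of a single-machine setup using a socket cluster, which needs neither Rmpi nor rpvm (the worker count of 4 is an assumption about the box):

```r
library(snow)
cl <- makeCluster(4, type = "SOCK")         # 4 socket workers on localhost
res <- parLapply(cl, 1:4, function(i) i^2)  # work is spread over the workers
stopCluster(cl)
```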



[R] Segmented regression

2007-12-05 Thread Power, Brendan D (Toowoomba)
Hello all,

I have 3 time series (tt) that I've fitted segmented regression models
to, with 3 breakpoints that are common to all, using the code below
(requires the segmented package). However, I wish to specify a zero
coefficient, a priori, for the last segment of the KW series (green)
only. Is this possible to do with segmented? If not, could someone point
me in a direction?

The final goal is to compare breakpoint sets for differences from those
derived from other data.

Thanks in advance,

Brendan.


library(segmented)
df <- data.frame(
  y = c(0.12,0.12,0.11,0.19,0.27,0.28,0.35,0.38,0.46,0.51,0.58,0.59,0.60,
        0.57,0.64,0.68,0.72,0.73,0.78,0.84,0.85,0.83,0.86,0.88,0.88,0.95,
        0.95,0.93,0.92,0.97,0.86,1.00,0.85,0.97,0.90,1.02,0.95,0.54,0.53,
        0.50,0.60,0.70,0.74,0.78,0.82,0.88,0.83,1.00,0.85,0.96,0.84,0.86,
        0.82,0.86,0.84,0.84,0.84,0.77,0.69,0.61,0.67,0.73,0.65,0.55,0.58,
        0.56,0.60,0.50,0.50,0.42,0.43,0.44,0.42,0.40,0.51,0.60,0.63,0.71,
        0.74,0.82,0.82,0.85,0.89,0.91,0.87,0.91,0.93,0.95,0.95,0.97,1.00,
        0.96,0.90,0.86,0.91,0.94,0.96,0.88,0.88,0.88,0.92,0.82,0.85),
  tt = c(141.6,141.6,141.6,183.2,212.8,227.0,242.4,271.5,297.4,312.3,331.4,
         342.4,346.3,356.6,371.6,408.8,408.8,419.5,434.4,464.5,492.6,521.7,
         550.5,550.3,565.4,588.0,602.9,623.7,639.6,647.9,672.6,680.6,709.7,
         709.7,750.2,750.2,750.2,141.6,141.6,141.6,183.2,212.8,227.0,242.4,
         271.5,297.4,312.3,331.4,342.4,346.3,356.6,371.6,408.8,408.8,419.5,
         434.4,464.5,492.6,521.7,550.5,550.3,565.4,588.0,602.9,623.7,639.6,
         647.9,672.6,680.6,709.7,709.7,141.6,141.6,141.6,183.2,212.8,227.0,
         242.4,271.5,297.4,312.3,331.4,342.4,346.3,356.6,371.6,408.8,408.8,
         419.5,434.4,464.5,492.6,521.7,550.5,550.3,565.4,588.0,602.9,623.7,
         639.6,647.9,672.6,709.7),
  group = c(rep("RKW",37), rep("RWC",34), rep("RKV",32)))
init.bp <- c(297.4, 639.6, 680.6)
lm.1 <- lm(y ~ tt + group, data=df)
seg.1 <- segmented(lm.1, seg.Z=~tt, psi=list(tt=init.bp))
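segmented has no obvious way to fix a slope at zero a priori, but as a hedged follow-up sketch (assuming the seg.1 fit above), slope() reports the per-segment slope estimates with confidence intervals, so a near-zero final slope can at least be checked after fitting:

```r
slope(seg.1)  # per-segment slopes with confidence intervals
```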

  version
   _   
platform   i386-pc-mingw32 
arch   i386
os mingw32 
system i386, mingw32   
status 
major  2   
minor  6.0 
year   2007
month  10  
day03  
svn rev43063   
language   R   
version.string R version 2.6.0 (2007-10-03)






Re: [R] hclust in heatmap.2

2007-12-05 Thread Ashoka Polpitiya
Check the Rowv, Colv options to heatmap.2

library(gplots)  # provides heatmap.2
data(mtcars)
x <- as.matrix(mtcars)
heatmap.2(x, Rowv=FALSE, dendrogram="column")

-Ashoka

Scientist - Pacific Northwest National Lab

On Dec 5, 2007 4:20 PM, affy snp [EMAIL PROTECTED] wrote:

 Dear list,

 I am using heatmap.2(x) to draw a heatmap. Ideally, I want the matrix
 x clustered only by columns, keeping the original order of rows unchanged.
 Is there a way to do that in heatmap.2()?

 Thanks a lot! Any suggestions will be appreciated!

 Best,
  Allen







Re: [R] logistic regression using glm,which y is set to be 1

2007-12-05 Thread Bin Yue

 Dear all:
  By comparing glmresult$y and model.response(model.frame(glmresult)), I
have found out which one is set to be TRUE and which FALSE. But it seems
that to fit a logistic regression, a logit (or logistic) transformation
has to be done before the regression.
  Does anybody know how to obtain the result of that transformation? It is
hard to settle this before knowing how R actually works. I have read some
books and the ?glm help file, but what they told me was not sufficient.
   Best wishes,
 Bin Yue
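As a hedged illustration (mtcars is a stand-in data set, not from this thread): glm() applies the logit link internally, so the response is left untransformed, and predict() exposes both scales after the fit:

```r
fit <- glm(am ~ hp, data = mtcars, family = binomial)
eta <- predict(fit, type = "link")         # linear predictor on the logit scale
p   <- predict(fit, type = "response")     # fitted probabilities
all.equal(unname(eta), unname(qlogis(p)))  # TRUE: logit of p recovers eta
```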


Weiwei Shi wrote:
 
 Dear Bin:
 you type
 ?glm
 in R console and you will find the Detail section of help file for glm
 
 i pasted it for you too
 
 Details
 
 A typical predictor has the form response ~ terms where response is the
 (numeric) response vector and terms is a series of terms which specifies a
 linear predictor for response. For binomial and quasibinomial families the
 response can also be specified as a factor (when
 the first level denotes failure and all others success) or as a two-column
 matrix with the columns giving the numbers of successes and failures. A
 terms specification of the form first + second indicates all the terms in
 first together with all the terms in second with duplicates removed. The
 terms in the formula will be re-ordered so that main effects come first,
 followed by the interactions, all second-order, all third-order and so on:
 to avoid this pass a terms object as the formula.
 
 A specification of the form first:second indicates the set of terms
 obtained by taking the interactions of all terms in first with all terms
 in
 second. The specification first*second indicates the *cross* of first and
 second. This is the same as first + second + first:second.
 
 glm.fit is the workhorse function.
 
 If more than one of etastart, start and mustart is specified, the first in
 the list will be used. It is often advisable to supply starting values for
 a quasi family,
 and also for families with unusual links such as gaussian(log).
 
 All of weights, subset, offset, etastart and mustart are evaluated in the
 same way as variables in formula, that is first in data and then in the
 environment of formula.
 
 
 
 On Dec 5, 2007 10:41 PM, Bin Yue [EMAIL PROTECTED] wrote:
 

 Dear Marc Schwartz:
  When I ask R 2.6.0 for Windows, the information it gives does not contain
 much about family=binomial.
  You said that there is a Details section of ?glm. I want to read it
 thoroughly. Could you tell me where and how I can find the Details
 section of ?glm?
   Thank you very much .
   Best regards,
  Bin Yue



 Marc Schwartz wrote:
 
 
  On Wed, 2007-12-05 at 18:06 -0800, Bin Yue wrote:
  Dear friends :
  using the glm function and setting family=binomial, I got a list
 of
  coefficients.
  The coefficients reflect the effects  of predicted variables on the
  probability of the response to be 1.
  My response variable consists of  A and D . I don't know which
 level
  of
  the response was set to be 1.
  is the first element of the response set to be 1?
 Thank all in advance.
 Regards,
 
  -
  Best regards,
  Bin Yue
 
 
  As per the Details section of ?glm:
 
  For binomial and quasibinomial families the response can also be
  specified as a factor (when the first level denotes failure and all
  others success) ...
 
 
  So use:
 
levels(response.variable)
 
  and that will give you the factor levels, where the first level is 0
 and
  the second level is 1.
 
  If you work in a typical English based locale with default alpha based
  level ordering, it will likely be A (Alive?) is 0 and D (Dead?) is 1.
 
  HTH,
 
  Marc Schwartz
 
 
 


 -
 Best regards,
 Bin Yue

 *
 student for a Master program in South Botanical Garden , CAS

 --
 View this message in context:
 http://www.nabble.com/logistic-regression-using-%22glm%22%2Cwhich-%22y%22-is-set-to-be-%221%22-tf4953617.html#a14185819
 Sent from the R help mailing list archive at Nabble.com.


 
 
 
 -- 
 Weiwei Shi, Ph.D
 Research Scientist
 GeneGO, Inc.
 
 Did you always know?
 No, I did not. But I believed...
 ---Matrix III
 
   [[alternative HTML version deleted]]
 

Re: [R] Which Linux OS on Athlon amd64, to comfortably run R?

2007-12-05 Thread Prof Brian Ripley

On Thu, 6 Dec 2007, Emmanuel Charpentier wrote:


Prof Brian Ripley wrote:

Note that Ottorino has only 1GB of RAM installed, which makes a 64-bit
version of R somewhat moot.  See chapter 8 of

http://cran.r-project.org/doc/manuals/R-admin.html


Thank you for this reminder/tip! I haven't re-read this document since
... oh, my ... very early (1.x?) versions, back when my favorite
peeve against R was the fixed memory allocation scheme.

I would have thought that 64-bit machines could take advantage of a
wider bus and (marginally?) faster instructions to balance the larger
pointers. Am I mistaken?


Yes, it is more complex than that.  If you run 32-bit instructions on an 
x86_64, the physical bus is the same as when you run 64-bit instructions. 
The larger code usually means the CPU caches spill more often, although 
x86_64 chips expose more general-purpose registers in 64-bit mode than in 
32-bit mode, which allows better scheduling.


The R-admin manual reports on some empirical testing.  But when you have 
limited RAM the larger code and data for a 64-bit build will cause more 
swapping and that is likely to dominate performance issues on large 
problems.


Note that the comparisons depend on both the chip and the OS: it seems 
that on Mac OS 10.5 on a Core 2 Duo the 64-bit version is faster (on small 
examples).  The original enquiry was about 'amd64 linux', but I've checked 
Intel Core 2 Duo as well: on my box 64-bit builds are faster than 32-bit 
ones, whereas the reverse is true for Opterons.  So it seems that the 
architectural differences of Core 2 Duo vs AMD64 do affect the issue.


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595


Re: [R] Using expression in Hmisc Key()

2007-12-05 Thread Dieter Menne
Michael Kubovy kubovy at virginia.edu writes:

 
 Dear r-helpers,
 
 How do I tell xYplot() and Key() that I want the labels in italic?
 

 Key(x = 0.667, y = 0.833,
     other = list(title = expression(italic(v)), cex.title = 1,
                  labels = c(expression(italic(b)), expression(italic(c)),
                             expression(italic(d)))))
 dev.off()

Michael, 

I submitted a similar case to the bug tracker last week. Maybe you can raise
that enhancement request to a defect:

http://biostat.mc.vanderbilt.edu/trac/Hmisc/ticket/21

Dieter

---
The Key function generated by some plot commands should have a ... parameter.
Otherwise, the ... in rlegend is useless, and it would be nice to be able to
suppress the box, for example.

Key = function (x = NULL, y = NULL, lev = c("No Fail", "Fail"), pch = c(16, 1))
{   ## .. part omitted

    rlegend(x, y, legend = lev, pch = pch, ...)
    invisible()
}
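If Key() gained the proposed ... parameter, extra options could then be forwarded through rlegend() to legend(); a hypothetical call (bty as a pass-through is an assumption, not a current Key argument):

```r
Key(x = 0.9, y = 0.8, bty = "n")  # hypothetical: suppress the legend box
```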
