Re: [R] Error Building From Source

2005-04-14 Thread Uwe Ligges
Alan Arnholt wrote:
Greetings:
I am trying to build R-2.0.1 from source on Windows.  My path is set to:
.;C:\RStools;C:\MinGW\bin;C:\perl\bin;C:\texmf\miktex\bin;C:\HTMLws\;C:\R201\R201\bin;%System
Root%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\Program Files\
Common Files\Adaptec Shared\System;C:\LINGO9\
and MkRules has been edited and reads
# path (possibly full path) to same version of R on the host system
R_EXE=C:/R201/R201/bin
That setting lives under the "cross-compilation settings" header, hence
don't change it when you are compiling on Windows itself. The default,
"R", is correct here.
You don't need to specify this path anywhere in MkRules.

Uwe Ligges

when I type make I get the following:
-- Making package base 
  adding build stamp to DESCRIPTION
C:/R201/R201/bin: not found
make[4]: *** [frontmatter] Error 127
make[3]: *** [all] Error 2
make[2]: *** [pkg-base] Error 2
make[1]: *** [rpackage] Error 2
make: *** [all] Error 2
Any hints as to why it says it can not find C:/R201/R201/bin?  Any help
would be appreciated.
Alan
Alan T. Arnholt
Associate Professor
Dept. of Mathematical Sciences
Appalachian State University
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R_LIBS difficulty ?

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, François Pinard wrote:
[Prof Brian Ripley]
[François Pinard]

Now using this line within `~/.Renviron':
 R_LIBS=/home/pinard/etc/R
my tiny package is correctly found by R.  However, R does not seem to
see any library within that directory if I rather use either of:
 R_LIBS=$HOME/etc/R
 R_LIBS="$HOME/etc/R"

Correct, and as documented.  See the description in ?Startup,
which says things like ${foo-bar} are allowed but not $HOME, and
not ${HOME}/bah or even ${HOME}.  But R_LIBS=~/etc/R will work in
.Renviron since ~ is interpreted by R in paths.
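The distinction can be checked without restarting R; a small sketch using a throwaway file (readRenviron() applies the same parsing rules as the startup processing of ~/.Renviron):

```r
# Only the ${foo-bar} form is documented: use FOO's value if set, else bar.
f <- tempfile()
writeLines("MYLIBS=${SOME_SURELY_UNSET_VAR-/home/pinard/etc/R}", f)
readRenviron(f)
Sys.getenv("MYLIBS")   # "/home/pinard/etc/R", via the documented fallback
```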
Hello, Brian (or should I rather write Prof Ripley?).
Thanks for having replied.  I was not sure how to read "but not", which
could be attached either to "which says" or to "are allowed".  My
English is not fully solid, and I initially read it as the latter, but
the former association is probably the correct one.
It does not say.  You have gone beyond what it says is allowed.
The fact is the documentation never says that `$HOME' or `${HOME}' are
forbidden.  It is rather silent on the subject, except maybe for this
sentence: "value is processed in a similar way to a Unix shell" in the
Details section, which vaguely but undoubtedly suggests that `$HOME' and
`${HOME}' might be allowed.  Using `~/' is not especially documented
either, except from the Examples section, where it is used.  I probably
thought it was an example of how shell-alike R processes `~/.Renviron'.
Yes, it is silent. And silence means it is not documented to work.
The last writing (I mean, something similar) is suggested somewhere in
the R manuals (but I do not have the manual with me right now to give
the exact reference, I'm in another town).

It is not mentioned in an R manual, but it is mentioned in the FAQ.
I tried checking in the FAQ.  By the way, http://www.r-project.org
presents a menu on the left, and there is a group of items under the
title `Documentation'.  `FAQs' is shown under that title, but is not
clickable.  I would presume it was meant to be?  However, the `Other'
item is itself clickable, and offers a link to what appears to be an
FAQs page.
The only thing I saw, in item 5.2 of the FAQ ("How can add-on packages
be installed?"), is that one may use `$HOME/' when defining `R_LIBS' in
a Bourne shell profile, or _preferably_ use `~/' when defining `R_LIBS'
in the file `~/.Renviron'.  The FAQ does not really say that `$HOME' is
forbidden.  The FAQ then refers to `?Startup' for more information, and
`?Startup' is not clear on this point, in my opinion at least.
R_LIBS=$HOME/etc/R will work in a shell (and R_LIBS=~/etc/R may not).

Another hint that it could be expected to work is that the same
`~/.Renviron' once contained the line:

 R_BROWSER=$HOME/bin/links

which apparently worked as expected.  (This `links' script launches
the real program with `-g' appended whenever `DISPLAY' is defined.)

Yes, but that was not interpreted by R, rather a shell script called by R.
Granted, thanks for pointing this out.
The documentation does not really say either (or else I missed it) if
the value of R_BROWSER is given to exec, or given to an exec'ed shell.
If a shell is called, it means in particular that we can use options,
and this is a useful feature, worth being known I guess.
It is platform-dependent, and indeed may change over time on a given 
platform.

The trick in reading technical documentation is to read what it says, and 
not assume what it does not say.

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK  Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] code for index of canor analysis

2005-04-14 Thread ronggui
I have searched the web and not found code to calculate these indices,
so I wrote one.  If code for such indices already exists, I hope you
can let me know.

This is my first R code.  I am sending it to the list and hope useRs
can give me some advice to improve it, or check whether I have made any
mistakes.  I appreciate your suggestions.

I have checked the results roughly against SPSS's.  Though I do not
know exactly how SPSS calculates the redundancy index, my results are
similar to SPSS's.

Thank you.


--
cancor.index <- function(object, x, y, center = TRUE, scale = FALSE) {
    ## 'object' is the value of cancor(x, y); 'x' and 'y' are the data matrices
    x <- scale(x, center = center, scale = scale)
    y <- scale(y, center = center, scale = scale)
    ncor <- length(object$cor)      # number of canonical variables
    nx <- nrow(object$xcoef)        # number of X variables
    ny <- nrow(object$ycoef)        # number of Y variables
    xscore <- x %*% object$xcoef[, 1:ncor]
    colnames(xscore) <- paste("can", 1:ncor, "x", sep = ".")
    yscore <- y %*% object$ycoef[, 1:ncor]
    colnames(yscore) <- paste("can", 1:ncor, "y", sep = ".")
    ## canonical scores
    eigenvalue <- object$cor^2 / (1 - object$cor^2)   # eigenvalues (lambda)
    x.xscore <- cor(x, xscore)
    y.yscore <- cor(y, yscore)
    ## canonical loadings
    y.xscore <- cor(y, xscore)
    x.yscore <- cor(x, yscore)
    ## structure loadings / cross loadings
    prop.y <- diag(crossprod(y.yscore) / ny)
    prop.x <- diag(crossprod(x.xscore) / nx)
    ## proportion of variance accounted for by a set's own canonical variates
    cr.square <- as.vector(object$cor^2)              # canonical R-square
    RD.yy <- cr.square * prop.y
    RD.xx <- cr.square * prop.x
    ## redundancy: proportion of variance accounted for by the opposite
    ## canonical variates
    index <- list(xscore = xscore, yscore = yscore,
                  eigenvalue = eigenvalue,
                  cr.square = cr.square,
                  can.loadings.x = x.xscore, can.loadings.y = y.yscore,
                  cros.loadings.y = y.xscore, cros.loadings.x = x.yscore,
                  prop.var.of.Y.by.CV.Y = prop.y,
                  prop.var.of.X.by.CV.X = prop.x,
                  prop.var.of.Y.by.CV.X = RD.yy,
                  prop.var.of.X.by.CV.Y = RD.xx)
    class(index) <- "cancor.index"
    index
}
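A quick sanity check of the scores computed above, using stats::cancor directly on made-up data (a sketch; the random matrices here are only for illustration):

```r
set.seed(42)
X <- matrix(rnorm(200 * 3), 200, 3)   # made-up X block
Y <- matrix(rnorm(200 * 2), 200, 2)   # made-up Y block
cc <- cancor(X, Y)
ncor <- length(cc$cor)
xscore <- scale(X, scale = FALSE) %*% cc$xcoef[, 1:ncor]
yscore <- scale(Y, scale = FALSE) %*% cc$ycoef[, 1:ncor]
# the first pair of canonical scores reproduces the first canonical correlation
abs(cor(xscore[, 1], yscore[, 1]))    # equals cc$cor[1]
```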

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Wrapping long labels in barplot(2)

2005-04-14 Thread Jan P. Smit
Dear Tom,
Yes, this works great.
Many thanks and best regards,
Jan
Mulholland, Tom wrote:
I think this might have been my code
mapply(paste,strwrap(levels(ncdata$Chapter),18,simplify = FALSE),collapse = 
"\n")
Tom

-Original Message-
From: Jan P. Smit [mailto:[EMAIL PROTECTED]
Sent: Thursday, 14 April 2005 5:15 PM
To: Mulholland, Tom
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Wrapping long labels in barplot(2)
Dear Tom,
Many thanks. I think this gets me in the right direction, but 
concatenates all levels into one long level. Any further thoughts?

Best regards,
Jan
Mulholland, Tom wrote:
This may not be the best way but in the past I think I have 
done something like
levels(x) <- paste(strwrap(levels(x),20,prefix = 
""),collapse = "\n")
Tom

-Original Message-
From: Jan P. Smit [mailto:[EMAIL PROTECTED]
Sent: Thursday, 14 April 2005 11:48 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Wrapping long labels in barplot(2)
I am using barplot, and barplot2 in the gregmisc bundle, in the 
following way:

barplot2(sort(xtabs(expend / 1000 ~ theme)),
   col = c(mdg7, mdg8, mdg3, mdg1), horiz = T, las = 1,
   xlab = "$ '000", plot.grid = T)
The problem is that the values of 'theme', which is a factor, are in
some cases rather long, so that I would like to wrap/split them at a
space once they exceed, say, 20 characters. What I'm doing now is
specifying names.arg manually with '\n' where I want the breaks, but I
would like to automate the process.

I've looked for a solution using 'strwrap', but am not sure how to
apply it in this situation.

Jan Smit
Consultant
Economic and Social Commission for Asia and the Pacific
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html




Re: [R] Factor Analysis Biplot

2005-04-14 Thread Jari Oksanen
On Fri, 2005-04-15 at 12:49 +1200, Brett Stansfield wrote:
> Dear R
Dear S,

> When I go to do the biplot
> 
> biplot(eurofood.fa$scores, eurofood$loadings)
> Error in 1:p : NA/NaN argument

Potential sources of the error (guessing; insufficient detail was given
in the message):

- you ask for scores from eurofood.fa but loadings from eurofood: one of
these names may be wrong.
- you did not request scores in factanal() (they are not returned by
default; you have to specify 'scores').

> 
> Loadings:
>   Factor1 Factor2
> RedMeat0.561  -0.112 
> WhiteMeat  0.593  -0.432 
> Eggs   0.839  -0.195 
> Milk   0.679 
> Fish   0.300   0.951 
> Cereals   -0.902  -0.267 
> Starch 0.542   0.253 
> Nuts  -0.760 
> Fr.Veg-0.145   0.325
> 
The values below the cutoff are there, but they are not displayed.  To
see this, you may try:

unclass(eurofood$loadings)
print(eurofood$loadings, cutoff=0)
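Both checks can be sketched on a built-in dataset; mtcars stands in here for the eurofood data, which we do not have, and requesting scores in factanal() also avoids the second pitfall above:

```r
fa <- factanal(mtcars[, 1:6], factors = 2, scores = "regression")
print(fa$loadings, cutoff = 0)    # every loading printed, none blanked out
round(unclass(fa$loadings), 3)    # the raw matrix underneath
biplot(fa$scores, fa$loadings)    # works because scores were requested
```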

cheers, J
-- 
Jari Oksanen -- Dept Biology, Univ Oulu, 90014 Oulu, Finland
email [EMAIL PROTECTED], homepage http://cc.oulu.fi/~jarioksa/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] how can get rid of the level in the table

2005-04-14 Thread Cuichang Zhao
hello,
1. When I get a column of data from a table, it always comes with
levels attached.  For example, if I have a table = (v1, v2) with
table$v1 = (1, 2, 3), and col1 <- table$v1, then levels are assigned to
the column: 1 is level 1, 2 is level 2, 3 is level 3, etc.
However, when I take col1[3], which is 3, and add it to a list, what
appears is not simply the value, because the level comes along with it.
In fact, the data I want is just the value 3, without any levels coming
along each time I read from a column of the table.  How can I do that?

2. How can I read a file from a specific folder, and how can I create a
folder and write a file to that folder?
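A sketch of the usual answers to both points (assuming v1 was read as a factor, e.g. by read.table; tempdir() stands in here for a real folder):

```r
# Point 1: a factor holds level codes; convert via character to get the values.
col1 <- factor(c(10, 20, 30))
as.numeric(col1)                 # 1 2 3 -- the level codes, not the data
as.numeric(as.character(col1))   # 10 20 30 -- the actual values

# Point 2: build paths with file.path() and create folders with dir.create().
d <- file.path(tempdir(), "myfolder")
dir.create(d, showWarnings = FALSE)
write.csv(data.frame(x = 1:3), file.path(d, "out.csv"), row.names = FALSE)
dat <- read.csv(file.path(d, "out.csv"))
```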
 
 
Thank you very much.
 
C-Ming
 
April 14,2005




[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] function corresponding to map of perl

2005-04-14 Thread Wolfram Fischer
--- In reply to: ---
>Date:15.04.05 08:08 (+0200)
>From:Wolfram Fischer <[EMAIL PROTECTED]>
>Subject: [R] function corresponding to map of perl
>
> Is there a function in R that corresponds to the
> function ``map'' of perl?
> 
> It could be called like:
>   vector.a <- map( vector.b, FUN, args.for.FUN )
> 
> It should execute for each element ele.b of vector.b:
>   FUN( ele.b, args.for.FUN )
> 
> It should return a vector (or data.frame) of the results
> of the calls of FUN.
> 
> It nearly works using:
>apply( data.frame( vector.b ), 1, FUN, args.for.FUN )
> But when FUN is called, ele.b from vector.b is not known.

Here I made a mistake. I realised now that ``apply'' does the job,
e.g.
apply( data.frame( 1:3 ), 1, paste, sep='', "X" )
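For the record, sapply()/lapply() are the closest analogues of perl's map and avoid the data.frame detour (a sketch):

```r
# Each element is passed to FUN in turn; extra arguments follow it.
sapply(1:3, paste, "X", sep = "")    # "1X" "2X" "3X"
lapply(1:3, function(e) e^2)         # a list: 1, 4, 9 (like map in list context)
```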

Wolfram

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] function corresponding to map of perl

2005-04-14 Thread Wolfram Fischer
Is there a function in R that corresponds to the
function ``map'' of perl?

It could be called like:
vector.a <- map( vector.b, FUN, args.for.FUN )

It should execute for each element ele.b of vector.b:
FUN( ele.b, args.for.FUN )

It should return a vector (or data.frame) of the results
of the calls of FUN.

It nearly works using:
 apply( data.frame( vector.b ), 1, FUN, args.for.FUN )
But when FUN is called, ele.b from vector.b is not known.

Thanks - Wolfram

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Error Building From Source

2005-04-14 Thread Alan Arnholt
Greetings:

I am trying to build R-2.0.1 from source on Windows.  My path is set to:

.;C:\RStools;C:\MinGW\bin;C:\perl\bin;C:\texmf\miktex\bin;C:\HTMLws\;C:\R201\R201\bin;%System
Root%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\Program Files\
Common Files\Adaptec Shared\System;C:\LINGO9\

and MkRules has been edited and reads

# path (possibly full path) to same version of R on the host system
R_EXE=C:/R201/R201/bin

when I type make I get the following:

-- Making package base 
  adding build stamp to DESCRIPTION
C:/R201/R201/bin: not found
make[4]: *** [frontmatter] Error 127
make[3]: *** [all] Error 2
make[2]: *** [pkg-base] Error 2
make[1]: *** [rpackage] Error 2
make: *** [all] Error 2

Any hints as to why it says it can not find C:/R201/R201/bin?  Any help
would be appreciated.

Alan

Alan T. Arnholt
Associate Professor
Dept. of Mathematical Sciences
Appalachian State University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Wrapping long labels in barplot(2)

2005-04-14 Thread Mulholland, Tom
I think this might have been my code

mapply(paste,strwrap(levels(ncdata$Chapter),18,simplify = FALSE),collapse = 
"\n")

Tom

> -Original Message-
> From: Jan P. Smit [mailto:[EMAIL PROTECTED]
> Sent: Thursday, 14 April 2005 5:15 PM
> To: Mulholland, Tom
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] Wrapping long labels in barplot(2)
> 
> 
> Dear Tom,
> 
> Many thanks. I think this gets me in the right direction, but 
> concatenates all levels into one long level. Any further thoughts?
> 
> Best regards,
> 
> Jan
> 
> 
> Mulholland, Tom wrote:
> > This may not be the best way but in the past I think I have 
> done something like
> > 
> > levels(x) <- paste(strwrap(levels(x),20,prefix = 
> ""),collapse = "\n")
> > 
> > Tom
> > 
> > 
> >>-Original Message-
> >>From: Jan P. Smit [mailto:[EMAIL PROTECTED]
> >>Sent: Thursday, 14 April 2005 11:48 AM
> >>To: r-help@stat.math.ethz.ch
> >>Subject: [R] Wrapping long labels in barplot(2)
> >>
> >>
> >>I am using barplot, and barplot2 in the gregmisc bundle, in the 
> >>following way:
> >>
> >>barplot2(sort(xtabs(expend / 1000 ~ theme)),
> >> col = c(mdg7, mdg8, mdg3, mdg1), horiz = T, las = 1,
> >> xlab = "$ '000", plot.grid = T)
> >>
> >>The problem is that the values of 'theme', which is a 
> factor, are in 
> >>some cases rather long, so that I would like to wrap/split 
> them at a 
> >>space once they exceed, say, 20 characters. What I'm doing now is 
> >>specifying names.arg manually with '\n' where I want the 
> >>breaks, but I 
> >>would like to automate the process.
> >>
> >>I've looked for a solution using 'strwrap', but am not sure 
> >>how to apply 
> >>it in this situation.
> >>
> >>Jan Smit
> >>
> >>Consultant
> >>Economic and Social Commission for Asia and the Pacific
> >>
> >>__
> >>R-help@stat.math.ethz.ch mailing list
> >>https://stat.ethz.ch/mailman/listinfo/r-help
> >>PLEASE do read the posting guide! 
> >>http://www.R-project.org/posting-guide.html
> >>
> > 
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> > 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Help with "MERGE" gratefully accepted

2005-04-14 Thread Andrew Stryker
Briggs, Meredith M <[EMAIL PROTECTED]> wrote on 2005-Apr-15:
> 
> Hello

Hi,

> How do I use function 'MERGE" to combine the FILE A and FILE B below to make 
> FILE C?
> 
> Thank you
> 
> 
> 
>  FILE A  
>   140151167   
>   30.1 11.4   40 
> 
> FILE B
> 
>   140   167   
>   5.7  30.3
> 
> FILE C
> 
>   140151167   
>   30.1 11.4  40   
>   5.7   NA30.3

Your problem is much easier to solve if the data are arranged
differently.  Say,

File A
ID, VAR_A
140, 30.1
151, 11.4
167, 40

File B
ID, VAR_B
140, 5.7
167, 30.3

File C
ID, VAR_C, VAR_D
140, 30.1, 5.7
151, 11.4, NA
167, 40, 30.3


Those files can be read with read.csv into data frames.  A simple

ab <- merge(fa, fb)

where fa is the data frame for file A and fb the same for file
B, will get you half the way there.  Pay attention to the all.x and
all.y options.

I suspect there is a way to do the transposition in R.  However,
indexing records as rows and fields as columns is the standard
approach.  If you follow this convention, you will find that many
tools, not just R, are much more likely to work with you.
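The transposition itself is a few lines once the rows are read in; a sketch with the FILE A numbers (the column names ID and VAR_A are assumed, as above):

```r
# Read the row-oriented layout, then flip row 1 into IDs and row 2 into values.
wide <- read.csv(text = "140,151,167\n30.1,11.4,40", header = FALSE)
fa <- data.frame(ID    = unname(unlist(wide[1, ])),
                 VAR_A = unname(unlist(wide[2, ])))
fa
#    ID VAR_A
# 1 140  30.1
# 2 151  11.4
# 3 167  40.0
```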

Good luck,

Andrew

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Help with "MERGE" gratefully accepted

2005-04-14 Thread Austin, Matt
> dat1 <- data.frame(var1=c(140, 151, 167), var2=c(30.1, 11.4, 40))
> dat2 <- data.frame(var1=c(140, 167), var3=c(5.7, 30.3))
> merge(dat1, dat2, all=TRUE)
  var1 var2 var3
1  140 30.1  5.7
2  151 11.4   NA
3  167 40.0 30.3


Matt Austin
Statistician

Amgen 
One Amgen Center Drive
M/S 24-2-C
Thousand Oaks CA 93021
(805) 447 - 7431

"Today has the fatigue of a Friday and the desperation of a Monday"  -- S.
Pearce 2005


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Briggs, Meredith M
Sent: Thursday, April 14, 2005 18:26 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Help with "MERGE" gratefully accepted






Hello

How do I use function 'MERGE" to combine the FILE A and FILE B below to make
FILE C?

Thank you



 FILE A  
140151167   
30.1 11.4   40 

FILE B

140   167   
5.7  30.3

FILE C

140151167   
30.1 11.4  40   
5.7   NA30.3

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] cross compiling R for Windows under Linux

2005-04-14 Thread Lars Schouw
Professor Ripley

I am very honoured to use R after all your hard work.

It looks as if the build can't see the paths under
/users/ripley/mingw, even though I have set HEADER
correctly.

POINT: I tried to include the missing header file float.h
from dynload.c directly.  It can't see the file!

It looks as if I have to run configure myself again to
set the --prefix correctly.

When I added --verbose to the make flags I got this:

i586-mingw32-gcc -isystem
/home/schouwl/unpack/mingw/include --verbose -O2 -Wall
-pedantic -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c dynload.c -o dynload.o
Reading specs from
/export/home/schouwl/unpack/mingw/bin/../lib/gcc/i586-mingw32/3.4.2/specs
Configured with: ../configure
--prefix=/users/ripley/mingw --target=i586-mingw32
--enable-threads --enable-hash-synchronization
--disable-nls
Thread model: win32
gcc version 3.4.2 (mingw-special)

/export/home/schouwl/unpack/mingw/bin/../libexec/gcc/i586-mingw32/3.4.2/cc1
-quiet -v -I../include -I. -iprefix
/export/home/schouwl/unpack/mingw/bin/../lib/gcc/i586-mingw32/3.4.2/
-DHAVE_CONFIG_H -DR_DLL_BUILD -isystem
/home/schouwl/unpack/mingw/include dynload.c -quiet
-dumpbase dynload.c -mtune=pentium -auxbase-strip
dynload.o -O2 -Wall -pedantic -version -o
/tmp/ccGf2Nz4.s
ignoring nonexistent directory
"/export/home/schouwl/unpack/mingw/bin/../lib/gcc/i586-mingw32/3.4.2/../../../../i586-mingw32/sys-include"
ignoring nonexistent directory
"/users/ripley/mingw/lib/gcc/i586-mingw32/3.4.2/include"
ignoring nonexistent directory
"/users/ripley/mingw/i586-mingw32/sys-include"
ignoring nonexistent directory
"/users/ripley/mingw/i586-mingw32/include"
#include "..." search starts here:
#include <...> search starts here:
 ../include
 .
 /home/schouwl/unpack/mingw/include

/export/home/schouwl/unpack/mingw/bin/../lib/gcc/i586-mingw32/3.4.2/include

/export/home/schouwl/unpack/mingw/bin/../lib/gcc/i586-mingw32/3.4.2/../../../../i586-mingw32/include
End of search list.
GNU C version 3.4.2 (mingw-special) (i586-mingw32)
compiled by GNU C version 3.4.2.
GGC heuristics: --param ggc-min-expand=100 --param
ggc-min-heapsize=131072
dynload.c: In function `R_loadLibrary':
dynload.c:97: warning: implicit declaration of
function `_controlfp'
dynload.c:97: error: `_MCW_IC' undeclared (first use
in this function)
dynload.c:97: error: (Each undeclared identifier is
reported only once
dynload.c:97: error: for each function it appears in.)
dynload.c:98: warning: implicit declaration of
function `_clearfp'
dynload.c:102: error: `_MCW_EM' undeclared (first use
in this function)
dynload.c:102: error: `_MCW_RC' undeclared (first use
in this function)
dynload.c:102: error: `_MCW_PC' undeclared (first use
in this function)
make[3]: *** [dynload.o] Error 1
make[2]: *** [../../bin/R.dll] Error 2
make[1]: *** [rbuild] Error 2
make: *** [all] Error 2

I then tried to have the sysadmin create a soft link
from /users/ripley/mingw to
/export/home/schouwl/unpack/mingw.
That did not help either.

Regards
Lars Schouw

--- Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
> *If* you really have the header paths set correctly
> it does work, and has 
> been tested by several people.
> 
> Do read MkRules more carefully and think about how
> your setting differs 
> from the example given.  You have *not* as you claim
> 
> # Set this to where the mingw32 include files are.
> It must be accurate.
>
HEADER=/users/ripley/R/cross-tools4/i586-mingw32/include
> 
> if you used my cross-compiler build (and gave no
> credit).  Hint: float.h 
> is a `mingw32 include file'.
> 
> As the comment says, user error here is disastrous,
> so please take the 
> hint.
> 
> On Thu, 14 Apr 2005, Lars Schouw wrote:
> 
> > Hi
> >
> > I tried to cross compile R under Linux but get an
> > error.
> >
> > i586-mingw32-gcc -isystem
> > /home/schouwl/unpack/mingw/include -O2 -Wall
> -pedantic
> > -I../include -I. -DHAVE_CONFIG_H -DR_DLL_BUILD  -c
> > dynload.c -o dynload.o
> > dynload.c: In function `R_loadLibrary':
> > dynload.c:94: warning: implicit declaration of
> > function `_controlfp'
> > dynload.c:94: error: `_MCW_IC' undeclared (first
> use
> > in this function)
> > dynload.c:94: error: (Each undeclared identifier
> is
> > reported only once
> > dynload.c:94: error: for each function it appears
> in.)
> > dynload.c:95: warning: implicit declaration of
> > function `_clearfp'
> > dynload.c:99: error: `_MCW_EM' undeclared (first
> use
> > in this function)
> > dynload.c:99: error: `_MCW_RC' undeclared (first
> use
> > in this function)
> > dynload.c:99: error: `_MCW_PC' undeclared (first
> use
> > in this function)
> > make[3]: *** [dynload.o] Error 1
> > make[2]: *** [../../bin/R.dll] Error 2
> > make[1]: *** [rbuild] Error 2
> > make: *** [all] Error 2
> >
> >
> > This is the that was reported in the mailing list
> > before.
> >
>
http://tolstoy.newcastle.edu.au/R/devel/04/12/1571.html
> >
> > I have set the HEADER correct in MkRules
> 
> No, you did not.
> 
> > HEADER=/home/schouwl/unpack/mingw/include
> > The file f

Re: [R] Wrapping long labels in barplot(2)

2005-04-14 Thread Jan P. Smit
Dear Marc,
Excellent, this is exactly what I was looking for. The only thing I had 
to change in your code was turning 'short.labels' into a factor.

Many thanks and best regards,
Jan
Marc Schwartz wrote:
Building on Tom's reply, the following should work:

labels <- factor(paste("This is a long label ", 1:10))
labels
 [1] This is a long label  1  This is a long label  2 
 [3] This is a long label  3  This is a long label  4 
 [5] This is a long label  5  This is a long label  6 
 [7] This is a long label  7  This is a long label  8 
 [9] This is a long label  9  This is a long label  10
10 Levels: This is a long label  1 ... This is a long label  9


short.labels <- sapply(labels, function(x) paste(strwrap(x,
 10), collapse = "\n"), USE.NAMES = FALSE)

short.labels
 [1] "This is\na long\nlabel 1"  "This is\na long\nlabel 2" 
 [3] "This is\na long\nlabel 3"  "This is\na long\nlabel 4" 
 [5] "This is\na long\nlabel 5"  "This is\na long\nlabel 6" 
 [7] "This is\na long\nlabel 7"  "This is\na long\nlabel 8" 
 [9] "This is\na long\nlabel 9"  "This is\na long\nlabel 10"


mp <- barplot2(1:10)
mtext(1, text = short.labels, at = mp, line = 2)

HTH,
Marc Schwartz
On Thu, 2005-04-14 at 16:14 +0700, Jan P. Smit wrote:
Dear Tom,
Many thanks. I think this gets me in the right direction, but 
concatenates all levels into one long level. Any further thoughts?

Best regards,
Jan
Mulholland, Tom wrote:
This may not be the best way but in the past I think I have done something like
levels(x) <- paste(strwrap(levels(x),20,prefix = ""),collapse = "\n")
Tom

-Original Message-
From: Jan P. Smit [mailto:[EMAIL PROTECTED]
Sent: Thursday, 14 April 2005 11:48 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Wrapping long labels in barplot(2)
I am using barplot, and barplot2 in the gregmisc bundle, in the 
following way:

barplot2(sort(xtabs(expend / 1000 ~ theme)),
   col = c(mdg7, mdg8, mdg3, mdg1), horiz = T, las = 1,
   xlab = "$ '000", plot.grid = T)
The problem is that the values of 'theme', which is a factor, are in 
some cases rather long, so that I would like to wrap/split them at a 
space once they exceed, say, 20 characters. What I'm doing now is 
specifying names.arg manually with '\n' where I want the 
breaks, but I 
would like to automate the process.

I've looked for a solution using 'strwrap', but am not sure 
how to apply 
it in this situation.

Jan Smit

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Help with "MERGE" gratefully accepted

2005-04-14 Thread Briggs, Meredith M




Hello

How do I use function 'MERGE" to combine the FILE A and FILE B below to make 
FILE C?

Thank you



 FILE A  
140151167   
30.1 11.4   40 

FILE B

140   167   
5.7  30.3

FILE C

140151167   
30.1 11.4  40   
5.7   NA30.3

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] test ignore

2005-04-14 Thread Briggs, Meredith M

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Factor Analysis Biplot

2005-04-14 Thread Brett Stansfield
Dear R
When I go to do the biplot

biplot(eurofood.fa$scores, eurofood$loadings)
Error in 1:p : NA/NaN argument

 I think this is because the component loadings don't show values for some
variables

Loadings:
  Factor1 Factor2
RedMeat0.561  -0.112 
WhiteMeat  0.593  -0.432 
Eggs   0.839  -0.195 
Milk   0.679 
Fish   0.300   0.951 
Cereals   -0.902  -0.267 
Starch 0.542   0.253 
Nuts  -0.760 
Fr.Veg-0.145   0.325

So how can I get it to do a biplot?  Is there a way to make R recognise
component loadings below the cutoff value?

Brett Stansfield

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Factor Analysis Biplot

2005-04-14 Thread Brett Stansfield
Dear R help

I am having difficulty doing a biplot of the first two factors of a
factor analysis.  I presume it is because the Factor2 values for Milk
and Nuts are not displayed in the component loadings.

Loadings:
  Factor1 Factor2
RedMeat0.561  -0.112 
WhiteMeat  0.593  -0.432 
Eggs   0.839  -0.195 
Milk   0.679 
Fish   0.300   0.951 
Cereals   -0.902  -0.267 
Starch 0.542   0.253 
Nuts  -0.760 
Fr.Veg-0.145   0.325

It has no problem doing a normal plot using
plot(eurofood.fa$scores[,1], eurofood.fa$scores[,2])

But when I ask for a biplot I get

biplot(eurofood.fa$scores[,1], eurofood.fa$scores[,2])
Error in 1:n : NA/NaN argument

What can I do to overcome this??
Brett Stansfield

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] use the source code in my own c code

2005-04-14 Thread Mehrnoush Khojasteh
This is just a test!

Please disregard

 

 


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Display execution in a function

2005-04-14 Thread Sebastien Durand
Dear all I hope you haven't received this message twice,
Here is a simplified version of a function I made:

plotfunc<-function(x){
#x a vector
cat("please select two points","\n")
plot(x)
points<-locator(2)
return(points)
}

Using the latest R, version 2.0.1, for Mac OS X (v1.01 Aqua GUI),
I would like to know what I should do to make
sure that my text line is printed before
the plot is drawn.
I hope you will agree with me that it is not
useful to have the plot displayed, and R waiting
for user input, while no instructions have yet been
displayed!
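One workaround, assuming the Aqua GUI buffers console output, is to flush explicitly before the plot call blocks (a sketch; flush.console() is a no-op on platforms where it is not needed):

```r
plotfunc <- function(x) {
    # x: a vector
    cat("please select two points\n")
    flush.console()   # push the buffered prompt out before blocking
    plot(x)
    locator(2)        # wait for two clicks on the plot
}
```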

Thanks a lot
Sebastien Durand
--
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Inverse of the Laplace Transform/Gaver Stehfest algorithm

2005-04-14 Thread Tolga Uzuner
Hi there,
Is there an implementation of the Gaver-Stehfest algorithm in R
somewhere?  Or some other inversion method?

Thanks,
Tolga
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Display execution in a function

2005-04-14 Thread Sebastien Durand
Dear all,
Here is a simplified version of a function I made:

plotfunc<-function(x){
#x a vector
cat("please select two points","\n")
plot(x)
points<-locator(2)
return(points)
}

Using R for Mac OS X (Aqua GUI version 1.01),
I would like to know what I should do to make 
sure that my text line is printed prior to 
the drawing of the plot.
I hope you will agree with me that it is not 
useful to have the plot displayed and R waiting 
for user input while nothing has yet been 
displayed for instructions!

Thanks a lot
PS: I have tried this with both versions 1.01 
and 1.00, and on a dual-processor G5 as well as on 
a PowerBook G4.

Sebastien Durand
--
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] use the source code in my own c code

2005-04-14 Thread Liaw, Andy
If you are using only lowess, and nothing else in R, you might as well use
the code on netlib.

Andy

> From: Mehrnoush Khojasteh
> 
> Hi there,
> 
> I am trying to use the source code for lowess (lowess.c in 
> the src\appl
> directory of R sources). 
> 
> The problem is that some other files are included in lowess.c that I
> don't know where to find them!
> 
>  
> 
> Any idea what I need to do to get able to use the source code?
> 
>  
> 
> Thanks in advance,
> 
> Mehrnoush
> 
>  
> 
>  
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R_LIBS difficulty ?

2005-04-14 Thread François Pinard
[Prof Brian Ripley]
> [François Pinard]

> >Now using this line within `~/.Renviron':
> >  R_LIBS=/home/pinard/etc/R
> >my tiny package is correctly found by R.  However, R does not seem to
> >see any library within that directory if I rather use either of:
> >  R_LIBS=$HOME/etc/R
> >  R_LIBS="$HOME/etc/R"

> Correct, and as documented.  See the description in ?Startup,
> which says things like ${foo-bar} are allowed but not $HOME, and
> not ${HOME}/bah or even ${HOME}.  But R_LIBS=~/etc/R will work in
> .Renviron since ~ is intepreted by R in paths.

Hello, Brian (or should I rather write Prof Ripley?).

Thanks for having replied.  I was not sure how to read "but not", which
could be associated either with "which says" or "are allowed".  My
English is not fully solid, and I initially read you as meaning the
latter, but it seems the former association is probably the correct one.

The fact is the documentation never says that `$HOME' or `${HOME}' are
forbidden.  It is rather silent on the subject, except maybe for this
sentence: "value is processed in a similar way to a Unix shell" in the
Details section, which vaguely but undoubtedly suggests that `$HOME' and
`${HOME}' might be allowed.  Using `~/' is not especially documented
either, except in the Examples section, where it is used.  I probably
thought it was an example of how shell-like R processes `~/.Renviron'.

> >The last writing (I mean, something similar) is suggested somewhere in
> >the R manuals (but I do not have the manual with me right now to give
> >the exact reference, I'm in another town).

> It is not mentioned in an R manual, but it is mentioned in the FAQ.

I tried checking in the FAQ.  By the way, http://www.r-project.org
presents a menu on the left, and there is a group of items under the
title `Documentation'.  `FAQs' is shown under that title, but is not
clickable.  I would presume it was meant to be?  However, the `Other'
item is itself clickable, and offers a link to what appears to be an
FAQs page.

The only thing I saw, item 5.2 of the FAQ (How can add-on packages be
installed?), says that one may use `$HOME/' while defining `R_LIBS' in a
Bourne shell profile, or _preferably_ use `~/' while defining `R_LIBS'
within the file `~/.Renviron'.  The FAQ does not really say that `$HOME' is
forbidden.  The FAQ then refers to `?Startup' for more information, and
`?Startup' is not clear on this point, in my opinion at least.
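Summarizing the form that does work, according to the thread above:

```r
## In ~/.Renviron: `~' is expanded by R, but $HOME and ${HOME} are not
R_LIBS=~/etc/R
```

In a Bourne shell profile the situation is reversed: there `$HOME/etc/R` works and `~` may not.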

> R_LIBS=$HOME/etc/R will work in a shell (and R_LIBS=~/etc/R may not).

> >Another hint that it could be expected to work is that the same
> >`~/.Renviron' once contained the line:

> >  R_BROWSER=$HOME/bin/links

> >which apparently worked as expected.  (This `links' script launches
> >the real program with `-g' appended whenever `DISPLAY' is defined.)

> Yes, but that was not interpreted by R, rather a shell script called by R.

Granted, thanks for pointing this out.

The documentation does not really say either (or else I missed it)
whether the value of R_BROWSER is passed to exec directly, or to an
exec'ed shell.  If a shell is called, it means in particular that we can
use options, and this is a useful feature, worth knowing I guess.

Once again, thanks for having replied, and for caring.

-- 
François Pinard   http://pinard.progiciels-bpi.ca

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] use the source code in my own c code

2005-04-14 Thread Mehrnoush Khojasteh
Hi there,

I am trying to use the source code for lowess (lowess.c in the src\appl
directory of R sources). 

The problem is that lowess.c includes some other files that I
don't know where to find!

 

Any idea what I need to do to be able to use the source code?

 

Thanks in advance,

Mehrnoush


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] latent class regression

2005-04-14 Thread Wensui Liu
As far as I know, there are two libraries for latent class regression,
flexmix and mmlcr.  Since I don't have experience with either one, can
someone give me some advice on which library is better?

Thank you so much.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re: Fluctuating asymmetry and measurement error

2005-04-14 Thread Chris Longson
Hi Andrew,

Bear with me as it's a while since I did this and I was new to R at
the time, but lme is probably what you're after. Remember that you're
actually not all that interested in _individual_ variance, because FA
is a sample-level property.

You'll want to set up something like:

Treatment Individual Measure Result
1  1  1   foo
1  1  2   bar
1  2  1
1  2  2
2  3  1
2  3  2
2  4  1
2  4  2

Where Result is the absolute value of R-L, assuming you've done the checks for
size-dependence and so on.  Then run an lme, something like:

library(nlme)
test <- lme(Result ~ Treatment, random = list(Individual=~1))

Bearing in mind that you're not actually interested in the
individuals, you can either just include the multiple measures for each
individual and only worry about the remaining variance at the 'Treatment'
level, or you could do another model with 
"random = list(Measure=~1,Individual=~1)", then run:

anova(model1, model2)

This will tell you if the measurement term on its own is contributing
anything useful. In general it's better to have fewer factors, FA
analysis is low enough in power without cluttering it up.

Hope that helps. Now the R-gurus will probably tell you how you should
actually do the lme :)

Regards,
Chris

> Message: 29
> Date: Wed, 13 Apr 2005 15:22:21 +0100
> From: "Andrew Higginson" <[EMAIL PROTECTED]>
> Subject: [R] Fluctuating asymmetry and measurement error
> To: 
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=US-ASCII
> 
> Hi all, 
> 
> Has anyone tested for FA in R? I need to seperate out the variance due to 
> measurement error from variation between individuals (following Palmer & 
> Strobeck 1986). 
> 
> 
> 
> Andy Higginson
> 
> Animal Behaviour and Ecology Research Group
> School of Biology
> University of Nottingham
> NG7 2RD
> U.K.
> 

-- 

Chris Longson
PhD student
Department of Biological Sciences
Macquarie University
+61 2 9850 8190
[EMAIL PROTECTED] | www.tinyurl.com/7x25n

"We found the flat paper rises on its own as it falls, which would not
happen if the force due to air is similar to that on an airfoil."

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Finding an available port for server socket

2005-04-14 Thread jhallman
I've written a Smalltalk application which starts R on another machine and
then communicates with it via sockets.  The functions involved are:

slaveStart <- function(masterHost, masterPort){
  inPort <- availablePort()
  sendSocket(as.character(inPort), masterHost, masterPort)
  socketSlave(inPort)
}

socketSlave <- function(inPort){
  ## listens on inPort.
  on.exit(closeAllConnections(), add = T)
  repeat {
inConn <- socketConnection(port = inPort, server = T, blocking = T)
try(source(inConn, echo = T, prompt.echo = "> ",
   max.deparse.length = 5))
close(inConn)
  }
}

sendSocket <- function(strings, host, port){
  ## open a blocking client socket, put strings on it, and close
  outConn <- socketConnection(host, port, blocking = T)
  writeLines(strings, outConn)
  close(outConn)
}

availablePort <- function(){
  ## find a port that R can listen on
  ## just returns 40001 on Windows
  portsInUse <- 0
  os <- as.vector(Sys.info()["sysname"])
  if(os == "Linux"){
hexTcp <- system("cat /proc/net/tcp | awk '{print $2}'", intern = T)
hexUdp <- system("cat /proc/net/udp | awk '{print $2}'", intern = T)
portsInUse <- hex2numeric(gsub(".*:", "", c(hexTcp[-1], hexUdp[-1])))
  }
  if(os == "SunOS"){  
## use a locally written script that massages output from netstat
portsInUse <- as.numeric(system("/mra/prod/scripts/portsInUse", intern = T))
  } 
  port <- 40001
  while(!is.na(match(port, portsInUse))) port <- port + 1
  port
}

The way this works is that 

(i) The Smalltalk app running on firstMachine finds an available port (say
12345) that it can listen on, starts listening there, and writes 

slaveStart("firstMachine", 12345)

to startUpFile.

(ii) The Smalltalk app does something like this to start R on secondMachine:

ssh secondMachine R < startUpFile > someLogFile

(iii) R sends the inPort port number back to Smalltalk.

(iv)  Whenever Smalltalk wants R to do something, it opens a client connection
  to inPort on secondMachine and writes R code to it.  It closes the
  connection when it finishes writing the R code to it.  If Smalltalk
  wants to get a result returned from R, the code sent will include a call
  to sendSocket() to accomplish this.

(v)   If Smalltalk expects a reply via sendSocket, it resumes listening on
  masterPort for a connection from R.

And so on


All of this works so far, but my availablePort() function is too ugly, and I
don't expect it to work on Windows.  So my question is: Is there a better,
more portable way I can find an available port?  If not, can someone put one
in there?
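One possible Windows branch for availablePort(), in the same spirit as the Linux and SunOS branches above (a sketch, assuming `netstat -an` is on the PATH; the column layout can vary between Windows versions):

```r
## Sys.info()["sysname"] is "Windows" there
if (os == "Windows") {
  out <- system("netstat -an", intern = TRUE)
  ## second whitespace-separated field is the local address, e.g. 0.0.0.0:135
  local <- sapply(strsplit(out, "[[:space:]]+"), function(x) {
    x <- x[x != ""]
    if (length(x) >= 2) x[2] else ""
  })
  has.port <- regexpr(":", local) > 0
  portsInUse <- unique(as.numeric(sub(".*:", "", local[has.port])))
}
```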

Jeff

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] multinom and contrasts

2005-04-14 Thread John Fox
Dear array,

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of array chip
> Sent: Thursday, April 14, 2005 3:35 PM
> To: John Fox
> Cc: R-help@stat.math.ethz.ch
> Subject: RE: [R] multinom and contrasts
> 
> Dear John,
> 
> My dataset has a response variable with 6 levels, and
> 12 independent variables, 10 of them are continuous variable, 
> one is a categorical variable with 2 levels, the other one is 
> also a categorical variable with 4 levels. total 206 
> observations. I attached my dataset with the email "sample.txt".
> 

You are, therefore, fitting a complex model with 5*(1 + 10 + 1 + 3) = 75
parameters to a data set with 206 observations.
 
> library(MASS)
> library(nnet)
> 
> sample<-read.table("sample.txt",sep='\t',header=T,row.names=1)
> wts2<-sample$wts
> sample<-sample[,-4]
> 
> options(contrasts=c('contr.helmert','contr.poly'))
> obj1<-multinom(class~.,sample,weights=wts2,maxit=1000)
> options(contrasts=c('contr.treatment','contr.poly'))
> obj2<-multinom(class~.,sample,weights=wts2,maxit=1000)
> 
> predict(obj1,type='probs')[1:5,]
> predict(obj2,type='probs')[1:5,]
> 
> Interestingly, if I change the values of the variable "bkgd" 
> for 2 observations (from "a", to "f"), then I can get 
> convergence with helmert contrast, but still not converged 
> with treatment contrast:
> 
> sample$bkgd[201]<-'f'
> sample$bkgd[205]<-'f'
> 
> options(contrasts=c('contr.helmert','contr.poly'))
> obj1<-multinom(class~.,sample,weights=wts2,maxit=1000)

If you look at the covariance matrix of the coefficients, you'll see that
there are still problems with the fit:

> summary(diag(vcov(obj1)))
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max. 
-1.029e-20  6.670e+05  3.502e+06  7.781e+06  1.075e+07  4.768e+07 

That negative variance (though essentially 0) doesn't bode well. Indeed,
requiring nearly 1000 iterations for apparent convergence is in itself an
indication of problems.

> options(contrasts=c('contr.treatment','contr.poly'))
> obj2<-multinom(class~.,sample,weights=wts2,maxit=1000)
> 
> predict(obj1,type='probs')[1:5,]
> predict(obj2,type='probs')[1:5,]
> 
> appreciate any suggestions!
> 

I don't know anything about your application, so I hesitate to make
recommendations, but (assuming that the model you've fit makes sense) in the
abstract I'd suggest collecting a lot more data or fitting a much simpler
model.

Regards,
 John

> 
> 
> --- John Fox <[EMAIL PROTECTED]> wrote:
> 
> > Dear chip,
> > 
> > > -Original Message-
> > > From: [EMAIL PROTECTED] 
> > > [mailto:[EMAIL PROTECTED] On
> > Behalf Of array chip
> > > Sent: Thursday, April 14, 2005 1:00 PM
> > > To: John Fox
> > > Cc: R-help@stat.math.ethz.ch
> > > Subject: RE: [R] multinom and contrasts
> > > 
> > > Dear John,
> > > 
> > > Thanks for the answer! In my own dataset, The
> > > multinom() did not converge even after I had tried
> > to
> > > increase the maximum number of iteration (from
> > default 100 to
> > > 1000). In this situation, there is some bigger
> > diffrenece in
> > > fitted probabilities under different contrasts
> > (e.g. 
> > > 0.9687817 vs. 0.9920816). My question is whether
> > the analysis
> > > (fitted probabilities) is still valid if it does
> > not
> > > converge? and what else can I try about it?
> > > 
> > 
> > If multinom() doesn't converge to a stable solution after 1000 
> > iterations, it's probably safe to say that the problem is 
> > ill-conditioned in some respect. Have you looked at the covariance 
> > matrix of the estimates?
> > 
> > Regards,
> >  John
> > 
> > > Thank you!
> > > 
> > > 
> > > 
> > > 
> > > --- John Fox <[EMAIL PROTECTED]> wrote:
> > > > Dear chip,
> > > > 
> > > > The difference is small and is due to
> > computational error.
> > > > 
> > > > Your example:
> > > > 
> > > > > max(abs(zz[1:10,] - yy[1:10,]))
> > > > [1] 2.207080e-05
> > > > 
> > > > Tightening the convergence tolerance in
> > multinom() eliminates the
> > > > difference:
> > > > 
> > > > >
> > > >
> > options(contrasts=c('contr.treatment','contr.poly'))
> > > > >
> > > >
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > > > reltol=1.0e-12)
> > > > # weights:  20 (12 variable)
> > > > initial  value 91.495428
> > > > iter  10 value 91.124526
> > > > final  value 91.124523
> > > > converged
> > > > > yy<-predict(xx,type='probs')
> > > > >
> > options(contrasts=c('contr.helmert','contr.poly'))
> > > > >
> > > >
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > > > reltol=1.0e-12)
> > > > # weights:  20 (12 variable)
> > > > initial  value 91.495428
> > > > iter  10 value 91.125287
> > > > iter  20 value 91.124523
> > > > iter  20 value 91.124523
> > > > iter  20 value 91.124523
> > > > final  value 91.124523
> > > > converged
> > > > > zz<-predict(xx,type='probs')
> > > > > max(abs(zz[1:10,] - yy[1:10,]))
> > > > [1] 1.530021e-08
> > > > 
> > > > I hope this helps,
> > > >  John

Re: [R] Overload standart function

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, Václav Kratochvíl wrote:
> I try to develop my own R package. I have a couple of standard functions
> like dim() and length() overloaded.

Hmm.  Not with the name dim.Model.  You have defined an S3 method for 
class "Model", possibly not intentionally, and without following the rules 
set out in `Writing R Extensions'.

> #Example
> dim.Model <- function(this) {
>  length(unique(this$.variables));
> }
> I built my package, but when I try to load it... This message appears:
> Attaching package 'mudim':
>    The following object(s) are masked _by_ .GlobalEnv :
> dim.Model,...(etc.)
> Any idea how to hide this message?
See ?library, which has an argument warn.conflicts, and that tells you how 
to do this on a per-package basis.

However, the problem appears to be that you don't want the copies in 
workspace (.GlobalEnv), which should be easy to fix.
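Concretely, either of the following addresses the message (a sketch using the package name from the post):

```r
## Silence the masking report when attaching the package:
library(mudim, warn.conflicts = FALSE)

## ...or, probably better, remove the stray copy from the workspace
## so there is nothing left to mask:
rm(dim.Model, envir = .GlobalEnv)
```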

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK		Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

RE: [R] multinom and contrasts

2005-04-14 Thread array chip
Dear John,

My dataset has a response variable with 6 levels, and 
12 independent variables, 10 of them are continuous
variable, one is a categorical variable with 2 levels,
the other one is also a categorical variable with 4
levels. total 206 observations. I attached my dataset
with the email "sample.txt".

library(MASS)
library(nnet)

sample<-read.table("sample.txt",sep='\t',header=T,row.names=1)
wts2<-sample$wts
sample<-sample[,-4]

options(contrasts=c('contr.helmert','contr.poly'))
obj1<-multinom(class~.,sample,weights=wts2,maxit=1000)
options(contrasts=c('contr.treatment','contr.poly'))
obj2<-multinom(class~.,sample,weights=wts2,maxit=1000)

predict(obj1,type='probs')[1:5,]
predict(obj2,type='probs')[1:5,]

Interestingly, if I change the values of the variable
"bkgd" for 2 observations (from "a", to "f"), then I
can get convergence with helmert contrast, but still
not converged with treatment contrast:

sample$bkgd[201]<-'f'
sample$bkgd[205]<-'f'

options(contrasts=c('contr.helmert','contr.poly'))
obj1<-multinom(class~.,sample,weights=wts2,maxit=1000)
options(contrasts=c('contr.treatment','contr.poly'))
obj2<-multinom(class~.,sample,weights=wts2,maxit=1000)

predict(obj1,type='probs')[1:5,]
predict(obj2,type='probs')[1:5,]

appreciate any suggestions!



--- John Fox <[EMAIL PROTECTED]> wrote:

> Dear chip,
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] 
> > [mailto:[EMAIL PROTECTED] On
> Behalf Of array chip
> > Sent: Thursday, April 14, 2005 1:00 PM
> > To: John Fox
> > Cc: R-help@stat.math.ethz.ch
> > Subject: RE: [R] multinom and contrasts
> > 
> > Dear John,
> > 
> > Thanks for the answer! In my own dataset, The
> > multinom() did not converge even after I had tried
> to 
> > increase the maximum number of iteration (from
> default 100 to 
> > 1000). In this situation, there is some bigger
> diffrenece in 
> > fitted probabilities under different contrasts
> (e.g. 
> > 0.9687817 vs. 0.9920816). My question is whether
> the analysis 
> > (fitted probabilities) is still valid if it does
> not 
> > converge? and what else can I try about it?
> > 
> 
> If multinom() doesn't converge to a stable solution
> after 1000 iterations,
> it's probably safe to say that the problem is
> ill-conditioned in some
> respect. Have you looked at the covariance matrix of
> the estimates?
> 
> Regards,
>  John
> 
> > Thank you!
> > 
> > 
> > 
> > 
> > --- John Fox <[EMAIL PROTECTED]> wrote:
> > > Dear chip,
> > > 
> > > The difference is small and is due to
> computational error.
> > > 
> > > Your example:
> > > 
> > > > max(abs(zz[1:10,] - yy[1:10,]))
> > > [1] 2.207080e-05
> > > 
> > > Tightening the convergence tolerance in
> multinom() eliminates the
> > > difference:
> > > 
> > > >
> > >
> options(contrasts=c('contr.treatment','contr.poly'))
> > > >
> > >
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > > reltol=1.0e-12)
> > > # weights:  20 (12 variable)
> > > initial  value 91.495428
> > > iter  10 value 91.124526
> > > final  value 91.124523
> > > converged
> > > > yy<-predict(xx,type='probs')
> > > >
> options(contrasts=c('contr.helmert','contr.poly'))
> > > >
> > >
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > > reltol=1.0e-12)
> > > # weights:  20 (12 variable)
> > > initial  value 91.495428
> > > iter  10 value 91.125287
> > > iter  20 value 91.124523
> > > iter  20 value 91.124523
> > > iter  20 value 91.124523
> > > final  value 91.124523
> > > converged
> > > > zz<-predict(xx,type='probs')
> > > > max(abs(zz[1:10,] - yy[1:10,]))
> > > [1] 1.530021e-08
> > > 
> > > I hope this helps,
> > >  John
> > > 
> > > 
> > > John Fox
> > > Department of Sociology
> > > McMaster University
> > > Hamilton, Ontario
> > > Canada L8S 4M4
> > > 905-525-9140x23604
> > > http://socserv.mcmaster.ca/jfox
> > > 
> > > 
> > > > -Original Message-
> > > > From: [EMAIL PROTECTED] 
> > > > [mailto:[EMAIL PROTECTED] On
> > > Behalf Of array chip
> > > > Sent: Wednesday, April 13, 2005 6:26 PM
> > > > To: R-help@stat.math.ethz.ch
> > > > Subject: [R] multinom and contrasts
> > > > 
> > > > Hi,
> > > > 
> > > > I found that using different contrasts (e.g.
> > > > contr.helmert vs. contr.treatment) will
> generate
> > > different
> > > > fitted probabilities from multinomial logistic
> > > regression
> > > > using multinom(); while the fitted
> probabilities
> > > from binary
> > > > logistic regression seem to be the same. Why
> is
> > > that? and for
> > > > multinomial logisitc regression, what contrast
> > > should be
> > > > used? I guess it's helmert?
> > > > 
> > > > here is an example script:
> > > > 
> > > > library(MASS)
> > > > library(nnet)
> > > > 
> > > #### multinomial logistic
> > > >
> > >
> options(contrasts=c('contr.treatment','contr.poly'))
> > > >
> > >
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),])
> > > > yy<-predict(xx,type='probs')
> > >

[R] Overload standart function

2005-04-14 Thread Václav Kratochvíl
Hi all,

I try to develop my own R package. I have a couple of standard functions like 
dim() and length() overloaded.

#Example
dim.Model <- function(this) {
  length(unique(this$.variables));
}

I built my package, but when I try to load it... This message appears:
Attaching package 'mudim':
The following object(s) are masked _by_ .GlobalEnv :
 dim.Model,...(etc.)

Any idea how to hide this message?

Thanks for your ideas.

Vaclav


_   
 /_-_\http://www.kratochvil.biz  
  [] \o\ _|_ /o/ []  email: [EMAIL PROTECTED]
ICQ: 112443571


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] n-dimensional(hypercube)distance calculation..

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005 [EMAIL PROTECTED] wrote:
The `centers' are the means?  by() can find the mean of multivariate data
by group.  And dist() finds Euclidean and other distances.
However, the Jeffries-Matusita distance depends on covariance matrices,
and 50 points in 100 dims are not enough to estimate one.  Indeed my 
concern is that you have so few data that either the measurements are 
highly correlated (so you can just select a few) or your inferences will 
be suspect.

> I am rather new in R so I would appreciate your help in my problem..
> I have 3 types of vegetation (A, B, C), 50 measurements per class and 100 
> variables per measurement.

It would be helpful to know how you have stored them.

> I would like to perform separability analysis between these classes meaning...
> a.) create the hypercube from these 100 variables

Which hypercube?  If you mean the bounding box, use apply or lapply with
range().

> b.) "plot" the 50 measurements for each class and identify the position of the
> center of each class..
> c.) calculate the distances between each class center using Euclidean,
> Jeffries-Matusita or other measures.
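A minimal sketch of steps (b) and (c), assuming the data sit in a data frame `veg` (a hypothetical name) whose first column is the factor `class` and whose remaining 100 columns are the measurements:

```r
## Class centers: mean of each variable within each vegetation class
centers <- aggregate(veg[, -1], by = list(class = veg$class), FUN = mean)

## Euclidean distances between the three class centers
dist(centers[, -1])
```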

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK		Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] [Job Ad] Centocor Nonclinical Statistics

2005-04-14 Thread Pikounis, Bill [CNTUS]
STATISTICIAN / DATA ANALYST

Centocor, an operating company of Johnson & Johnson, seeks a highly
motivated
statistician/data analyst to work in its newly formed Nonclinical Statistics
group. We support innovative science in the discovery and research of
biotechnology therapies, with obligatory emphasis on the application of
modern
statistical approaches.

Primary responsibilities of the position include collaborations with
researchers on experimental design, data analysis, and interpretation.
Development of software, development of statistical methodology, and
co-authorship of publications and presentations are also primary
responsibilities. The position is based in the Valley Forge area northwest
of
Philadelphia, Pennsylvania, in the USA. Extraordinary applicants from
outside
the US are welcome, subject to visa limitations.

The scope of the nonclinical statistics group focuses on needs outside
traditional clinical trials. This includes preclinical in-vivo, in-vitro,
and
in-silico studies, as well as product formulations. We also serve
experimental
medicine needs in clinical pharmacology.

A Ph.D. or Masters in statistics or appropriate data-analytic discipline is
required. Comprehensive understanding of linear models is essential. There
are
unlimited opportunities to successfully apply 'modern' approaches such as
resistance & robustness, good data graphs, resampling, and high-dimension
reduction. Good oral and written skills are crucial, as is the desire to
learn
enough of the relevant science to interact effectively. Excellent software
skills are also essential. Facility in the S language, preferably R, is
expected. 

A successful candidate must be constantly eager to expand their
statistical, communications, and computing expertise. S/he must also enjoy
the process of building long-term collaborative relationships, be at ease
with
either non-rigid or rigid structure to projects, and be professional in
handling numerous projects simultaneously.

Centocor R&D is a culture of passionate professionals. From top-on-down in
the
organization, statisticians are highly-valued partners for their
contributions
to scientific and business objectives. The innovative spirit of our
biotechnology company is backed by the trust of our parent company Johnson &
Johnson (http://www.jnj.com/our_company/), a worldwide leader in health
products and services. Centocor is in a healthy stage of growth, and the
commitment to a dedicated, expanding nonclinical statistics group is strong.

We are located in the Valley Forge area of Pennsylvania, USA, approximately
25
miles northwest of Philadelphia. If you believe your interests and
background
match the above description then please mail your CV or resume and a cover
letter to

ATTENTION: Open Position 
Bill Pikounis 
Nonclinical Statistics 
200 Great Valley Parkway, MailStop C-4-1 
Malvern, PA 19355

or submit it electronically by reply to this message.

Thank you very much,
Bill
---
Bill Pikounis, PhD

Nonclinical Statistics
Centocor, Inc.
200 Great Valley Parkway
MailStop C4-1
Malvern, PA 19355

610 240 8498
fax 610 651 6717 



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] n-dimensional(hypercube)distance calculation..

2005-04-14 Thread achilleas . psomas
Dear R-help..

I am rather new in R so i would appreciate your help in my problem..

I have 3 types of vegetation (A, B, C), 50 measurements per class and 100 variables
per measurement.
I would like to perform separability analysis between these classes meaning...

a.)create the hypercube from these 100 variables
b.)"plot" the 50 measurements for each class and identify the position of the
center of each class..
c.)calculate the distances between each class center using Euclidean,
Jeffries-Matusita or other measures.

I have tried searching all keywords at CRAN but I was not able to find a post or
a package that could be used for an analysis like that.

I would appreciate your help...

Kind regards to all R-helpers..

Achilleas.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] multinom and contrasts

2005-04-14 Thread John Fox
Dear chip,

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of array chip
> Sent: Thursday, April 14, 2005 1:00 PM
> To: John Fox
> Cc: R-help@stat.math.ethz.ch
> Subject: RE: [R] multinom and contrasts
> 
> Dear John,
> 
> Thanks for the answer! In my own dataset, The
> multinom() did not converge even after I had tried to 
> increase the maximum number of iteration (from default 100 to 
> 1000). In this situation, there is some bigger diffrenece in 
> fitted probabilities under different contrasts (e.g. 
> 0.9687817 vs. 0.9920816). My question is whether the analysis 
> (fitted probabilities) is still valid if it does not 
> converge? and what else can I try about it?
> 

If multinom() doesn't converge to a stable solution after 1000 iterations,
it's probably safe to say that the problem is ill-conditioned in some
respect. Have you looked at the covariance matrix of the estimates?

Regards,
 John

> Thank you!
> 
> 
> 
> 
> --- John Fox <[EMAIL PROTECTED]> wrote:
> > Dear chip,
> > 
> > The difference is small and is due to computational error.
> > 
> > Your example:
> > 
> > > max(abs(zz[1:10,] - yy[1:10,]))
> > [1] 2.207080e-05
> > 
> > Tightening the convergence tolerance in multinom() eliminates the
> > difference:
> > 
> > >
> > options(contrasts=c('contr.treatment','contr.poly'))
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > reltol=1.0e-12)
> > # weights:  20 (12 variable)
> > initial  value 91.495428
> > iter  10 value 91.124526
> > final  value 91.124523
> > converged
> > > yy<-predict(xx,type='probs')
> > > options(contrasts=c('contr.helmert','contr.poly'))
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> > reltol=1.0e-12)
> > # weights:  20 (12 variable)
> > initial  value 91.495428
> > iter  10 value 91.125287
> > iter  20 value 91.124523
> > iter  20 value 91.124523
> > iter  20 value 91.124523
> > final  value 91.124523
> > converged
> > > zz<-predict(xx,type='probs')
> > > max(abs(zz[1:10,] - yy[1:10,]))
> > [1] 1.530021e-08
> > 
> > I hope this helps,
> >  John
> > 
> > 
> > John Fox
> > Department of Sociology
> > McMaster University
> > Hamilton, Ontario
> > Canada L8S 4M4
> > 905-525-9140x23604
> > http://socserv.mcmaster.ca/jfox
> > 
> > 
> > > -Original Message-
> > > From: [EMAIL PROTECTED] 
> > > [mailto:[EMAIL PROTECTED] On
> > Behalf Of array chip
> > > Sent: Wednesday, April 13, 2005 6:26 PM
> > > To: R-help@stat.math.ethz.ch
> > > Subject: [R] multinom and contrasts
> > > 
> > > Hi,
> > > 
> > > I found that using different contrasts (e.g.
> > > contr.helmert vs. contr.treatment) will generate
> > different
> > > fitted probabilities from multinomial logistic
> > regression
> > > using multinom(); while the fitted probabilities
> > from binary
> > > logistic regression seem to be the same. Why is
> > that? and for
> > > multinomial logistic regression, what contrast
> > should be
> > > used? I guess it's helmert?
> > > 
> > > here is an example script:
> > > 
> > > library(MASS)
> > > library(nnet)
> > > 
> > > # multinomial logistic
> > >
> > options(contrasts=c('contr.treatment','contr.poly'))
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),])
> > > yy<-predict(xx,type='probs')
> > > yy[1:10,]
> > > 
> > > options(contrasts=c('contr.helmert','contr.poly'))
> > >
> >
> xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),])
> > > zz<-predict(xx,type='probs')
> > > zz[1:10,]
> > > 
> > > 
> > >   # binary logistic
> > >
> > options(contrasts=c('contr.treatment','contr.poly'))
> > >
> >
> obj.glm<-glm(Cont~Infl+Type,family='binomial',data=housing[-c(
> > 1,10,11,22,25,30),])
> > > yy<-predict(xx,type='response')
> > > 
> > > options(contrasts=c('contr.helmert','contr.poly'))
> > >
> >
> obj.glm<-glm(Cont~Infl+Type,family='binomial',data=housing[-c(
> > 1,10,11,22,25,30),])
> > > zz<-predict(xx,type='response')
> > > 
> > > Thanks
> > > 
> > > __
> > > R-help@stat.math.ethz.ch mailing list 
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide! 
> > > http://www.R-project.org/posting-guide.html
> > 
> >
> 



Re: [R] grubbs.test

2005-04-14 Thread Lukasz Komsta
On 2005-04-14 15:34, Dave Evens wrote:

Is it valid to use the grubbs.test in this way?
I'm very happy that someone is interested in my new package, but I must 
warn you that the Grubbs test is probably not appropriate in this case. Your 
data are dependent (a time series) and possibly autocorrelated. The 
outliers package is designed for testing small independent samples (for 
example, results of quantitative chemical analysis), not time-series data.
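For the kind of sample the package is aimed at, a small independent batch of measurements, basic usage looks roughly like this (the data are made up for illustration):

```r
# install.packages("outliers")   # Lukasz Komsta's package
library(outliers)

set.seed(1)
x <- c(rnorm(9), 8)   # nine well-behaved values plus one gross outlier
grubbs.test(x)        # tests the single value furthest from the mean
```

For the time-series situation in the original question, a change-point or robust approach is more appropriate than repeated outlier testing.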

Regards,
--
Lukasz Komsta
Department of Medicinal Chemistry
Medical University of Lublin
6 Chodzki, 20-093 Lublin, Poland
Fax +48 81 7425165


RE: [R] multinom and contrasts

2005-04-14 Thread array chip
Dear John,

Thanks for the answer! In my own dataset, multinom()
did not converge even after I had tried to increase the
maximum number of iterations (from the default 100 to 1000).
In this situation, there is a bigger difference in fitted
probabilities under different contrasts (e.g. 0.9687817 vs.
0.9920816). My question is whether the analysis (fitted
probabilities) is still valid if it does not converge, and
what else can I try?

Thank you!




--- John Fox <[EMAIL PROTECTED]> wrote:
> Dear chip,
> 
> The difference is small and is due to computational
> error. 
> 
> Your example:
> 
> > max(abs(zz[1:10,] - yy[1:10,]))
> [1] 2.207080e-05
> 
> Tightening the convergence tolerance in multinom()
> eliminates the
> difference:
> 
> >
> options(contrasts=c('contr.treatment','contr.poly'))
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> reltol=1.0e-12)
> # weights:  20 (12 variable)
> initial  value 91.495428 
> iter  10 value 91.124526
> final  value 91.124523 
> converged
> > yy<-predict(xx,type='probs')
> > options(contrasts=c('contr.helmert','contr.poly'))
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),],
> reltol=1.0e-12)
> # weights:  20 (12 variable)
> initial  value 91.495428 
> iter  10 value 91.125287
> iter  20 value 91.124523
> iter  20 value 91.124523
> iter  20 value 91.124523
> final  value 91.124523 
> converged
> > zz<-predict(xx,type='probs')
> > max(abs(zz[1:10,] - yy[1:10,]))
> [1] 1.530021e-08
> 
> I hope this helps,
>  John 
> 
> 
> John Fox
> Department of Sociology
> McMaster University
> Hamilton, Ontario
> Canada L8S 4M4
> 905-525-9140x23604
> http://socserv.mcmaster.ca/jfox 
>  
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] 
> > [mailto:[EMAIL PROTECTED] On
> Behalf Of array chip
> > Sent: Wednesday, April 13, 2005 6:26 PM
> > To: R-help@stat.math.ethz.ch
> > Subject: [R] multinom and contrasts
> > 
> > Hi,
> > 
> > I found that using different contrasts (e.g.
> > contr.helmert vs. contr.treatment) will generate
> different 
> > fitted probabilities from multinomial logistic
> regression 
> > using multinom(); while the fitted probabilities
> from binary 
> > logistic regression seem to be the same. Why is
> that? and for 
> > multinomial logistic regression, what contrast
> should be 
> > used? I guess it's helmert?
> > 
> > here is an example script:
> > 
> > library(MASS)
> > library(nnet)
> > 
> > # multinomial logistic
> >
> options(contrasts=c('contr.treatment','contr.poly'))
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),])
> > yy<-predict(xx,type='probs')
> > yy[1:10,]
> > 
> > options(contrasts=c('contr.helmert','contr.poly'))
> >
>
xx<-multinom(Type~Infl+Cont,data=housing[-c(1,10,11,22,25,30),])
> > zz<-predict(xx,type='probs')
> > zz[1:10,]
> > 
> > 
> >   # binary logistic
> >
> options(contrasts=c('contr.treatment','contr.poly'))
> >
>
obj.glm<-glm(Cont~Infl+Type,family='binomial',data=housing[-c(
> 1,10,11,22,25,30),])
> > yy<-predict(xx,type='response')
> > 
> > options(contrasts=c('contr.helmert','contr.poly'))
> >
>
obj.glm<-glm(Cont~Infl+Type,family='binomial',data=housing[-c(
> 1,10,11,22,25,30),])
> > zz<-predict(xx,type='response')
> > 
> > Thanks
> > 
> 
>



RE: [R] grubbs.test

2005-04-14 Thread Berton Gunter
The Grubbs test is one of many old (1950's - '70's) and classical tests for
outliers in linear regression. Here's a link:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm

I think it fair to say that such outlier detection methods were long ago
found to be deficient, as they have poor statistical properties, and were
supplanted by (computationally much more demanding -- but who cares these
days!?) robust/resistant techniques, at least in the more straightforward
linear models contexts. rlm() in MASS (the package) is one good
implementation of these ideas in R. See MASS (the book by V&R) for a short
but informative discussion and further references.
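As a concrete illustration of the rlm() suggestion, here is a sketch on the classic stackloss data (a standard example, not a recipe for any particular analysis):

```r
library(MASS)

ols <- lm(stack.loss ~ ., data = stackloss)
rob <- rlm(stack.loss ~ ., data = stackloss)  # Huber M-estimation by default

cbind(ols = coef(ols), rlm = coef(rob))  # compare the two fits
which(rob$w < 0.5)   # final weights far below 1 flag likely outliers
```

Rather than a binary outlier/not-outlier verdict, the robust fit downweights suspect observations continuously, which sidesteps the definitional problem Bert raises.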

I should add that the use of robust/resistant techniques exposes (i.e., they
exist but we statisticians get nervous talking publicly about them) many
fundamental issues about estimation vs inference, statistical modeling
strategies, etc. The problem is that important estimation and inference
issues for R/R estimators remain to be worked out -- if, indeed, it makes
sense to think about things this way at all. For example, for various kinds
of mixed effects models, "statistical learning theory" ensemble methods,
etc. The problem, as always, is what the heck does one mean by "outlier" in
these contexts. Seems to be like pornography -- "I know it when I see it."*

Contrary views cheerfully solicited!

Cheers to all,

-- Bert Gunter

*Sorry -- that's a reference to a famous quote of Justice Potter Stewart, an
American Supreme Court Justice.
http://www.michaelariens.com/ConLaw/justices/stewart.htm
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of vito muggeo
> Sent: Thursday, April 14, 2005 7:05 AM
> To: Dave Evens
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] grubbs.test
> 
> Dear Dave,
> I do not know grubbs.test (is it a function, and where can I
> find it?),
> and probably n=6 data points are really too few...
> 
> Having said that, what do you mean by "outlier"?
> If you mean deviation from the estimated mean (of previous data), you
> might have a look at the strucchange package... (sorry, but right now
> I do not remember the exact name of the function)
> 
> best,
> vito
> 
> 
> Dave Evens wrote:
> > Dear All,
> > 
> > I have small samples of data (between 6 and 15) for
> > numerious time series points. I am assuming the data
> > for each time point is normally distributed. The
> > problem is that the data arrvies sporadically and I
> > would like to detect the number of outliers after I
> > have six data points for any time period. Essentially,
> > I would like to detect the number of outliers when I
> > have 6 data points then test whether there are any
> > ouliers. If so, remove the outliers, and wait until I
> > have at least 6 data points or when the sample size
> > increases and test again whether there are any
> > outliers. This process is repeated until there are no
> > more data points to add to the sample.
> > 
> > Is it valid to use the grubbs.test in this way?
> > 
> > If not, are there any tests out there that might be
> > appropriate for this situation? Rosner's test required
> > that I have at least 25 data points which I don't
> > have.
> > 
> > Thank you in advance for any help.
> > 
> > Dave
> > 
> > 
> 
> -- 
> 
> Vito M.R. Muggeo
> Dip.to Sc Statist e Matem `Vianelli'
> Università di Palermo
> viale delle Scienze, edificio 13
> 90121 Palermo - ITALY
> tel: 091 6626240
> fax: 091 485726/485612
> 



[R] Course*** S-PLUS / R for SAS users: Complementing and Extending Statistical Computing for SAS Users

2005-04-14 Thread elvis
 XLSolutions Corporation (www.xlsolutions-corp.com) is proud to
announce our May 2005 2-day course:

"S-PLUS / R: Complementing and Extending Statistical Computing for SAS
Users" 

www.xlsolutions-corp.com/Rsas.htm

 
 Boston  --> May 26-27
 Raleigh -->  TBD
 
Reserve your seat now at the early bird rates! Payment due AFTER
the class.

Course Description:

 This course is designed for users who want to learn how to complement
and extend the statistical computing of SAS with the S or R system. The
course will give SAS users a strong foundation for becoming versatile
programmers. Participants are encouraged to bring data for the
interactive sessions.


With the following outline:


- An Overview of resources: installation and demonstration, the R
project/core;
 CRAN; the distribution for UNIX and windows systems; the package
concept;
  literature: Venables and Ripley, Chambers and Hastie, other
publications.
- A Comparison of R and S-PLUS
- Quick review of the SAS environment: the OS interface, the data step,
macros, procs, reports. 
   Issues of data archiving with SSD format; interfaces to DBMS
- Data manipulations in S and R (data frame and matrix operations) and
SAS (the data step) -- 
   issues of importing, formatting, transformation, cataloging,
exporting
- Functions vs macros in SAS for programming repetitive processes.
- The iteration models of SAS vs whole-object modeling
-  Statistical modeling support in R/S vs SAS PROCS.
- Integrated documentation and example processing in R/S.
- Post-processing of function output in R/S vs OUTPUT datasets in SAS.
- Specific comparisons: linear modeling, glms, gees, lmes
- Report Writing in R and Splus
- Extending the R/S systems for new data structures and new algorithms


Email us for group discounts.
Email Sue Turner: [EMAIL PROTECTED]
Phone: 206-686-1578
Visit us: www.xlsolutions-corp.com/training.htm
Please let us know if you and your colleagues are interested in this
class to take advantage of the group discount. Register now to secure your
seat!

Interested in R/Splus Advanced course? email us.


Cheers,
Elvis Miller, PhD
Manager Training.
XLSolutions Corporation
206 686 1578
www.xlsolutions-corp.com
[EMAIL PROTECTED]



[R] Re: Building R packages under Windows.

2005-04-14 Thread Duncan Golicher



[R] solved

2005-04-14 Thread ronggui
I have found the answer to this question.
Sorry for overlooking the information about it on the internet.

> 2. As some textbooks say, when using Fisher's method "we proceed by assuming 
> that the within-group covariance structure for our data is the same across 
> groups", so we need to test for equality of covariance matrices. My question is: 
> when using lda, should I test equality of covariance matrices first? And 
> can R do this?

Box's M test is the standard way to do this, but the test does not have good 
properties, so it is not worthwhile to do it.
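For completeness, Box's M can be computed directly from the textbook formula. This is my own quick sketch of the chi-square approximation (unvalidated; check it against a published implementation before relying on it):

```r
# Box's M test for equality of covariance matrices (chi-square approximation).
boxM_sketch <- function(x, grouping) {
  grouping <- factor(grouping)
  p <- ncol(x); k <- nlevels(grouping)
  n <- as.vector(table(grouping))          # group sizes
  N <- sum(n)
  covs <- lapply(split(as.data.frame(x), grouping), cov)
  # Pooled within-group covariance matrix:
  Sp <- Reduce(`+`, Map(function(S, ni) (ni - 1) * S, covs, n)) / (N - k)
  M  <- (N - k) * log(det(Sp)) -
        sum(mapply(function(S, ni) (ni - 1) * log(det(S)), covs, n))
  corr <- (sum(1 / (n - 1)) - 1 / (N - k)) *
          (2 * p^2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))
  stat <- (1 - corr) * M
  df <- p * (p + 1) * (k - 1) / 2
  c(statistic = stat, df = df, p.value = pchisq(stat, df, lower.tail = FALSE))
}

boxM_sketch(iris[, 1:4], iris$Species)
```

The test is notoriously sensitive to non-normality, which is the "does not have good properties" caveat above.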



Re: [R] about fonts

2005-04-14 Thread ronggui


On Sun, 10 Apr 2005 17:57:26 +0100 (BST)
Prof Brian Ripley <[EMAIL PROTECTED]> wrote:

> On Sun, 10 Apr 2005, ronggui wrote:
> 
> > when i use R(2.1.0) under windows,it can display Chinese well.and the
> 
> R 2.1.0 will not be out for 8 days: are you a time-traveller or careless?
> (The posting guide does ask you to give the *full* version, that is 2.1.0 
> beta of a particular date.)

I would like to use the new version, so maybe I am a time-traveller. Sorry for my 
carelessness in failing to give the info about the version. From version 
1.9 to 2.1.0 the problem exists, but under Windows all works well, which makes me 
think the problem is due to the fonts.
> version
         _
platform i686-pc-linux-gnu
arch     i686
os       linux-gnu
system   i686, linux-gnu
status   alpha
major    2
minor    1.0
year     2005
month    03
day      23
language R

> 
> 1) Read the posting guide and learn to tell us *accurately* basic 
> information, like which graphics device, which locale and which R version.

X11 and pdf face the same problem.
My locale is UTF-8, and I am using Simplified Chinese.

> 2) All those devices stated in the NEWS file to do so do indeed work in 
> Simplified Chinese (and we have no reason to suppose that they do not work 
> in Traditional Chinese, but the difference does matter as the glyphs are 
> different).  But some, e.g. pdf() do not.  The X11 device does if suitable 
> X11 fonts are installed, and not otherwise.  If you mean the X11 device, 
> seek help on an X11 forum.
All my other X11 applications (mail agent, openoffice, ...) work well, and 
Chinese displays correctly.

> (Incidentally, 2.1.0 beta under Windows XP only works in Chinese if the 
> correct fonts are installed, which they are not by default in the versions 
> sold in Europe.)
In fact, from version 1.9.0 R can display Chinese correctly under Windows 
(though it was sometimes ugly before version 2.1.0).

When I use $ xlsfonts, I can see my system has the following fonts, and I am sure 
the song fonts can support Chinese:
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-1
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-2
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-3
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-4
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-5
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-6
-cbs-song-medium-r-normal-fantizi-0-0-75-75-c-0-cns11643.1992-7
-cbs-song-medium-r-normal-fantizi-16-160-75-75-c-160-cns11643.1992-3
-cbs-song-medium-r-normal-fantizi-16-160-75-75-c-160-cns11643.1992-4
-cbs-song-medium-r-normal-fantizi-16-160-75-75-c-160-cns11643.1992-5
-cbs-song-medium-r-normal-fantizi-16-160-75-75-c-160-cns11643.1992-6
-cbs-song-medium-r-normal-fantizi-16-160-75-75-c-160-cns11643.1992-7
-cbs-song-medium-r-normal-fantizi-24-240-75-75-c-240-cns11643.1992-1
-cbs-song-medium-r-normal-fantizi-24-240-75-75-c-240-cns11643.1992-2
.

And when I use $ fc-list, I get:

AR PL New Sung,文鼎PL新宋:style=Regular
SimSun,宋体:style=Regular
.
I am sure these fonts display Chinese well.



[R] can test the if relationship is significant in cancor?

2005-04-14 Thread ronggui
I have tried hard to find the answer via Google, but I cannot find any solution,
so I want to ask:
1. Can we test whether the canonical relationship is significant after using cancor?
2. If it can be done, how?
3. If not, is it under-developed, is there no need to do it, or is there no
good way to do it?

I hope my question is not too silly.



Re: [R] Reading and coalescing many datafiles.

2005-04-14 Thread Peter Dalgaard
[EMAIL PROTECTED] writes:

> Greetings.
> 
> 
> I've got some analysis problems I'm trying to solve, the raw data for which
> are accumulated in a bunch of time-and-date-based files.
> 
> /some/path/2005-01-02-00-00-02
> 
> etc.
> 
> 
> The best 'read all these files' method I've seen in the r-help archives comes
> down to 
> 
> for (df in my_list_of_filenames )
> {
>   dat <- rbind(dat,my_read_function(df))
> } 
> 
> which, unpleasantly, is O(N^2) w.r.t. the number of files.
> 
> I'm fiddling with other idioms to accomplish the same goal.  Best I've come up
> with so far, after extensive reference to the mailing list archives, is
> 
> 
> my_read_function.many<-function(filenames)
>   {
> filenames <- filenames[file.exists(filenames)];
> rv <- do.call("rbind", lapply(filenames,my_read_function))
> row.names(rv) = c(1:length(row.names(rv)))
> rv
>   }
> 
> 
> I'd love to have some stupid omission pointed out.


Why? It's pretty much what I would suggest, except for the superfluous
c().
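A tidied version of the poster's function along these lines (read_one is a placeholder for whatever single-file parser is already in use):

```r
# Combine many files in one rbind instead of growing the result in a loop,
# which avoids the O(N^2) copying behaviour of the for-loop idiom.
read_many <- function(filenames, read_one = read.table) {
  filenames <- filenames[file.exists(filenames)]
  rv <- do.call("rbind", lapply(filenames, read_one))
  row.names(rv) <- seq_len(nrow(rv))   # the c() around 1:n was superfluous
  rv
}
```

The key point is that do.call("rbind", ...) allocates the combined result once, rather than re-copying the accumulated data frame on every iteration.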

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



Re: [R] Printing integers in R "as is"

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, Firas Swidan wrote:
Hi,
thanks for the suggestions. However, for some reason the first one did not
work. Trying
cat( paste( paste(orientation, as.integer(start), as.integer(end),
names,"\n"), paste(as.integer(start), as.integer(end),"exon\n"), sep=""))
resulted in the same problem.
Works for me:
orientation <- pi
start <- 10
end <- start+1
names <- letters[1:3]
cat( paste( paste(orientation, as.integer(start), as.integer(end),
names,"\n"), paste(as.integer(start), as.integer(end),"exon\n"), sep=""))
3.14159265358979 10 11 a
10 11 exon
 3.14159265358979 10 11 b
10 11 exon
 3.14159265358979 10 11 c
10 11 exon
whereas without the as.integer I do get 1e+05.

Setting scipen in options did the job.
Cheers,
Firas.

Well, you have to convert an integer to character to see it: `as is' is in
your case 64 0's and 1's.
I very much suspect that you have a double and not an integer:
> 100000
[1] 1e+05
> as.integer(100000)
[1] 100000
so that is one answer: actually use an `integer vector' as you claim.
A second answer is in ?options, see `scipen'.
A third answer is to use sprintf() or formatC() to handle the conversion
yourself.
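A small sketch contrasting the three answers, using a round number such as 100000 (the output comments assume default options in R):

```r
x <- 100000            # a double, not an integer

as.character(x)             # "1e+05" -- the behaviour reported
as.character(as.integer(x)) # "100000" -- answer 1: use a real integer
sprintf("%d", x)            # "100000" -- answer 3: do the conversion yourself
formatC(x, format = "d")    # "100000" -- ditto

cat(x, "\n")                # 1e+05
options(scipen = 6)         # answer 2: penalise scientific notation
cat(x, "\n")                # 100000
```

Note that scipen changes display globally, while sprintf()/formatC() fix the format only where you need it, which is usually safer for machine-read output.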
On Thu, 14 Apr 2005, Firas Swidan wrote:
Hi,
I am using the following command to print to a file (I omitted the file
details):
cat( paste( paste(orientation, start, end, names,"\n"), paste(start, end,
"exon\n"), sep=""))
where "orientation" and "names" are character vectors and "start" and
"end" are integer vectors.
The problem is that R coerces the integer vectors to characters. In
general, that works fine, but when one of the integers is 100000 (or has
more 0's) then R prints it as 1e+05. This behavior causes a lot of
trouble for the program reading R's output.
This problem occurs with paste, cat,
and print (i.e. paste(100000) = "1e+05" and so on).
I tried to change the "digits" option in "options()" but that did not help.
Is it possible to change the behavior of the coercion or are there any
workarounds?

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595


Re: [R] Legend in xyplot two columns

2005-04-14 Thread Deepayan Sarkar
On Thursday 14 April 2005 10:29, Gesmann, Markus wrote:
> Thanks Deepayan!
>
> Your solution does exactly what I want.
> Further experiments and thoughts on my side also brought me to a
> solution.
> If I use the option rep=FALSE, plot the bullet with "lines", and
> split the "lines" argument into two groups, it gives me the same
> result, as every item in the key list starts a new column.

Of course. I'd forgotten that 'lines' can also be points.

Deepayan



[R] Reading and coalescing many datafiles.

2005-04-14 Thread asr


Greetings.


I've got some analysis problems I'm trying to solve, the raw data for which
are accumulated in a bunch of time-and-date-based files.

/some/path/2005-01-02-00-00-02

etc.


The best 'read all these files' method I've seen in the r-help archives comes
down to 

for (df in my_list_of_filenames )
{
  dat <- rbind(dat,my_read_function(df))
} 

which, unpleasantly, is O(N^2) w.r.t. the number of files.

I'm fiddling with other idioms to accomplish the same goal.  Best I've come up
with so far, after extensive reference to the mailing list archives, is


my_read_function.many<-function(filenames)
  {
filenames <- filenames[file.exists(filenames)];
rv <- do.call("rbind", lapply(filenames,my_read_function))
row.names(rv) = c(1:length(row.names(rv)))
rv
  }


I'd love to have some stupid omission pointed out.


- Allen S. Rout



Re: [R] Printing integers in R "as is"

2005-04-14 Thread Firas Swidan
Hi,

thanks for the suggestions. However, for some reason the first one did not
work. Trying

cat( paste( paste(orientation, as.integer(start), as.integer(end),
names,"\n"), paste(as.integer(start), as.integer(end),"exon\n"), sep=""))

resulted in the same problem.

Setting scipen in options did the job.

Cheers,
Firas.


> Well, you have to convert an integer to character to see it: `as is' is in
> your case 64 0's and 1's.
>
> I very much suspect that you have a double and not an integer:
>
> > 100000
> [1] 1e+05
> > as.integer(100000)
> [1] 100000
>
> so that is one answer: actually use an `integer vector' as you claim.
>
> A second answer is in ?options, see `scipen'.
>
> A third answer is to use sprintf() or formatC() to handle the conversion
> yourself.
>
>
> On Thu, 14 Apr 2005, Firas Swidan wrote:
>
> > Hi,
> > I am using the following command to print to a file (I omitted the file
> > details):
> >
> > cat( paste( paste(orientation, start, end, names,"\n"), paste(start, end,
> > "exon\n"), sep=""))
> >
> > where "orientation" and "names" are character vectors and "start" and
> > "end" are integer vectors.
> >
> > The problem is that R coerces the integer vectors to characters. In
> > general, that works fine, but when one of the integers is 100000 (or has
> > more 0's) then R prints it as 1e+05. This behavior causes a lot of
> > trouble for the program reading R's output.
> > This problem occurs with paste, cat,
> > and print (i.e. paste(100000) = "1e+05" and so on).
> >
> > I tried to change the "digits" option in "options()" but that did not help.
> > Is it possible to change the behavior of the coercion or are there any
> > workarounds?
>
> --
> Brian D. Ripley,[EMAIL PROTECTED]
> Professor of Applied Statistics,http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,   Tel:  +44 1865 272861 (self)
> 1 South ParksRoad,   +44 1865 272866 (PA)
> Oxford OX1 3TG, UK  Fax:  +44 1865 272595
>



Re: [R] Legend in xyplot two columns

2005-04-14 Thread Gesmann, Markus
Thanks Deepayan!

Your solution does exactly what I want. 
Further experiments and thoughts on my side also brought me to a
solution. 
If I use the option rep=FALSE, plot the bullet with "lines", and
split the "lines" argument into two groups, it gives me the same result,
as every item in the key list starts a new column.

library(lattice)
key <- list( rep=FALSE,
   lines=list(col=c("red", "blue"), type=c("p","l"),
pch=19),
   text=list(lab=c("John","Paul")),
   lines=list(col=c("green", "red"), type=c("l", "l")),
   text=list(lab=c("George","Ringo")),
   rectangles = list(col= "#CC", border=FALSE),
   text=list(lab="The Beatles"),
   )

xyplot(1~1, key=key)


But your solution is much more flexible!

Kind Regards

Markus

-Original Message-



From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] 
Sent: 14 April 2005 16:01
To: r-help@stat.math.ethz.ch
Cc: Gesmann, Markus
Subject: Re: [R] Legend in xyplot two columns


On Thursday 14 April 2005 05:30, Gesmann, Markus wrote:
> Dear R-Help
>
> I have some trouble to set the legend in a xyplot into two rows.
> The code below gives me the legend in the layout I am looking for, I
> just rather have it in two rows.
>
> library(lattice)
> schluessel <- list(
>points=list( col="red", pch=19, cex=0.5 ),
>text=list(lab="John"),
>lines=list(col="blue"),
>text=list(lab="Paul"),
>lines=list(col="green"),
>text=list(lab="George"),
>lines=list(col="orange"),
>text=list(lab="Ringo"),
>rectangles = list(col= "#CC", border=FALSE),
>text=list(lab="The Beatles"),
>  )
>
> xyplot(1~1, key=schluessel)
>
> The next code gives me two rows, but repeates all the points,lines,
> and rectangles.
>
> schluessel2 <- list(
>points=list( col="red", pch=19, cex=0.5 ),
>lines=list(col=c("blue", "green", "orange")),
>rectangles = list(col= "#CC", border=FALSE),
>text=list(lab=c("John","Paul","George","Ringo", "The
> Beatles")),
>columns=3,
>   )
>
> xyplot(1~1, key=schluessel2)
>
> So I think each list has to have 6 items, but some with "no" content.
> How do I do this?

You could try using col="transparent" to suppress things, but that's not
a very satisfactory solution. The function to create the key is simply 
not designed to create unstructured legends like this. However, you can 
create and use an arbitrary ``grob'' (grid graphics object) for a 
legend, e.g.:

##-

library(grid)
library(lattice)

fl <-
grid.layout(nrow = 2, ncol = 6,
heights = unit(rep(1, 2), "lines"),
widths =
unit(c(2, 1, 2, 1, 2, 1),
 c("cm", "strwidth", "cm",
   "strwidth", "cm", "strwidth"),
 data = list(NULL, "John", NULL,
 "George", NULL, "The Beatles")))

foo <- frameGrob(layout = fl)
foo <- placeGrob(foo,
 pointsGrob(.5, .5, pch=19,
gp = gpar(col="red", cex=0.5)),
 row = 1, col = 1)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="blue")),
 row = 2, col = 1)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="green")), 
 row = 1, col = 3)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="orange")), 
 row = 2, col = 3)
foo <- placeGrob(foo,
 rectGrob(width = 0.6, 
  gp = gpar(col="#CC",
  fill = "#CC")), 
 row = 1, col = 5)
foo <- placeGrob(foo,
 textGrob(lab = "John"), 
 row = 1, col = 2)
foo <- placeGrob(foo,
 textGrob(lab = "Paul"), 

Re: [R] Anova for GLMM (lme4) is a valid method?

2005-04-14 Thread Douglas Bates
Ronaldo Reis-Jr. wrote:
Hi,
I try to make a binomial analysis using GLMM in a longitudinal data file.
Is correct to use anova(model) to access the significance of the fixed terms?
Thanks
Ronaldo
From lme4_0.95-1 onwards, the GLMM function has been replaced by lmer with a 
non-missing family argument.  For the time being I would recommend 
staying with lme4_0.9-x and using the anova(model) from that, but bear in 
mind that the Wald approximate tests are notoriously inaccurate for some 
generalized linear models and generalized linear mixed models.

If you have only a single level of random effects and you also have 
access to SAS I would suggest cross-checking the results against those 
from SAS PROC NLMIXED.  Getting better results for this calculation in 
lmer models is on my "To Do" list but there are a lot of other tasks 
above it.



[R] RE:Building R packages under Windows.

2005-04-14 Thread Duncan Golicher
Prof Ripley has quite rightly pointed out that my previous comment could 
be (quite wrongly) interpreted as denigrating the work of the R team. 
This was far from my intention. I apologise to all concerned for a 
terribly flippant remark that was certainly not meant as a criticism of 
anyone. It came out wrong. Like all other R users I am deeply indebted 
to the work of the R core team and marvel at the selflessness and 
generosity of all concerned. All I meant to say was that user base of R 
has expanded enormously since the excellent documentation was written 
and it is only natural that some elements do not reflect this. The fact 
that package building turns out to be much easier than it looks to be 
when you first read "Writing R extensions" is a complement to the 
tremendous work of R developers, not a denigration.

Deepest apologies again to all for my clumsiness,
Duncan Golicher
--
Dr Duncan Golicher
Ecologia y Sistematica Terrestre
Conservación de la Biodiversidad
El Colegio de la Frontera Sur
San Cristobal de Las Casas, Chiapas, Mexico
Tel. 967 1883 ext 1310
Celular 044 9671041021
[EMAIL PROTECTED]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Legend in xyplot two columns

2005-04-14 Thread Deepayan Sarkar
On Thursday 14 April 2005 05:30, Gesmann, Markus wrote:
> Dear R-Help
>
> I am having some trouble setting the legend in an xyplot into two rows.
> The code below gives me the legend in the layout I am looking for, but
> I'd rather have it in two rows.
>
> library(lattice)
> schluessel <- list(
>points=list( col="red", pch=19, cex=0.5 ),
>text=list(lab="John"),
>lines=list(col="blue"),
>text=list(lab="Paul"),
>lines=list(col="green"),
>text=list(lab="George"),
>lines=list(col="orange"),
>text=list(lab="Ringo"),
>rectangles = list(col= "#CC", border=FALSE),
>text=list(lab="The Beatles"),
>  )
>
> xyplot(1~1, key=schluessel)
>
> The next code gives me two rows, but repeats all the points, lines,
> and rectangles.
>
> schluessel2 <- list(
>points=list( col="red", pch=19, cex=0.5 ),
>lines=list(col=c("blue", "green", "orange")),
>rectangles = list(col= "#CC", border=FALSE),
>text=list(lab=c("John","Paul","George","Ringo", "The
> Beatles")),
>columns=3,
>   )
>
> xyplot(1~1, key=schluessel2)
>
> So I think each list has to have 6 items, but some with "no" content.
> How do I do this?

You could try using col="transparent" to suppress things, but that's not 
a very satisfactory solution. The function to create the key is simply 
not designed to create unstructured legends like this. However, you can 
create and use an arbitrary ``grob'' (grid graphics object) for a 
legend, e.g.:

##-

library(grid)
library(lattice)

fl <-
grid.layout(nrow = 2, ncol = 6,
heights = unit(rep(1, 2), "lines"),
widths =
unit(c(2, 1, 2, 1, 2, 1),
 c("cm", "strwidth", "cm",
   "strwidth", "cm", "strwidth"),
 data = list(NULL, "John", NULL,
 "George", NULL, "The Beatles")))

foo <- frameGrob(layout = fl)
foo <- placeGrob(foo,
 pointsGrob(.5, .5, pch=19,
gp = gpar(col="red", cex=0.5)),
 row = 1, col = 1)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="blue")),
 row = 2, col = 1)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="green")), 
 row = 1, col = 3)
foo <- placeGrob(foo,
 linesGrob(c(0.2, 0.8), c(.5, .5),
   gp = gpar(col="orange")), 
 row = 2, col = 3)
foo <- placeGrob(foo,
 rectGrob(width = 0.6, 
  gp = gpar(col="#CC",
  fill = "#CC")), 
 row = 1, col = 5)
foo <- placeGrob(foo,
 textGrob(lab = "John"), 
 row = 1, col = 2)
foo <- placeGrob(foo,
 textGrob(lab = "Paul"), 
 row = 2, col = 2)
foo <- placeGrob(foo,
 textGrob(lab = "George"), 
 row = 1, col = 4)
foo <- placeGrob(foo,
 textGrob(lab = "Ringo"), 
 row = 2, col = 4)
foo <- placeGrob(foo,
 textGrob(lab = "The Beatles"), 
 row = 1, col = 6)

xyplot(1 ~ 1, legend = list(top = list(fun = foo)))

##-

HTH,

Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] affy quality

2005-04-14 Thread Sean Davis
Marinus
While there isn't an "arrayquality" package like for cDNA arrays, there 
are many available tools for affy.  You probably want to look at the 
bioconductor packages for affy.  In any case, there is a Bioconductor 
mailing list that is the better venue for such questions.

Sean
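For orientation (my addition, not part of the reply above; function names are from the Bioconductor affy package of roughly this era and may differ by version), a typical QC sketch looks like:

```r
## Rough Affymetrix QC sketch; assumes CEL files in the working directory.
library(affy)
abatch <- ReadAffy()        # read the CEL files into an AffyBatch
image(abatch[, 1])          # chip pseudo-image: look for spatial artifacts
hist(abatch)                # log-intensity densities across arrays
deg <- AffyRNAdeg(abatch)   # RNA degradation slopes per array
plotAffyRNAdeg(deg)
```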
On Apr 14, 2005, at 10:11 AM, Dansen, Ing. M.C. wrote:
Does anyone have nice
quality controls for affy arrays?
Can't find any tools like those being used for
2-dye arrays.
cheers,
marinus
This e-mail and its contents are subject to the DISCLAIMER at 
http://www.tno.nl/disclaimer/email.html
	[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CI for Ratios of Variance components in lme?

2005-04-14 Thread Douglas Bates
Berton Gunter wrote:
My apologies if this is obvious:
Is there a simple way (other than simulation or bootstrapping) to obtain a
(approximate)confidence interval for the ratio of 2 variance components in a
fitted lme model? -- In particular, if there are only 2 components (1
grouping factor). I'm using nlme but lme4 would be fine, too.
Sorry for being so late in responding.  I'm way behind in reading R-help.
This particular calculation can be done for an lme fit.  At present it 
is difficult to do this for an lmer fit.

An lme fit of a model like this has a component apVar which is an 
approximate variance-covariance matrix for the parameter estimates in 
the random effects component.  The first parameter is the natural 
logarithm of the relative variance (ratio of the variance component to 
the residual variance).

> bert <- data.frame(grp = factor(rep(1:5, c(3, 9, 8, 28, 34))), resp = 
scan("/tmp/bert.txt"))
Read 82 items
> fm1 <- lme(resp ~ 1, bert, ~ 1|grp)
> fm1$apVar
 reStruct.grp  lSigma
reStruct.grp 3.611912e+02 0.002383590
lSigma   2.383590e-03 0.006172887
attr(,"Pars")
reStruct.grp   lSigma
  -5.7476114   -0.6307136
attr(,"natural")
[1] TRUE

You may want to look at some of the code in the lme S3 method for the 
intervals generic to see how this is used.
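As a hedged sketch of one way to use apVar (my addition; it assumes the first element of attr(fm1$apVar, "Pars") is the log relative variance described above, so inspect that attribute before relying on it):

```r
## Approximate Wald-type 95% interval for the relative variance
## (grouping-factor variance / residual variance), built on the
## log scale and back-transformed with exp().
pars <- attr(fm1$apVar, "Pars")
se   <- sqrt(diag(fm1$apVar))
exp(pars["reStruct.grp"] +
    c(lower = -1.96, est = 0, upper = 1.96) * se["reStruct.grp"])
```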

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] affy quality

2005-04-14 Thread Dansen, Ing. M.C.
Does anyone have nice
quality controls for affy arrays?
Can't find any tools like those being used for 
2-dye arrays.
 
cheers,
marinus
 

This e-mail and its contents are subject to the DISCLAIMER at 
http://www.tno.nl/disclaimer/email.html
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] grubbs.test

2005-04-14 Thread vito muggeo
Dear Dave,
I do not know the grubbs.test (is it a function? where can I find it?), 
and n = 6 data points are probably really too few.

Having said that, what do you mean by "outlier"?
If you mean deviation from the estimated mean (of previous data), you 
might have a look at the strucchange package (sorry, but now I do not 
remember the exact name of the function).

best,
vito
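For reference (my addition, not part of the reply above): a grubbs.test() function is provided by the CRAN package 'outliers'. A minimal sketch:

```r
## Minimal Grubbs test example; with n = 6 the power is low,
## so interpret the p-value cautiously.
library(outliers)
set.seed(1)
x <- c(rnorm(5), 8)  # five ordinary values plus one suspect point
grubbs.test(x)       # tests the value furthest from the mean
```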
Dave Evens wrote:
Dear All,
I have small samples of data (between 6 and 15) for
numerous time series points. I am assuming the data
for each time point is normally distributed. The
problem is that the data arrives sporadically and I
would like to detect the number of outliers after I
have six data points for any time period. Essentially,
I would like to detect the number of outliers when I
have 6 data points, then test whether there are any
outliers. If so, remove the outliers, and wait until I
have at least 6 data points or the sample size
increases, and test again whether there are any
outliers. This process is repeated until there are no
more data points to add to the sample.
Is it valid to use the grubbs.test in this way?
If not, are there any tests out there that might be
appropriate for this situation? Rosner's test requires
that I have at least 25 data points, which I don't
have.
Thank you in advance for any help.
Dave
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
--

Vito M.R. Muggeo
Dip.to Sc Statist e Matem `Vianelli'
Università di Palermo
viale delle Scienze, edificio 13
90121 Palermo - ITALY
tel: 091 6626240
fax: 091 485726/485612
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Multiple copies of attached packages

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, Liaw, Andy wrote:
I suspect you've attach()'ed `DF' multiple times in your
code (possibly inside a loop, or perhaps a function that
was called several times).  Note that if it were a
`package', it would show up in search() as `package:DF'
rather than just `DF'.  Also, R Core folks took care to
avoid attaching the same package multiple times:
library(MASS)
search()
[1] ".GlobalEnv""package:MASS"  "package:methods"
"package:stats"
[5] "package:graphics"  "package:grDevices" "package:utils"
"package:datasets"
[9] "Autoloads" "package:base"
library(MASS)
search()
[1] ".GlobalEnv""package:MASS"  "package:methods"
"package:stats"
[5] "package:graphics"  "package:grDevices" "package:utils"
"package:datasets"
[9] "Autoloads" "package:base"
Notice how trying to load a package that's already on the
search path has no effect.
This is not true for R objects, though.
When you attach a data frame, say, `DF', (or a list), it
places a _copy_ on the search path, so you can access
the variables in the data frame (or components of the
list) directly.  When you make modifications to the
variables (such as x[i] <- something, rather than
DF$x[i] <- something), the modifications are applied to
the _copy_ on the search path, not the original.
Not quite.  The correct description is in ?attach (and apart from 
mentioning attach was used, reading the help before posting is de rigueur).
Here is the version from 2.1.0 beta (which has been expanded):

 The database is not actually attached.  Rather, a new environment
 is created on the search path and the elements of a list (including
 columns of a dataframe) or objects in a save file are _copied_
 into the new environment.  If you use '<<-' or 'assign' to assign
 to an attached database, you only alter the attached copy, not the
 original object.  (Normal assignment will place a modified version
 in the user's workspace: see the examples.) For this reason
 'attach' can lead to confusion.
Examples:
 summary(women$height)   # refers to variable 'height' in the data frame
 attach(women)
 summary(height) # The same variable now available by name
 height <- height*2.54   # Don't do this. It creates a new variable
 # in the user's workspace
 find("height")
 summary(height) # The new variable in the workspace
 rm(height)
 summary(height) # The original variable.
 height <<- height*25.4  # Change the copy in the attached environment
 find("height")
 summary(height) # The changed copy
 detach("women")
 summary(women$height)   # unchanged
Notice the difference between <- and <<- .
Assigning to an element follows the same rules, and in addition is an 
error unless an object exists that can be suitably subscripted.

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Wrapping long labels in barplot(2)

2005-04-14 Thread Marc Schwartz
Building on Tom's reply, the following should work:

> labels <- factor(paste("This is a long label ", 1:10))
> labels
 [1] This is a long label  1  This is a long label  2 
 [3] This is a long label  3  This is a long label  4 
 [5] This is a long label  5  This is a long label  6 
 [7] This is a long label  7  This is a long label  8 
 [9] This is a long label  9  This is a long label  10
10 Levels: This is a long label  1 ... This is a long label  9


> short.labels <- sapply(labels, function(x) paste(strwrap(x,
 10), collapse = "\n"), USE.NAMES = FALSE)

> short.labels
 [1] "This is\na long\nlabel 1"  "This is\na long\nlabel 2" 
 [3] "This is\na long\nlabel 3"  "This is\na long\nlabel 4" 
 [5] "This is\na long\nlabel 5"  "This is\na long\nlabel 6" 
 [7] "This is\na long\nlabel 7"  "This is\na long\nlabel 8" 
 [9] "This is\na long\nlabel 9"  "This is\na long\nlabel 10"

> mp <- barplot2(1:10)
> mtext(1, text = short.labels, at = mp, line = 2)


HTH,

Marc Schwartz


On Thu, 2005-04-14 at 16:14 +0700, Jan P. Smit wrote:
> Dear Tom,
> 
> Many thanks. I think this gets me in the right direction, but 
> concatenates all levels into one long level. Any further thoughts?
> 
> Best regards,
> 
> Jan
> 
> 
> Mulholland, Tom wrote:
> > This may not be the best way but in the past I think I have done something 
> > like
> > 
> > levels(x) <- paste(strwrap(levels(x),20,prefix = ""),collapse = "\n")
> > 
> > Tom
> > 
> > 
> >>-Original Message-
> >>From: Jan P. Smit [mailto:[EMAIL PROTECTED]
> >>Sent: Thursday, 14 April 2005 11:48 AM
> >>To: r-help@stat.math.ethz.ch
> >>Subject: [R] Wrapping long labels in barplot(2)
> >>
> >>
> >>I am using barplot, and barplot2 in the gregmisc bundle, in the 
> >>following way:
> >>
> >>barplot2(sort(xtabs(expend / 1000 ~ theme)),
> >> col = c(mdg7, mdg8, mdg3, mdg1), horiz = T, las = 1,
> >> xlab = "$ '000", plot.grid = T)
> >>
> >>The problem is that the values of 'theme', which is a factor, are in 
> >>some cases rather long, so that I would like to wrap/split them at a 
> >>space once they exceed, say, 20 characters. What I'm doing now is 
> >>specifying names.arg manually with '\n' where I want the 
> >>breaks, but I 
> >>would like to automate the process.
> >>
> >>I've looked for a solution using 'strwrap', but am not sure 
> >>how to apply 
> >>it in this situation.
> >>
> >>Jan Smit
> >>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Multiple copies of attached packages

2005-04-14 Thread Liaw, Andy
I suspect you've attach()'ed `DF' multiple times in your
code (possibly inside a loop, or perhaps a function that 
was called several times).  Note that if it were a 
`package', it would show up in search() as `package:DF'
rather than just `DF'.  Also, R Core folks took care to
avoid attaching the same package multiple times:

> library(MASS)
> search()
 [1] ".GlobalEnv""package:MASS"  "package:methods"
"package:stats"
 [5] "package:graphics"  "package:grDevices" "package:utils"
"package:datasets" 
 [9] "Autoloads" "package:base" 
> library(MASS)
> search()
 [1] ".GlobalEnv""package:MASS"  "package:methods"
"package:stats"
 [5] "package:graphics"  "package:grDevices" "package:utils"
"package:datasets" 
 [9] "Autoloads" "package:base" 

Notice how trying to load a package that's already on the
search path has no effect.

This is not true for R objects, though.

When you attach a data frame, say, `DF', (or a list), it
places a _copy_ on the search path, so you can access
the variables in the data frame (or components of the 
list) directly.  When you make modifications to the
variables (such as x[i] <- something, rather than
DF$x[i] <- something), the modifications are applied to 
the _copy_ on the search path, not the original.  
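A small illustration of that copy-on-modify behavior (my addition, a sketch):

```r
## Modifying an attached variable by name alters a workspace copy,
## not the original data frame.
DF <- data.frame(x = 1:3)
attach(DF)
x[1] <- 99   # normal assignment creates a modified copy in the workspace
DF$x[1]      # still 1: the original data frame is untouched
detach(DF)
rm(x)        # clean up the workspace copy
```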

HTH,
Andy


> From: Fernando Saldanha
> 
> I have noticed that after I ran a batch script multiple times I get
> multiple copies of a package's name when I call search(). Is this a
> problem?
> 
> > search()
>  [1] ".GlobalEnv""DF"  "DF" 
> [4] "DF"  "DF"  "DF"
> 
> multiple copies here ...
>  
> [13] "DF"  "DF"  "DF" 
> 
> other packages here ...
> 
> [28] "package:quadprog"  "package:car"   "package:methods"  
> [31] "package:stats" "package:graphics"  "package:grDevices"
> [34] "package:utils" "package:datasets"  "Autoloads"
> [37] "package:base" 
> 
> The following strange (to me) behavior may be related. Suppose I
> have a variable x that is in the global environment, and also there is
> an 'x' in a dataframe called DF. Then I remove the variable x from the
> Global Environment, with
> 
> remove('x', pos = 1)
> At this point if I call remove again in the same way I get an error:
> the variable x does not exist anymore. However, at this point I also
> can check that DF$x exists. So far so good.
> 
> Further down in my code I have an assignment of the type
> 
> x[i] <- something (*)
> 
> which works fine, except that then if I look at x[i] and DF$x[i] they
> are different. So it looks like x was recreated in the Global
> Environment, which I actually can check by typing
> 
> .GlobalEnv$x.
> 
> On the other hand, if I put in my code something like
> 
> newvar[i] <- something
> 
> where newvar was never defined, then I get an error. I was hoping that
> the statement (*)
> above would assign to the variable DF$x. But it looks like (although
> that is probably not the correct explanation) that the interpreter
> somehow "remembers" that once there was a variable x in the global
> environment and accepts the assignment to x[i], recreating that
> variable.
> 
> Any insights on this?
> 
> Many thanks,
> 
> Fernando
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] grubbs.test

2005-04-14 Thread Dave Evens

Dear All,

I have small samples of data (between 6 and 15) for
numerous time series points. I am assuming the data
for each time point is normally distributed. The
problem is that the data arrives sporadically and I
would like to detect the number of outliers after I
have six data points for any time period. Essentially,
I would like to detect the number of outliers when I
have 6 data points, then test whether there are any
outliers. If so, remove the outliers, and wait until I
have at least 6 data points or the sample size
increases, and test again whether there are any
outliers. This process is repeated until there are no
more data points to add to the sample.

Is it valid to use the grubbs.test in this way?

If not, are there any tests out there that might be
appropriate for this situation? Rosner's test requires
that I have at least 25 data points, which I don't
have.

Thank you in advance for any help.

Dave

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] compiling with pgf90

2005-04-14 Thread Julie Harold
Hi,
I need to compile R-2.0.1 on an Opteron running SuSE 9.1 and using 
Portland Group compilers.  Can you advise me of the environment variables 
I need to set, particularly the FPICFLAGS?
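For what it's worth (my addition; the flag values here are assumptions, so check the R Installation and Administration manual and the Portland Group compiler documentation), a configure invocation along these lines is sometimes used:

```shell
# Hypothetical settings for building R with pgcc/pgf90 on x86_64
CC=pgcc F77=pgf90 FPICFLAGS=-fpic ./configure
make
```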

thanks,
Julie
---
Dr Julie Harold: University of East Anglia, Norwich, NR4 7TJ
  Environmental Sciences: Unix Support Officer
IT and Computing Service: High Performance Computing Consultant
 phone 01603 59 2385/3121
email  [EMAIL PROTECTED]
  for env unix/linux support please mail [EMAIL PROTECTED]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Multiple copies of attached packages

2005-04-14 Thread Fernando Saldanha
I have noticed that after I ran a batch script multiple times I get
multiple copies of a package's name when I call search(). Is this a
problem?

> search()
 [1] ".GlobalEnv""DF"  "DF" 
[4] "DF"  "DF"  "DF"

multiple copies here ...
 
[13] "DF"  "DF"  "DF" 

other packages here ...

[28] "package:quadprog"  "package:car"   "package:methods"  
[31] "package:stats" "package:graphics"  "package:grDevices"
[34] "package:utils" "package:datasets"  "Autoloads"
[37] "package:base" 

The following strange (to me) behavior may be related. Suppose I
have a variable x that is in the global environment, and also there is
an 'x' in a dataframe called DF. Then I remove the variable x from the
Global Environment, with

remove('x', pos = 1)
At this point if I call remove again in the same way I get an error:
the variable x does not exist anymore. However, at this point I also
can check that DF$x exists. So far so good.

Further down in my code I have an assignment of the type

x[i] <- something (*)

which works fine, except that then if I look at x[i] and DF$x[i] they
are different. So it looks like x was recreated in the Global
Environment, which I actually can check by typing

.GlobalEnv$x.

On the other hand, if I put in my code something like

newvar[i] <- something

where newvar was never defined, then I get an error. I was hoping that
the statement (*)
above would assign to the variable DF$x. But it looks like (although
that is probably not the correct explanation) that the interpreter
somehow "remembers" that once there was a variable x in the global
environment and accepts the assignment to x[i], recreating that
variable.

Any insights on this?

Many thanks,

Fernando

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] lme, corARMA and large data sets

2005-04-14 Thread Dimitris Rizopoulos
you should include the 'form' argument in "corARMA()", i.e.,
corARMA(form=~1|dummy, p=1, q=1)
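Put together with the model from the original post, that would look something like this (a sketch using the poster's variable names, untested on real data):

```r
## lme fit with an ARMA(1,1) residual correlation structure,
## grouped by the dummy factor as suggested above.
library(nlme)
model.ARMA <- lme(gendis ~ lgeodisE, data = raw,
                  random = ~ 1 | dummy,
                  correlation = corARMA(form = ~ 1 | dummy, p = 1, q = 1))
```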
I hope it helps.
Best,
Dimitris

Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http://www.med.kuleuven.ac.be/biostat/
http://www.student.kuleuven.ac.be/~m0390867/dimitris.htm
- Original Message - 
From: "Peter Wandeler" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, April 14, 2005 2:12 PM
Subject: [R] lme, corARMA and large data sets


I am currently trying to get a "lme" analyses running to correct for 
the
non-independence of residuals (using e.g. corAR1, corARMA) for a 
larger data
set (>1 obs) for an independent (lgeodisE) and dependent 
variable
(gendis). Previous attempts using SAS failed. In addition we were 
told by
SAS that our data set was too large to be handled by this procedure 
anyway
(!!).

SAS script
proc mixed data=raw method=reml maxiter=1000;
model gendis=lgeodisE / solution;
repeated /subject=intercept type=arma(1,1);
So I turned to R. Being a complete R newbie, I haven't managed to 
compute exactly the same model in R on a reduced data set so far.

R command line (using "dummy" as a dummy group variable)

model.ARMA<-lme(gendis~lgeodisE,correlation=corARMA(p=1,q=1),random=~1|dummy).
Furthermore, memory allocation problems occurred again on my 1GB RAM 
desktop
during some trials with larger data sets.

Can anybody help?
Cheers,
Peter
--
GMX Garantie: Surfen ohne Tempo-Limit! http://www.gmx.net/de/go/dsl
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Printing integers in R "as is"

2005-04-14 Thread Prof Brian Ripley
Well, you have to convert an integer to character to see it: `as is' is in 
your case 64 0's and 1's.

I very much suspect that you have a double and not an integer:
> 100000
[1] 1e+05
> as.integer(100000)
[1] 100000
so that is one answer: actually use an `integer vector' as you claim.
A second answer is in ?options, see `scipen'.
A third answer is to use sprintf() or formatC() to handle the conversion 
yourself.
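A brief sketch of the second and third answers (my addition):

```r
x <- 100000               # stored as a double, so it may print as 1e+05
options(scipen = 10)      # penalize scientific notation globally
cat(x, "\n")              # should now print 100000
options(scipen = 0)       # restore the default
formatC(x, format = "d")  # "100000"
sprintf("%d", as.integer(x))  # "100000"
```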

On Thu, 14 Apr 2005, Firas Swidan wrote:
Hi,
I am using the following command to print to a file (I omitted the file
details):
cat( paste( paste(orientation, start, end, names,"\n"), paste(start, end,
"exon\n"), sep=""))
where "orientation" and "names" are character vectors and "start" and
"end" are integer vectors.
The problem is that R coerces the integer vectors to characters. In
general, that works fine, but when one of the integers is 100000 (or has
more 0's) then R prints it as 1e+05. This behavior causes a lot of
trouble for the program reading R's output.
This problem occurs with paste, cat,
and print (i.e. paste(100000) = "1e+05" and so on).
I tried to change the "digit" option in "options()" but that did not help.
Is is possible to change the behavior of the coercing or are there any
work arounds?
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] xtable POSIXt

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, Miha Razinger wrote:
Hi,
I was trying to print out a data frame with a POSIXct column
in HTML format using the xtable package, but I got an error message
when trying to print the table. Here is an example:
 aaa<-data.frame(as.POSIXct(strptime('03 2005', '%d %Y')),0)
 aaa.tab<-xtable(aaa)
 print(aaa.tab)
 Error in Math.POSIXt(x + ifelse(x == 0, 1, 0)) :
 abs not defined for POSIXt objects
I was able to get around the problem by converting the column
back to a string, but I would still like to know whether it is a bug or
whether I was doing something wrong.
It's not a bug in R.
It seems to be a documentation error or infelicity in xtable: it clearly 
does not work with all data frames, and the restrictions are neither clear 
to me nor enforced.  Please take it up with the xtable maintainer.
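For completeness, the workaround the poster mentions (converting the POSIXct column to character before calling xtable) can be sketched as follows (my addition):

```r
## Convert the POSIXct column to character so xtable can format it.
library(xtable)
aaa <- data.frame(when = as.POSIXct(strptime('03 2005', '%d %Y')), val = 0)
aaa$when <- format(aaa$when)       # POSIXct -> character
print(xtable(aaa), type = "html")  # avoids the abs()/Math.POSIXt error
```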

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Strange behavior of atan2

2005-04-14 Thread Peter Dalgaard
Clément Calenge <[EMAIL PROTECTED]> writes:

> Dear all,
> 
> I've got a problem with the function atan2. For a couple of
> coordinates x and y,
> this function returns the angle between the vector of coordinates (x,
> y) and the
> abscissa axis, i.e. it is the same as atan(y/x) (as indicated on the
> help page).
> If we consider the vector with coordinates x = 0 and  y = 0, we have
> the following result:
> 
>  > atan(0/0)
> [1] NaN
> 
> This is expected. However:
> 
>  > atan2(0,0)
> [1] 0
> 
> Instead of a missing value, the function atan2 returns an angle equal
> to 0 radians.
> I've searched through the help pages, the FAQ and the forum, but I
> didn't find
> any explanation to this result. Does anyone know if this behavior is
> expected, and
> why ?
> Thank you for any clues.

Yes, it is expected. R just copies what the C library function does,
but there is actually a rationale:

http://www-sbras.nsc.ru/cgi-bin/www/unix_help/unix-man?atan2+3

Briefly: You don't get a natural conversion to spherical coordinates
and back without this convention.
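Two quick illustrations of the point (my addition):

```r
## atan2() sees the signs of both arguments and so can pick the quadrant,
## which plain atan(y/x) cannot:
atan(1 / -1)   # -pi/4: quadrant information is lost
atan2(1, -1)   #  3*pi/4: the correct angle for the point (-1, 1)

## And at the origin, polar conversion round-trips without NaN:
r <- sqrt(0^2 + 0^2); theta <- atan2(0, 0)
c(r * cos(theta), r * sin(theta))   # 0 0
```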

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] lme, corARMA and large data sets

2005-04-14 Thread Peter Wandeler
I am currently trying to get a "lme" analyses running to correct for the
non-independence of residuals (using e.g. corAR1, corARMA) for a larger data
set (>1 obs) for an independent (lgeodisE) and dependent variable
(gendis). Previous attempts using SAS failed. In addition we were told by
SAS that our data set was too large to be handled by this procedure anyway
(!!).

SAS script
proc mixed data=raw method=reml maxiter=1000;
model gendis=lgeodisE / solution;
repeated /subject=intercept type=arma(1,1);

So I turned to R. Being a complete R newbie, I haven't managed to compute
exactly the same model in R on a reduced data set so far.

R command line (using "dummy" as a dummy group variable)
>
model.ARMA<-lme(gendis~lgeodisE,correlation=corARMA(p=1,q=1),random=~1|dummy).

Furthermore, memory allocation problems occurred again on my 1GB RAM desktop
during some trials with larger data sets.

Can anybody help?
Cheers,
Peter

-- 


GMX Garantie: Surfen ohne Tempo-Limit! http://www.gmx.net/de/go/dsl

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Printing integers in R "as is"

2005-04-14 Thread Jan T. Kim
On Thu, Apr 14, 2005 at 02:32:33PM +0300, Firas Swidan wrote:

> I am using the following command to print to a file (I omitted the file
> details):
> 
> cat( paste( paste(orientation, start, end, names,"\n"), paste(start, end,
> "exon\n"), sep=""))
> 
> where "orientation" and "names" are character vectors and "start" and
> "end" are integer vectors.

For printing formatted output of this kind, you're generally much better
off using sprintf, as in

cat(sprintf("%2s  %8d  %8d  %s\n", orientation, as.integer(start), 
as.integer(end), names));

or, if length(names) > 1, you might consider

sprintf("%2s  %8d  %8d  %s\n", orientation, as.integer(start), 
as.integer(end), paste(names, collapse = ", "));

etc. This assumes that start and end are numeric vectors of length 1,
which seems sensible to me based on the context I can conclude from the
variable names, and I think that sprintf in R-devel, and R 2.1.0 in the
near future will cycle over longer vectors too.

> The problem is that R coerces the integer vectors to characters. In
> general, that works fine, but when one of the integers is 100000 (or has
> more 0's) then R prints it as 1e+05. This behavior causes a lot of
> trouble for the program reading R's output.
> This problem occurs with paste, cat,
> and print (i.e. paste(100000) = "1e+05" and so on).

Are you certain that start and end are integer vectors? If in doubt,
check typeof(start) -- the fact that the values are integer does not
necessarily mean that the type is integer.
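For example:

```r
start <- 100000
typeof(start)     # "double": a whole value, but not integer type
typeof(100000L)   # "integer": the L suffix makes an integer literal
start             # prints [1] 1e+05 by default
as.integer(start) # prints [1] 100000
```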

Best regards, Jan
-- 
 +- Jan T. Kim ---+
 |*NEW*email: [EMAIL PROTECTED]   |
 |*NEW*WWW:   http://www.cmp.uea.ac.uk/people/jtk |
 *-=<  hierarchical systems are for files, not for humans  >=-*

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Normalization and missing values

2005-04-14 Thread Jonathan Baron
On 04/13/05 21:05, Chris Bergstresser wrote:
 This article is great; thanks for providing it.  The authors
 recommend either using "ML Estimation" or "Multiple Imputation" to fill
 in the missing data.  They don't talk much about which is better for
 certain situations, however.

Multiple imputation is good when you want to make statistical
inferences.  It is what aregImpute() is good for.

I used transcan() for a situation that did not involve inference:
Our graduate admissions committee of 5 rates applicants, and the
members of the committee differ somewhat in mean and variance,
and sometimes a member is out of the room when an applicant is
rated.  So I attempt to mimic what the member will do anyway,
which is to conform and adjust:

s.m <- as.matrix(students[,4:8]) # ratings, NA when missing
s.imp <- transcan(s.m,asis="*",data=s.m,imputed=T,long=T,pl=F)
s.na <- is.na(s.m) # which ratings are imputed
s.m[which(s.na)] <- unlist(s.imp$imputed)
students[,4:8] <- s.m

The last 3 lines seem like a kludge to me, but I couldn't find
any other way in the time I had, and this works.  This does not
involve multiple imputation.  I guess it would also be OK for
inference if there weren't very many missing data, but don't take 
my word for it.

Jon
-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
R search page: http://finzi.psych.upenn.edu/



[R] Strange behavior of atan2

2005-04-14 Thread Clément Calenge
Dear all,
I've got a problem with the function atan2. For a pair of coordinates
x and y, this function returns the angle between the vector with
coordinates (x, y) and the abscissa axis, i.e. it is the same as
atan(y/x) (as indicated on the help page).
If we consider the vector with coordinates x = 0 and  y = 0, we have
the following result:

> atan(0/0)
[1] NaN
This is expected. However:
> atan2(0,0)
[1] 0
Instead of a missing value, the function atan2 returns an angle of
0 radians. I've searched through the help pages, the FAQ and the list
archives, but I didn't find any explanation for this result. Does
anyone know whether this behavior is expected, and why?
Thank you for any clues.
Regards,

Clément Calenge
--
Clément CALENGE
LBBE - UMR CNRS 5558 - Université 
Claude Bernard Lyon 1 - FRANCE
tel. (+33) 04.72.43.27.57
fax. (+33) 04.72.43.13.88
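
[Editorial note: the 0 result is inherited from the platform's C math library, where atan2(+/-0, +/-0) is defined rather than left undefined; C99 specifies atan2(+0, +0) = +0. A short illustration of the convention:]

```r
atan2(0, 0)    # 0 -- defined by convention, not NaN
atan2(0, -1)   # pi
atan2(1, 0)    # pi/2, well-defined even though atan(1/0) goes through Inf
atan(1/0)      # also pi/2, since atan(Inf) = pi/2; only 0/0 yields NaN
```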



[R] Printing integers in R "as is"

2005-04-14 Thread Firas Swidan
Hi,
I am using the following command to print to a file (I omitted the file
details):

cat( paste( paste(orientation, start, end, names,"\n"), paste(start, end,
"exon\n"), sep=""))

where "orientation" and "names" are character vectors and "start" and
"end" are integer vectors.

The problem is that R coerces the integer vectors to characters. In
general, that works fine, but when one of the integers is 100000 (or has
more 0's) then R prints it as 1e+05. This behavior causes a lot of
trouble for the program reading R's output. This problem occurs with
paste, cat, and print (i.e. paste(100000) gives "1e+05", and so on).

I tried to change the "digits" option in options() but that did not help.
Is it possible to change the coercion behavior, or are there any
workarounds?
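
[Editorial note: some workarounds commonly suggested for this are sketched below -- formatC(), format(scientific = FALSE) and integer coercion all sidestep the scientific notation:]

```r
x <- 100000                      # a double, even though it looks integral
paste(x)                         # "1e+05"
formatC(x, format = "d")         # "100000" -- format explicitly as an integer
format(x, scientific = FALSE)    # "100000"
as.character(as.integer(x))      # "100000" -- integer vectors are never abbreviated
```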

Thanks in advance,
Firas.



[R] xtable POSIXt

2005-04-14 Thread Miha Razinger
Hi,

I was trying to print out a data frame with a POSIXct column
in HTML format using the xtable package, but I got an error message
when trying to print the table. Here is an example:

  aaa<-data.frame(as.POSIXct(strptime('03 2005', '%d %Y')),0)
  aaa.tab<-xtable(aaa)
  print(aaa.tab)

  Error in Math.POSIXt(x + ifelse(x == 0, 1, 0)) :
  abs not defined for POSIXt objects

I was able to get around the problem by converting the column
back to a string, but I would still like to know whether it is a bug or
whether I was doing something wrong.
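
[Editorial note: the conversion workaround can be automated by formatting the POSIXct column before calling xtable(); a sketch, assuming the xtable package is installed:]

```r
library(xtable)
aaa <- data.frame(when = as.POSIXct(strptime('03 2005', '%d %Y')), value = 0)
aaa$when <- format(aaa$when)          # render the dates as character first
print(xtable(aaa), type = "html")     # no longer touches POSIXt arithmetic
```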

thanks,

-- 
Miha Razinger
Environmental Agency of Slovenia
Office of Meteorology



Re : [R] Fitting a mixed negative binomial model

2005-04-14 Thread Naji
Ben Dave & all,


I'm a user of AD Model Builder (ADMB, a product of Otter Research).
Just a word to say that for maximisation I always rely on ADModel.
It's really fast (amazing when you have a large number of parameters),
and it can be used either as a standalone application or as a DLL.
I use GAUSS (Aptech), R & Stata for my research. For optimization, this
product deserves your attention.
I'm not aware of ADMB-RE, so I can't say anything about it.

Best regards
Naji Nassar

Le 13/04/05 21:05, « dave fournier » <[EMAIL PROTECTED]> a écrit :

> 
>> I *think* (but am not sure) that these guys were actually (politely)
>> advertising a commercial package that they're developing.  But,
> looking >at
>> the web page, it seems that this module may be freely available -- >can't
>> tell at the moment.
> 
>>Ben
> 
> 
> The software for negative binomial mixed models will be
> free, i.e. free as in you can use it without paying anything.
> It is built using our
> proprietary software.  The idea is to show how our software
> is good for building nonlinear statistical models, including
> those with random effects.  Turning our stand-alone software
> into something that can be called easily from R has been a
> bit of a steep learning curve for me, but we are making progress.
> So far we have looked at 3 models: the model in Booth et al. (easy);
> an overdispersed data set that turned out to probably be
> a zero-inflated Poisson (fairly easy, but the negative binomial
> is only fit to be rejected for the simpler model); and
> what appears to be a true negative binomial (difficult but
> doable), where we are discussing the form of the model with the
> person who wishes to analyze it.
> 
> A few more data sets would be useful if anyone has
> an application so that we can ensure the robustness of our
> software.
> 
>  Dave


[R] Conditional Interpolation

2005-04-14 Thread Laura Quinn
I am looking to perform a simple linear interpolation to fill a few small
gaps in a large data set.

The data set tends to be either continuous with one or two gaps in it
(where I hope to perform the interpolation), or else it has large chunks
of data missing where I'd need to return an NA.

Also, I want to perform this on sequential "windows" in the data - i.e.
to return daily interpolated data from a year-long data set, or return an
NA for a day where insufficient data is available.

This interpolation needs to be performed on a columnwise basis (where
missing data points in adjacent columns are not related), on a
52000x22 matrix - with windows of 144 rows each.

Can someone please offer some advice, as I'm getting confused with the
nesting of my loops!

I'm using V2.0.1 on linux.

Thanks,
Laura
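
[Editorial sketch of one loop-free approach; the helper name and the max-gap threshold are invented for illustration, and the code is untested against the original data:]

```r
# Interpolate only NA runs of length <= max.gap; longer gaps stay NA.
fill.small.gaps <- function(x, max.gap = 3) {
  if (sum(!is.na(x)) < 2) return(x)            # approx() needs two observed points
  idx <- seq_along(x)
  est <- approx(idx[!is.na(x)], x[!is.na(x)], xout = idx)$y
  runs <- rle(is.na(x))
  in.long.gap <- rep(runs$values & runs$lengths > max.gap, runs$lengths)
  est[in.long.gap] <- NA                       # leave the large chunks as NA
  x[is.na(x)] <- est[is.na(x)]
  x
}

# Columnwise, in windows of 144 rows (one day), on a matrix `mat`:
# apply(mat, 2, function(col)
#   unlist(lapply(split(col, ceiling(seq_along(col) / 144)), fill.small.gaps)))
```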



Laura Quinn
Institute of Atmospheric Science
School of Earth and Environment
University of Leeds
Leeds
LS2 9JT

tel: +44 113 343 1596
fax: +44 113 343 6716
mail: [EMAIL PROTECTED]



Re: [R] Normalization and missing values

2005-04-14 Thread Adaikalavan Ramasamy
What is the best missing value imputation? It depends on how the missing
values were generated (e.g. missing at random, informative missingness) and
what type of data you have (e.g. counts, continuous).

If you are interested in this you could either :

1) take the dataset of complete cases, impose missing values on it
according to the pattern of missingness you see in the whole data, then
apply different types of imputation techniques and see which one recovers
the known values best.

2) Or look for studies that have evaluated different techniques in your
_field_ and apply the best one.
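
[Editorial sketch of strategy (1), using simulated data and a deliberately simple column-mean imputer as the candidate method -- all names here are invented for illustration:]

```r
set.seed(42)
truth <- matrix(rnorm(200), ncol = 4)          # stands in for the complete cases
miss  <- truth
miss[sample(length(miss), 20)] <- NA           # impose a known missingness pattern

col.mean.impute <- function(m) {               # the candidate technique under test
  for (j in seq_len(ncol(m))) m[is.na(m[, j]), j] <- mean(m[, j], na.rm = TRUE)
  m
}

imp <- col.mean.impute(miss)
sqrt(mean((imp[is.na(miss)] - truth[is.na(miss)])^2))   # RMSE against the known truth
```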

Regards, Adai



On Wed, 2005-04-13 at 13:36 -0500, WeiWei Shi wrote:
> the way of scaling, IMHO, really depends on the distribution of each
> column in your original files. If each column in your data follows a
> normal distribution, then a standard "normalization" will fit your
> requirement.
> 
> My previous research in microarray data shows me a simple "linear
> standardization" might be good enough for some purpose.
> 
> If your columns differ in magnitude, then some data transformation
> like (log) might be needed first.
> 
> Ed
> 
> 
> On 4/13/05, Achim Zeileis <[EMAIL PROTECTED]> wrote:
> > On Wed, 13 Apr 2005 14:33:25 -0300 (ADT) Rolf Turner wrote:
> > 
> > >
> > > Bert Gunter wrote:
> > >
> > > > You can't expect statistical procedures to rescue you from poor
> > > > data.
> > >
> > >   That should ***definitely*** go into the fortune package
> > >   data base!!!
> > 
> > :-) added for the next release.
> > Z
> > 
> > >   cheers,
> > >
> > >   Rolf Turner
> > >   [EMAIL PROTECTED]


[R] Legend in xyplot two columns

2005-04-14 Thread Gesmann, Markus
Dear R-Help

I am having some trouble setting the legend in an xyplot into two rows.
The code below gives me the legend in the layout I am looking for; I
would just rather have it in two rows.

library(lattice)
schluessel <- list(
   points=list( col="red", pch=19, cex=0.5 ),
   text=list(lab="John"),
   lines=list(col="blue"),
   text=list(lab="Paul"),
   lines=list(col="green"),
   text=list(lab="George"),
   lines=list(col="orange"),
   text=list(lab="Ringo"),
   rectangles = list(col= "#CC", border=FALSE),
   text=list(lab="The Beatles")
   )
 
xyplot(1~1, key=schluessel)

The next code gives me two rows, but repeats all the points, lines, and
rectangles.
 
schluessel2 <- list(
   points=list( col="red", pch=19, cex=0.5 ),
   lines=list(col=c("blue", "green", "orange")),
   rectangles = list(col= "#CC", border=FALSE),
   text=list(lab=c("John","Paul","George","Ringo","The Beatles")),
   columns=3
   )
 
xyplot(1~1, key=schluessel2)

So I think each list has to have 6 items, but some with "no" content.
How do I do this?

Thank you very much!

Markus




Re: [R] compiling & installing R devel version on Debian - SOLVED

2005-04-14 Thread Stefano Calza
Thanks to Prof. Ripley I solved it.

Actually it was my (stupid) fault. In .Renviron I actually set R_LIBS

Thanks again

Stefano
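
[Editorial note: for anyone hitting the same symptom, the usual suspects can be checked from inside the misbehaving R session:]

```r
Sys.getenv(c("R_HOME", "R_LIBS"))  # is R_LIBS pointing at an old installation?
.libPaths()                        # the library search path actually in use
R.home()                           # should match the tree you installed into
```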

On Thu, Apr 14, 2005 at 11:07:00AM +0100, Prof Brian Ripley wrote:
On Thu, 14 Apr 2005, Stefano Calza wrote:

>Hi all.
>
>I'm compiling the devel version of R on Debian GNU/Linux, and installing 
>it into /usr/local tree (instead of default /usr). So:

The default *is* --prefix=/usr/local (no trailing space). Are you sure you
are not getting R confused with some other version, e.g. by having R_LIBS
set to point to your other installation?

>./configure --prefix=/usr/local/
>make
>make install
>
>Everything works fine, but when I start R I get the following error
>messages (translated from Italian, sorry):

If it passes `make check' then the build is fine, you are just not using 
it.
Please do try make check before installation.

>Error in dyn.load(x,as.logical(local),as.logical(now)):
>impossible to load the shared library 
>'/usr/lib/R/library/stats/libs/stats.so':
>libR.so:cannot open shared object file: No such file or directory

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595

-- 
Stefano Calza, PhD
Sezione di Statistica Medica e Biometria
Dip. di Scienze Biomediche e Biotecnologie
Università degli Studi di Brescia - Italia
Viale Europa, 11 25123 Brescia
email: [EMAIL PROTECTED]
Telefono/Phone: +390303717532
Fax: +390303717488

Section of Medical Statistics and Biometry
Dept. Biomedical Sciences and Biotechnology
University of Brescia - Italy



RE: [R] calling r, from SAS, batch mode

2005-04-14 Thread Liaw, Andy
> From: Y Y
> 
> I generally work in SAS but have some graphics features I 
> would like to
> run in r.   I would like to do this 'automatically' from SAS.
> 
> I'm thinking of  something along the lines of putting the r 
> code in a text
> file and calling system of x on it;  r would expect the data 
> in a certain place,
> and my SAS would drop the data in the right place/filename 
> before invoking
> r.
> 
> x 'r.exe myfile.r';
> 
> Does anybody have any experience using r from within SAS like this
> or in a better way ?
> 
> Alas, searching for a one word character r to find an archived answer
> is not easy SAS-L.

You could have started by searching in the FAQs, in 
particular, R for Windows FAQ section 2.13.
 
Andy
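
[Editorial sketch of the R side of such a round trip; every path and column name below is a made-up placeholder that the SAS side would have to agree on:]

```r
# myfile.r -- invoked from SAS with something like:  x 'Rterm.exe --no-save < myfile.r';
dat <- read.csv("C:/temp/from_sas.csv")        # SAS drops the data here beforehand
png("C:/temp/from_r.png", width = 640, height = 480)
plot(dat$x, dat$y, main = "Graphic produced in R for SAS")
dev.off()                                      # SAS picks the PNG up afterwards
```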
 
> S.
> [EMAIL PROTECTED]
> 



Re: [R] compiling & installing R devel version on Debian

2005-04-14 Thread Prof Brian Ripley
On Thu, 14 Apr 2005, Stefano Calza wrote:
Hi all.
I'm compiling the devel version of R on Debian GNU/Linux, and installing 
it into /usr/local tree (instead of default /usr). So:
The default *is* --prefix=/usr/local (no trailing space). Are you sure you 
are not getting R confused with some other version, e.g. by having R_LIBS 
set to point to your other installation?

./configure --prefix=/usr/local/
make
make install
Everything works fine, but when I start R I get the following error 
messages (traslated from italian, sorry):
If it passes `make check' then the build is fine, you are just not using it.
Please do try make check before installation.
Error in dyn.load(x,as.logical(local),as.logical(now)):
impossible to load the shared library 
'/usr/lib/R/library/stats/libs/stats.so':
libR.so:cannot open shared object file: No such file or directory
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595


Re: [R] cross compiling R for Windows under Linux

2005-04-14 Thread Prof Brian Ripley
*If* you really have the header paths set correctly it does work, and has 
been tested by several people.

Do read MkRules more carefully and think about how your setting differs 
from the example given.  You have *not* as you claim

# Set this to where the mingw32 include files are. It must be accurate.
HEADER=/users/ripley/R/cross-tools4/i586-mingw32/include
if you used my cross-compiler build (and gave no credit).  Hint: float.h 
is a `mingw32 include file'.

As the comment says, user error here is disastrous, so please take the 
hint.

On Thu, 14 Apr 2005, Lars Schouw wrote:
Hi
I tried to cross compile R under Linux but get an
error.
i586-mingw32-gcc -isystem
/home/schouwl/unpack/mingw/include -O2 -Wall -pedantic
-I../include -I. -DHAVE_CONFIG_H -DR_DLL_BUILD  -c
dynload.c -o dynload.o
dynload.c: In function `R_loadLibrary':
dynload.c:94: warning: implicit declaration of
function `_controlfp'
dynload.c:94: error: `_MCW_IC' undeclared (first use
in this function)
dynload.c:94: error: (Each undeclared identifier is
reported only once
dynload.c:94: error: for each function it appears in.)
dynload.c:95: warning: implicit declaration of
function `_clearfp'
dynload.c:99: error: `_MCW_EM' undeclared (first use
in this function)
dynload.c:99: error: `_MCW_RC' undeclared (first use
in this function)
dynload.c:99: error: `_MCW_PC' undeclared (first use
in this function)
make[3]: *** [dynload.o] Error 1
make[2]: *** [../../bin/R.dll] Error 2
make[1]: *** [rbuild] Error 2
make: *** [all] Error 2
This is the same error that was reported on the mailing list
before.
http://tolstoy.newcastle.edu.au/R/devel/04/12/1571.html
I have set the HEADER correctly in MkRules
No, you did not.
HEADER=/home/schouwl/unpack/mingw/include
The file float.h is located in the
../i586-mingw32/include/float.h
from there.
I am cross compiling R 2.0.1 source code.
Help would be appreciated..

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595


Re: [R] question about "R get vector from C"

2005-04-14 Thread Uwe Ligges
Michael S wrote:
Dear all R-helpers,
I want to let R get a vector from C - for example a numeric array or
vector. I saw an example like this:
/* useCall3.c*/
/* Getting an integer vector from C using .Call  */
#include <R.h>
#include <Rdefines.h>

SEXP setInt() {
  SEXP myint;
  int *p_myint;
  int len = 5;
  PROTECT(myint = NEW_INTEGER(len));  // Allocating storage space
  p_myint = INTEGER_POINTER(myint);
  p_myint[0] = 7;
  UNPROTECT(1);
  return myint;
}
then type at the command prompt:
R CMD SHLIB useCall3.c
to get useCall3.so
On the Windows platform, how can I create the right DLL for dyn.load to
use? And for .C, .Call and .External, what are the differences? Which one
is better for getting a vector from C?

For the code above you certainly want to use .Call(); .C() won't
work with SEXP.
Under Windows, you can also say
  R CMD SHLIB useCall3.c
and get the corresponding useCall3.dll instead of useCall3.so. 
dyn.load() also works as expected, you just need the relevant tools / 
compiler installed.

Uwe Ligges
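
[Editorial sketch of the R side, once the DLL is built; note the C function above only sets the first element, so the other four values of the length-5 vector are whatever happened to be in the freshly allocated memory:]

```r
dyn.load("useCall3.dll")   # "useCall3.so" on Unix
v <- .Call("setInt")
is.integer(v)              # TRUE
v[1]                       # 7; elements 2..5 are left uninitialized by the C code
```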

thanks in advance
Michael


Re: [R] Wrapping long labels in barplot(2)

2005-04-14 Thread Jan P. Smit
Dear Tom,
Many thanks. I think this gets me in the right direction, but it
concatenates all levels into one long level. Any further thoughts?

Best regards,
Jan
Mulholland, Tom wrote:
This may not be the best way but in the past I think I have done something like
levels(x) <- paste(strwrap(levels(x),20,prefix = ""),collapse = "\n")
Tom
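
[Editorial note: wrapping each level separately avoids the concatenation Jan describes; a per-level variant of the one-liner above, untested, with x assumed to be the factor holding the labels:]

```r
levels(x) <- sapply(levels(x),
                    function(lev) paste(strwrap(lev, width = 20), collapse = "\n"))
```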

-Original Message-
From: Jan P. Smit [mailto:[EMAIL PROTECTED]
Sent: Thursday, 14 April 2005 11:48 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Wrapping long labels in barplot(2)
I am using barplot, and barplot2 in the gregmisc bundle, in the 
following way:

barplot2(sort(xtabs(expend / 1000 ~ theme)),
col = c(mdg7, mdg8, mdg3, mdg1), horiz = T, las = 1,
xlab = "$ '000", plot.grid = T)
The problem is that the values of 'theme', which is a factor, are in 
some cases rather long, so that I would like to wrap/split them at a 
space once they exceed, say, 20 characters. What I'm doing now is 
specifying names.arg manually with '\n' where I want the 
breaks, but I 
would like to automate the process.

I've looked for a solution using 'strwrap', but am not sure 
how to apply 
it in this situation.

Jan Smit
Consultant
Economic and Social Commission for Asia and the Pacific


Re: [R] LOCFIT: What's it doing?

2005-04-14 Thread Miguel A. Arranz
You should definitely read Loader's book. Anyway, in the meantime, you should
look at an introductory paper that you will find at the Locfit web page. I think
you can set Locfit to estimate at all the sample points (which it does not do
by default), and also to use a prespecified constant bandwidth, but notice
that its definition of the h parameter is not the standard one.

Hope this helps,

Miguel A. 

On Thursday 14 April 2005 10:47, Jacho-Chavez,DT  (pgr) wrote:
> Dear R-users,
>
> One of the main reasons I moved from GAUSS to R (as an econometrician) was
> because of the existence of the library LOCFIT for local polynomial
> regression. While doing some checking between my former `GAUSS code' and my
> new `R code', I came to realize LOCFIT is not quite doing what I want. I
> wrote the following example script:
>
> #--
>--- # Plain Vanilla NADARAYA-WATSON
> estimator (or Local Constant regression, e.g. deg=0) # with gaussian kernel
> & fixed bandwidth
>
> mkern<-function(y,x,h){
> Mx <- matrix(x,nrow=length(y),ncol=length(y),byrow=TRUE)
> Mxh <- (1/h)*dnorm((x-Mx)/h)
> Myxh<- (1/h)*y*dnorm((x-Mx)/h)
> yh <- rowMeans(Myxh)/rowMeans(Mxh)
> return(yh)
> }
>
> # Generating the design Y=m(x)+e
> n <- 10
> h <- 0.5
> x <- rnorm(n)
> y <- x + rnorm(n,mean=0,sd=0.5)
>
> # This is what I really want!
> mhat <- mkern(y,x,h)
>
> library(locfit)
> yhl.raw <-
> locfit(y~x,alpha=c(0,h),kern="gauss",ev="data",deg=0,link="ident")
>
> # This is what I get with LOCFIT
> print(cbind(x,mhat,residuals(yhl.raw,type="fit"),knots(yhl.raw,what="coef")
>))
> #--
>--
>
> Questions:
> 1) Why are residuals(.) & knots(.) results different from one another? If I
> want m^(x[i]) at each evaluation point i=1,...,n, which one should I use? I
> do not want interpolation whatsoever. 2) Why are they `close' but not equal
> to what I want?
>
> I can accept differences for higher degrees and multidimensional data at
> the boundary of the support (given the way we must do the regression in
> areas with sparse data) But why are these difference present for deg=0
> inside the support as well as at the boundary? The computer would still
> give us a result even with a close-to-zero random denominator (admittedly,
> not a reliable one). Unfortunately, I cannot get access to a copy of
> "Loader, C. (1999) Local Regression and Likelihood, Springer" from my local
> library, so a small explanation or advice would be greatly appreciated.
>
> I do not mind using an improved version of `what I want', but I would like
> to understand what I am doing.
>
>
> Thanks in advance for your help,
>
>
> David Jacho-Chávez
>


[R] Re: cross compiling R for Windows under Linux

2005-04-14 Thread Lars Schouw
I tried the latest beta of R 2.1 as well
(R-latest.tar.gz, 13-Apr-2005 17:27, 11.6M),

and tried out the two different mingw packages for
cross compilation from
http://www.stats.ox.ac.uk/pub/Rtools/

I still get the same error.

Regards
Lars


--- Lars Schouw <[EMAIL PROTECTED]> wrote:
> Hi
> 
> I tried to cross compile R under Linux but get an
> error.
> 
> i586-mingw32-gcc -isystem
> /home/schouwl/unpack/mingw/include -O2 -Wall
> -pedantic
> -I../include -I. -DHAVE_CONFIG_H -DR_DLL_BUILD  -c
> dynload.c -o dynload.o
> dynload.c: In function `R_loadLibrary':
> dynload.c:94: warning: implicit declaration of
> function `_controlfp'
> dynload.c:94: error: `_MCW_IC' undeclared (first use
> in this function)
> dynload.c:94: error: (Each undeclared identifier is
> reported only once
> dynload.c:94: error: for each function it appears
> in.)
> dynload.c:95: warning: implicit declaration of
> function `_clearfp'
> dynload.c:99: error: `_MCW_EM' undeclared (first use
> in this function)
> dynload.c:99: error: `_MCW_RC' undeclared (first use
> in this function)
> dynload.c:99: error: `_MCW_PC' undeclared (first use
> in this function)
> make[3]: *** [dynload.o] Error 1
> make[2]: *** [../../bin/R.dll] Error 2
> make[1]: *** [rbuild] Error 2
> make: *** [all] Error 2
> 
> 
> This is the same error that was reported on the mailing list
> before.
>
http://tolstoy.newcastle.edu.au/R/devel/04/12/1571.html
> 
> I have set the HEADER correctly in MkRules
> HEADER=/home/schouwl/unpack/mingw/include
> The file float.h is located in the 
> ../i586-mingw32/include/float.h 
> from there.
> 
> I am cross compiling R 2.0.1 source code.
> 
> Help would be appreciated..
> 
> Lars Schouw
> 


[R] LOCFIT: What's it doing?

2005-04-14 Thread Jacho-Chavez,DT (pgr)
Dear R-users,

One of the main reasons I moved from GAUSS to R (as an econometrician) was 
because of the existence of the library LOCFIT for local polynomial regression. 
While doing some checking between my former `GAUSS code' and my new `R code', I 
came to realize LOCFIT is not quite doing what I want. I wrote the following 
example script:

#-
# Plain Vanilla NADARAYA-WATSON estimator (or Local Constant regression, e.g. 
deg=0)
# with gaussian kernel & fixed bandwidth

mkern<-function(y,x,h){
Mx <- matrix(x,nrow=length(y),ncol=length(y),byrow=TRUE)
Mxh <- (1/h)*dnorm((x-Mx)/h)
Myxh<- (1/h)*y*dnorm((x-Mx)/h)
yh <- rowMeans(Myxh)/rowMeans(Mxh)
return(yh)
}

# Generating the design Y=m(x)+e
n <- 10
h <- 0.5
x <- rnorm(n)
y <- x + rnorm(n,mean=0,sd=0.5)

# This is what I really want!
mhat <- mkern(y,x,h)

library(locfit)
yhl.raw <- locfit(y~x,alpha=c(0,h),kern="gauss",ev="data",deg=0,link="ident")

# This is what I get with LOCFIT
print(cbind(x,mhat,residuals(yhl.raw,type="fit"),knots(yhl.raw,what="coef")))
#
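
[Editorial aside: as an independent cross-check, base R's ksmooth() also computes a local-constant (Nadaraya-Watson) fit. Continuing from the script above, and assuming ksmooth's documented convention of scaling the kernel so its quartiles sit at +/- 0.25*bandwidth:]

```r
# A gaussian sd of h corresponds roughly to bandwidth = h / 0.3706506 in ksmooth().
nw <- ksmooth(x, y, kernel = "normal", bandwidth = h / 0.3706506, x.points = x)
# ksmooth() returns results ordered by sorted x, so align before comparing:
cbind(sort(x), nw$y, mhat[order(x)])
```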

Questions:
1) Why are residuals(.) & knots(.) results different from one another? If I 
want m^(x[i]) at each evaluation point i=1,...,n, which one should I use? I do 
not want interpolation whatsoever.
2) Why are they `close' but not equal to what I want?

I can accept differences for higher degrees and multidimensional data at the 
boundary of the support (given the way we must do the regression in areas with 
sparse data) But why are these difference present for deg=0 inside the support 
as well as at the boundary? The computer would still give us a result even with 
a close-to-zero random denominator (admittedly, not a reliable one). 
Unfortunately, I cannot get access to a copy of "Loader, C. (1999) Local 
Regression and Likelihood, Springer" from my local library, so a small 
explanation or advice would be greatly appreciated.

I do not mind using an improved version of `what I want', but I would like to
understand what I am doing.


Thanks in advance for your help,


David Jacho-Chávez



[R] gnlr/3 question

2005-04-14 Thread Arnout Standaert
Hi list,
I'd like to fit generalized gamma and weibull distributions to a number 
of data sets. I've been searching around and found references to R and 
Jim Lindsey's GNLM package, which has the gnlr and gnlr3 procedures that 
can do this.

Now, I'm completely new to R, and I'm working my way through the 
introduction... Nevertheless, I'd like to ask if someone could post a 
straightforward example of the use of gnlr/gnlr3 for simply fitting 
distributions (so, basically, with a null model)...

Thanks in advance,
Arnout


[R] compiling & installing R devel version on Debian

2005-04-14 Thread Stefano Calza
Hi all.

I'm compiling the devel version of R on Debian GNU/Linux, and installing it 
into /usr/local tree (instead of default /usr). So:

./configure --prefix=/usr/local/
make
make install

Everything works fine, but when I start R I get the following error messages 
(translated from Italian, sorry):

Error in dyn.load(x,as.logical(local),as.logical(now)):
 impossible to load the shared library 
'/usr/lib/R/library/stats/libs/stats.so':
 libR.so:cannot open shared object file: No such file or directory
Error in dyn.load(x,as.logical(local),as.logical(now)):
 impossible to load the shared library 
'/usr/lib/R/library/methods/libs/methods.so':
 libR.so:cannot open shared object file: No such file or directory

and

package stats in options("defaultPackages") was not found
package methods in options("defaultPackages") was not found


Looks like it's look into the wrong directory tree. But why? Where am I going 
wrong?

TIA,

Stefano



[R] cross compiling R for Windows under Linux

2005-04-14 Thread Lars Schouw
Hi

I tried to cross compile R under Linux but get an
error.

i586-mingw32-gcc -isystem
/home/schouwl/unpack/mingw/include -O2 -Wall -pedantic
-I../include -I. -DHAVE_CONFIG_H -DR_DLL_BUILD  -c
dynload.c -o dynload.o
dynload.c: In function `R_loadLibrary':
dynload.c:94: warning: implicit declaration of
function `_controlfp'
dynload.c:94: error: `_MCW_IC' undeclared (first use
in this function)
dynload.c:94: error: (Each undeclared identifier is
reported only once
dynload.c:94: error: for each function it appears in.)
dynload.c:95: warning: implicit declaration of
function `_clearfp'
dynload.c:99: error: `_MCW_EM' undeclared (first use
in this function)
dynload.c:99: error: `_MCW_RC' undeclared (first use
in this function)
dynload.c:99: error: `_MCW_PC' undeclared (first use
in this function)
make[3]: *** [dynload.o] Error 1
make[2]: *** [../../bin/R.dll] Error 2
make[1]: *** [rbuild] Error 2
make: *** [all] Error 2


This is the same error that was reported on the mailing list
before.
http://tolstoy.newcastle.edu.au/R/devel/04/12/1571.html

I have set the HEADER correctly in MkRules
HEADER=/home/schouwl/unpack/mingw/include
The file float.h is located in the 
../i586-mingw32/include/float.h 
from there.

I am cross compiling R 2.0.1 source code.

Help would be appreciated..

Lars Schouw



Re: [R] pstoedit

2005-04-14 Thread Friedrich . Leisch
> On Wed, 13 Apr 2005 09:36:21 +0100 (BST),
> (Ted Harding) ((H) wrote:

  > On 13-Apr-05 Prof Brian Ripley wrote:
  >> On Wed, 13 Apr 2005, BORGULYA [iso-8859-2] Gábor wrote:
  >> 
  >>> Has onyone experience with "pstoedit"
  >>> (http://www.pstoedit.net/pstoedit)
  >>> to convert eps graphs generated by R on Linux to Windows
  >>> formats (WMF or EMF)? Does this way work? Is there an other,
  >>> better way?
  >> 
  >> You can only do that using pstoedit on Windows.
  >> ^^

  > Well, I have pstoedit on Linux and with

  >   pstoedit -f emf infile.eps outfile.emf

  > I get what is claimed to be "Enhanced Windows metafile"
  > and which can be imported into Word (though then it is
  > subsequently somewhat resistant to editing operations,
  > such as rotating if it's the wrong way up).

I always use

pstoedit -f xfig $1 $figfile
fig2dev -L emf $figfile $outfile

on Linux (Debian's pstoedit seems not to support emf). Doesn't work
for all graphics, but in most cases it does, and when it works I get
something I can fully edit in Word, i.e., I can change the text of
axis labels, move points etc.
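The two commands above can be wrapped in a small helper; a minimal sketch (the function name is illustrative, and it assumes pstoedit with the xfig driver and fig2dev from transfig are on the PATH):

```shell
# eps2emf: convert an .eps file to .emf via an intermediate xfig file,
# following the pstoedit + fig2dev route described above
eps2emf() {
  eps=$1
  figfile="${eps%.eps}.fig"   # intermediate xfig file
  outfile="${eps%.eps}.emf"   # final Enhanced Metafile
  pstoedit -f xfig "$eps" "$figfile" &&
    fig2dev -L emf "$figfile" "$outfile"
}
# usage: eps2emf myplot.eps   # writes myplot.fig and myplot.emf
```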

HTH,
Fritz

-- 
---
Friedrich Leisch 
Institut für Statistik Tel: (+43 1) 58801 10715
Technische Universität Wien    Fax: (+43 1) 58801 10798
Wiedner Hauptstraße 8-10/1071
A-1040 Wien, Austria http://www.ci.tuwien.ac.at/~leisch



[R] RE: Building R packages under Windows.

2005-04-14 Thread Duncan Golicher
Hello,
I feel a brief follow-up to my post a few days ago is in order. Thanks 
to Renaud Lancelot's truly excellent step-by-step guide I solved my own 
difficulties in building R packages for personal use. A short while 
after he sent me his document the reply was -"Voilà!  Je l'ai 
fonctionnant (enfin!). C'est facile!" ("There! I've got it working (at 
last!). It's easy!") My French is obviously quite hopeless, which is a 
tribute to the clarity of his explanation! However, I am still receiving 
a surprising number of off-list comments to the effect 
that package building is much more trouble than it is worth.

I'm now convinced that it's not, thanks to Renaud's very clear document.
The problem seems to lie in the fact that the "official" documentation 
for package construction is aimed at those providing serious "R 
extensions" for publication on CRAN.

That's fine, but packages are also very useful for personal use as a 
neat way of keeping your own stuff organised and documented. At the 
moment the overhead (time) for personal package construction 
superficially looks much higher than it need be, especially for users of 
R under Windows. Dare I say it, but all the saintly souls who dedicate 
their lives to R sometimes overlook lay users who have a(nother) life!

Is anyone working on a simple way of building packages, or explaining 
how to build packages? They would find a larger user base than they 
might suspect.
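For personal use the required layout really is small: R's own package.skeleton() will generate one from objects in your workspace, or you can lay the files out by hand. A minimal sketch (all names here are illustrative, not from the original post):

```shell
# Hand-built skeleton of a minimal personal R package
pkg=mytools
mkdir -p "$pkg/R" "$pkg/man"
cat > "$pkg/DESCRIPTION" <<'EOF'
Package: mytools
Version: 0.1
Title: Personal utility functions
Author: Your Name
Maintainer: Your Name <you@example.com>
Description: Odds and ends kept in one documented place.
License: GPL-2
EOF
echo 'hello <- function() cat("hello, world\n")' > "$pkg/R/hello.R"
# Then build and install for personal use (requires R; on Windows the
# spelling at the time was "Rcmd" plus the package-building tool chain):
#   R CMD build mytools
#   R CMD INSTALL mytools_0.1.tar.gz
ls "$pkg"
```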

Duncan Golicher
--
Dr Duncan Golicher
Ecologia y Sistematica Terrestre
Conservación de la Biodiversidad
El Colegio de la Frontera Sur
San Cristobal de Las Casas, Chiapas, Mexico
Tel. 967 1883 ext 1310
Celular 044 9671041021
[EMAIL PROTECTED]