Re: [R] controlling x-labels in xyplot (lattice) when x is POSIX object

2003-10-21 Thread Martin Maechler
> "Deepayan" == Deepayan Sarkar <[EMAIL PROTECTED]>
> on Mon, 20 Oct 2003 12:58:36 -0500 writes:

Deepayan> On Monday 20 October 2003 12:35,
Deepayan> [EMAIL PROTECTED] wrote:

   <.>

>> Previously, I used 'calculateAxisComponents' to massage
>> the labels manually but that function (which I realise
>> was internal to lattice) is no longer available.

Deepayan> It's still there, but not exported (in the
Deepayan> NAMESPACE sense). You will find it in the source,
Deepayan> and perhaps be able to use it to calculate your
Deepayan> own 'at' and 'labels' externally.

and you can now (R 1.8.x) use the ":::" operator to access internal
symbols, i.e., use the function as

lattice:::calculateAxisComponents(...)
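For the archives, a sketch of the externally-computed route (the data, tick positions and date format below are made up; 'scales' is the standard lattice argument for supplying your own 'at' and 'labels'):

```r
library(lattice)

x <- seq(as.POSIXct("2003-01-01"), by = "month", length.out = 12)
y <- rnorm(12)

## choose tick positions and format the labels yourself ...
at  <- x[c(1, 4, 7, 10)]
lab <- format(at, "%b %Y")

## ... then hand them to xyplot() via 'scales'
xyplot(y ~ x, scales = list(x = list(at = at, labels = lab)))
```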

Deepayan> <.>

Martin

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Jarque-Bera Test

2003-10-21 Thread Susana Bird
Dear all,
 
I have a question about using the Jarque-Bera test in R. The test is not
in my "ts" package and I cannot find any information in the help files.
Could you help me? Which package should I download, and from where, to
use the Jarque-Bera test?
 
Thank You,
Susan














RE: [R] Jarque-Bera Test

2003-10-21 Thread Pfaff, Bernhard
> Dear all,
>  
> I have a question about using the Jarque-Bera test in R.
> The test is not in my "ts" package and I cannot find any
> information in the help files. Could you help me? Which
> package should I download, and from where, to use the
> Jarque-Bera test?

see package "tseries"
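For the archives, a minimal sketch (assuming "tseries" has been installed from CRAN):

```r
## install.packages("tseries")   # once, from CRAN
library(tseries)

x <- rnorm(500)
jarque.bera.test(x)   # tests the null hypothesis of normality
```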


>  
> Thank You,
> Susan
> 
> 



The information contained herein is confidential and is intended solely for the
addressee. Access by any other party is unauthorised without the express
written permission of the sender. If you are not the intended recipient, please
contact the sender either via the company switchboard on +44 (0)20 7623 8000, or
via e-mail return. If you have received this e-mail in error or wish to read our
e-mail disclaimer statement and monitoring policy, please refer to 
http://www.drkw.com/disc/email/ or contact the sender.



[R] weighted correlations and NAs

2003-10-21 Thread tobias . verbeke




Dear list,

Is there a way to obtain a matrix of weighted
correlations in the presence of missing values?
cor() can deal with NAs, but cov.wt() apparently
cannot. Is there a package that offers such a
function, e.g. one that uses all complete pairs
of observations?
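For what it's worth, a do-it-yourself sketch (not a package function): build the weighted correlation matrix entry by entry, using only the complete pairs for each pair of columns:

```r
wcor.pairwise <- function(X, w) {
  p <- ncol(X)
  R <- diag(1, p)
  for (i in 1:(p - 1)) for (j in (i + 1):p) {
    ok <- complete.cases(X[, i], X[, j])     # complete pairs only
    wk <- w[ok] / sum(w[ok])                 # renormalised weights
    xi <- X[ok, i]; xj <- X[ok, j]
    mi <- sum(wk * xi); mj <- sum(wk * xj)   # weighted means
    cij <- sum(wk * (xi - mi) * (xj - mj))   # weighted covariance
    vii <- sum(wk * (xi - mi)^2)
    vjj <- sum(wk * (xj - mj)^2)
    R[i, j] <- R[j, i] <- cij / sqrt(vii * vjj)
  }
  R
}
```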

Thanks in advance,

Tobias



Re: [R] Running RMySQL with SuSE 8.2?

2003-10-21 Thread M.Kondrin
Barnet Wagman wrote:

Do you know where I can find a patched version, or where I can find the
patch and instructions on how to install it?  (I didn't see anything
about this on CRAN.)  I'm running R-base-1.8.0-1.i386.rpm (the most
recent binary available for SuSE) and it appears to have the
'valueClass' problem.

Thanks,

Barnet

David James wrote:

 

However, 
there is a problem in the released version of R 1.8.0 that affects
the DBI and other packages (has something to do with methods
that use the "valueClass" argument in the setGeneric/setMethod functions).
In this case one needs to use the R-patched version.

Hope this helps,

--
David
Barnet Wagman wrote:

   

Since there doesn't appear to be an RMySQL rpm for SuSE 8.*,  does 
anyone know if the 7.3 version will work with the SuSE 8.2 rpms of R and 
DBI?

The package installs without complaint, but when I try to run

  con <- dbConnect(dbDriver("MySQL"),dbname="test")

I get the error

  Error in dbConnect(dbDriver("MySQL")) : couldn't find function 
".valueClassTest"

(This is my first attempt to access an RDBMS from R, so I could be
doing something else wrong.)

Any ideas as to what might be generating this error, or as to
combinations of rpms that will work under SuSE 8.2, would be
appreciated. (I took a stab at compiling RMySQL from source, but I don't
have the MySQL sources installed and I'd rather not get involved in that
if I can avoid it.)

Thanks,

Barnet Wagman


There was a discussion about this just after R 1.8.0 was released. A
patched version can be found on CRAN (as usual), for example
ftp://ftp.stat.math.ethz.ch/Software/R/R-patched_2003-10-*. You can
either compile it from scratch or fix your existing installation by
adding the line export(.valueClassTest) to the end of the NAMESPACE file
of the methods package (this works for me).



Re: [R] Polynomial lags

2003-10-21 Thread Spencer Graves
Have you checked "www.r-project.org" -> search -> "R site search"?  I 
just got 15 hits for "polynomial lag".  If you haven't already tried 
this, I'd guess that some of these hits (though certainly not all) might 
help you. 

hope this helps.  spencer graves

Francisco Vergara wrote:

Does anybody know if there is a built-in function to create polynomial
distributed lags (sometimes called Almon lags) in linear models?

Thanks

Francisco
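For reference, the construction itself is easy to code in base R; a sketch (the function name and defaults are made up):

```r
## Almon idea: the q+1 lag coefficients are restricted to a polynomial
## of degree p in the lag index, so only p+1 regressors are needed.
almon.terms <- function(x, q = 4, p = 2) {
  n <- length(x)
  L <- sapply(0:q, function(k) c(rep(NA, k), x[1:(n - k)]))  # lag matrix
  B <- outer(0:q, 0:p, "^")   # polynomial basis in the lag index
  Z <- L %*% B                # the constructed regressors
  colnames(Z) <- paste("z", 0:p, sep = "")
  Z
}
## then fit, e.g., lm(y ~ almon.terms(x)) and recover the lag weights
## from the estimated polynomial coefficients
```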



Re: [R] run R under linux

2003-10-21 Thread Jason Turner
Zhen Pang wrote:
I want to do 200 simulations. I found that during the first 128
simulations some parameters may be NAs, since I use
if (abs(aold - anew) < 1e-5) { print(anew); break } to stop a single
estimation.
...
 > I want to know how to resume my program with the seeds saved,
continuing from the 130th simulation without a break. If possible, the
results of the first 128 simulations should be saved and combined with
the remaining simulations.
help(try)
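A sketch of how try() fits into such a batch job (one.sim() stands for your own estimation code, and the file name is made up):

```r
set.seed(42)                               # makes .Random.seed exist and the run reproducible
n.sim <- 200
results <- vector("list", n.sim)
seeds <- vector("list", n.sim)
for (i in 1:n.sim) {
  seeds[[i]] <- .Random.seed               # seed in force for run i
  res <- try(one.sim())                    # a failure no longer stops the loop
  results[[i]] <- if (inherits(res, "try-error")) NA else res
  save(results, seeds, file = "sim-state.RData")   # reload after a crash
}
```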

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]


[R] BEGINNER: please help me to write my VERY simple function

2003-10-21 Thread Michele Grassi
Hi.
1) I have two variables, e.g. a <- c(0,3,6,7,...) and
   b <- c(6,8,3,4,...).
I want to create a third vector z which contains the
interleaved pairs, z <- c(0,6,3,8,6,3,7,4,...), and so on
for each pair (a,b).
Is there a specific function?
How can I write my own function?

2) When I write a function and save it in a file such as
"function.R", I try to retrieve it with the source command.
I then get the error message "error in parse: syntax error
on line ...". When I apply deparse() I see incorrect
parsing: how can I avoid this?
Thanks.
Michele.



Re: [R] BEGINNER: please help me to write my VERY simple function

2003-10-21 Thread Jason Turner
Michele Grassi wrote:

Hi.
1) I have two variables, e.g. a <- c(0,3,6,7,...) and
   b <- c(6,8,3,4,...).
I want to create a third vector z which contains the
interleaved pairs, z <- c(0,6,3,8,6,3,7,4,...), and so on
for each pair (a,b).
Is there a specific function?
How can I write my own function?

For that, you don't need to.

help(expand.grid)


2) When I write a function and save it in a file such as
"function.R", I try to retrieve it with the source command.
I then get the error message "error in parse: syntax error
on line ...". When I apply deparse() I see incorrect
parsing: how can I avoid this?
I'm really not sure what you're trying to do with deparse here, but I
don't think it does what you meant.  The error message is what should be
attended to - fix the reported line with your favourite text editor.

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]


Re: [R] BEGINNER: please help me to write my VERY simple function

2003-10-21 Thread Peter Wolf
Michele Grassi wrote:

Hi.
1) I have two variables, e.g. a <- c(0,3,6,7,...) and
   b <- c(6,8,3,4,...).
I want to create a third vector z which contains the
interleaved pairs, z <- c(0,6,3,8,6,3,7,4,...), and so on
for each pair (a,b).
Is there a specific function?
How can I write my own function?

2) When I write a function and save it in a file such as
"function.R", I try to retrieve it with the source command.
I then get the error message "error in parse: syntax error
on line ...". When I apply deparse() I see incorrect
parsing: how can I avoid this?
Thanks.
Michele.


1) A simple solution to the problem. Define (for example):

gen.pairs <- function(x, y) {
  as.vector(rbind(x, y))
}

   Try:
 >  x <- 1:10
 >  y <- 11:20
 >  gen.pairs(x, y)
[1]  1 11  2 12  3 13  4 14  5 15  6 16  7 17  8 18  9 19 10 20

2) a) You can write the definition to a file and then source that file.
 If the file name is myfun.R,
 > source("myfun.R")
 will work, but only if there are no errors in the definition.
b) You can type the code of the definition at the R prompt ">".
c) You can type:
 > gen.pairs <- function(x, y) { z }
 and complete the definition using edit:
 > gen.pairs <- edit(gen.pairs)
 However, syntax errors are not allowed!

Peter



[R] Graphics overview

2003-10-21 Thread Christoph Bier
Hi,

is there a graphics overview where the graphics capabilities
of R are shown with the corresponding code? I already tried
'demo(graphics)' -- which isn't that comprehensive --
'demo(image)' and 'demo(lattice)', searched the mail archive,
googled, and the FAQ is silent, too.
   For example, I know how a particular graphic I need should
look, but I don't know how to realise it. I don't even
know how to describe it =).
   Another, much simpler example (I hope): I want to put
the sums of the values above the columns of a plot. Like this:

|   3
|  2_
|  _   | |
| | |  | |
|_|_|__|_|__
   AB
   RTMFs are welcome =/. But I read 'help(plot)' (plot is
what I actually use for the graphic above¹) and 'help(par)',
searched my introduction to S and S-Plus, and I'm still waiting
for "Introductory Statistics with R" (P. Dalgaard), which is
not deliverable at the moment.

TIA

Regards,

Christoph

___
¹ Data is a data.frame with A and B being the sums of the 
characteristic values (not numeric) of one variable.
--
Christoph Bier, Dipl.Oecotroph., Email: [EMAIL PROTECTED]
Universitaet Kassel, FG Oekologische Lebensmittelqualitaet und
Ernaehrungskultur \\ Postfach 12 52 \\ 37202 Witzenhausen
Tel.: +49 (0) 55 42 / 98 -17 21, Fax: -17 13



Re: [R] Graphics overview

2003-10-21 Thread Prof Brian Ripley
On Tue, 21 Oct 2003, Christoph Bier wrote:

> Hi,
> 
> is there a graphics overview where the graphics capabilities 
> of R are shown with the corresponding code? I already tried 
> 'demo(graphics)' -- which isn't that comprehensive -- 
> 'demo(image)' and 'demo(lattice)', searched the mail archive, 
> googled, and the FAQ is silent, too.
> For example, I know how a particular graphic I need should 
> look, but I don't know how to realise it. I don't even 
> know how to describe it =).

Chapter 4 of MASS (the book) is a pretty comprehensive set of examples, 
but given that there are lots of plots associated with e.g. multivariate 
analysis (try chapter 11 of MASS) and time series (try chapter 14 of MASS) 
the scope is enormous.

> Another, much simpler example (I hope): I want to put the 
> sums of the values above the columns of a plot. Like this:
> 
> |   3
> |  2_
> |  _   | |
> | | |  | |
> |_|_|__|_|__
> AB
> 
> RTMFs are welcome =/. But I read 'help(plot)' (plot is 
> what I actually use for the graphic above¹) and 'help(par)', 

It looks like a barplot to me.

> searched my introduction to S and S-Plus and I'm still waiting 
> for "Introductory Statistics with R" (P. Dalgaard), that is 
> not deliverable at the moment.

There is an example of that in the MASS package script ch04.R
The means to do it are described in `An Introduction to R' (and 
elsewhere).

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



[R] report generator a la epiinfo

2003-10-21 Thread Lucas Gonzalez Santa Cruz
Hi

I'd like to use R in epidemiology and disease surveillance.

In EpiInfo you can have a script (.pgm) which calls a predefined report
(.rpt), where a table is calculated and values picked from that table
and placed where the author of the report wants them, with text around
those values. (Please see example below.)

I've looked at the manuals, the FAQ, the mail archive and Google. The
closest match is an "R Report Generator" email that apparently wasn't
followed up after a couple of years.

##The script might have something like this:
read.epiinfo("oswego.rec")
report("oswego.rpt", output="oswego.txt")

##The predefined report might have this:
#{ill}
Exactly {"YES"} people fell ill, and {"NO"} people didn't.
We don't know about the remaining [({}-{"YES"}-{"NO"})*100/{}] percent.
#{icecream ill}
We are specifically interested in the number of people who chose vanilla
and didn't fall ill (all {"VANILLA", "YES"} of them).

Is there any way to do this with R? Any direction I should look into?

Thanks in advance.

Lucas



[R] png() and/or jpeg(): line missing by using box(which="outer")

2003-10-21 Thread Pfaff, Bernhard
Dear R list,

I encounter the following problem when generating either a png file
(example below) or a jpeg file:
when 'box(which="outer")' is employed, a box is drawn, except for the
right-hand line. If I generate the plot without 'box(which="outer")', a
line at the bottom of the graphics file still appears. However, both
plots are displayed correctly in the R graphics device window, i.e. with
a box including the right side, or without any lines at the outer
margins of the plot. I want a file that either includes the right side
of the box or has no lines on any side.

test <- rnorm(100)
par(mar=c(6,4,6,4), oma=c(1,1,1,1))
png("test1.png")
plot(test)
grid()
box(which="outer")
box(which="plot")
dev.off()

png("test2.png")
plot(test)
grid()
box(which="plot")
dev.off()

Incidentally, both functions call .Internal(devga()). I did not
encounter this problem with R 1.7.1 (for which I used the binary
distribution on CRAN); now I have compiled R 1.8.0 from source. Although
everything passed 'make check', I wonder whether 'devga.c' or another
file needed by png() or jpeg() was not compiled 'correctly', or whether
I simply have to adjust a par() argument.

Any pointers or help is appreciated.
 

Bernhard


platform: "i386-pc-mingw32"
arch: "i386"
os: "mingw32"
system: "i386, mingw32"
major: "1"
minor: "8.0"
Windows NT 5.0







RE: [R] BEGINNER: please help me to write my VERY simple functi

2003-10-21 Thread Ted Harding
On 21-Oct-03 Michele Grassi wrote:
> Hi.
> 1) I have two variables, e.g. a <- c(0,3,6,7,...) and
>    b <- c(6,8,3,4,...).
> I want to create a third vector z which contains the 
> interleaved pairs, z <- c(0,6,3,8,6,3,7,4,...), and so on 
> for each pair (a,b).
> Is there a specific function?
> How can I write my own function?

The following will do what you want, though I don't know whether
there is a simpler way to do it.

  z <- rbind(a,b) ; z <- as.vector(z)

For example:

> a <- c(1,3,5,7) ; b <- c(2,4,6,8)
> z <- rbind(a,b) ; z <- as.vector(z) ; z
[1] 1 2 3 4 5 6 7 8

Best wishes,
Ted.




E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 167 1972
Date: 21-Oct-03   Time: 11:27:02
-- XFMail --



Re: [R] Graphics overview

2003-10-21 Thread Jonathan Baron
On 10/21/03 12:22, Christoph Bier wrote:
>Hi,
>
>is there a graphics overview where the graphics capabilities 
>of R are shown with the corresponding code?

A very elementary overview like this is in our "Notes on R for
psychology experiments and questionnaires," in CRAN "contributed
documents" and in my R page below.  We expanded it a bit from the
even-more elementary version that was there before August.
-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page:http://www.sas.upenn.edu/~baron
R page:   http://finzi.psych.upenn.edu/



Re: [R] png() and/or jpeg(): line missing by using box(which="outer")

2003-10-21 Thread Prof Brian D Ripley
It is probably a bug: does it happen when you copy from the screen to
png? I would expect not, hence that may be a workaround for you.

When I have both time and access to a Windows machine I may be able to
take a closer look; meanwhile, you do have access and have the source
code, so please investigate it yourself and submit a patch.

On Tue, 21 Oct 2003, Pfaff, Bernhard wrote:

> Dear R list,
>
> I encounter the following problem when generating either a png file
> (example below) or a jpeg file:
> when 'box(which="outer")' is employed, a box is drawn, except for the
> right-hand line. If I generate the plot without 'box(which="outer")', a
> line at the bottom of the graphics file still appears. However, both
> plots are displayed correctly in the R graphics device window, i.e. with
> a box including the right side, or without any lines at the outer
> margins of the plot. I want a file that either includes the right side
> of the box or has no lines on any side.
>
> test <- rnorm(100)
> par(mar=c(6,4,6,4), oma=c(1,1,1,1))
> png("test1.png")
> plot(test)
> grid()
> box(which="outer")
> box(which="plot")
> dev.off()
>
> png("test2.png")
> plot(test)
> grid()
> box(which="plot")
> dev.off()
>
> Incidentally, both functions call .Internal(devga()). I did not
> encounter this problem with R 1.7.1 (for which I used the binary
> distribution on CRAN); now I have compiled R 1.8.0 from source. Although
> everything passed 'make check', I wonder whether 'devga.c' or another
> file needed by png() or jpeg() was not compiled 'correctly', or whether
> I simply have to adjust a par() argument.
>
> Any pointers or help is appreciated.
>
>
> Bernhard
>
>
> platform: "i386-pc-mingw32"
> arch: "i386"
> os: "mingw32"
> system: "i386, mingw32"
> major: "1"
> minor: "8.0"
> Windows NT 5.0
>
>
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272860 (secr)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] run R under linux

2003-10-21 Thread Zhen Pang
We are not allowed to submit jobs directly, so I never type R to use it
interactively; I just run batch jobs. How can I use try() to fix my
code? In interactive mode I know how to continue, but since I never
enter the R window, where do I find my results and the saved seed to
continue?



From: Jason Turner <[EMAIL PROTECTED]>
To: Zhen Pang <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]
Subject: Re: [R] run R under linux
Date: Tue, 21 Oct 2003 22:17:59 +1300
Zhen Pang wrote:
I want to do 200 simulations. I found that during the first 128
simulations some parameters may be NAs, since I use
if (abs(aold - anew) < 1e-5) { print(anew); break } to stop a single
estimation.
...
 > I want to know how to resume my program with the seeds saved,
continuing from the 130th simulation without a break. If possible, the
results of the first 128 simulations should be saved and combined with
the remaining simulations.
help(try)

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]


Re: [R] Strange behaviour

2003-10-21 Thread Vittorio
Paul Murrell [r-help] <20/10/03 09:13 +1300>:
> Hi
>. 
> The "nasty rectangles" are the output of the layout.show() function. 
> This function draws a simple diagram (consisting of nasty rectangles) to 
> indicate the regions that a call to layout() has set up.  It is designed 
> to help users to understand what on earth the layout() function is 
> doing.  (It is NOT a necessary part of setting up an arrangement of 
> plots using the layout() function.)
> 
> I suspect that the author of "simpleR" may have accidentally left the 
> layout.show() call in simple.scatterplot() when copying the example from 
> the layout() help file (apologies to John Verzani if this is an unfair 
> diagnosis).
> 
> So the immediate solution to your problem is to remove the line ...
> 
> layout.show(nf)
> 
> ... from simple.scatterplot().  The output should then be a single page 
> which should "include" ok in latex.
> 
> The larger problem of how to get at individual pages of output is 
> probably best solved using something like the "onefile" argument to 
> devices.  For example, look at the files produced by ...
> 
> pdf(onefile=FALSE)
> example(layout)
> 
> ... and at the help page for pdf() to see more about how to do this.
> 
> Hope that helps
>...

Yes, Paul, definitely it helps. Thanks!

I obtained what I wanted. 

Now I want to control the output of the pdf() command, making it write
to a specific file chosen by me rather than by the system. After reading
the help page for pdf(), I was unable to do it.

E.g. I issued
 
onefile<-FALSE
pdf(file=ifelse(onefile,,"vic.pdf")
example(layout)


And I obtained a 5-page vic.pdf with pages 1-4 full of "nasty
rectangles" of every kind and page 5 with the right picture.

Please help

Ciao from Rome - Vittorio



[R] Type III Sum of Squares Calculation

2003-10-21 Thread Subramanian Karthikeyan
HI All:

Can anyone give me the formulae/steps for calculating the Type III sums
of squares for an unbalanced two-way ANOVA design?  E.g. we are looking
at 8 treatments x 4 doses, with unequal numbers of replicates within the
groups.  I really need the stepwise calculation, as I would like to put
it in my own code (possibly in Visual Basic) to automate the task.

Thanks very much.

Karth.
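For the archives: within R itself, one common recipe (a sketch, assuming a hypothetical data frame 'dat' with factors 'treat' and 'dose') is to fit with sum-to-zero contrasts and then drop each term from the full model:

```r
options(contrasts = c("contr.sum", "contr.poly"))  # sum-to-zero contrasts
fit <- aov(y ~ treat * dose, data = dat)
drop1(fit, . ~ ., test = "F")   # F-test for each term dropped from the full model
```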



Re: [R] Strange behaviour

2003-10-21 Thread Prof Brian Ripley
On Tue, 21 Oct 2003, Vittorio wrote:

> Paul Murrell [r-help] <20/10/03 09:13 +1300>:
> > Hi
> >. 
> > The "nasty rectangles" are the output of the layout.show() function. 
> > This function draws a simple diagram (consisting of nasty rectangles) to 
> > indicate the regions that a call to layout() has set up.  It is designed 
> > to help users to understand what on earth the layout() function is 
> > doing.  (It is NOT a necessary part of setting up an arrangement of 
> > plots using the layout() function.)
> > 
> > I suspect that the author of "simpleR" may have accidentally left the 
> > layout.show() call in simple.scatterplot() when copying the example from 
> > the layout() help file (apologies to John Verzani if this is an unfair 
> > diagnosis).
> > 
> > So the immediate solution to your problem is to remove the line ...
> > 
> > layout.show(nf)
> > 
> > ... from simple.scatterplot().  The output should then be a single page 
> > which should "include" ok in latex.
> > 
> > The larger problem of how to get at individual pages of output is 
> > probably best solved using something like the "onefile" argument to 
> > devices.  For example, look at the files produced by ...
> > 
> > pdf(onefile=FALSE)
> > example(layout)
> > 
> > ... and at the help page for pdf() to see more about how to do this.
> > 
> > Hope that helps
> >...
> 
> Yes, Paul, definitely it helps. Thanks!
> 
> I obtained what I wanted. 
> 
> Now, I want to control the output of the pdf() command making it write
> a specific file chosen by me and not the system. After reading the
> help page for the pdf, I was unable to do it.
> 
> E.g. I issued
>  
> onefile<-FALSE
> pdf(file=ifelse(onefile,,"vic.pdf")
That's an error: it has a missing argument and a missing parenthesis.
> example(layout)

Note:

1) onefile is no longer set as an argument to pdf().

2) When you set onefile=FALSE, you will only get the last plot in your
file unless you give a file name of the type described on the help page.

3) *You* plotted the `nasty rectangles', so why are you surprised you got 
them in the file?  If you don't want them, don't plot them!

> And I obtained a 5-page vic.pdf with pages 1-4 full of "nasty
> rectangles" of every kind and page 5 with the right picture.
> 
> Please help

Please follow more carefully the help you have already been given.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



RE: [R] report generator a la epiinfo

2003-10-21 Thread Christophe Declercq
Hi, Lucas

You should try Sweave in the 'tools' package (see
http://www.ci.tuwien.ac.at/~leisch/Sweave/).

You will have to get a TeX/LaTeX distribution and learn a little of LaTeX
but it is worth the effort.

I frequently use R with Sweave on EpiData files (http://www.epidata.dk/)
with great success.
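A minimal Sweave file in the spirit of the EpiInfo example might look like this (the field name ILL and the file names are hypothetical; read.epiinfo() is in the foreign package):

```
\documentclass{article}
\begin{document}
<<echo=FALSE>>=
library(foreign)
oswego <- read.epiinfo("oswego.rec")
n.ill <- sum(oswego$ILL == "YES", na.rm = TRUE)   # ILL is a made-up field
@
Exactly \Sexpr{n.ill} people fell ill.
\end{document}
```

Running Sweave("report.Rnw") and then LaTeX on the result fills in the value, much like the .rpt placeholders.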

Hope it helps.

Christophe
--
Christophe DECLERCQ, MD
Observatoire Régional de la Santé Nord-Pas-de-Calais
13, rue Faidherbe 59046 LILLE Cedex FRANCE
Phone +33 3 20 15 49 24
Fax   +33 3 20 55 92 30
E-mail [EMAIL PROTECTED]


> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] on behalf of Lucas Gonzalez
> Santa Cruz
> Sent: Tuesday, 21 October 2003 11:55
> To: [EMAIL PROTECTED]
> Subject: [R] report generator a la epiinfo
>
>
> Hi
>
> I'd like to use R in epidemiology and disease surveillance.
>
> In EpiInfo you can have a script (.pgm) which calls a predefined report
> (.rpt), where a table is calculated and values picked from that table
> and placed where the author of the report wants them, with text around
> those values. (Please see example below.)
>
> I've looked at the manuals, the FAQ, the mail archive and Google. The
> closest match is an "R Report Generator" email that apparently wasn't
> followed up after a couple of years.
>
> ##The script might have something like this:
> read.epiinfo("oswego.rec")
> report("oswego.rpt", output="oswego.txt")
>
> ##The predefined report might have this:
> #{ill}
> Exactly {"YES"} people fell ill, and {"NO"} people didn't.
> We don't know about the remaining [({}-{"YES"}-{"NO"})*100/{}] percent.
> #{icecream ill}
> We are specifically interested in the number of people who chose vanilla
> and didn't fall ill (all {"VANILLA", "YES"} of them).
>
> Is there any way to do this with R? Any direction I should look into?
>
> Thanks in advance.
>
> Lucas
>



Re: [R] Graphics overview

2003-10-21 Thread Christoph Bier
Prof Brian Ripley wrote:

Chapter 4 of MASS (the book) is a pretty comprehensive set of examples, 
but given that there are lots of plots associated with e.g. multivariate 
analysis (try chapter 11 of MASS) and time series (try chapter 14 of MASS) 
the scope is enormous.
I have just ordered it from our library.


   Another example, much more simpler (I hope): I want to get 
the sum of the values in a plot above the columns. Like this:

|   3
|  2_
|  _   | |
| | |  | |
|_|_|__|_|__
   AB
   RTMFs are welcome =/. But I read 'help(plot)' (plot is 
what I actually use for the graphic above¹) and 'help(par)', 


It looks like a barplot to me.
It's realised (without the sums of the values above the 
columns) via

> attach(data.frame)
> plot(variable.from.data.frame)
...

searched my introduction to S and S-Plus and I'm still waiting 
for "Introductory Statistics with R" (P. Dalgaard), that is 
not deliverable at the moment.


There is an example of that in the MASS package script ch04.R
The means to do it are described in `An Introduction to R' (and 
elsewhere).
I had a look at this script on my machine and I have a printed 
version of "An Introduction to R". Maybe I'll find out what to do 
with such scripts.

Thanks for your answer!

Best regards,

Christoph
--
Christoph Bier, Dipl.Oecotroph., Email: [EMAIL PROTECTED]
Universitaet Kassel, FG Oekologische Lebensmittelqualitaet und
Ernaehrungskultur \\ Postfach 12 52 \\ 37202 Witzenhausen
Tel.: +49 (0) 55 42 / 98 -17 21, Fax: -17 13


Re: [R] Graphics overview

2003-10-21 Thread Peter Dalgaard
Prof Brian Ripley <[EMAIL PROTECTED]> writes:

> On Tue, 21 Oct 2003, Christoph Bier wrote:
> 
> > Hi,
> > 
> > is there a graphics overview where the graphics capabilities 
> > of R are shown with the corresponding code? I already tried 
> > 'demo(graphics)' -- which isn't that comprehensive -- 
> > 'demo(image)' and 'demo(lattice)', searched the mail archive, 
> > googled, and the FAQ is silent, too.
> > For example, I know how a particular graphic I need should 
> > look, but I don't know how to realise it. I don't even 
> > know how to describe it =).
> 
> Chapter 4 of MASS (the book) is a pretty comprehensive set of examples, 
> but given that there are lots of plots associated with e.g. multivariate 
> analysis (try chapter 11 of MASS) and time series (try chapter 14 of MASS) 
> the scope is enormous.
> 
> > Another example, much simpler (I hope): I want to get 
> > the sum of the values in a plot above the columns. Like this:
> > 
> > |   3
> > |  2_
> > |  _   | |
> > | | |  | |
> > |_|_|__|_|__
> > AB
> > 
> > RTMFs are welcome =/. But I read 'help(plot)' (plot is 
> > what I actually use for the graphic above¹) and 'help(par)', 

(Read the muckin' *what*?? ;-) )
 
> It looks like a barplot to me.
> 
> > searched my introduction to S and S-Plus and I'm still waiting 
> > for "Introductory Statistics with R" (P. Dalgaard), that is 
> > not deliverable at the moment.

That book tries rather hard to show only the basic procedure and
not to do fancy things, so this is not explicitly covered in there. It
does describe barplot() and text(), though. (Odd, BTW, www.springer.de
says it ships within 3 days).
 
> There is an example of that in the MASS package script ch04.R
> The means to do it are described in `An Introduction to R' (and 
> elsewhere).

Also, try 

par(ask=T); example(barplot)

The fourth example is fairly close to what you want to do (colSums
instead of colMeans should place the numbers at the end of the
columns). 
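A minimal sketch of that idea with hypothetical counts (the bar heights and labels here are made up; the key point is that barplot() returns the bar midpoints, which text() can then use):

```r
# Hypothetical counts matching the ASCII sketch: column sums 2 and 3
counts <- c(A = 2, B = 3)
mids <- barplot(counts, ylim = c(0, max(counts) + 1))  # barplot() returns bar midpoints
text(mids, counts, labels = counts, pos = 3)           # place each sum above its bar
```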

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Graphics overview

2003-10-21 Thread Christoph Bier
Jonathan Baron wrote:
On 10/21/03 12:22, Christoph Bier wrote:

Hi,

is there a graphics overview where the graphics capabilities 
of R are shown with the corresponding code?


A very elementary overview like this is in our "Notes on R for
psychology experiments and questionnaires," in CRAN "contributed
documents" and in my R page below.  We expanded it a bit from the
even-more elementary version that was there before August.
I can't find any graphics either in the document or on your 
webpage. Maybe a misunderstanding about what I'm looking for.

Thanks anyway!

Best regards,

Christoph
--
Christoph Bier, Dipl.Oecotroph., Email: [EMAIL PROTECTED]
Universitaet Kassel, FG Oekologische Lebensmittelqualitaet und
Ernaehrungskultur \\ Postfach 12 52 \\ 37202 Witzenhausen
Tel.: +49 (0) 55 42 / 98 -17 21, Fax: -17 13
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R and Arcgis through VBA

2003-10-21 Thread Christophe Saint-Jean
Dear R experts,
 I am trying to use R with ArcGIS Desktop 8.1.
 When I try to add a "StatConnectorGraphicsDevice" control to my form, VBA returns an 
unspecified error and nothing else.
 Does anybody have successful experience with ArcGIS and R ?
Thanks,
Christophe Saint-Jean.
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Graphics overview

2003-10-21 Thread Christoph Bier
Peter Dalgaard wrote:
On Tue, 21 Oct 2003, Christoph Bier wrote:
[...]

   RTMFs are welcome =/. But I read 'help(plot)' (plot is 
what I actually use for the graphic above¹) and 'help(par)', 


(Read the muckin' *what*?? ;-) )
Oops :-D

[...]

(Odd, BTW, www.springer.de says it ships within 3 days).
And our local book store said that it's not deliverable. So
my colleague tried www.amazon.de, which says it ships within
11--12 days. We are still waiting ...
par(ask=T); example(barplot)
Nice!

The fourth example is fairly close to what you want to do (colSums
instead of colMeans should place the numbers at the end of the
columns). 
Yes, it is, thanks! But it seems to work only with arrays such
as VADeaths. I don't have an array but a data.frame. And mainly
I'm a naive newbie who gets more confused the more he wants
from R =(.
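If the data are in a data frame rather than an array, one common route (a sketch with a hypothetical factor column) is to tabulate the factor first; table() returns exactly the kind of object barplot() accepts:

```r
# Hypothetical data frame with a factor column; table() produces the
# array-like counts that barplot() expects
df <- data.frame(site = factor(c("A", "A", "B", "B", "B")))
counts <- table(df$site)
mids <- barplot(counts, ylim = c(0, max(counts) + 1))
text(mids, counts, labels = counts, pos = 3)  # sums above the columns
```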
Best regards,

Christoph
--
Christoph Bier, Dipl.Oecotroph., Email: [EMAIL PROTECTED]
Universitaet Kassel, FG Oekologische Lebensmittelqualitaet und
Ernaehrungskultur \\ Postfach 12 52 \\ 37202 Witzenhausen
Tel.: +49 (0) 55 42 / 98 -17 21, Fax: -17 13
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Graphics overview

2003-10-21 Thread Jonathan Baron
On 10/21/03 14:38, Christoph Bier wrote:
>Jonathan Baron schrieb:
>> A very elementary overview like this is in our "Notes on R for
>> psychology experiments and questionnaires," in CRAN "contributed
>> documents" and in my R page below.  We expanded it a bit from the
>> even-more elementary version that was there before August.
>
>I can't find any graphics neither in the document nor on your 
>webpage. Maybe a missunderstanding what I'm looking for.

Perhaps.  I'm sorry.  I was referring to the chapter on
graphics.  Specifically
http://www.psych.upenn.edu/~baron/rpsych/rpsych.html#SECTION0006

Although this isn't what you wanted, it might be useful to
someone else who wants a "graphics overview" (the title of your
post).

Jon
-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page:http://www.sas.upenn.edu/~baron
R page:   http://finzi.psych.upenn.edu/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] nnet behaving oddly

2003-10-21 Thread Liaw, Andy
> From: Rajarshi Guha [mailto:[EMAIL PROTECTED] 
> 
> Hi,
>   I was trying to use the nnet library and am not sure of 
> whats going on. I am calling the nnet function as:
> 
>  n <- nnet(x,y,size=3,subset=sets[[1]], maxit=200)

Please give us output of something like:

str(x)
summary(y)

Also, I believe the subset argument is only meant for calls via 
formula; e.g.,

nnet(y ~ x, ...)

and needs to be a vector of logicals the same length as the number
of rows in x, indicating which rows to include in the fitting.
Please also tell us what sets[[1]] is.
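One further guess, not stated in this thread: nnet() uses a logistic output unit by default, so fitted values are bounded in (0, 1); for a numeric response on an arbitrary scale you would need linout = TRUE. A sketch with simulated stand-in data:

```r
library(nnet)
set.seed(1)
x <- matrix(runif(272 * 4), ncol = 4)       # stand-in for the 272 x 4 observations
y <- 150 * x[, 1] + rnorm(272)              # numeric target, roughly 0..150
fit <- nnet(x, y, size = 3, linout = TRUE,  # linear output unit for regression
            maxit = 200, trace = FALSE)
range(fitted(fit))                          # no longer pinned at 1
```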

Andy
 
> Where x is a 272x4 matrix of observations (examples) and y is 
> a 272x1 matrix of target values. However when I look at 
> nnet$residuals they are off by two orders of magnitude 
> (compared to the output from neural network code that I 
> already have). Looking at nnet$fitted.values shows all the 
> values to be 1 (whereas my target values range from 0 to 150).
> 
> Am I making an obvious mistake in the way I'm calling the 
> function? Is the fact that n$fitted.values is all 1's 
> indicating that the NN is doing a classification? If so how 
> can I make it do quantitation?
> 
> The man page mentions that if the response is a factor then 
> it defaults to classification. However my y matrix just contains 
> numbers - so it shouldn't be doing classification.
> 
> Any pointers would be appreciated.
> 
> Thanks,
> 
> ---
> Rajarshi Guha <[EMAIL PROTECTED]> 
> GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
> ---
> Psychology is merely producing habits out of rats.
> 
> __
> [EMAIL PROTECTED] mailing list 
> https://www.stat.math.ethz.ch/mailman/listinfo> /r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Type III Sum of Squares Calculation

2003-10-21 Thread John Fox
Dear Karth,

The Anova function in the car package can calculate "type-III" sums of 
squares, though it doesn't do so by default. Be careful with the contrast 
coding or you will get nonsense (and you might want to think about whether 
you really want type-III SSs).

I hope that this helps,
 John
At 08:01 AM 10/21/2003 -0400, Subramanian Karthikeyan wrote:
HI All:

Can anyone give me the formulae/steps for calculating the type III sum of
squares for an unbalanced 2-way ANOVA design?  Eg. we are looking at 8
treatments x 4 doses, with unequal numbers of replications within the
groups.  I really need the stepwise calculation, as I would try to put it
in my own code (possibly in Visual Basic) to automate the task.
-
John Fox
Department of Sociology
McMaster University
Hamilton, Ontario, Canada L8S 4M4
email: [EMAIL PROTECTED]
phone: 905-525-9140x23604
web: www.socsci.mcmaster.ca/jfox
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R and Arcgis through VBA

2003-10-21 Thread Roger Bivand
Christophe:

I suggest moving this discussion from r-help to r-sig-geo, there will be a 
higher density of people there who may have had successful experiences. 
There are so many different configuration issues that might impact this, 
that r-help is too broad a forum. I have also CC-ed this to the R(D)COM 
list, which is also relevant. Can you first confirm that you can use the 
StatConnector with say Excel?

Roger

On Tue, 21 Oct 2003, Christophe Saint-Jean wrote:

> Dear R experts,
>   I am trying to use R with Arcgis Desktop 8.1.
>   When i try to add a "StatConnectorGraphicsDevice" control to my form,
> VBA returns an specified error and nothing else.
>   Does anybody has a successful experience with Arcgis and R ?
> Thanks,
> Christophe Saint-Jean.
> 
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Indicator Kriging

2003-10-21 Thread Kenneth Cabrera
Hi R-Users

Is there any package in R that does indicator kriging?
Or is somebody working on a link between the GSLIB FORTRAN
library and R?
Thank you very much for your help.
--
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Lines between coordinates

2003-10-21 Thread GWIGGNER Claus-Peter (EXT)
Hello,

Given x1, ..., xn and y1, ..., yn I'd like to draw n lines between xi,yi.
The xi, yi should be 2-D coordinates.
 
What is an elegant solution?
Thanks.



This message and any files transmitted with it are legally p...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Lines between coordinates

2003-10-21 Thread Liaw, Andy
I suspect you are looking for segments().
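A sketch with made-up endpoint vectors; segments() is vectorized, so one call draws all n line segments:

```r
# Hypothetical endpoints: segment i runs from (x0[i], y0[i]) to (x1[i], y1[i])
x0 <- c(1, 2, 4); y0 <- c(1, 3, 2)
x1 <- c(3, 5, 6); y1 <- c(2, 1, 4)
plot(range(x0, x1), range(y0, y1), type = "n", xlab = "x", ylab = "y")
segments(x0, y0, x1, y1)  # draws all n segments in one vectorized call
```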

Andy

> From: GWIGGNER Claus-Peter (EXT) 
> 
> Hello,
> 
> Given x1, ..., xn and y1, ..., yn I'd like to draw n lines 
> between xi,yi. The xi, yi should be 2-D coordinates.
>  
> What is an elegant solution?
> Thanks.
> 
> 
> 
> This message and any files transmitted with it are legally 
> p...{{dropped}}
> 
> __
> [EMAIL PROTECTED] mailing list 
> https://www.stat.math.ethz.ch/mailman/listinfo> /r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Polynomial lags

2003-10-21 Thread Thomas Lumley
On Tue, 21 Oct 2003, Spencer Graves wrote:

> Have you checked "www.r-project.org" -> search -> "R site search"?  I
> just got 15 hits for "polynomial lag".  If you haven't already tried
> this, I'd guess that some of these hits (though certainly not all) might
> help you.
>

Only one of these is about polynomial distributed lag models, and it's an
apparently unsuccessful request for information.

I have code
http://faculty.washington.edu/tlumley/pdl.R
and documentation
http://faculty.washington.edu/tlumley/pdlglm.html
for a version for generalised linear models.

Note that I have not used this for some years, so it may run into problems
with changes in R.  Also note that it is just a generalised linear model
-- it doesn't do anything about residual autocorrelation (which isn't a
problem in the application I was working on).


-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Polynomial lags

2003-10-21 Thread Roger Koenker
For what it is worth, I would have thought that expressing
the lag coefficients in a B-spline expansion would be preferable
to going back to Almon approach. This would give a relatively
simple lm() application.
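A rough sketch of that suggestion with simulated data (all names and settings here are illustrative): build a matrix of lagged regressors, then constrain the lag coefficients to a B-spline basis so the fit reduces to an ordinary lm():

```r
library(splines)
set.seed(1)
n <- 200; L <- 12                                  # hypothetical length and max lag
x <- rnorm(n + L)
lagmat <- sapply(0:L, function(k) x[(L + 1 - k):(n + L - k)])  # columns = lags 0..L
w <- dnorm(0:L, mean = 4, sd = 2)                  # smooth "true" lag weights
y <- drop(lagmat %*% w) + rnorm(n)
B <- bs(0:L, df = 4, intercept = TRUE)             # B-spline basis over the lag index
Z <- lagmat %*% B                                  # lag coefficients constrained to basis
fit <- lm(y ~ Z - 1)                               # plain least squares on the reduced design
lagcoef <- drop(B %*% coef(fit))                   # implied smooth lag coefficients
```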


url:   www.econ.uiuc.edu/~roger/my.html    Roger Koenker
email: [EMAIL PROTECTED]                   Department of Economics
vox:   217-333-4558                        University of Illinois
fax:   217-244-6678                        Champaign, IL 61820

On Tue, 21 Oct 2003, Thomas Lumley wrote:

> On Tue, 21 Oct 2003, Spencer Graves wrote:
>
> > Have you checked "www.r-project.org" -> search -> "R site search"?  I
> > just got 15 hits for "polynomial lag".  If you haven't already tried
> > this, I'd guess that some of these hits (though certainly not all) might
> > help you.
> >
>
> Only one of these is about polynomial distributed lag models, and it's an
> apparently unsuccessful request for information.
>
> I have code
>   http://faculty.washington.edu/tlumley/pdl.R
> and documentation
>   http://faculty.washington.edu/tlumley/pdlglm.html
> for a version for generalised linear models.
>
> Note that I have not used this for some years, so it may run into problems
> with changes in R.  Also note that it is just a generalised linear model
> -- it doesn't do anything about residual autocorrelation (which isn't a
> problem in the application I was working on).
>
>
>   -thomas
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] code efficiency, extr. info from list

2003-10-21 Thread Tord Snall
Dear all, 
I am extracting information from a list with several levels, and I would
be happy for recommendations on writing more efficient code:

> h0<- seq(0,100, by = 20); expo<- seq(0.1, 0.5, l = 5)
> grid<- expand.grid(h0, expo)
> test<- apply(grid, 1, pcp, point.data = as.points(dat[,c("x","y")]),
poly.data = studyarea)

> test[1]
$"1"
$"1"$par
  s2  rho 
1.815343e-06 2.358788e-02 

$"1"$value
[1] 144.346

$"1"$counts
function gradient 
  65   NA 

$"1"$convergence
[1] 0

$"1"$message
NULL

I want to put the results together:
val<- c(test[[1]]$value, test[[2]]$value, test[[3]]$value, test[[4]]$value...)
s2<- c(test[[1]]$par[1], test[[2]]$par[1], test[[3]]$par[1],
test[[4]]$par[1]...)
rho<- ...
funct<- 
grad<- 
.

useful.df<- as.data.frame(cbind(val, s2), F)

However, as you can see 
> dim(grid)
[1] 30  2

the call rows 

val<- c(test[[1]]$value, test[[2]]$value, test[[3]]$value,
test[[4]]$value...)
etc.

will be long.

I would thus be happy for help with writing this code more efficiently (and I
know I will benefit in the future from knowing how to do this).


Thanks in advance!

Sincerely,
Tord

---
Tord Snäll
Avd. f växtekologi, Evolutionsbiologiskt centrum, Uppsala universitet
Dept. of Plant Ecology, Evolutionary Biology Centre, Uppsala University
Villavägen 14   
SE-752 36 Uppsala, Sweden
Tel: 018-471 28 82 (int +46 18 471 28 82) (work)
Tel: 018-25 71 33 (int +46 18 25 71 33) (home)
Fax: 018-55 34 19 (int +46 18 55 34 19) (work)
E-mail: [EMAIL PROTECTED]
Check this: http://www.vaxtbio.uu.se/resfold/snall.htm!

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] summary - controling x-labels in xyplot (lattice) when x is POSIX object

2003-10-21 Thread john.gavin
Hi,

The solution to my problem is to use 
lattice:::calculateAxisComponents to calculate appropriate labels
for the time axis in trellis plots.

# For example, given

x <- seq.POSIXt(strptime("2003/01/01", format = "%Y/%m/%d"),
strptime("2003/10/01", format = "%Y/%m/%d"), by = "month")
y <- rnorm(length(x))
dat <- data.frame(x= x, y = y)

# the code 

xyplot(y ~ x, data = dat, type = "b")

# could be replaced with

labels <- lattice:::calculateAxisComponents(x)
labels$at <- ISOdate(1970,01,01) + as.numeric(x)
xyplot(y ~ x, data = dat, type = "b",
   scales = list(x = list(at = labels$at, labels = labels$labels)))

# to get the effect that I want.

This is essentially what I used to do (< 1.8.0) 
but the ':::' operator is now required.
Also, the 'at' component must be of class "POSIXt" rather than numeric,
as was the case before.

Thanks to Deepayan Sarkar <[EMAIL PROTECTED]> and
Martin Maechler <[EMAIL PROTECTED]>.

Regards,

John.

John Gavin <[EMAIL PROTECTED]>,
Quantitative Risk Models and Statistics,
UBS Investment Bank, 6th floor, 
100 Liverpool St., London EC2M 2RH, UK.
Phone +44 (0) 207 567 4289
Fax   +44 (0) 207 568 5352

Date: Mon, 20 Oct 2003 18:35:43 +0100
From: <[EMAIL PROTECTED]>
Subject: [R] controling x-labels in xyplot (lattice) when x is POSIX object
To: <[EMAIL PROTECTED]>

Hi,

V1.8.0 seems to allow DateTimeClasses as the x argument in xyplots (lattice).
For example:

x <- seq.POSIXt(strptime("2003/01/01", format = "%Y/%m/%d"),
strptime("2003/10/01", format = "%Y/%m/%d"), by = "month")
y <- rnorm(length(x))
dat <- data.frame(x= x, y = y)
xyplot(y ~ x, data = dat, type = "b")

However, the labelling for the x-axis is not what I want.
(I see only one tick mark and one label ('Oct').)
What is the recommended way to relabel the x-axis?
Ideally, I want to see several months (3-6) labelled along the x-axis.

Previously, I used 'calculateAxisComponents' to massage the labels manually
but that function (which I realise was internal to lattice) is no longer available.

I am on Windows XP, R 1.8.0.

Regards,

John.


Visit our website at http://www.ubs.com

This message contains confidential information and is intend...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] explaining curious result of aov

2003-10-21 Thread Bill Shipley
Hello.  I have come across a curious result that I cannot explain.
Hopefully, someone can explain this.  I am doing a 1-way ANOVA with 6
groups (example: summary(aov(y~A)) with A having 6 levels).  I get an F
of 0.899 with 5 and 15 df (p=0.51).  I then do the same analysis but
using data only corresponding to groups 5 and 6.  This is, of course,
equivalent to a t-test.  I now get an F of 142.3 with 1 and 3 degrees of
freedom and a null probability of 0.001.  I know that multiple
comparisons changes the model-wise error rate, but even if I did all 15
comparisons of the 6 groups, the Bonferroni correction to a 5% alpha is
0.003, yet the Bonferroni correction gives conservative rejection
levels.

How can such a result occur?  Any clues would be helpful.

Thanks.

 

Bill Shipley

Associate Editor, Ecology

North American Editor, Annals of Botany

Département de biologie, Université de Sherbrooke,

Sherbrooke (Québec) J1K 2R1 CANADA

[EMAIL PROTECTED]

 
http://callisto.si.usherb.ca:8080/bshipley/

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] do.call() and aperm()

2003-10-21 Thread Robin Hankin
Hi everyone

I've been playing with do.call() but I'm having problems understanding it.

I have a list of "n" elements, each one of which is "d" dimensional
[actually an n-by-n-by ... by-n array].  Neither n nor d is known in
advance.  I want to bind the elements together in a higher-dimensional
array.
Toy example follows with d=n=3.

f <- function(n){array(n,c(3,3,3))}
x <-  sapply(1:3,f,simplify=FALSE)
Then what I want is

ans <- abind(x[[1]] , x[[2]] , x[[3]]  , along=4)

[abind() is defined in library(abind)].

Note that dim(ans) is c(3,3,3,3), as required.

PROBLEM: how do I tell do.call() that I want to give abind() the
extra argument along=4 (in general, I want
along=length(dim(x[[1]]))+1)?
Oblig Attempt:

jj <- function(...){abind(... , along=4)}
do.call("jj" , x)
This works, because I know that d=3 (and therefore use along=4), but
it doesn't generalize easily to arbitrary d.  I'm clearly missing
something basic.  Anyone?
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] code efficiency, extr. info from list

2003-10-21 Thread Anders Nielsen

Try using lapply()

For instance like:

val<-unlist(lapply(test, function(x)x$value))

You can also extend this by having your function return
everything you need from the list.
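Along the same lines, sapply() simplifies the result to a vector directly (a toy stand-in for 'test' is shown here, since the real elements come from pcp()/optim()):

```r
# Toy stand-in for 'test': a list of optim()-style result lists
test <- list(list(par = c(s2 = 1e-06, rho = 0.024), value = 144.3),
             list(par = c(s2 = 2e-06, rho = 0.031), value = 150.1))
val <- sapply(test, "[[", "value")               # one $value per list element
s2  <- sapply(test, function(x) x$par[["s2"]])   # dig one level deeper for par
```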

Cheers,

Anders.


On Tue, 21 Oct 2003, Tord Snall wrote:

> Dear all,
> I am extracting information from a list with several levels, and I would
> be happy for recommendations on writing more efficient code:
>
> > h0<- seq(0,100, by = 20); expo<- seq(0.1, 0.5, l = 5)
> > grid<- expand.grid(h0, expo)
> > test<- apply(grid, 1, pcp, point.data = as.points(dat[,c("x","y")]),
> poly.data = studyarea)
>
> > test[1]
> $"1"
> $"1"$par
>   s2  rho
> 1.815343e-06 2.358788e-02
>
> $"1"$value
> [1] 144.346
>
> $"1"$counts
> function gradient
>   65   NA
>
> $"1"$convergence
> [1] 0
>
> $"1"$message
> NULL
>
> I want to put the results together:
> val<- c(test[[1]]$value, test[[2]]$value, test[[3]]$value, test[[4]]$value...)
> s2<- c(test[[1]]$par[1], test[[2]]$par[1], test[[3]]$par[1],
> test[[4]]$par[1]...)
> rho<- ...
> funct<- 
> grad<-
> .
>
> useful.df<- as.data.frame(cbind(val, s2), F)
>
> However, as you can see
> > dim(grid)
> [1] 30  2
>
> the call rows
>
> val<- c(test[[1]]$value, test[[2]]$value, test[[3]]$value,
> test[[4]]$value...)
> etc.
>
> will be long.
>
> I would thus be happy for help with writing this code more efficiently (and I
> know I will benefit in the future from knowing how to do this).
>
>
> Thanks in advance!
>
> Sincerely,
> Tord
>
> ---
> Tord Snäll
> Avd. f växtekologi, Evolutionsbiologiskt centrum, Uppsala universitet
> Dept. of Plant Ecology, Evolutionary Biology Centre, Uppsala University
> Villavägen 14
> SE-752 36 Uppsala, Sweden
> Tel: 018-471 28 82 (int +46 18 471 28 82) (work)
> Tel: 018-25 71 33 (int +46 18 25 71 33) (home)
> Fax: 018-55 34 19 (int +46 18 55 34 19) (work)
> E-mail: [EMAIL PROTECTED]
> Check this: http://www.vaxtbio.uu.se/resfold/snall.htm!
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Tony Plate
I've also been thinking about how to specify that 'along' should be 
length(dim)+1.  At the moment one can specify any number from 0 up to 
length(dim)+1, but as you point out you have to spell out length(dim)+1 as 
the value for the along argument.  It would be possible to make abind() 
automatically calculate along=length(dim)+1 when given along=NA, or 
along=-1, or along="+1".  Any preferences?

-- Tony Plate

At Tuesday 04:48 PM 10/21/2003 +0100, Robin Hankin wrote:
Hi everyone

I've been playing with do.call() but I'm having problems understanding it.

I have a list of "n" elements, each one of which is "d" dimensional
[actually an n-by-n-by ... by-n array].  Neither n nor d is known in
advance.  I want to bind the elements together in a higher-dimensional
array.
Toy example follows with d=n=3.

f <- function(n){array(n,c(3,3,3))}
x <-  sapply(1:3,f,simplify=FALSE)
Then what I want is

ans <- abind(x[[1]] , x[[2]] , x[[3]]  , along=4)

[abind() is defined in library(abind)].

Note that dim(ans) is c(3,3,3,3), as required.

PROBLEM: how do I tell do.call() that I want to give abind() the
extra argument along=4 (in general, I want
along=length(dim(x[[1]]))+1)?
Oblig Attempt:

jj <- function(...){abind(... , along=4)}
do.call("jj" , x)
This works, because I know that d=3 (and therefore use along=4), but
it doesn't generalize easily to arbitrary d.  I'm clearly missing
something basic.  Anyone?
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Tony Plate
> do.call("abind" c(list.of.arrays, list(along=4)))

This reminds me that I had been meaning to submit an enhancement of abind() 
that allows the first argument to be a list of arrays so that you could 
simply do abind(list.of.arrays, along=4), as I find this is a very common 
pattern.
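Until then, the general form of the do.call() version (using the toy arrays from the original post) is:

```r
library(abind)
f <- function(n) array(n, c(3, 3, 3))
x <- sapply(1:3, f, simplify = FALSE)
# Splice along = length(dim) + 1 into the argument list at call time,
# so the code works for any dimensionality d
ans <- do.call("abind", c(x, list(along = length(dim(x[[1]])) + 1)))
dim(ans)  # 3 3 3 3
```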

-- Tony Plate

At Tuesday 04:48 PM 10/21/2003 +0100, Robin Hankin wrote:
Hi everyone

I've been playing with do.call() but I'm having problems understanding it.

I have a list of "n" elements, each one of which is "d" dimensional
[actually an n-by-n-by ... by-n array].  Neither n nor d is known in
advance.  I want to bind the elements together in a higher-dimensional
array.
Toy example follows with d=n=3.

f <- function(n){array(n,c(3,3,3))}
x <-  sapply(1:3,f,simplify=FALSE)
Then what I want is

ans <- abind(x[[1]] , x[[2]] , x[[3]]  , along=4)

[abind() is defined in library(abind)].

Note that dim(ans) is c(3,3,3,3), as required.

PROBLEM: how do I tell do.call() that I want to give abind() the
extra argument along=4 (in general, I want
along=length(dim(x[[1]]))+1)?
Oblig Attempt:

jj <- function(...){abind(... , along=4)}
do.call("jj" , x)
This works, because I know that d=3 (and therefore use along=4), but
it doesn't generalize easily to arbitrary d.  I'm clearly missing
something basic.  Anyone?
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] explaining curious result of aov

2003-10-21 Thread Prof Brian Ripley
The ANOVA assumes equal variances in the groups.  Suppose groups 5 and 6 
had much lower variances than groups 1 to 4, and group 6 had a different 
mean from the other 5 (which were about equal)?

Given how small the groups appear to be, this could happen.

On Tue, 21 Oct 2003, Bill Shipley wrote:

> Hello.  I have come across a curious result that I cannot explain.
> Hopefully, someone can explain this.  I am doing a 1-way ANOVA with 6
> groups (example: summary(aov(y~A)) with A having 6 levels).  I get an F
> of 0.899 with 5 and 15 df (p=0.51).  I then do the same analysis but
> using data only corresponding to groups 5 and 6.  This is, of course,
> equivalent to a t-test.  I now get an F of 142.3 with 1 and 3 degrees of
> freedom and a null probability of 0.001.  I know that multiple
> comparisons changes the model-wise error rate, but even if I did all 15
> comparisons of the 6 groups, the Bonferroni correction to a 5% alpha is
> 0.003, yet the Bonferroni correction gives conservative rejection
> levels.
> 
> How can such a result occur?  Any clues would be helpful.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] explaining curious result of aov

2003-10-21 Thread Peter Dalgaard
"Bill Shipley" <[EMAIL PROTECTED]> writes:

> Hello.  I have come across a curious result that I cannot explain.
> Hopefully, someone can explain this.  I am doing a 1-way ANOVA with 6
> groups (example: summary(aov(y~A)) with A having 6 levels).  I get an F
> of 0.899 with 5 and 15 df (p=0.51).  I then do the same analysis but
> using data only corresponding to groups 5 and 6.  This is, of course,
> equivalent to a t-test.  I now get an F of 142.3 with 1 and 3 degrees of
> freedom and a null probability of 0.001.  I know that multiple
> comparisons changes the model-wise error rate, but even if I did all 15
> comparisons of the 6 groups, the Bonferroni correction to a 5% alpha is
> 0.003, yet the Bonferroni correction gives conservative rejection
> levels.
> 
> How can such a result occur?  Any clues would be helpful.

It's a question of assumptions. 

Notice first that you have some very small groups there. Comparing two
groups with 3df means that there are five observations in all,
presumably two in one group and three in the other (although it could
be 4-1).

The joint F test assumes that all the groups have a similar
(theoretical) SD, whereas the two group comparison only assumes that
those two groups are similar.

Suppose one of the other groups had a huge SD; then a joint comparison
would clearly lose power if the actual differences were between some
of the groups with a smaller SD. 

On the other hand, the test on 3df is extremely dependent on
distributional assumptions, and if data are non-normally distributed,
there may be an increased probability of getting a very small variance
(quantization can do that, e.g.) and thus a falsely significant
result.

I.e. I'd take a closer look at the SD's for the 6 groups and perhaps
make a dotplot.
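A small simulation (entirely made-up data, shaped like the question's group sizes) showing how to inspect the group SDs and make the suggested plot:

```r
# Hypothetical data shaped like the question: 6 small groups, two of
# which have tiny spread and separated means
set.seed(1)
A <- factor(rep(1:6, times = c(4, 4, 4, 4, 3, 2)))
y <- rnorm(length(A), mean = c(0, 0, 0, 0, 1, 2)[A],
           sd = c(3, 3, 3, 3, 0.1, 0.1)[A])
tapply(y, A, sd)                          # compare within-group SDs
stripchart(split(y, A), vertical = TRUE)  # dotplot of values by group
```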

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] report generator a la epiinfo

2003-10-21 Thread Peter Wolf
Lucas Gonzalez Santa Cruz wrote:

Hi

I'd like to use R in epidemiology and disease surveillance.

In EpiInfo you can have a script (.pgm) which calls a predefined report
(.rpt), where a table is calculated and values picked from that table
and placed where the author of the report wants them, with text around
those values. (Please see example below.)
I've looked at the manuals, FAQ, mail archive search and Google. The closest is an
"R Report Generator" email that looked as if it wasn't followed up after a
couple of years.
##The script might have something like this:
read.epiinfo("oswego.rec")
report("oswego.rpt", output="oswego.txt")
##The predefined report might have this:
#{ill}
Exactly {"YES"} people fell ill, and {"NO"} people didn't.
We don't know about the remaining [({}-{"YES"}-{"NO"})*100/{}] percent.
#{icecream ill}
We are specifically interested in the number of people who chose vanilla
and didn't fall ill (all {"VANILLA", "YES"} of them).
Is there anyway to do this with R? Any direction I should look into?

Thanks in advance.

Lucas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

One way is to use Sweave, another one is given by my R function ff.
ff allows you to substitute expressions in a raw report with their 
evaluated results.

See: 
http://www.wiwi.uni-bielefeld.de/~wolf/software/R-wtools/formfill/ff.html
or http://www.wiwi.uni-bielefeld.de/~wolf/software/R-wtools/formfill/ff.rd

Peter Wolf

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] code efficiency, extr. info from list

2003-10-21 Thread Tord Snäll
Dear Anders and Paulo,

Thanks very much for your recommendations! 

I did it this way:

test2 <- unlist(lapply(test, function(x)
cbind(x$par[1], x$par[2], x$value, x$conv)))
m<- as.data.frame(matrix(test2, nrow = dim(grid)[1], ncol = 4, byrow = T))
names(m) <- c("s2", "rho", "value", "conv")

but I am sure there are better ways.
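For what it's worth, a variant of the same idea (with a toy stand-in for 'test'; the real elements come from pcp()) that skips the manual matrix step by using t(sapply()):

```r
# Toy stand-in for 'test'; t(sapply()) builds the data frame rows directly
test <- list(list(par = c(s2 = 1e-06, rho = 0.024), value = 144.3, convergence = 0),
             list(par = c(s2 = 2e-06, rho = 0.031), value = 150.1, convergence = 0))
m <- as.data.frame(t(sapply(test, function(x)
        c(s2 = x$par[["s2"]], rho = x$par[["rho"]],
          value = x$value, conv = x$convergence))))
```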


Sincerely,
Tord

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] explaining curious result of aov

2003-10-21 Thread Ted Harding
On 21-Oct-03 Bill Shipley wrote:
> Hello.  I have come across a curious result that I cannot explain.
> Hopefully, someone can explain this.  I am doing a 1-way ANOVA with 6
> groups (example: summary(aov(y~A)) with A having 6 levels).  I get an F
> of 0.899 with 5 and 15 df (p=0.51).  I then do the same analysis but
> using data only corresponding to groups 5 and 6.  This is, of course,
> equivalent to a t-test.  I now get an F of 142.3 with 1 and 3 degrees
> of freedom and a null probability of 0.001.  I know that multiple
> comparisons changes the model-wise error rate, but even if I did all 15
> comparisons of the 6 groups, the Bonferroni correction to a 5% alpha is
> 0.003, yet the Bonferroni correction gives conservative rejection
> levels.
> 
> How can such a result occur?  Any clues would be helpful.

It's not obvious from your description. However, one possibility (which
I very strongly suspect) is apparent heterogeneity of variance, coupled
with paucity of data.

To wit: The denominator in F is the residual sum of squares (divided by
its degrees of freedom -- 15 in your first case, 3 in your second).

If the data in groups 5 and 6 are very close to their group means,
the group means themselves being more widely separated, then you can
indeed get a large F. The very moderate F that you get from the full
set of groups is quite compatible with the extreme result from the
two-group analysis if the data happen to be more widely spread about
their group means than they happen to be in G5+G6. This is the
"heterogeneity of variance" side of it.

Your denominator df = 3 for the two-group case indicates that you
only have 5 data values altogether in these two groups. Your df = 15
for the six-group case indicates that you have only 21 data all told.
At an average of 3.5 data per group you have a very thin data set.
Your 2.5 data per group in G5+G6 is even thinner. I would be very
cautious about interpreting the results in such a case.

Perhaps if you told us more about your data we could give a more
focussed diagnosis.
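
A toy simulation (invented data, not Bill's) reproduces the pattern: groups 5 and 6 are nearly noise-free with well-separated means, while the other groups are noisy:

```r
## Invented data: groups 1-4 noisy, groups 5-6 nearly constant
## but with well-separated means
set.seed(1)
y <- c(rnorm(4, 0, 3), rnorm(4, 1, 3), rnorm(3, -1, 3), rnorm(3, 0, 3),
       rnorm(3, 0, 0.05), rnorm(2, 2, 0.05))
A <- factor(rep(1:6, c(4, 4, 3, 3, 3, 2)))

anova(lm(y ~ A))                      # modest overall F
sub <- A %in% c("5", "6")
anova(lm(y[sub] ~ factor(A[sub])))    # very large F on 1 and 3 df
```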

Best wishes,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 167 1972
Date: 21-Oct-03   Time: 17:56:53
-- XFMail --

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Gabor Grothendieck


I suggest following APL as that is a well thought out system.
In APL terms there are two operations here called:

- catenation. In abind, this occurs when along = 1,2,...,length(dim)
- lamination.  In abind, this occurs when along = length(dim) + 1

however, the latter is really only one case of lamination in 
which the added dimension comes at the end.  To do it in full
generality would require that one can add the new dimension
at any spot including before the first, between the first and
the second, ..., after the last.

In APL notation, if along has a fractional part then the new
dimension is placed between floor(along) and ceiling(along).
Thus along=1.1 would put the new dimension between the first
and second.  The actual value of the fractional part is not material.
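
A small sketch of the two operations in abind() terms (this assumes the abind package is installed; the 2x2 matrices are invented):

```r
library(abind)
a <- matrix(1:4, 2, 2)
b <- matrix(5:8, 2, 2)

dim(abind(a, b, along = 1))    # 4 2   : catenation along the rows
dim(abind(a, b, along = 3))    # 2 2 2 : lamination, new dimension last
dim(abind(a, b, along = 0))    # 2 2 2 : new dimension first
dim(abind(a, b, along = 1.5))  # 2 2 2 : new dimension between rows and columns
```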

---
From: Tony Plate <[EMAIL PROTECTED]>
 
I've also been thinking about how to specify that 'along' should be 
length(dim)+1. At the moment one can specify any number from 0 up to 
length(dim)+1, but as you point out you have to spell out length(dim)+1 as 
the value for the along argument. It would be possible to make abind() 
automatically calculate along=length(dim)+1 when given along=NA, or 
along=-1, or along="+1". Any preferences?

-- Tony Plate



___
No banners. No pop-ups. No kidding.
Introducing My Way - http://www.myway.com

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Gabor Grothendieck

Please ignore this email.  I just reread the abind
documentation and it already does this!






__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Tony Plate
Thanks, I appreciate knowing that.

abind() can currently take a fractional value for along, and behaves as per 
your description of 'catenation' in APL.

Does APL supply any hints as to what sort of value to give 'along' to tell 
abind() to perform 'lamination'?

-- Tony Plate

At Tuesday 01:22 PM 10/21/2003 -0400, Gabor Grothendieck wrote:


I suggest following APL as that is a well thought out system.
In APL terms there are two operations here called:
- catenation. In abind, this occurs when along = 1,2,...,length(dim)
- lamination.  In abind, this occurs when along = length(dim) + 1
however, the latter is really only one case of lamination in
which the added dimension comes at the end.  To do it in full
generality would require that one can add the new dimension
at any spot including before the first, between the first and
the second, ..., after the last.
In APL notation, if along has a fractional part then the new
dimension is placed between floor(along) and ceiling(along).
Thus along=1.1 would put the new dimension between the first
and second.  The actual value of the fractional part is not material.
---
From: Tony Plate <[EMAIL PROTECTED]>
I've also been thinking about how to specify that 'along' should be
length(dim)+1. At the moment one can specify any number from 0 up to
length(dim)+1, but as you point out you have to spell out length(dim)+1 as
the value for the along argument. It would be possible to make abind()
automatically calculate along=length(dim)+1 when given along=NA, or
along=-1, or along="+1". Any preferences?
-- Tony Plate



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] summary of "explaining curious results of aov"

2003-10-21 Thread Bill Shipley
Earlier, I had posted the following question to the group :

> Hello.  I have come across a curious result that I cannot explain.
> Hopefully, someone can explain this.  I am doing a 1-way ANOVA with 6
> groups (example: summary(aov(y~A)) with A having 6 levels).  I get an
> F of 0.899 with 5 and 15 df (p=0.51).  I then do the same analysis but
> using data only corresponding to groups 5 and 6.  This is, of course,
> equivalent to a t-test.  I now get an F of 142.3 with 1 and 3 degrees
> of freedom and a null probability of 0.001.  I know that multiple
> comparisons changes the model-wise error rate, but even if I did all
> 15 comparisons of the 6 groups, the Bonferroni correction to a 5%
> alpha is 0.003, yet the Bonferroni correction gives conservative
> rejection levels.
>
> How can such a result occur?  Any clues would be helpful.

Brian Ripley, Robert Balshaw, Peter Dalgaard and Ted Harding all
responded.  The answer was basically the same from all:  If there is
heterogeneity of variances between the groups, and the variances of
groups 5 and 6 are smaller than the others, then my result could occur
because the average within-group variance over all groups in the general
ANOVA is higher than the within-group variance when looking only at
groups 5 and 6.  Combine this with the very small sample size and
unequal group membership.

A number of reference books state that ANOVA is fairly robust to
moderate degrees of heterogeneity of variance but not what constitutes
“moderate”!  

 

Bill Shipley

Associate Editor, Ecology

North American Editor, Annals of Botany

Département de biologie, Université de Sherbrooke,

Sherbrooke (Québec) J1K 2R1 CANADA

[EMAIL PROTECTED]

 
http://callisto.si.usherb.ca:8080/bshipley/

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] SOM library for R

2003-10-21 Thread Hisaji Ono
Thank you for your responses, Sean, Professor Ripley, Liaw.

 Sorry for my late thanks.

 I'll try to use "SOM" in both "class" and "(Gene)som".

However, I couldn't find out how to draw 2D hexagon maps using these packages.

 Could you give any suggestion?

 Thanks.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help

Re: [R] Strange behaviour

2003-10-21 Thread Paul Murrell
Hi

Vittorio wrote:
Paul Murrell [r-help] <20/10/03 09:13 +1300>:

Hi
. 
The "nasty rectangles" are the output of the layout.show() function. 
This function draws a simple diagram (consisting of nasty rectangles) to 
indicate the regions that a call to layout() has set up.  It is designed 
to help users to understand what on earth the layout() function is 
doing.  (It is NOT a necessary part of setting up an arrangement of 
plots using the layout() function.)

I suspect that the author of "simpleR" may have accidentally left the 
layout.show() call in simple.scatterplot() when copying the example from 
the layout() help file (apologies to John Verzani if this is an unfair 
diagnosis).

So the immediate solution to your problem is to remove the line ...

   layout.show(nf)

... from simple.scatterplot().  The output should then be a single page 
which should "include" ok in latex.

The larger problem of how to get at individual pages of output is 
probably best solved using something like the "onefile" argument to 
devices.  For example, look at the files produced by ...

   pdf(onefile=FALSE)
   example(layout)
... and at the help page for pdf() to see more about how to do this.

Hope that helps
...


Yes, Paul, definitely it helps. Thanks!

I obtained what I wanted. 

Now, I want to control the output of the pdf() command making it write
a specific file chosen by me and not the system. After reading the
help page for the pdf, I was unable to do it.
E.g. I issued
 
onefile<-FALSE
pdf(file=ifelse(onefile,,"vic.pdf")
example(layout)

And I obtained a 5-page vic.pdf with pages 1-4 full of "nasty
rectangles" of any kind and page 5 with the right picture.


You need something like ...

pdf("yourname%03d.pdf", onefile=FALSE)
example(layout)
Paul
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Denominator Degrees of Freedom in lme() -- Adjusting and Understanding Them

2003-10-21 Thread Ken Kelley
Hello all.

I was wondering if there is any way to adjust the denominator degrees of 
freedom in lme(). It seems to me that there is only one method that can be 
used. As has been pointed out previously on the list, the denominator 
degrees of freedom given by lme() do not match those given by SAS Proc 
Mixed or HLM5. Proc Mixed, for example, offers five different options for 
computing the denominator degrees of freedom. Is there any way to make such 
specifications in lme(), so that the degrees of freedom will correspond 
with the output given by Proc Mixed?

I've looked at Pinheiro and Bates' Mixed-Effects Models book (especially p. 
91), but I still don't quite understand the method used for determining the 
degrees of freedom in lme(). When analyzing longitudinal data with the 
straight-line growth model (intercept and slope both have fixed and random 
effects), the degrees of freedom seem to be N*T-N-1, where N is total 
sample size and T is the number of timepoints (at least when data are 
balanced). In the Pinheiro and Bates book (p. 91), the degrees of freedom 
are given as m_i - (m_{i-1} + p_i), where m_i is the number of groups at the ith 
level, m_0 = 1 if an intercept is included and p_i is the sum of the degrees 
of freedom corresponding to the terms estimated. I'm not sure how the 
N*T-N-1 matches up with the formula given on page 91. It seems to me the 
number of "groups" (i.e., m_i) would be equal to N, the number of 
individuals (note that this is what is given as the "number of groups" in 
the summary of the lme() object.). However, as more occasions of 
measurements are added, the number of degrees of freedom gets larger, 
making it seems as though m_i represents the total number of observations, 
not the "number of groups."

For example, if N=2 and T=3, you end up with 3 degrees of freedom using 
lme(). Increasing T to 10 has not changed the number of groups (i.e., N 
still equals 2), but the degrees of freedom increases to 17. In such a 
situation SAS Proc Mixed would still have 1 degree of freedom (N-1) 
regardless of T, as the number of "groups" have not changed (just the 
number of observations per group have changed).

Any insight into understanding the denominator degrees of freedom for the 
fixed effects would be appreciated. Since the degrees of freedom given by 
lme() can be made to be arbitrarily larger than those given by PROC MIXED 
(i.e., by having an arbitrarily large number of measurement occasions for 
each individual), and since the degrees of freedom affect the standard 
errors, then the hypothesis tests, then the p values, the differences 
between the methods are surprising. It seems one of the methods would be 
better than the other, since they can potentially be so different.

Thanks and have a good one,
Ken
P.S. I have posted this to both the R and Multilevel Modeling list.
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R - S compatibility table

2003-10-21 Thread David Brahm
Purvis Bedenbaugh <[EMAIL PROTECTED]> wrote:
> I started looking for an R-S compatibility table but didn't find it.
> Examples:
>   'stdev' is now 'sd'  - is it exactly the same computation ?
>   couldn't find a built-in for error.bar()
>   syntax that is an error in R: param(thisframe,"b") <- value

It's a moving target!  I wrote such a list in October 2001, and revised it a
year later, but I haven't updated it since then (I no longer use S-Plus).  Some
people (e.g. Paul Gilbert) replied that they also had lists which didn't
overlap much with mine, suggesting we were each seeing just a small part of the
puzzle.  So maintaining a complete and current list would be quite a challenge.

Paul also mentioned to me a mailing list "R-sig-S" on the topic:
  
but it seems dead since May 2001.

That said, here's my year-old list.  Note "S" means "S-Plus 6.1.2" (not "The S
Language") and "R" probably means "R-1.6.0".

 ***   R vs. S (DB 10/28/02)  ***

Language differences:
- Scoping rules differ.  In R, functions see the functions they're in.  Try:
f1 <- function() {x <- 1; f2 <- function() print(x); f2()};  f1()
- Data must be loaded explicitly in R, can be attach()'ed in S.
Addressed by my contributed package "g.data".
- R has a character-type NA, so LETTERS[c(NA,2)] = c(NA,"B") not c("","B")
- paste("a","b", sep="|", sep=".") is an error in R; ok in S.
- for() loops more efficient in R.

Graphics differences:
- Log scale indicated in S with par(xaxt)=="l", in R with par("xlog")==T.
- R has cex.main, col.lab, font.axis, etc.  Thus title("Hi", cex=4) fails.
- R has plotmath and Hershey vector fonts.
- R has palette(rainbow(10)) to define colors (both screen and printer).

Functions missing from R:
- unpaste, slice.index, colVars

Functions missing from S:
- strsplit, sub, gsub, chartr, formatC

Functions that work differently:
- system() has no "input" argument in R.
- substring(s,"x") <- "X" only works in S, but R has s <- gsub("x","X",s).
- scan expects numbers by default in R.
- which() converts to logical in S, is an error in R.
- The NULL returned by if(F){...} is invisible in R, visible in S.
- The NULL returned by return() is visible in R, invisible in S.
- Args to "var" differ, and R has "cov".  S na.method="a" ~ R use="p".
- var (or cov) drops dimensions in S, not R.
- cut allows labels=F in R, not in S (also left.include=T becomes right=F).
- Last argument of a replacement function must be named "value" in R.
- tapply(1:3, c("a","b","a"), sum) is a 1D-array in R, a vector in S.
- probability distribution fcn's have arg "log.x" in R (ref: Spencer Graves)

-- 
  -- David Brahm ([EMAIL PROTECTED])

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] How to upgrade R

2003-10-21 Thread David Brahm
Andy Liaw <[EMAIL PROTECTED]> wrote:

> What's not clear to me is a good way of keeping two versions of R
> simultaneously (for ease of transition).  Can anyone suggest a good strategy
> for doing that on *nix?

I'm not sure what you mean, but I'll tell you what we do.  We have built
/res/R/R-1.6.2, /res/R/R-1.7.1, /res/R/R-1.8.0, etc. (note we never run "make
install").  Anyone who doesn't want to upgrade (e.g. frozen production code)
just hard-codes one of those paths.  Everyone else uses a symbolic link
/res/R/R, which right now points to R-1.7.1 but will change to R-1.8.0 when we
feel we're "ready".

Just to add another layer of complication, actually we have another symbolic
link /res/R/Rdb, which I ("db") use, because I like to upgrade faster.  So Rdb
currently points to R-1.8.0.  Symbolic links are your friend.
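
The link-flipping scheme can be sketched like this (run in a scratch directory; all paths are illustrative):

```shell
# Versioned builds plus one 'current' symlink, in a scratch directory
base=$(mktemp -d)
mkdir -p "$base/R-1.7.1" "$base/R-1.8.0"

ln -sfn "$base/R-1.7.1" "$base/R"   # the stable link everyone uses
readlink "$base/R"

ln -sfn "$base/R-1.8.0" "$base/R"   # upgrading = re-pointing one link
readlink "$base/R"
```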
-- 
  -- David Brahm ([EMAIL PROTECTED])

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] do.call() and aperm()

2003-10-21 Thread Thomas Lumley
On Tue, 21 Oct 2003, Robin Hankin wrote:

> Hi everyone
>
> I've been playing with do.call() but I'm having problems understanding it.
>
> I have a list of "n" elements, each one of which is "d" dimensional
> [actually an n-by-n-by ... by-n array].  Neither n nor d is known in
> advance.  I want to bind the elements together in a higher-dimensional
> array.
>
> Toy example follows with d=n=3.
>
> f <- function(n){array(n,c(3,3,3))}
> x <-  sapply(1:3,f,simplify=FALSE)
>
> Then what I want is
>
> ans <- abind(x[[1]] , x[[2]] , x[[3]]  , along=4)
>
> [abind() is defined in library(abind)].
>
> Note that dim(ans) is c(3,3,3,3), as required.
>
> PROBLEM: how do I do tell do.call() that I want to give abind() the
> extra argument along=4 (in general, I want
> along=length(dim(x[[1]]))+1)?
>
>
> Oblig Attempt:
>
> jj <- function(...){abind(... , along=4)}
> do.call("jj" , x)
>
> This works, because I know that d=3 (and therefore use along=4), but
> it doesn't generalize easily to arbitrary d.  I'm clearly missing
> something basic.  Anyone?
>

If I have understood correctly, then
  d<-length(dim(x[[1]]))
  do.call("abind",c(x,along=d+1))
should work.


-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] How to upgrade R

2003-10-21 Thread Liaw, Andy
It's actually easier than I thought (and perhaps than you described).  What
Prof. Ripley suggested, combined with the options to the configure script,
should make this fairly straight-forward, even with "make install".

Cheers,
Andy

> From: David Brahm [mailto:[EMAIL PROTECTED] 
> 
> Andy Liaw <[EMAIL PROTECTED]> wrote:
> 
> > What's not clear to me is a good way of keeping two versions of R 
> > simultaneously (for ease of transition).  Can anyone suggest a good 
> > strategy for doing that on *nix?
> 
> I'm not sure what you mean, but I'll tell you what we do.  We 
> have built /res/R/R-1.6.2, /res/R/R-1.7.1, /res/R/R-1.8.0, 
> etc. (note we never run "make install").  Anyone who doesn't 
> want to upgrade (e.g. frozen production code) just hard-codes 
> one of those paths.  Everyone else uses a symbolic link 
> /res/R/R, which right now points to R-1.7.1 but will change 
> to R-1.8.0 when we feel we're "ready".
> 
> Just to add another layer of complication, actually we have 
> another symbolic link /res/R/Rdb, which I ("db") use, because 
> I like to upgrade faster.  So Rdb currently points to 
> R-1.8.0.  Symbolic links are your friend.
> -- 
>   -- David Brahm ([EMAIL PROTECTED])
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] run R under linux

2003-10-21 Thread Jason Turner
Zhen Pang wrote:

We are not allowed to submit job directly, so I never type R to use R, 
just make a batch. How can I use try() to correct my codes? In the 
interactive mode, I know how to continue, but now I never enter the R 
window, where to find my results and save seed to continue?

Like you'd program any exception handling.  A toy example to get you 
started might look like this:

(somewhere inside your script file)
...
results <- vector(length=200,mode="numeric") #or whatever you use
set.seed(123)
...
errors <- list()
for(ii in 1:200) {
  foo <- try(some.simulation.thingy(your.params))
  if(inherits(foo,"try-error")) { #something bombed
results[ii] <- NA
errors[[as.character(ii)]] <- foo
  } else { #it worked
results[ii] <- foo
  }
}
...
save(results,errors,file="results+errors.RData")
...
The end.
Play with that, and see if you get some useful ideas.

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Patches for DBI/RMySQL "valueClass" problem?

2003-10-21 Thread Barnet Wagman
According to David James's response to my earlier question, there is a 
problem with setGeneric/setMethod in R 1.8.0 that affects DBI and RMySQL. 

Is there a fix for this?  David James refers to an "R-patched version" 
but I haven't seen anything like this on CRAN.  Would going back to an 
older version of R solve the problem?

Thanks,

Barnet Wagman

"David James wrote:

However, there is a problem in the released version of R 1.8.0 that affects
the DBI and other packages (has something to do with methods
that use the "valueClass" argument in the setGeneric/setMethod functions).
In this case one needs to use the R-patched version. "
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Denominator Degrees of Freedom in lme() -- Adjusting and Understanding Them

2003-10-21 Thread Douglas Bates
Contributions of code to provide alternative calculations of
denominator degrees of freedom are welcome :-)

I think it would be good to bear in mind that the use of the t and F
distributions for models with mixed effects is already an
approximation.  If your design is such that you end up with a very few
denominator degrees of freedom then the whole question of whether you
should be using F or t distributions in the first place becomes
problematic.   If the number of denominator degrees of freedom is
moderate, then the distinction between alternative methods becomes
unimportant.

-- 
Douglas Bates                      [EMAIL PROTECTED]
Statistics Department              608/262-2598
University of Wisconsin - Madison  http://www.stat.wisc.edu/~bates/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R - S compatibility table

2003-10-21 Thread Douglas Bates
David Brahm  <[EMAIL PROTECTED]> writes:

> Language differences:
> - Scoping rules differ.  In R, functions see the functions they're in.  Try:
> f1 <- function() {x <- 1; f2 <- function() print(x); f2()};  f1()

It may be more accurate to say "the functions they're defined in".
It's lexical scoping, not dynamic scoping.
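
A one-liner that makes the lexical rule concrete:

```r
## f2 finds x in the environment where it was *defined* (inside f1),
## not in the global workspace or the caller's frame
x <- "global"
f1 <- function() { x <- "local"; f2 <- function() x; f2() }
f1()   # "local" under R's lexical scoping
```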

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Custom Device

2003-10-21 Thread Arend P. van der Veen
Hi,

Can anyone point in the direction of how to write a custom output device
in R.  I currently generate output using PS and JPEG but need to produce
output in our own vector graphics language.

Thanks in advance,
Arend van der Veen

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R-help login page error

2003-10-21 Thread "Héctor Villafuerte D."
Hi all,
I'm trying to access my account at
https://www.stat.math.ethz.ch/mailman/options/r-help
but the following appears:
Error: Authentication failed.
Yes, I've already checked my password. Is it a problem
with mailman or something?
Thanks in advance.
Hector
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Custom Device

2003-10-21 Thread Paul Murrell
Hi

Arend P. van der Veen wrote:
Hi,

Can anyone point in the direction of how to write a custom output device
in R.  I currently generate output using PS and JPEG but need to produce
output in our own vector graphics language.


Look at ...

R/src/include/R_ext/GraphicsDevice.h

... for the in-progress API and ...

R/src/modules/X11/devX11.c

... for a device template to model yourself on.

And if you decide to go ahead with something, keep in touch because this 
stuff will be undergoing changes in the near future.

Paul
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Polynomial lags

2003-10-21 Thread Francisco Vergara
Thanks to Thomas, Roger and Spencer on your reply!

I am trying to replicate/validate the results from the "black box" 
econometrics software EViews in R; EViews uses the Almon approach.
approach.  I will try the code kindly offered by Thomas.
I would also be interested in how to apply the B-spline expansion in lm().

Thanks!

Francisco
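
For the archives, one way Roger's B-spline idea might look in lm() (a sketch only: the data, lag length, and spline df below are all invented for illustration):

```r
library(splines)
set.seed(42)
n <- 200; L <- 12                      # sample size and maximum lag (invented)
x <- rnorm(n + L)
X <- sapply(0:L, function(k) x[(L + 1 - k):(n + L - k)])  # n x (L+1) lag matrix
w <- dnorm(0:L, mean = 4, sd = 2)      # smooth "true" lag weights
y <- drop(X %*% w) + rnorm(n, sd = 0.1)

B <- bs(0:L, df = 4)                   # B-spline basis over the lag index
fit <- lm(y ~ X %*% B)                 # fit in the low-dimensional spline space
lag.wts <- drop(B %*% coef(fit)[-1])   # map back to the L+1 lag coefficients
```

The spline constrains the L+1 lag coefficients to vary smoothly with the lag index while only df parameters are estimated.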


From: Roger Koenker <[EMAIL PROTECTED]>
To: Thomas Lumley <[EMAIL PROTECTED]>
CC: r-help <[EMAIL PROTECTED]>
Subject: Re: [R] Polynomial lags
Date: Tue, 21 Oct 2003 09:57:10 -0500 (CDT)
For what it is worth, I would have thought that expressing
the lag coefficients in a B-spline expansion would be preferable
to going back to Almon approach. This would give a relatively
simple lm() application.
url:    www.econ.uiuc.edu/~roger/my.html    Roger Koenker
email:  [EMAIL PROTECTED]                   Department of Economics
vox:    217-333-4558                        University of Illinois
fax:    217-244-6678                        Champaign, IL 61820
On Tue, 21 Oct 2003, Thomas Lumley wrote:

> On Tue, 21 Oct 2003, Spencer Graves wrote:
>
> > Have you checked "www.r-project.org" -> search -> "R site search"?  I
> > just got 15 hits for "polynomial lag".  If you haven't already tried
> > this, I'd guess that some of these hits (though certainly not all) might
> > help you.
> >
>
> Only one of these is about polynomial distributed lag models, and it's an
> apparently unsuccessful request for information.
>
> I have code
> 	http://faculty.washington.edu/tlumley/pdl.R
> and documentation
> 	http://faculty.washington.edu/tlumley/pdlglm.html
> for a version for generalised linear models.
>
> Note that I have not used this for some years, so it may run into problems
> with changes in R.  Also note that it is just a generalised linear model
> -- it doesn't do anything about residual autocorrelation (which isn't a
> problem in the application I was working on).
>
>
> 	-thomas
>
> __
> [EMAIL PROTECTED] mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
_
Surf and talk on the phone at the same time with broadband Internet access. 
Get high-speed for as low as $29.95/month (depending on the local service 
providers in your area).  https://broadband.msn.com

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Denominator Degrees of Freedom in lme() -- Adjusting and Understanding Them

2003-10-21 Thread Spencer Graves
Prof. Bates may be able to give us more recent references on this, but 
the best literature I know on this is Pinheiro and Bates (2000) 
Mixed-Effects Models in S and S-Plus (Springer, sec. 2.4).  This 
includes description of a "simulate.lme" function, which you can use to 
generate random numbers according to a given assumed model and then 
compare some results with a reference distribution.  Something like this 
could be used to answer your question of what is the correct number of 
degrees of freedom to use for any particular model. 

hope this helps.  spencer graves

Douglas Bates wrote:

Contributions of code to provide alternative calculations of
denominator degrees of freedom are welcome :-)
I think it would be good to bear in mind that the use of the t and F
distributions for models with mixed effects is already an
approximation.  If your design is such that you end up with a very few
denominator degrees of freedom then the whole question of whether you
should be using F or t distributions in the first place becomes
problematic.   If the number of denominator degrees of freedom is
moderate than the distinction between alternative methods becomes
unimportant.
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Excel to R

2003-10-21 Thread Gabor Grothendieck


I have Excel files containing data that I would like to move to R.
They are in the standard form of a one row header followed by 
rows of data, one record per row EXCEPT that there are a few
rows of comments before the header.  The number of rows of comments
varies.  For Excel files of this form without comments I have had
success with:

require(RODBC)
z <- odbcConnectExcel("C:/myspread.xls")
z.df <- sqlFetch(z,"Sheet1")
close(z)

but the comments interfere with this.  

I don't want to manually delete the rows but want the entire
process from Excel file to R to be automatic.

I can accomplish this with a free utility, Baird's dataload that 
I found on the net.  This will convert the Excel files to text 
and then the text can be processed using R to locate the start of 
the header and only process the remainder of the file.  (There is
also another free utility called xlhtml that I don't use, but could 
have, that does this too.) Thus at this point I have an 
adequate automated solution.

Nevertheless, I was wondering, for sake of interest, if there is 
some solution in R that does not involve such an external program
such as dataload or xlhtml.

Thanks.

(I am using Windows 2000.)
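
For what it's worth, once the sheet has been dumped to text, the header search can be done inside R; a sketch (the character vector below is an invented stand-in for readLines() on the converted file):

```r
## Stand-in for readLines("myspread.csv") after the xls -> csv conversion
lines <- c("# project notes", "# more comments", "x,y", "1,2", "3,4")

hdr <- grep("^x,", lines)[1]           # locate the header row by its first field
z.df <- read.csv(textConnection(paste(lines[hdr:length(lines)],
                                      collapse = "\n")))
z.df
```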


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] run R under linux

2003-10-21 Thread Liaw, Andy
> From: Jason Turner [mailto:[EMAIL PROTECTED] 
> 
> Zhen Pang wrote:
> 
> > We are not allowed to submit job directly, so I never type 
> R to use R,
> > just make a batch. How can I use try() to correct my codes? In the 
> > interactive mode, I know how to continue, but now I never 
> enter the R 
> > window, where to find my results and save seed to continue?
> > 
> 
> Like you'd program any exception handling.  A toy example to get you 
> started might look like this:

Or see ?tryCatch in R-1.8.0...

Andy
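
For comparison with the try() loop earlier in the thread, a minimal tryCatch() version (risky.step() is a made-up stand-in for the simulation call):

```r
risky.step <- function(i) if (i == 2) stop("boom") else i^2

## Record NA for a failed run instead of letting the batch job die
results <- sapply(1:3, function(i)
    tryCatch(risky.step(i), error = function(e) NA))
results   # 1 NA 9
```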
>

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Excel to R

2003-10-21 Thread Dirk Eddelbuettel
On Tue, Oct 21, 2003 at 08:31:16PM -0400, Gabor Grothendieck wrote:
> 
> 
> I have Excel files containing data that I would like to move to R.
> They are in the standard form of a one row header followed by 
> rows of data, one record per row EXCEPT that there are a few
> rows of comments before the header.  The number of rows of comments
> varies.  For Excel files of this form without comments I have had
> success with:
> 
> require(RODBC)
> z <- odbcConnectExcel("C:/myspread.xls")
> z.df <- sqlFetch(z,"Sheet1")
> close(z)
> 
> but the comments interfere with this.  
> 
> I don't want to manually delete the rows but want the entire
> process from Excel file to R to be automatic.
> 
> I can accomplish this with a free utility, Baird's dataload that 
> I found on the net.  This will convert the Excel files to text 
> and then the text can be processed using R to locate the start of 
> the header and only process the remainder of the file.  (There is
> also another free utility called xlhtml that I don't use, but could 
> have, that does this too.) Thus at this point I have an 
> adequate automated solution.

There is also Spreadsheet::ParseExcel, which comes with a simple xls2csv
script that I once extended and posted here (or maybe only to BDR, following
a discussion here).  Being Perl, it can easily be automated, and it will cope
with your comment lines.  If I recall correctly, ActiveState provides this
for win* platforms as well.

> Nevertheless, I was wondering, for sake of interest, if there is 
> some solution in R that does not involve such an external program
> such as dataload or xlhtml.

There are a few candidates for a cross-platform solution:

- GNU Gretl (an econometric program with a nice Gnome GUI, and a win32 port)
  has code for this, taken from a C program xls2csv as well as from Gnumeric. 
  I had planned to look into this for R, but never got around to it.
  
- Gnumeric just added a standalone tool, 'ssconvert'; this may compile on
  Windows.  
  
- Also, OpenOffice has code for this which one could extract, but I am not
  familiar with the details.
  
Someone just has to sit down and do it; typically the person with the
greatest urge wins.  As I nowadays get all my data directly from database
systems, it will probably not be me.
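[For the archive: any of these command-line converters can also be driven from R itself, so the whole pipeline stays in one script. A hypothetical sketch; the converter name, file names, and the "Date" header field are assumptions.]

```r
## Run an external xls -> csv converter, then find the header row so the
## leading comment lines can be skipped on the real read.
system("ssconvert myspread.xls myspread.csv")   # or xls2csv, dataload, ...
first <- readLines("myspread.csv", n = 50)      # peek at the top of the file
skip  <- grep("^Date,", first)[1] - 1           # number of comment lines
z.df  <- read.csv("myspread.csv", skip = skip)
```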

Hth, Dirk

-- 
Those are my principles, and if you don't like them... well, I have others.
-- Groucho Marx



[R] questions about axis

2003-10-21 Thread yyh
Dear helpers,

I am a beginner and have difficulty handling axes.  I want to label the 
axes so that both the x and y axes cover the range [-0.4, 0.4] at 
intervals of 0.2.  Some parts of that range contain no data points, so 
the plot does not show the whole range.  How can I force the plot to 
show the whole range regardless of where the data points fall?

Another problem: when I draw the axis labels, some of them overlap 
because the interval is very small.  In that case, I'd like to move one 
of the labels inside the box drawn by plot().  How can I do this?

If I could see related source code, that would be very helpful.

Thanks a lot,
Yunho Hong
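[For the archive: a minimal sketch addressing both points, using made-up data. xlim/ylim force the full [-0.4, 0.4] range even where there are no points, and axis() with an explicit 'at' places the ticks at intervals of 0.2.]

```r
## Made-up data covering only part of the desired range.
x <- runif(10, -0.1, 0.1)
y <- runif(10, -0.1, 0.1)
plot(x, y, xlim = c(-0.4, 0.4), ylim = c(-0.4, 0.4), axes = FALSE)
box()
axis(1, at = seq(-0.4, 0.4, by = 0.2))   # explicit tick positions
axis(2, at = seq(-0.4, 0.4, by = 0.2))
## An overlapping label can be suppressed via axis(..., labels = ...) and
## placed manually inside the plot region with text(..., xpd = NA).
```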
 
