Re: [R] graphing plots of plots

2010-08-21 Thread Bernard Leemon
I've now tried Baptiste's code and my reaction is WOW!  It shows me how to
do just what I need to do.  I know enough to follow all the code, but it
would have taken me a LOOONG time to generate it.  Thank you, Baptiste!

gary

On Sat, Aug 21, 2010 at 2:15 PM, baptiste auguie <
baptiste.aug...@googlemail.com> wrote:

> Hi,
>
>
> I think you could do it quite easily with lattice,
>
> library(lattice)
> library(grid)  # grob(), viewport(), unit(), and grid.draw() come from grid
>
> latticeGrob <- function(p, ...){
>   grob(p=p, ..., cl="lattice")
> }
> drawDetails.lattice <- function(x, recording=FALSE){
>   lattice:::plot.trellis(x$p, newpage=FALSE)
> }
>
> plots <- replicate(4, xyplot(rnorm(10)~rnorm(10),xlab="",ylab=""),
> simplify=F)
>
> my.vp <- function(x,y)
> viewport(x=x,y=y,default.units="native",width=unit(1, "cm"),
> height=unit(1,"cm"))
>
> my.panel = function(x, y, ...){
>  ind <- seq_along(x)
>  for (ii in ind){
>g <- latticeGrob(plots[[ii]], vp=my.vp(x[ii],y[ii]))
>grid.draw(g)
>  }
> }
>
> xyplot(1:4~1:4, panel = my.panel)
>
> HTH,
>
> baptiste
>
> On 21 August 2010 22:11, Barry Rowlingson 
> wrote:
> > On Sat, Aug 21, 2010 at 8:48 PM, r.ookie  wrote:
> >> I'm trying to understand your question because when I think of a graph,
> >> I think of one canvas on which various functions are plotted (a function
> >> can be one point, for example).
> >>
> >> So, when you say each 'element' do you mean each function?
> >> If so, then that seems to be asking how to plot a function per graph
> >> (which is probably obvious and not what you're asking)
> >>
> >> How about you clarify first :)
> >>
> >
> >  Sounded to me a bit like plotting pie charts at the locations of
> > countries on a map. Or something better (not hard).
> >
> >  subplot from Hmisc?
> >
> >  library(Hmisc)
> >  example(subplot)
> >
> > Barry
> >

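A minimal sketch of the Hmisc subplot() approach Barry mentions (the call
signature is assumed from Hmisc's example(subplot), so treat this as
illustrative rather than code from the thread):

library(Hmisc)
plot(1:4, 1:4, type = "n")   # host scatterplot supplying the coordinates
for (i in 1:4)               # draw a mini-histogram centered at each (i, i)
  subplot(hist(rnorm(100), main = "", xlab = "", ylab = ""),
          x = i, y = i, size = c(0.6, 0.6))   # size of the inset, in inches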


Re: [R] graphing plots of plots

2010-08-21 Thread Bernard Leemon
Many useful suggestions that I'll work on, especially Baptiste's detailed
code.  Yes, I want something like Fig. 1.7, 7.18, or 7.22, but where the
x,y values are characteristics of the mini-histogram that is plotted.
Attached (if it makes it through) is what I'm trying to do in R.



On Sat, Aug 21, 2010 at 4:08 PM, Dennis Murphy  wrote:

> Once you load
>
> library(grid)
>
> the rest works. Nice job :)
>
> Dennis


[R] graphing plots of plots

2010-08-21 Thread Bernard Leemon
I want to make a graph where each element plotted is itself a graph.  I can
see how to use par(fig=) and viewport to do that, but they require (I think)
me to do my own scaling, as they are scaled to the graphics window.  Any
advice on which approach I should take (just bite the bullet and do my own
scaling)?  Or is there something else I should try, or any examples I should
look at?  Many thanks for any pointers.
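
(A minimal sketch of the base-graphics route, not from the thread:
grconvertX() and grconvertY() do the user-to-device scaling that par(fig=)
needs, which is exactly the chore described above.)

plot(1:4, 1:4)                                   # host plot in user coordinates
cx <- grconvertX(2, from = "user", to = "ndc")   # center of the inset, as a
cy <- grconvertY(2, from = "user", to = "ndc")   # fraction of the device
par(fig = c(cx - 0.1, cx + 0.1, cy - 0.1, cy + 0.1), new = TRUE)
hist(rnorm(100), main = "", xlab = "", ylab = "")  # inset drawn at (2, 2)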

bernie leemon (aka gary mcclelland)



[R] nls() newbie convergence problem

2008-06-04 Thread Bernard Leemon
I'm sure this must be an nls() newbie question, but I'm stumped.  I'm trying
to do the example from Draper and Yang (1997).  They give this snippet of
S-Plus code:

# Specify the weight function:
weight <- function(y, x1, x2, b0, b1, b2)
{
  pred <- b0 + b1*x1 + b2*x2
  parms <- abs(b1*b2)^(1/3)
  (y - pred)/parms
}
# Fit the model
gmfit <- nls(~weight(y, x1, x2, b0, b1, b2), observe, list("starting value"))

In converting this to R, I left the weight function alone and replaced the
nls() call with

gmfit <- nls(~weight(y, x1, x2, b0, b1, b2), data = dydata, trace = TRUE,
             start = list(b0 = 1, b1 = 1, b2 = 1))

where dydata is the appropriate data.frame.

nls() quickly (6 iterations) finds the exact values from Draper & Yang for
b0, b1, and b2, but despite reporting a discrepancy of only 3.760596e-29 by
the 7th iteration, it merrily goes on to 50 iterations and thinks it never
converged.  How do I tell nls() that I'm actually quite happy with
3.760596e-29 and that it need not work further?
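
(A sketch of one possible workaround, not from the original post: when the
residuals are essentially zero, nls()'s relative-offset convergence test can
keep failing even though the fit is perfect, and the warnOnly option of
nls.control() lets the fitted object be returned anyway.  dydata is assumed
from above.)

gmfit <- nls(~weight(y, x1, x2, b0, b1, b2), data = dydata, trace = TRUE,
             start = list(b0 = 1, b1 = 1, b2 = 1),
             control = nls.control(warnOnly = TRUE))  # warn instead of erroring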

thanks for any suggestions.

gary mcclelland (aka bernie)
univ of colorado



Re: [R] Using R in a university course: dealing with proposal comments

2008-02-11 Thread Bernard Leemon
Hi Arin,
Others have commented wisely on your first issue.  As for your second issue,
I had my own concerns about using R in undergraduate teaching because I had
always used a point-and-click program for that level.  I should not have
worried.  The current generation has been typing on their keyboards and
their phones for a long time; they are very skilled.  They LIKE a
command-line interface, so long as someone gives them an initial cheat sheet
to get them going.  They like the price, they like having it on their own
computers, and they like that they can use it in other courses.  Some
students are even upset that no one ever told them about R before.  Two
hours after the first lab in which I had students download R to their
laptops, I received an email from a student telling me how she had used R
to do her physics homework.  I like the (almost) platform-independence of R.
I've resisted using Rcmdr and JGR because I want students to be able to use
base R well.  If they want to customize later, then fine.  But what I teach
them will apply wherever they next encounter R, whereas if I were to use a
lot of packages--especially one I would be tempted to create to match my
teaching more closely--then they wouldn't be sure what to expect later.

gary mcclelland
Colorado



Re: [R] R programming style

2008-02-11 Thread Bernard Leemon
I just got a copy of A First Course in Statistical Programming with R by
W. John Braun and Duncan J. Murdoch (Cambridge).  At Amazon:
http://www.amazon.com/First-Course-Statistical-Programming-R/dp/0521694248/

The first couple of chapters cover base R that most everyone would know
before wanting to program, but the other chapters, on programming itself,
seem pretty good so far.

gary mcclelland
colorado

On Mon, Feb 11, 2008 at 3:47 AM, David Scott <[EMAIL PROTECTED]> wrote:

>
> I am aware of one (unofficial) guide to style for R programming:
> http://www1.maths.lth.se/help/R/RCC/
> from Henrik Bengtsson.
>
> Can anyone provide further pointers to good style?
>
> Views on Bengtsson's ideas would interest me as well.
>
> David Scott
>
>
>
> _
> David Scott Department of Statistics, Tamaki Campus
>The University of Auckland, PB 92019
>Auckland 1142,NEW ZEALAND
> Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
> Email:  [EMAIL PROTECTED]
>
> Graduate Officer, Department of Statistics
> Director of Consulting, Department of Statistics
>



Re: [R] Do I need to use dropterm()??

2008-02-10 Thread Bernard Leemon
Hi Dani,
It would be better to start with a question you are trying to ask of your
data rather than trying to figure out what a particular function does.  With
your variables and model, even if the component terms were not significant,
they must be in the model or the product of sunlight and aspect will NOT
represent the interaction.  Also note that the tests of your components are
probably not what you think they are.  In general, tests of components of
interactions test the simple effect of that variable when the other variable
is 0.  Hence, your 'significant' result for aspect pertains to when log
sunlight is 0, which probably isn't what you want to be testing.  What the
significant effect for sunlight means depends on how aspect was coded.  You
should check to see what coding was used to know what zero means.
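
(An illustrative sketch with made-up data, not Dani's; aspect is dummy-coded
0/1 and llight stands in for the log-sunlight variable.)

set.seed(1)
d <- data.frame(aspect = rep(0:1, each = 20), llight = rnorm(40, mean = 2))
d$surv <- 1 + 0.5*d$aspect + 0.8*d$llight + 0.4*d$aspect*d$llight + rnorm(40)
coef(summary(lm(surv ~ aspect * llight, data = d)))
# the 'aspect' row above tests aspect where llight = 0; centering llight
# moves that test to the mean light level, giving a different answer
coef(summary(lm(surv ~ aspect * I(llight - mean(llight)), data = d)))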

gary mcclelland
colorado

On Sun, Feb 10, 2008 at 6:40 AM, DaniWells <[EMAIL PROTECTED]>
wrote:

>
> Hello,
>
> I'm having some difficulty understanding the usage of the "dropterm()"
> function in the MASS library. What exactly does it do? I'm very new to R,
> so any pointers would be very helpful. I've read many definitions of what
> dropterm() does, but none seem to stick in my mind or click with me.
>
> I've coded everything fine for an interaction that runs as follows: two
> sets of data (one for North aspect, one for South aspect), with a log
> scale on the x axis and survival on the y. After calculating my ANOVA
> results I have all significant results (i.e., aspect = sig, log scale of
> sunlight = sig, and aspect:llight = sig).
>
> When I have all significant results in my ANOVA table, do I need
> dropterm(), or is that just to remove insignificant terms?
>
> Many thanks,
>
> Dani
> --
> View this message in context:
> http://www.nabble.com/Do-I-need-to-use-dropterm%28%29---tp15396151p15396151.html
> Sent from the R help mailing list archive at Nabble.com.
>



Re: [R] R on Mac PRO does anyone have experience with R on such a platform?

2008-02-09 Thread Bernard Leemon
I have R on all sorts of Macs, including ones a lot wimpier than the one
you are describing, and it works great on all of them.
gary mcclelland
colorado

On Sat, Feb 9, 2008 at 6:29 PM, Maura E Monville <[EMAIL PROTECTED]>
wrote:

> I saw there exists an R version for Mac/OS.
> I'd like to hear from someone who is running R on a Mac/OS before
> venturing on getting the following computer system.
> I am in the process of choosing a powerful laptop: 17" MB PRO,
> 2.6GHz (dual-core), 4GB RAM.
>
> Thank you so much,
> --
> Maura E.M
>



Re: [R] correlation

2008-02-08 Thread Bernard Leemon
It is easy to worry too much about using numbers to represent order when
using statistics like the correlation.  This little example shows that, for
monotonically related data, the ordinary correlation behaves much like a
rank-order correlation:
> x <- 1:20
> y <- x^2
> cor(x,y)
[1] 0.9713482

x and y are definitely not linearly related, yet the correlation is
extremely high.  As Peter suggests, you could be 'safe' using a Spearman
correlation, which is identical to cor(rank(x), rank(y)).  But the rank
transform may be more destructive to your data than need be.
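
(For reference, both of the following give the rank-order correlation; here
each returns 1, since y is a monotone function of x:)

cor(x, y, method = "spearman")   # Spearman, requested directly
cor(rank(x), rank(y))            # identical by definition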


gary mcclelland
colorado

On Fri, Feb 8, 2008 at 9:14 AM, <[EMAIL PROTECTED]> wrote:

> Dear list
>
> I would like to compare two measurements of disease severity (M1 and
> M2); one of them is continuous (M1, ranging from 1 to 10) and the other
> is ordinal (M2 takes Low, Medium, High, and Very High). Do you think it is
> ok to use the cor() function to test whether the two agree, i.e. correlate?
> I am afraid that if I set M2 to 1, 2, 3, and 4, the function cor() will
> take them as continuous and therefore lose interpretation.
>
> Thanks for your commments
>
> David
>



[R] a kinder view of Type III SS

2008-02-07 Thread Bernard Leemon
A young colleague (Matthew Keller) who is an ardent fan of R is teaching me
much about R and discussions surrounding its use.  He recently showed me
some of the sometimes heated discussions about Type I and Type III sums of
squares that have taken place over the years on this listserve.  I'm
presumptuous enough to believe I might add a little clarity.  I write this
from the
perspective of someone old enough to have been grateful that the stat
programmers (sometimes me coding in Fortran) thought to provide me with
model tests I had not asked for when I carried heavy boxes of punched cards
across campus to the card reader window only to be told to come back a day
or two later for my output.  I'm also modern enough to know that
anova(model1, model2), where model2 is a proper subset of model1, is all
that I need and allows me to ask any question of my data that I want to ask
rather than being constrained to those questions that the SAS or SPSS
programmer thought I might want to ask.  I could end there, and we would
probably all agree with what I have said to this point, but I want to push
the issue a bit and say: it seems that Type III Sums of Squares are being
unfairly maligned among the R cognoscenti. And the practical ramification of
this is that it creates a good deal of confusion among those migrating from
SAS/SPSS land into R - not that this should ever be a reason to introduce a
flawed technique into R, but my argument is that type III sums of squares
are not a flawed technique.

In my reading of the prior discussions on this list, my conclusion is that
the Type I/Type III issue is a red herring that has generated unnecessary
heat.  Base R readily provides both types.  summary(lm( y ~ x + w + z))
provides estimates and tests consistent with Type III sums of squares (it
doesn't provide the SS directly but they are easily derived from the output)
and anova(lm(y ~ x + w + z)) provides tests consistent with Type I sums of
squares.  The names Type I and III are dreadful "gifts" from SAS and others.
 I'd prefer "conditional tests" for those provided by summary() because what
is estimated and tested are x|w,z, w|x,z, and z|x,w [read x|w,z as "x
conditional on w and z being in the model"] and "sequential" for those
provided by anova(), being x, w|x, and z|x,w.  None of these tests is more
or less valid or useful than any of the others.  It depends on which
questions researchers want to ask of their data.
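
(A small sketch with made-up data, not from the original post, to make the
distinction concrete:)

set.seed(42)
d <- data.frame(x = rnorm(50), w = rnorm(50), z = rnorm(50))
d$y <- d$x + 0.5*d$w + rnorm(50)
fit <- lm(y ~ x + w + z, data = d)
summary(fit)   # conditional tests: x|w,z   w|x,z   z|x,w
anova(fit)     # sequential tests: x, then w|x, then z|x,w
anova(update(fit, . ~ . - z), fit)   # model-comparison form of the z|x,w test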

Things get more interesting when z  represents the interaction between x and
w, such that z = x * w = xw.  Fundamentally everything is the same in terms
of the above tests.  However, one must be careful to understand what the
coefficient and test for x|w,xw and w|x,xw mean.  That is, x|w,xw tests the
relationship between x and y when and only when w = 0.  A very, very common
mistake, due to an overgeneralization of traditional anova models, is to
refer to x|w,xw as the "main effect."  In my list of ten statistical
commandments I include: "Thou shalt never utter the phrase main effect"
 because it causes so much unnecessary confusion.  In this case, x|w,xw is
the SIMPLE effect of x when w = 0.  This means among other things that if
instead we use w' = w - k so as to change the 0 point on the w' scale, we
will get a different estimate and test for x|w',xw'. Many correctly argue
that the main effect is largely meaningless in the presence of an
interaction because it implies there is no common average effect.  However,
that does not invalidate x|w,xw because it is NOT a "main" (sense
"principal" or "chief") effect but only a "simple" effect for a particular
level of w.  A useful strategy for testing a variety of simple effects is to
subtract different constants k from w so as to change the 0 value to focus
the test on particular simple effects.
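
(Continuing the sketch above: shifting w changes the coefficient and test
reported for x, exactly as described.)

fit1 <- lm(y ~ x * w, data = d)
fit2 <- lm(y ~ x * I(w - 1), data = d)   # w' = w - 1
coef(fit1)["x"]   # simple effect of x where w = 0
coef(fit2)["x"]   # simple effect of x where w = 1; a different number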


 If x and w are both contrast codes (-1 or 1) for the two factors of a 2 x 2
design, then x|w,xw is the simple effect of x when w = 0.   While w never
equals 0, in a balanced design w does equal 0 on average.  In that one very
special case, the simple effect of x when w = 0 equals the average of all
the simple effects and in that one special case one might call it the "main
effect."  However, in all other situations it is only the simple effect when
w = 0.  If we discard the term "main effect", then a lot of unnecessary
confusion goes away.  Again, if one is interested in the simple effect of x
for a particular level of w, then one might want to use, instead of a
contrast code, a dummy code where the value of 0 is assigned to the level of
w of interest and 1 to the other level.
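
(A last sketch, again with made-up data: contrast (-1/1) versus dummy (0/1)
coding of w in a balanced 2 x 2.)

d2 <- expand.grid(x = c(-1, 1), w = c(-1, 1))[rep(1:4, each = 10), ]
d2$y <- 1 + 0.3*d2$x + 0.2*d2$w + 0.5*d2$x*d2$w + rnorm(40)
coef(lm(y ~ x * w, data = d2))    # "x" is the simple effect at w = 0, which
                                  # here equals the average simple effect
d2$wd <- (d2$w + 1)/2             # dummy code: 0 for one level, 1 for the other
coef(lm(y ~ x * wd, data = d2))   # "x" is now the simple effect at the level coded 0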

When factors have multiple levels, it is best to have orthogonal contrast
codes to provide 1-df tests of questions of interest.  Products of those
codes are easily interpreted as the simple difference for one contrast when
the other contrast is fixed at some level.  Multiple degree of freedom
omnibus tests are troublesome but are only of interest if we are fixated on
concepts like 'main effect.'

gary m

Re: [R] how to calculate chisq value in R

2008-02-07 Thread Bernard Leemon
On Thu, Feb 7, 2008 at 1:06 PM, John Kane <[EMAIL PROTECTED]> wrote:

> ?chisq.test
> --- jinjin <[EMAIL PROTECTED]> wrote:
>
> >
> > for example, an expression such as chisq(df=1,ncp=0)
> > ?
> >


Perhaps pchisq(chisqvalue, df=1, ncp=0) is what you are looking for, to
evaluate the probability for a given chi-square value from its distribution
function.

Or if by chisq.test you mean how to compute a contingency-table chi-square
value, see

http://psych.colorado.edu/~mcclella/psych3101h/StatFinder/twoWayChiSquare.html
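
(A quick illustration of both suggestions; the 2 x 2 table below is made up:)

pchisq(3.84, df = 1, lower.tail = FALSE)        # upper-tail p, about 0.05
chisq.test(matrix(c(10, 20, 30, 40), nrow = 2)) # contingency-table chi-square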

gary mcclelland
Colorado
